Artificial Intelligence (AI) – What is it, and should we be concerned about it ?

AI development has continued apace since this blog was first published in June, with little sign of a slowdown. Herewith an early autumn update.....

Introduction

There has been much discussion, and not a little concern, voiced in the press and on social media recently about Artificial Intelligence (AI) and the dangers it might pose for humanity.

How dangerous is it really, and what if anything can, or indeed should, we do about it ?

First, let’s try to pin down what we mean by the term ‘AI’. The OED definition is:

“..the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

This rather narrow definition focuses on the use of AI as a tool to enhance what we as humans already accomplish, rather than considering what a new ‘cohort of intelligent machines’ might do of its own accord in the future.

Apocalyptic visions of a dystopian future apart, if we are to understand the decisions humanity will need to take in dealing with AI, we do need to widen the discussion to take in both the more sinister and the more beneficial possibilities it presents. Notably, we should consider how AI might be misused – not so much by its creators, but by other agencies wishing to use it to further their own ends to the detriment of the rest of humanity.

The widespread recognition of this as a possibility amongst AI developers is, I suspect, what’s behind the current wave of alarm. No fewer than three of the ‘grandfathers’ of the technology voiced concerns publicly last autumn. Their prime concern was the lack of any oversight of big tech’s activities in the AI area, with companies pressing ahead with development ‘hell for leather’ to out-compete one another and secure maximum market share.

Sounds familiar ? – of course it does – it’s merely capitalism and market forces at work, and we should expect it to continue apace now, given the potential rewards on offer for the winners in the 'AI tech' arms race.

Where are we now ?

In fact, if we adopt the wider definition I’ve proposed, we are already quite a way down the AI road, particularly in areas we might describe as ‘low-hanging fruit’.

Everyday applications of AI such as speech and facial recognition have been developed to a fine art and are now commonplace. Amongst other things, speech recognition has allowed the automation of online ‘helpdesk’ functionality. This is at best a mixed blessing, and is much disliked by some, who regularly voice the opinion that they would ‘prefer to talk to a human’.

A more sinister manifestation of AI, already with us and perhaps less obvious to most, is the extensive use to which facial recognition is being put in population monitoring. This is occurring worldwide wherever suitable video cameras are installed, but is most prevalent in autocracies such as China and Russia, where it is used to help 'root out' potential opposition to the regime before it has a chance to proliferate. Orwell's 1984 'Big Brother' predictions do appear to be materialising in this area at least, albeit some 40 years later than anticipated.

Much additional development in other areas has already occurred, particularly over the last year. This AI ‘growth spurt’ has been driven, as is often the case, by warfare. WW2 inspired one of the most fertile periods of technological development, spanning the 1950s to the 1980s, and the Ukraine war is providing a similar stimulus to AI development today. The political realignment the conflict has generated has convinced many that globalisation is very much on the decline, and that we in the West can no longer rely on guaranteed supplies of cheap technology, or indeed of the raw materials necessary to manufacture it, from the Far East.

It has also promoted a technological ‘arms race’ and a consequent embargo on sharing new technologies with those seen as enemies – real or potential. In some ways this could be seen as a ‘silver lining’ to the Ukraine war, in that it will hasten the development of new technology worldwide. It will only be beneficial, however, if we put any new technology that does emerge to good use in solving our current problems, rather than just allowing it to create new ones.

The Ukraine war has also directly stimulated the development of AI-based systems for battlefield strategic modelling. The Palantir analytical software developed in the 2010s is being applied to advantage by the Ukrainians in their defence against further Russian encroachment, helping them to ‘penetrate the fog of war’ (see my previous blog on modern warfare), and it will no doubt loom large in their continued counter-offensive when winter eases its grip. The monitoring element of this has already confirmed that the Russians were in fact responsible for destroying the reservoir dam, and it will help to define strategy for the forthcoming operations in the East and South. The revelation in July 2023 of what appeared to be explosives deliberately placed by the Russians on the roof of one of the Zaporizhzhia nuclear reactor complex buildings, to deter any attempt to retake it, is another example of this type of monitoring at work. The software also has a wide range of other applications, both civilian and military, worldwide.

The rapid progress in the application of AI that we’ve already seen is not confined to battlefield and strategic war scenarios: many AI applications are encroaching on the civilian workplace in the form of new robotic devices and procedures. Another much-voiced concern (particularly from the trade unions) is the mass loss of routine jobs this threatens, which could yet provoke a powerful backlash against the use of AI.

The jobs most at risk are not just the most tedious manual ones – AI in its true form could replace humans in many clerical jobs, and even in professional roles requiring advanced interpretative and diagnostic skills, such as law and medicine – few jobs are genuinely ‘immune’ in practice. We can see the beginnings of a concerted push-back already, with many of the disputes in the current wave of UK strikes citing the imposition of changes in working practices, rather than pay levels themselves, as the main problem – Mick Lynch of the UK's RMT rail union recently went on record as saying just that, admitting openly that pay was ‘not the real issue here’.

Management’s ‘stock’ argument for tying pay increases to changes in working practices is, of course, that the existing practices are inefficient, and that new terms and conditions must be introduced to stave off the competition and avoid the firm going under. This may or may not be true in any specific case, but the battle rages on, with no easy end in sight. The onset of rapid inflation during 2022-23 should thus be seen as the trigger of the current strife, not, as many believe, its underlying cause.

In short, we need to prepare ourselves for an AI-induced upheaval in the way we live and work; I suspect this will make our current agonising over what to do about climate change look like a ‘walk in the park’....

What do we need to do as a society to avoid ‘AI disaster’ ?

As readers of some of my previous blogs will already know, I do have an annoying habit of posing unanswerable questions, and this one is no exception...

AI development will no doubt throw up some notable unintended consequences, and as in all things, we should be careful what we wish for. The best we can do at present is to agree that progress needs to be carefully monitored and controlled, where necessary by direct intervention.

What we shouldn’t do is a) ignore AI or b) try to stifle it.

Option a) would just be downright negligent, and were we to take it, our offspring would be likely to inherit the dystopian future we, and they, deserved.

Option b) might hold back progress initially and assuage some immediate fears, but like most prohibitions in the past, it would quickly drive development underground, and thus away from any form of regulatory scrutiny. This would be much more dangerous than keeping it in plain sight, as we’ve seen with some past (and present) prohibitions. Avoidance of scrutiny would also be easy to achieve: the principal players are large multi-national big tech concerns, already adept at side-stepping national boundaries and laws by exploiting their 'supra-national' status and hiding behind 'commercial sensitivity'.

We should therefore recognise AI as a valuable tool for our development as a species, and prevent it becoming the force for evil it could turn into if misused.

Assuming we do manage to take a sensible middle course, we need to reach international agreement on the need for regulation, and establish monitoring bodies to enforce those regulations. This will be no mean feat in the current nationalistic climate, with autocracies becoming ever more prevalent on the world political scene. It will also require considerable financial outlay at a time when worldwide economic recession still beckons.

Could AI Really Cause our Extinction as a Species ?

Possible, but not very likely, seems to be the general consensus on this at present….

There are many other ways in which we could (and may well yet) achieve our own extinction, AI being but one possible contributor. It’s much more likely that an accidental nuclear exchange will occur and then escalate into a full nuclear conflict – this would precipitate a 'nuclear winter', with all its appalling consequences for anyone unlucky enough to survive the first wave. As I’ve discussed in a previous blog, this could happen at any time. A lethal pandemic is another ‘favourite’ for triggering Armageddon, assuming we manage to stave off nuclear holocaust for long enough to succumb to one.

The classic sci-fi scenario where super-intelligent machines decide that humans are an unnecessary drain on planetary resources and should be eliminated is far-fetched, to say the least. It would in any case require many more years of technological development, and some very misguided and/or malevolent programming on our part, to enable such machines to ‘evolve’ the autonomy required to do the job.

To provide at least some reassurance for those who are still concerned, however, we should consider the practicalities. Every item of computer hardware or software I've ever had dealings with has had a habit of malfunctioning with remarkable regularity, and I'm sure I'm not alone in this..... Humans will always be needed - if only to apply the time-honoured remedy of switching the power off and on again when a member of the super-computer robot 'master race' succumbs to the computer 'gremlins' and grinds to a halt !...

Joking apart, the inherent unreliability of any complex electronic device we humans manage to create suggests that the time it would take for humans to become completely obsolete far exceeds the time we have left before an 'Armageddon event' of some description takes us back to the dark ages and renders us incapable of any further technological progress.

And that somewhat chilling possibility brings us to the ‘crux’ of the matter – AI will develop the way humans want it to – it is human programmers and their masters who decide how AI is configured and, more importantly, what rules are built into the programs which actually ‘run’ it.

This of course places a huge responsibility on the programmers themselves… and therefore upon us, as the payers (directly or indirectly) of their salaries. We are now seeing them realise the enormity of that responsibility: some high-level AI developers are going so far as to abandon their lucrative jobs in big tech and ‘go public’ with their concerns, even comparing the impact of AI to the development of the first atomic weapons.

If we allow malign entities to channel development for their own ends, there is always the possibility that AI might become mis-programmed (from a human point of view) and lead to humans being killed or injured en masse as a direct result of its application. Commercial AI development is perhaps the least likely source of this – it is simply too closely scrutinised by the media at large.

Arguably, it is military applications which are more likely to develop in the wrong direction and cause problems down the line. The seeds of this may already be germinating behind the 'blackout curtain' of the Ukraine war, with the application of AI to battlefield scenario programming. Although few would condemn any use of machine-assisted intelligence on the Ukrainian side which might lead to a speedy and just resolution of this particular conflict, both the Russians and the Chinese are known to be making major efforts in this area. The key question, therefore, is ‘How do we monitor what's actually happening worldwide, and stop this type of destructive application of AI becoming its principal use ?’

The first step in the process is to decide on a basic set of rules which must always be built into any AI program. You could liken this to the keystone of a doctor’s Hippocratic oath – ‘first, do no harm’.

The 20th Century sci-fi author Isaac Asimov stole a march on recent AI pundits many years ago with his ‘Laws of Robotics’, and reviewing these recently, I think we could do worse than use them as a blueprint for a modern ‘AI rulebook’. His book 'I, Robot' is an excellent read and gives a fascinating insight into mid-20th Century expectations of AI and its consequences. It was also the inspiration for the popular 2004 film of the same name.
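
By way of illustration only, here is a minimal sketch in Python of what an Asimov-style ‘rulebook’ might look like if implemented as a pre-action check. Everything in it – the names, the flags, the simplistic true/false tests – is hypothetical, and real safeguards would need to be vastly more sophisticated, but it conveys the principle of vetting every proposed action against a fixed hierarchy of rules before allowing it to proceed.

```python
# A minimal, purely illustrative sketch of an Asimov-style rule layer.
# All names and flags here are hypothetical; real AI safeguards would
# be far more complex than a simple pre-action check like this.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    harms_human: bool      # would carrying this out injure a person?
    disobeys_order: bool   # does it contradict a human instruction?
    endangers_self: bool   # would it damage the machine itself?

def permitted(action: ProposedAction) -> bool:
    """Apply the three Laws in strict priority order."""
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, except where that would
    # conflict with the First Law (already checked above).
    if action.disobeys_order:
        return False
    # Third Law: protect its own existence, except where that
    # conflicts with the First or Second Laws.
    if action.endangers_self:
        return False
    return True

if __name__ == "__main__":
    action = ProposedAction("divert power from life support",
                            harms_human=True,
                            disobeys_order=False,
                            endangers_self=False)
    print(permitted(action))   # False - blocked by the First Law
```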

In short, we must impose effective monitoring and control processes - and before we go much further. The biggest danger is that, in a headlong rush to out-compete the opposition, we fail to curb risky programming and allow a 'free for all' - therein lies potential disaster.

What Advantages are there for us in Developing AI further?

The answer to that question is ‘many’ – from the refinement of those applications which have already hit the headlines, to the discovery of new uses yet to come. The potential benefits to humanity far outweigh the risks, and we should accept this.

One recent important example of an application of AI already having had an impact is the discovery of a new class of chemical structures with potent antibiotic activity acting through a novel mechanism. The antibiotic resistance problem has been well documented over the past decade and is becoming ever more acute, with most of our antibiotics now ineffective against some multiply-resistant hospital-acquired bacterial infections. This discovery could be just in the nick of time to stop us regressing into a new ‘Victorian era’ of widespread deaths from sepsis and other opportunistic bacterial infections. The AIDS epidemic of the 1980s and early 1990s, prior to the advent of the antivirals, gave us a glimpse of how things might go without new antibiotics. AI can and will also be used to speed up drug discovery in other areas, so we may yet see a new ‘golden age’ of AI-inspired pharmaceutical development.

We’ve already discussed the downside of replacing human manual labour with machines, but this should also be seen as a way of reducing the burden on our lives, freeing us to do other, more pleasurable and useful things with our time.

The key to achieving this is adapting our way of living so that we can stop doing these repetitive jobs without depriving ourselves of the basic necessities of life. This will of course require a major re-think about how we finance and support all our citizens, while ensuring that those jobs which really do still need to be done by humans are adequately resourced… and valued. Arguably, we would have needed to do this anyway, even without AI, given current demographic trends and the scarcity of younger workers to fill existing posts.

Recent effects of AI apart, the onset of the so-called ‘Gig Economy’ in the mid-to-late 2010s has itself had an insidious effect, devaluing many jobs and threatening the viability of many more in the pursuit of profit. Perhaps the most important task in AI’s ‘in tray’ might be to help us work out how to solve this particular thorny sociological issue within a capitalist world ?

What should we expect from AI ?

This is an important question we should all ask ourselves – AI has been portrayed as the ‘best thing since sliced bread’, and as prospective users we need to assess whether it actually is.

We should therefore beware of raising our expectations of it too much at this early stage. 

As yet, the technology is still in its infancy, and there will be many bumps in the road before it achieves its full potential. As discussed, it’s really up to us how we develop and use it, and like most human endeavours, the process will be subject to many errors and much argument. It’s important that we recognise this and avoid ‘throwing the baby out with the bathwater’ on the grounds of fear-driven opposition and excessive prohibitions on its use. Innovation is the key to its success, but this must have universal oversight, if nothing else to avoid it being stifled through widespread suspicion of its consequences.

Another interesting question relates to the way in which we 'educate' AI. Its rapid development of necessity requires that it draws most of its data from publicly available internet content – we as consumers and voters are, after all, its main targets. Given that a sizeable proportion of the existing content is acknowledged to be either partially incorrect or frankly irrelevant garbage, what does this imply for AI's effectiveness and relevance ? Unless human intervention is able to 'weed out' the garbage and eliminate deliberately erroneous material, AI engines may well become 'agents for inaccuracies'. The main big tech players recognise this limitation and openly admit that their current offerings may not be 100% accurate, which gives credence to the idea. By its very nature AI is designed to work in the background autonomously, so corrective human intervention of this sort is unlikely, given the pace of advance. All the more reason for caution on our part when using it.
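
To illustrate the kind of 'weeding' involved, here is a minimal sketch in Python of automated quality filtering applied to candidate training text. The heuristics and thresholds are hypothetical stand-ins of my own devising – real pipelines use trained quality classifiers, de-duplication and much else besides – but the principle of filtering before training is the same.

```python
# A minimal sketch of pre-training text filtering. The thresholds
# below are hypothetical stand-ins; production pipelines rely on
# trained quality classifiers, de-duplication and other checks.

def looks_like_garbage(text: str) -> bool:
    """Crude quality heuristics for a candidate training document."""
    words = text.split()
    if len(words) < 20:                        # too short to be useful
        return True
    if len(set(words)) / len(words) < 0.3:     # highly repetitive spam
        return True
    letters = sum(c.isalpha() for c in text)
    if letters / max(len(text), 1) < 0.6:      # mostly symbols/markup
        return True
    return False

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only the documents that pass the quality checks."""
    return [doc for doc in documents if not looks_like_garbage(doc)]
```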

Can we ‘sample the delights’ of AI ourselves yet ?

As might be expected, both Microsoft and Google have been heavily committed to a virtual ‘arms race’ in the area, which now manifests itself in the Microsoft-backed ChatGPT and Google's Bard public offerings. Microsoft underlined its commitment by investing a reported $10 bn in OpenAI, the developer of ChatGPT, in early 2023. Apple have focused heavily on Virtual Reality (VR) in recent years, and have just brought out a new VR product, but are likely to enter the AI race in earnest soon, particularly since VR revenues haven’t lived up to expectations (not surprising, perhaps, given the £3500+ 'ticket' for their latest offering – I suspect even committed Apple 'obsessives' will balk at that).

The current open-access AI applications for consumers are effectively free Beta-version ‘tasters’; as first-generation products they are designed purely to whet our appetites for things to come. Rest assured, they will not remain ‘free’ for long – both companies will want to recoup some of their development costs, and we can expect the usual special offers to tempt us when the inevitable hefty subscription charges are introduced (probably later this year – so get your free enquiries in before then).

It’s no coincidence that all products on offer require personal details to be supplied when registering. This is to enable both providers to target their marketing to interested users. The other thing that is happening behind the scenes is the incorporation of AI ‘under the bonnet’ in many of their existing applications to enhance their capabilities and attractiveness to tempt future subscribers.

Whatever level the subscriptions are eventually pitched at, as prospective users we need to make sure they are good value for money from the start and, perhaps more to the point, will not be quickly superseded. This is a complex and bewildering area which is changing almost daily, so caution should be the watchword. Those of us 'of a certain age' will remember the Betamax vs VHS video recording format 'tussle' back in the 80s and early 90s – this particular sales war promises to be just as competitive, and just as destructive for the loser and its adopters.

How do the existing 'chat-bot' versions stack up ?

I have to admit that so far I can claim only quite limited personal experience, and with just one of them – Google’s Bard. I plan to investigate further, including ChatGPT, and will update this blog with my experiences.

Suffice it to say that at present my initial impression is favourable, but with reservations. Bard is undoubtedly very good at collating relevant information in response to a well-worded and specific request. It appears to provide a useful way of eliminating those hours of scrolling through endless Google search results to find the few items that are really relevant. As such it will be a useful tool.

It does have a tendency to get things wrong, however – its developers freely admit this, and even issue a ‘health warning’ to that effect. It’s essential that we heed this warning and do not rely entirely on its output as ‘gospel’, but instead use the time it saves us to extend our own fact-checking. The time saved by the bot in filtering out irrelevant search results for us should be more than enough to compensate for the extra time we might want to spend on more exhaustive verification. We all know from experience how disingenuous some websites can be, so the application of some real human oversight and intelligence is still essential to getting our facts straight.

By way of an amusing and topical anecdote to illustrate the point, during my limited researches I rather mischievously presented Bard with the following request: 

“Compose a Sonata for Piano in the Style of  W. A. Mozart”.

I did this primarily as a ‘tall order’ test, and I fully expected Bard to reply that this task was beyond its capabilities, thus confirming that it recognised its own limitations.

Imagine my surprise when it came up with the reply – ‘..sure, I can do that’, and then proceeded to outline the form of the sonata it had already composed in some detail, even finishing with a cheery message hoping that I would ‘enjoy my sonata’. 

Just one problem – there was no sonata !

What I suspect had happened was that the algorithm had trawled the Google indices, found some well-established information on the sonata form as interpreted by Mozart, and then simply presented that in order to complete its task. Naturally enough, it wasn’t capable of actually composing a 20-30 minute piece of any sort (let alone a ‘new’ Mozart sonata !) in the 2-3 seconds it took to reply. More to the point, it wasn’t able to grasp that it hadn’t quite done all that was asked of it....

It will definitely need to learn to recognise and admit to failure rather than ‘telling porkies’ in an attempt to keep its clients happy.
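
One can imagine a crude safeguard against this particular failure mode: before returning a reply, check whether it actually contains the artefact that was requested, and admit defeat if it doesn't. The sketch below (Python, and entirely hypothetical – no chatbot works quite like this) makes the point for my sonata request, by looking for recognisable music notation rather than mere prose about sonatas.

```python
# A hypothetical 'did-I-actually-do-it?' check. A request for a musical
# composition should yield notation of some kind (ABC, MusicXML or
# LilyPond, say), not just prose describing what a sonata looks like.

NOTATION_MARKERS = ("X:", "K:", "<score-partwise", "\\score")

def contains_notation(reply: str) -> bool:
    """Return True only if the reply contains recognisable notation."""
    return any(marker in reply for marker in NOTATION_MARKERS)

def respond(reply: str) -> str:
    if contains_notation(reply):
        return reply
    # Better to admit failure than to 'tell porkies'...
    return "Sorry - composing an actual sonata is beyond me at present."

if __name__ == "__main__":
    prose_only = "Here is your sonata: a lively Allegro in C major, then..."
    print(respond(prose_only))   # prints the apology, not fake success
```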

More amusing and entertaining experiences on the way, I suspect - watch this space.

The message I would give to any would-be users of AI, therefore, is: wait and see how things develop, rather than rushing to subscribe to the first AI 'packages' that come along.

Apart from anything else, progress is incredibly rapid at present, and avoiding ‘locking in’ to a 1- or even 2-year fixed-term sub is really a no-brainer at this stage. Just avoid getting too 'hooked' on the free products before they disappear.

Conclusions

AI is likely to be predominantly a force for good for humanity in that it offers powerful tools with which to speed up discovery and development in a wide variety of areas. Its perceived potential means that it is definitely not going away.

As with every human invention, however, it is capable of misuse as a result of our human ‘frailties’. To try to prevent this happening, we need a robust regulatory framework, agreed internationally and applied with universal oversight to the development of any new AI application, commercial or otherwise. Only this will quell the widespread fear that the perceived power of AI already engenders in many of us.

Achieving an effective regulatory framework will in itself be a ‘tall order’, given the nationalistic tendencies hard-wired into the human psyche and the rise of autocracies we’re currently seeing around the world as a result. 

We must however be prepared to embrace the technology, precisely because it is likely to become so widespread. To do this effectively and without causing suffering, we will also need to modify the way our society operates, so as to avoid mass unemployment and dissent. Integrating AI into our way of life in a controlled manner, with appropriate safeguards, is the only way we will avoid widespread misuse of AI and the fatal consequences it could bring for our species.

The early applications of AI we already have access to as consumers are reassuringly primitive, and show how far the technology would still need to develop to generate a ‘man vs machine’ war scenario of the type featured in so many modern sci-fi offerings.

Although they show much promise as useful and time-saving tools, the first-generation AI ‘products’ can in no way yet be regarded as ‘true’ machine intelligence – at least some degree of verifiable self-awareness would be required to qualify. Since we don’t yet even understand how our own brains manage this feat, or whether and to what extent it occurs in other animals, there is still a long way to go before we can re-create it in robotic machines.

Perhaps more to be feared in the medium term are the military applications of AI, which are by their nature covert and likely to remain so. It is here that we are most at risk of harm, due to lack of independent oversight and the destructive motives involved. If AI does ‘go wrong’ it will be humans at fault, not the machines they create to apply it.

Caution in interpreting data generated by AI will be required for some time to come - it remains to be seen whether developers can 'educate' their engines to discriminate between right and wrong when assimilating internet content.

One final thought – our own strategies for dealing with our current problems, let alone the whole host of new ones already on the horizon, have been decidedly lacklustre so far; we seem to lurch reactively from one crisis to the next without any really coherent plan.

Perhaps one of the top priorities we should set for AI would be for it to devise better ways for us to manage our own human existence – and then work out how to persuade us all to implement them.

Now that really would be something useful to aim for….AI developers please note!

First published 3.6.23

 
