What If AI Goes Bad?

Following “AI Godfather” Dr Geoffrey Hinton’s departure from Google to talk about the dangers of AI, we look at what the threats could be and what this could mean for businesses.

Departure From Google

Dr Geoffrey Hinton was dubbed the “AI Godfather” for his pioneering research on neural networks and deep learning, which paved the way for current AI systems like ChatGPT. However, his recent resignation from Google was accompanied by some chilling warnings, in a statement to the New York Times and in subsequent media interviews, in which he noted that he now regrets his work. Some of the points he has been reported as making about the dangers of AI are that:

  • The dangers of AI chatbots are “quite scary”.
  • AI chatbots may soon be more intelligent than humans. For example, with digital systems, all copies of them can learn separately but share their knowledge instantly, so they can know much more than any one person.
  • “Bad actors” could use AI for “bad things”, e.g. giving robots the ability to create their own sub-goals.

Recent Open Letter

Dr Hinton’s resignation comes not long after the recent “Pause Giant AI Experiments: An Open Letter” signed by many high-profile figures in the tech industry including Elon Musk, Apple co-founder Steve Wozniak, and even some DeepMind researchers. The letter called upon “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, GPT-4 being the latest and most powerful incarnation of OpenAI’s large language model (LLM).

The letter made the point that, “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.” It highlighted a series of questions to consider about the risks of AI, including:

  • Should we let machines flood our information channels with propaganda and untruth?
  • Should we automate away all the jobs, including the fulfilling ones?
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?
  • Should we risk loss of control of our civilisation?

Although these questions seem dramatic, the main point of the letter is that a kind of ‘time out’ is needed because of the speed at which AI is developing.

Not everyone agrees, however, that a 6-month moratorium on AI development is feasible or the right way to go. In fact, Dr Hinton has reportedly said that he does not think AI development should be halted.

What Are The Main Worries About The Threats Of AI?

In addition to the risks highlighted by the questions in the open letter and by Dr Hinton’s reported comments in the press, some of the main worries about the potential threats that AI could pose include:

  • Job displacement. As AI and automation become more advanced, there is concern that they will replace human workers in many industries, leading to job losses and economic instability.
  • Bias and discrimination. AI systems can learn to make decisions based on biased or incomplete data, which can result in discriminatory outcomes, particularly in areas such as hiring, lending, and criminal justice.
  • Privacy and security. AI systems can be used to collect and analyse vast amounts of personal data, raising concerns about how that data is used and protected.
  • Autonomous weapons. There is concern that the development of autonomous weapons powered by AI could lead to the escalation of conflict and the loss of human control over military decision-making.
  • Existential risks. Some researchers and thinkers have raised concerns about the long-term risks of advanced AI, including the possibility of superintelligence that could pose an existential threat to humanity.
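To make the bias point above concrete, here is a minimal, hypothetical sketch (not any real hiring system): a naive model that simply learns historical approval rates per group will faithfully reproduce whatever bias those past decisions contain.

```python
# Toy illustration only: a naive "hiring model" that learns the
# historical approval rate per group. Because the training data
# encodes past bias, the learned model reproduces that bias.

from collections import defaultdict

def train(history):
    """Learn P(hired) per group from past (group, hired) decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Hypothetical historical decisions in which group A was favoured over group B.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.2} -- the past bias is baked in
```

Nothing in the training step is malicious; the discriminatory outcome follows purely from the skewed data, which is why auditing training data matters as much as auditing the model itself.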

These threats could impact individuals, organisations, and society as a whole, which is why many now think that it’s important to carefully consider the ethical and social implications of AI as it continues to develop and be deployed in various contexts.

Already Deployed

The fact is that AI is becoming ubiquitous and is increasingly deployed in many systems in various industries. For example, AI algorithms are used in video-streaming platforms, recruitment (for application filtering), by insurance companies to calculate premiums, and in medicine as part of scanning and diagnosis, to name just a handful.

Some Are More Sceptical

Many IT industry figures, however, are sceptical about the idea that AI algorithms could surpass human intelligence any time soon. For example, some of the points made include:

  • Chatbots draft their responses token by token, predicting the next word in a sequence, whereas when humans speak, they express more fully formed ideas. Understanding this difference between human and machine intelligence is important when separating a likely future from hype.
  • The fluency of chatbots doesn’t prove that they can reason or achieve understanding in the same way as humans.
  • AI chatbots are limited to narrow tasks and, unlike humans, can’t interact with the physical world to complete more varied assignments.
  • We are still at an early stage of AI, and the current ‘constructivist’ approach needs to be developed further so that systems can model causality autonomously, effectively, and efficiently in order to become more ‘intelligent’.
  • It’s important not to confuse intelligence with sentience, as Google engineer Blake Lemoine discovered when he was sacked for suggesting that the Language Model for Dialogue Applications (LaMDA) chatbot was somehow sentient. True human intelligence is linked to sentience, which is one reason why AI may not be able to surpass what we know as human levels of ‘intelligence’.
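The first bullet above, that chatbots generate text one token at a time, can be illustrated with a toy sketch (a tiny bigram model, nothing like a production LLM): at each step the model picks a likely next word given only local context, with no overall plan for the response.

```python
# Toy sketch of token-by-token generation using a tiny bigram model.
# Each step predicts the next word from the previous word alone,
# which is why fluency here implies no understanding or planning.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length):
    """Greedily emit the most frequent next word, one token at a time."""
    words = [start]
    for _ in range(length - 1):
        candidates = bigrams[words[-1]].most_common(1)
        if not candidates:
            break  # no known continuation
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the", 4))  # "the cat sat on"
```

Real LLMs use vastly larger contexts and learned probabilities rather than raw counts, but the generation loop is the same in spirit: one token at a time, conditioned on what came before.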

AI Market Domination

It’s worth remembering also that generative AI is a rapidly growing market where some early players have gained a lead (e.g. OpenAI with ChatGPT) and where it may be in the interests of other big tech companies to slow AI development down so they can catch up and compete. Some also think that the AI market already needs reviewing to make sure that no single firm dominates it and that its benefits are available to all. For example, the UK’s Competition and Markets Authority (CMA) has said that it will be looking at the impact of AI on competition, with a view to creating “guiding principles” to protect consumers as AI develops. Also, in the US, the heads of Google, Microsoft, OpenAI and Anthropic have met US Vice President Kamala Harris to discuss similar issues.

What Does This Mean For Your Business?

The rapid growth of AI, many people’s first-hand experience of generative AI through ChatGPT, a general lack of understanding of how AI actually works, commercial influences, and alarming hype fuelled by reports such as those about Dr Hinton’s resignation, the open letter, and the comments of other tech commentators have all led to a focus on the threats of AI. The fact is that just as AI could result in job losses, privacy issues, and the circulation of misleading information, it will also transform the way businesses compete, drive substantial economic growth, and could deliver many more benefits than negative outcomes. Regulation, the setting of guiding principles, and a degree of collaboration between big players, governments, and other interest groups could all help to minimise the threats. Many, therefore, see the proposed 6-month moratorium as an unlikely solution to what is essentially progress: a new kind of industrial revolution and a rapidly growing, changing market that holds exciting opportunities for businesses as well as threats.

Mike Knight