OPINION

Would you be afraid of a 3-year-old?

AI CANNOT DECIDE TO PERFORM TASKS THAT ARE BEYOND ITS CAPABILITIES AND, ABOVE ALL, IT HAS NO THEORY OF MIND, NO FEELINGS, NO CONSCIOUSNESS, NO MALICE. IF ANYTHING, AI UNCOVERS THE NAIVETÉ AND MISTAKES OF ITS DESIGNERS. TRY IT YOURSELF AND FIND OUT

by Dirk Hovy, Associate Professor, Department of Computing Sciences

AI news comes in two varieties: hype and doom. Recently, the hype cycle has been prevalent. AlphaGo defeated the best human player, machine translation became a useful tool, and ChatGPT now provides witty responses to all questions. But doom is never far away: warnings of job losses, killer robots, and sentient AI. The fear is understandable, but it reflects our perceptions of AI more than its actual capabilities. The main cause is our tendency to humanize these models, assuming they have drives, motives, and emotions that would lead them to act maliciously, when they really just perform a task.
 
To be clear, current AI technology has numerous flaws, so we must use and develop it with caution. These problems stem from bias and from discrimination against users. The automated grading system in the United Kingdom that unfairly penalized some students, the machine translation system that rendered "Good morning" as "Attack them," landing an innocent man in legal trouble, and the speed camera that ticketed an innocent driver after mistaking a knitted jumper for a license plate are all examples of how AI can wreak havoc. The worst example is the Indian man who starved after an automated decision system denied him food rations.
All of those tools, though, failed because of design flaws, not malice. The consequences are no less dire, but the distinction tells us where the problem lies.
 
Despite these concerning reports, I do not anticipate any lethal AI threats, and I am not alone. According to Andrew Ng, a pioneer in neural networks, "Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!" What evil actions would a sentient machine translation system even take? Produce poor translations to irritate you?
 
The concerns are understandable, though, given that we have machines that display human characteristics: they play games, answer questions, translate sentences, and identify people in photographs. If they can do all of that, they must be like us, right? So they most likely also have hopes, dreams, and aspirations. But complex artificial intelligence systems must be tailored to each task, such as playing Go, analyzing sentences, or recoloring photos, and each of those systems is unable to perform the other two tasks. Despite the efforts of many intelligent people, AI tools frequently perform like three-year-olds. AI cannot yet decide to perform tasks that are beyond its capabilities.
 
Even AI researchers are affected by this humanization bias. Google engineer Blake Lemoine claimed, after extensive conversations with the LaMDA model, that it possessed self-awareness and consciousness. A journalist then posed slightly different questions to LaMDA than Lemoine had, and the model denied being conscious.
 
Maarten Sap, a University of Washington researcher, investigated language models' Theory of Mind, the ability to imagine and understand the thoughts and feelings of others. A person's Theory of Mind can be assessed with a variety of question-based psychological tests, and those tests can also be administered to language models. But this logic is flawed: people answer questions based on their complex inner workings, while language models just generate a list of likely words in response. The behaviors may look similar, but the motivations and the paths to the same answer are not.
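To make "generating a list of likely words" concrete, here is a minimal sketch in Python: a toy next-word predictor built from simple word counts. This is an illustration of the mechanism, not Sap's tests or a real language model; modern systems replace the counts with a neural network trained on vastly more text, but the output is the same kind of object, a ranking of probable words rather than a thought or a feeling.

```python
# A toy "language model": count which word follows which in a tiny corpus,
# then rank the most likely continuations. Real models do this at enormous
# scale with neural networks, but they too output word probabilities.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def likely_next_words(word, k=3):
    """Return up to k of the most probable continuations of `word`."""
    counts = bigrams[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common(k)]

print(likely_next_words("the"))  # e.g. [('cat', 0.25), ('mat', 0.25), ('dog', 0.25)]
```

A system like this, asked whether it is conscious, simply returns whatever continuation is statistically likely given the question, which is also why Lemoine and the journalist could draw opposite answers from the very same model.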
 
As a result, asking whether models have these psychological abilities is pointless. They have no Theory of Mind, no feelings, no consciousness, and no malice. Why and how would they develop such capabilities in the absence of explicit programming? Each AI task must be meticulously defined and trained, and that never includes conferring sentience, emotions, or aspirations. AI models may reflect the naiveté and the lack of checks and balances of their designers, but they do not act out of malice.
 
So should you fear AI becoming evil? No. Should you keep an eye on how it is designed? Definitely. Should you try using it on a daily basis? Absolutely, I invite you to. What we understand cannot scare us.
 
