
Machines That Learn, But Lack Common Sense

by Andrea Celauro
The technological evolution of artificial intelligence continues to make great strides, and systems such as ChatGPT are now able to produce far more elaborate texts than in the past. These machines are increasingly cultured but still far from human intelligence, explains Luca Trevisan, Professor of Computer Science at Bocconi University.

We have Alexa and Siri, the most famous, but also Replika, Sunny, Juliet, Amelia, Andy and many others. These are chatbots, systems based on artificial intelligence that are able to talk with users, usually to answer relatively simple and standardized questions. And then there is ChatGPT, which unlike other bots is able to write elaborate texts with a degree of consistency and complexity never before achieved by a machine. But let's take a step back and try to understand how these systems work and how they might develop, along with Luca Trevisan, Professor of Computer Science at Bocconi.

How do chatbots work and what is the difference between systems like Alexa or Siri and ChatGPT?

AI assistants such as Alexa or Siri must transform natural language sentences into commands to execute ("Turn on the light," "Set a timer"), so they operate in a narrow and well-defined context, and their ability to interpret text is limited. In this case, the AI component is what allows the program to understand even an awkwardly worded sentence or one that contains a typo. The difficulty for the machine lies in the fact that the same thing can be said in different ways. Natural Language Processing (NLP) algorithms are therefore required to resolve these ambiguities, plus a certain machine learning component that allows the system to learn. ChatGPT, unlike other chatbots, is able to produce texts of great complexity, consistency and style in very different areas, from poetry to medicine.
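To make the idea concrete, here is a minimal sketch of how several phrasings of the same request can be mapped to one executable command. It is purely illustrative, not how Alexa or Siri are actually built; the intent names, example phrases and similarity threshold are all invented for the example.

```python
# Illustrative sketch of intent matching: different phrasings (including typos)
# are mapped to the same executable command via fuzzy similarity.
import difflib

# Hypothetical intents, each with a few example phrasings
INTENTS = {
    "light_on":  ["turn on the light", "switch the light on", "lights on please"],
    "set_timer": ["set a timer", "start a timer", "set a timer for ten minutes"],
}

def classify(utterance: str) -> str:
    """Return the intent whose examples best match the (possibly misspelled) utterance."""
    utterance = utterance.lower().strip()
    best_intent, best_score = "unknown", 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            # Fuzzy matching tolerates typos and slight rephrasings
            score = difflib.SequenceMatcher(None, utterance, example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score > 0.6 else "unknown"

print(classify("pls turn on teh light"))   # -> light_on
print(classify("set a timer for 10 min"))  # -> set_timer
```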

How did OpenAI manage to achieve this result?

By feeding the system a huge amount of texts and documents of all kinds, as usually happens in the training of these machines, but also through an additional task: asking the chatbot to complete sentences based on the documents already processed. It is a task that already requires an advanced ability to understand and rework text. Previous systems were unable to maintain consistency in long texts – a bit like Grandpa Simpson, they tended to lose the thread of their conversation. The model on which ChatGPT is based, which in technical jargon is called a Transformer, is able to do this. Most important of all, however, no one expected such a result to be achieved in so short a time.
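The "complete the sentence" training task described here can be illustrated with a toy next-word predictor. The sketch below is a deliberately simplistic bigram model, nothing like the actual Transformer behind ChatGPT, but it shows the same idea of learning to continue a text from examples already seen, and why such simple models quickly lose the thread in the way described above.

```python
# Toy illustration of learning to complete text from previously processed documents.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow each word (a bigram model; a Transformer
# performs the same kind of next-token prediction, but with billions of parameters
# and a memory of the whole preceding text, not just the last word).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt: str, length: int = 5) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # greedily pick the most likely next word
    return " ".join(words)

# Prints something like "the cat sat on the cat sat": with only one word of
# context the model soon loops and loses coherence.
print(complete("the cat"))
```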

Namely?

The big difference of ChatGPT is that, in responding to requests and in the dialogue that arises with the interlocutor, it is able to give much more general answers, that is, answers not strictly tied to the request to execute a command. To make a comparison, it is like the difference between a student who has learned the lesson by heart but is unable to go further, and students who, besides learning what they have studied, are able to make connections. Developing such complex systems is like solving an equation with millions, if not billions, of variables. These systems are so large and have so many variables that there is no single way to find a solution. Of course, they are not yet perfect: ChatGPT itself sometimes makes huge mistakes, proving that the system has yet to mature.
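As a rough, hypothetical illustration of why such systems involve millions or billions of variables: even in a plain fully connected network (ChatGPT uses a Transformer, so the figures below are only indicative), the number of learnable parameters grows very quickly with the size of the layers.

```python
# Back-of-the-envelope count of learnable parameters in a fully connected network:
# every connection between layers is a variable, plus one bias per output unit.
def dense_params(layer_sizes):
    """Total weights and biases for consecutive fully connected layers."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A modest toy network already has hundreds of thousands of parameters...
print(dense_params([512, 512, 512, 512]))  # 787,968

# ...and stacking a few much wider layers already pushes the count past a billion.
print(dense_params([12288] * 10))          # 1,359,065,088
```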

How do you think they will evolve?

It is not clear whether our method of training machines is actually the most suitable for them to evolve further in the future. Skepticism arises from the fact that these AI systems show strange, unexpected 'blind spots,' that is, they fail at tasks you would never expect them to miss. They make mistakes that humans would never make, like Tesla's self-driving system mistaking a truck for a bridge and not stopping the car. What machines still lack is something that is difficult to define in mathematical terms: the 'common sense' that allows humans to understand unexpected situations and improvise alternative solutions, avoiding certain glaring mistakes.

So cultured machines, but not yet intelligent?

What systems like ChatGPT seem to demonstrate is that you can have AI systems that operate like well-educated people: they can act as pure executors, perhaps even of complex tasks, but not as decision makers.

Will we ever develop an artificial intelligence capable of surpassing human capabilities, of arriving at what is called the 'singularity'?

So far, the technological progress of hardware has proceeded exponentially, but no process can remain exponential for too long, and a slowdown at some point is inevitable. Whether technological development will come to a halt before or after the creation of an AI unimaginably smarter than human beings is not known. However, since technological progress is already showing signs of slowing down, it is perhaps more likely that we will end up with machines whose intelligence is similar to that of humans.

The debate about the evolution of AI also brings with it the fear that it could somehow become dangerous for mankind.

Technological evolution itself is neither good nor bad, of course. However, the implications and ramifications of such profound changes – and consequently their impact on society – become impossible to predict. Rather than thinking of a catastrophic scenario in which an AI decides to annihilate humanity, it is better to consider how these systems can be used to destabilize democracies and create disinformation. It will be increasingly difficult to distinguish texts produced by humans from those created by chatbots, with all the ensuing consequences.

More generally, are international agreements needed to establish regulation of AI?

Yes, as happened with cloning, for example, where a ban on human cloning was established. The European Union is working on an AI Act, and it is good to start thinking about regulation. I would like to see a global agreement as the point of arrival. At the moment, however, it is too early to decide what to ban, and a worldwide regulation now would be premature.