OPINION

When Words Aren't Enough

MAKING MACHINES AWARE OF WHO IS TALKING: THIS IS THE REAL CHALLENGE IN MAKING THEM MORE USEFUL TO HUMANS

by Dirk Hovy, Dept. of Marketing, Bocconi

 
We all have a device in our pockets or our homes that we can talk to. It will tell us the weather, or what’s on our agenda, and play songs by Nina Simone for us.

It looks as if these devices understand us, but really, they don't. They are trained to respond to certain inputs, but that is not at all the same as understanding. If we asked, "I went to Rome last week, did my head go to Rome as well?", they would be stumped.
 
Language is fundamentally a human experience, and it matters just as much who says something as what they say. When we hear the sentence "That was a sick performance!", it makes a huge difference whether it was said by a 16-year-old or by an 86-year-old. We do express who we are through language.

We use that knowledge when talking to people, too: within a few sentences, we pick up on where someone is from, how old they are, and what their gender is, as well as other, more subtle cues, like their personality or educational background.
 
Ironically, computers can be trained to recognize these cues, even better than humans can. We are much more predictable than we like to think, and computers are good at spotting these patterns. By now, programs exist that can decide with high accuracy whether a text was written by a man or a woman, estimate the author's age, and infer a number of other attributes. Algorithms can locate a social media user to within a few dozen kilometers.
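To make that concrete: at its simplest, author profiling is set up as ordinary text classification, where surface patterns in spelling, punctuation and word choice are used to predict an attribute of the writer. The sketch below, in Python with scikit-learn, shows the general shape of such a system; the handful of example sentences and the "young"/"older" labels are invented purely for illustration and do not come from any of the tools mentioned here.

```python
# A minimal sketch of author profiling as text classification.
# The toy data is invented; real systems are trained on thousands of
# labelled texts (e.g. posts with self-reported age or gender).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: (text, author attribute) pairs.
texts = [
    "omg that concert was sick, totally going again!!!",
    "I reviewed the quarterly figures before the board meeting.",
    "lol my prof gave us sooo much homework",
    "We discussed the grandchildren's visit over tea.",
]
labels = ["young", "older", "young", "older"]  # could equally be gender, region, ...

# Character n-grams capture spelling, punctuation and slang patterns
# that tend to correlate with demographic groups.
profiler = Pipeline([
    ("features", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("classifier", LogisticRegression(max_iter=1000)),
])
profiler.fit(texts, labels)

print(profiler.predict(["that movie was sick lol"]))            # likely "young" on this toy data
print(profiler.predict(["I filed the report this morning."]))   # likely "older" on this toy data
```

Real profiling systems follow the same recipe, only with far more training data and richer features, which is what pushes their accuracy so high.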
 
Such tools are an invaluable help for social scientists, for linguists who study language variation, and for commercial applications such as fraud detection (is the user who they say they are?). They are also a necessary step if we want to teach computers how language differs between groups.

However, we need to be aware that such tools can also be used to profile us where we might not want it: our words can give us away online, whether we know it or not. So far, all of these tools are highly specialized, and no single system can do all of this at once, but we should be aware that one day it might.
 
And while the way we speak says something about us, the fact that we say something matters just as much as what we say. Telling a friend "I am sorry for the loss of your father" is important not just because of the meaning of the words, but because we say them. And this is something computers still cannot understand.
 
They only pay attention to what is said. This is not just an impediment to real understanding that limits the usefulness of language technology; it is also a potential problem. Because computers cannot distinguish between different groups of speakers, they do not understand everyone equally well. As a consequence, all those tools in our pockets and homes only work well for some of us. They are the equivalent of scissors made for right-handed people: inefficient, awkward, and potentially dangerous for the rest of us.
 
And as those tools become more widespread – in daily life, in industry, and in decision making – they risk disadvantaging an ever larger group of “left-handed” speakers.
 
However, it is possible – and in fact not too hard – to make computers aware of who is speaking, and to take that into account when analyzing what is said. Several papers have shown that a variety of techniques can help computers distinguish who is talking, and thereby get better at interpreting the content.
 
This is especially important when it comes to the intersection of language and personality. We know that many mental health conditions are reflected in the way people speak, and psychologists draw on this when they interview patients. However, any psychologist would find it ridiculous to listen only to the words. They always take into account who the person sitting across from them is: men and women, and people of different ages, are susceptible to different conditions, and they will talk about them in different ways.
 
We were able to show that a computer that pays attention to a patient's gender is much more likely to correctly recognize a variety of mental health conditions, identifying more than 120 additional patients at risk of suicide compared to a computer that pays attention only to the words. Such a tool could be a valuable aid for psychologists, who cannot be with their patients at all times.
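As a rough illustration of the general idea, and not of the actual system used in that study, here is a minimal Python sketch of one way to feed speaker information into a text classifier alongside the words. The example sentences, the labels and the choice of gender as the extra feature are all hypothetical.

```python
# A minimal sketch of a "speaker-aware" text classifier: word features and a
# demographic feature are built separately and then concatenated, so the model
# can condition its decision on who is speaking as well as on what is said.
# All data, labels and column names below are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

data = pd.DataFrame({
    "text": [
        "I can't sleep and nothing feels worth doing anymore",
        "Busy week, but the project is finally coming together",
        "I keep cancelling plans, I just want to be left alone",
        "Great run this morning, feeling energised",
    ],
    "gender": ["f", "m", "m", "f"],   # speaker metadata
    "label": ["at_risk", "not_at_risk", "at_risk", "not_at_risk"],
})

features = ColumnTransformer([
    ("words", TfidfVectorizer(), "text"),
    ("speaker", OneHotEncoder(handle_unknown="ignore"), ["gender"]),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(data[["text", "gender"]], data["label"])

# Toy prediction; with realistic training data the speaker feature lets the
# model adjust its decision for different groups.
print(model.predict(pd.DataFrame({"text": ["I feel so tired of everything"],
                                  "gender": ["f"]})))
```

A plain concatenation like this only lets the model shift its decision per group; richer approaches interact the speaker information with the word features, but the principle is the same.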
 
This is an encouraging thought, because it means that, while we are still a long way off, we may eventually be able to make computers truly understand who is talking, and not just what they are saying. It will take time and ingenuity, but we might get that much closer to getting computers to understand us.
 
