Part II: Speed
An AI Pal That Is Better Than “Her”
The charming automated assistant in Spike Jonze’s new movie isn’t realistic. But if they were designed thoughtfully, computerized interlocutors could make us better people.
By Greg Egan | January 24, 2014
[Time 2] In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.
Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?
Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.
But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions is being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?
Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perception of novel reading has traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.
[375 words]
[Time 3] Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.
There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cow it might even do some good.
The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.
But perhaps this is an overoptimistic view of where the market lies; self-knowledge might not make the strongest selling point. The dark side that Her never really contemplates, despite a brief, desultory feint in its direction, is that one day we might give our hearts to a charming voice in an earpiece, only to be brought crashing down by the truth that we’ve been emoting into the void.
[350 words]
Source: MIT Technology Review http://www.technologyreview.com/review/523826/an-ai-pal-that-is-better-than-her/
Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat
By Sid Perkins | October 22, 2013
[Time 4] Computers already make all sorts of decisions for you. With little or no human guidance, they deduce what books you would like to buy, trade your stocks and distribute electrical power. They do all this quickly and efficiently using a simple form of artificial intelligence. Now, imagine if computers controlled even more aspects of life and could truly think for themselves.
Barrat, a documentary filmmaker and author, chronicles his discussions with scientists and engineers who are developing ever more complex artificial intelligence, or AI. The goal of many in the field is to make a mechanical brain as intelligent (creative, flexible and capable of learning) as the human mind. But an increasing number of AI visionaries have misgivings.
Science fiction has long explored the implications of humanlike machines (think of Asimov’s I, Robot), but Barrat’s thoughtful treatment adds a dose of reality. Through his conversations with experts, he argues that the perils of AI can easily, even inevitably, outweigh its promise.
By mid-century — maybe within a decade, some researchers say — a computer may achieve human-scale artificial intelligence, an admittedly fuzzy milestone. (The Turing test provides one definition: a computer would pass the test by fooling humans into thinking it’s human.) AI could then quickly evolve to the point where it is thousands of times smarter than a human. But long before that, an AI robot or computer would become self-aware and would not be interested in remaining under human control, Barrat argues.
One AI researcher notes that self-aware, self-improving systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy. Some people hesitate to even acknowledge the possible perils of this situation, believing that computers programmed to be superintelligent can also be programmed to be “friendly.” But others, including Barrat, fear that humans and AI are headed toward a mortal struggle. Intelligence isn’t unpredictable merely some of the time or in special cases, he writes. “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”
Humans, he says, need to figure out now, at the early stages of AI’s creation, how to coexist with hyperintelligent machines. Otherwise, Barrat worries, we could end up with a planet, and eventually a galaxy, populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators.
[382 words]
Source: Science News https://www.sciencenews.org/article/our-final-invention
The computer will see you now
A virtual shrink may sometimes be better than the real thing
Aug 16th 2014 | From the print edition
[Time 5] ELLIE is a psychologist, and a damned good one at that. Smile in a certain way, and she knows precisely what your smile means. Develop a nervous tic or tension in an eye, and she instantly picks up on it. She listens to what you say, processes every word, works out the meaning of your pitch, your tone, your posture, everything. She is at the top of her game but, according to a new study, her greatest asset is that she is not human.
When faced with tough or potentially embarrassing questions, people often do not tell doctors what they need to hear. Yet the researchers behind Ellie, led by Jonathan Gratch at the Institute for Creative Technologies in Los Angeles, suspected from their years of monitoring human interactions with computers that people might be more willing to talk if presented with an avatar. To test this idea, they put 239 people in front of Ellie to have a chat with her about their lives. Half were told (truthfully) they would be interacting with an artificially intelligent virtual human; the others were told (falsely) that Ellie was a bit like a puppet, and was having her strings pulled remotely by a person.
Designed to search for psychological problems, Ellie worked with each participant in the study in the same manner. She started every interview with rapport-building questions, such as, “Where are you from?” She followed these with more clinical ones, like, “How easy is it for you to get a good night’s sleep?” She finished with questions intended to boost the participant’s mood, for instance, “What are you most proud of?” Throughout the experience she asked relevant follow-up questions—“Can you tell me more about that?” for example—while providing the appropriate nods and facial expressions.
[336 words]
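To make the staged flow described above concrete, here is a minimal Python sketch of a three-stage interview with a generic follow-up probe. The three example questions are quoted from the article; everything else (the stage names, the single catch-all follow-up, the function itself) is an assumption for illustration, not a description of ICT’s actual system.

```python
# Hypothetical sketch of a staged interview like Ellie's, as the article
# describes it: rapport-building questions, then clinical ones, then
# mood-boosting ones, with a follow-up probe after each answer.

STAGES = [
    ("rapport", ["Where are you from?"]),
    ("clinical", ["How easy is it for you to get a good night's sleep?"]),
    ("mood_boost", ["What are you most proud of?"]),
]

FOLLOW_UP = "Can you tell me more about that?"  # generic stand-in probe

def run_interview(get_response):
    """Walk through each stage, asking a follow-up after every answer."""
    transcript = []
    for stage, questions in STAGES:
        for question in questions:
            answer = get_response(question)
            transcript.append((stage, question, answer))
            # The real system chooses relevant follow-ups; this sketch
            # always uses the same generic one.
            transcript.append((stage, FOLLOW_UP, get_response(FOLLOW_UP)))
    return transcript

# Example usage with console input:
# transcript = run_interview(lambda q: input(q + " "))
```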
[Time 6] Lie on the couch, please
During their time with Ellie, all participants had their faces scanned for signs of sadness, and were given a score ranging from zero (indicating none) to one (indicating a great degree of sadness). Also, three real, human psychologists, who were ignorant of the purpose of the study, analyzed transcripts of the sessions, to rate how willingly the participants disclosed personal information.
These observers were asked to look at responses to sensitive and intimate questions, such as, “How close are you to your family?” and, “Tell me about the last time you felt really happy.” They rated the responses to these on a seven-point scale ranging from -3 (indicating a complete unwillingness to disclose information) to +3 (indicating a complete willingness). All participants were also asked to fill out questionnaires intended to probe how they felt about the interview.
Dr Gratch and his colleagues report in Computers in Human Behavior that, though everyone interacted with the same avatar, their experiences differed markedly based on what they believed they were dealing with. Those who thought Ellie was under the control of a human operator reported greater fear of disclosing personal information, and said they managed more carefully what they expressed during the session, than did those who believed they were simply interacting with a computer.
Crucially, the psychologists observing the subjects found that those who thought they were dealing with a human were indeed less forthcoming, averaging 0.56 compared with the other group’s average score of 1.11. The first group also betrayed fewer signs of sadness, averaging 0.08 compared with the other group’s 0.12 sadness score.
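For readers who want the arithmetic spelled out, here is a minimal sketch of how such a group comparison might be computed. Only the scale endpoints and the reported group means (0.56 vs 1.11) come from the article; every per-subject number below is a made-up placeholder, not data from the study.

```python
# Illustrative sketch of the study's scoring arithmetic, not its real data.
# Disclosure ratings run from -3 (complete unwillingness to disclose) to +3
# (complete willingness); sadness scores run from 0 to 1.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-subject ratings for the two conditions (told "human
# operator" vs. told "computer"); the study reported means of 0.56 and 1.11.
disclosure = {
    "believed_human":    [0.0, 0.5, 1.0, 0.5],
    "believed_computer": [1.0, 1.5, 1.0, 1.0],
}

for group, scores in disclosure.items():
    # Every rating must stay on the seven-point scale.
    assert all(-3 <= s <= 3 for s in scores)
    print(f"{group}: mean disclosure = {mean(scores):.2f}")
```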
This quality of encouraging openness and honesty, Dr Gratch believes, will be of particular value in assessing the psychological problems of soldiers—a view shared by America’s Defence Advanced Research Projects Agency, which is helping to pay for the project.
Soldiers place a premium on being tough, and many avoid seeing psychologists at all costs. That means conditions such as post-traumatic stress disorder (PTSD), to which military men and women are particularly prone, often get dangerous before they are caught. Ellie could change things for the better by confidentially informing soldiers with PTSD that she feels they could be a risk to themselves and others, and advising them about how to seek treatment.
If, that is, a cynical trooper can be persuaded that Ellie really isn’t a human psychologist in disguise. Because if Ellie can pass for human, presumably a human can pass for Ellie.
[414 words]
Source: The Economist http://www.economist.com/news/science-and-technology/21612114-virtual-shrink-may-sometimes-be-better-real-thing-computer-will-see