From Hal to Kismet: Your Evolution Dollars at Work


by Steve Talbott

In Flesh and Machines Rodney Brooks writes that January 12, 1992, marked "the most important imaginary event in my life". On that day in the movie "2001: A Space Odyssey", the HAL 9000 computer was given life. "Of course, HAL turns out to be a murdering psychopath, but for me there was little to regret in that. Much more importantly HAL was an artificial intelligence that could interact with people as one of them.... HAL was a being. HAL was alive." Brooks, who directs the prestigious Artificial Intelligence Laboratory at MIT, goes on to speak of his protégée, Cynthia Breazeal:

On May 9, 2000, Cynthia delivered on the promise of HAL. She defended her MIT Ph.D. thesis about a robot named Kismet, which uses vision and speech as its main input, carries on conversations with people, and is modeled on a developing infant. Though not quite the centered, reliable personality that was portrayed by HAL, Kismet is the world's first robot that is truly sociable, that can interact with people on an equal basis, and which people accept as a humanoid creature.... People, at least for a while, treat Kismet as another being. Kismet is alive. Or may as well be. People treat it that way.

The Human Response

All this occurs in a chapter entitled "It's 2001 Already" -- which may rouse your curiosity about Kismet's abilities. The robot's "body" is nothing but a "head" mounted on a mobile platform. Its dominant feature consists of two realistic, naked eyeballs, which are accompanied by rough indications of ears, eyebrows, and mouth. These are all moved by small motors.

Kismet (who has been featured in virtually all the major journalistic venues) is widely advertised as a sociable robot. Brooks tells us that it gets "lonely" or "bored" due to a set of "internal drives that over time get larger and larger unless they are satiated". These drives are essentially counters that tabulate, in relation to time, the number of interactions the robot has with moving things, or things with saturated colors ("toys"), or things with skin colors ("people").

Kismet also has a "mood", which can be affected by the pitch variations in the voices of people who address it. Brooks speaks of the automaton as being "aroused", "surprised", and "happy" or "unhappy" -- the emotional state in each case being another name for a numerical parameter calculated from the various environmental signals the robot's detectors are tuned for. Despite Brooks' easy references to conversation, Kismet is not designed to reckon with the cognitive structure of speech, and its own speech consists of nonsense syllables, pitch-varied to suggest emotion.
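To see just how prosaic this bookkeeping is, consider a minimal sketch in Python of the kind of mechanism just described. This is not Kismet's actual code; the class, the rates, and the pitch-to-mood mapping are all invented here for illustration. But they show how a "drive" can be nothing more than a counter, and a "mood" nothing more than a derived number.

    # A minimal sketch (not Kismet's actual code) of the mechanism
    # described above: a "drive" is a counter that grows with time and
    # shrinks when satiated; a "mood" is a number computed from pitch
    # variation. All names and constants are invented for illustration.

    class Drive:
        def __init__(self, name, growth_rate=0.2):
            self.name = name
            self.level = 0.0            # the "loneliness"/"boredom" parameter
            self.growth_rate = growth_rate

        def tick(self, dt=1.0):
            # The drive gets "larger and larger" while unsatiated.
            self.level += self.growth_rate * dt

        def satiate(self, amount):
            # An interaction -- a "toy" or a "person" in view -- reduces it.
            self.level = max(0.0, self.level - amount)

    def mood_from_pitch(pitch_samples):
        # Map pitch variation in an addressing voice to one "mood" number:
        # rising, varied pitch pushes it up ("happy"); flat or falling
        # pitch pushes it down ("unhappy").
        if len(pitch_samples) < 2:
            return 0.0
        variation = max(pitch_samples) - min(pitch_samples)
        trend = pitch_samples[-1] - pitch_samples[0]
        return 0.5 * trend + 0.5 * variation

    social = Drive("social")
    for _ in range(60):        # a minute with nothing to interact with ...
        social.tick()
    social.satiate(10.0)       # ... then a skin-colored region is detected

    print(social.level)                      # 2.0 -- "loneliness", satiated
    print(mood_from_pitch([180, 220, 260]))  # 80.0 -- a "happy" parameter

Whatever names we attach to these quantities, nothing in the arithmetic itself obliges us to call them emotions.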

So the typical scenario has Kismet patrolling a hallway, detecting motion (probably a person), and approaching the moving object. Its detectors, software, and motors are designed to enable it to make appropriate eye contact and to engage in emotionally suggestive, if otherwise vacuous, conversation. First encounters with Kismet tend to be marked by surprise (genuine, at least on the human side), which leads to all sorts of interesting and peculiar human-robot interaction.

This, in turn, seems to provide the developers with great satisfaction; if people respond to Kismet in some way as to a sentient creature, then Kismet must somehow be a sentient creature -- "not quite" HAL, as Brooks modestly allows, but apparently close enough for government (or MIT) work.

We are led back, then, to Brooks' observation that "Kismet is alive. Or may as well be. People treat it that way". This, as nearly as I can tell, is just about the entire substance of his argument that robots are living creatures. He periodically acknowledges that his own robots currently lack certain creaturely capacities, but, hell, people sure seem to regard them as alive, so what's the difference?

How Do You Simulate Life?

At one point Brooks seems about to launch an inquiry into the reality of the matter. "It is all very well for a robot to simulate having emotions", he writes,

And it is fairly easy to accept that the people building the robots have included models of emotions. And it seems that some of today's robots and toys appear to have emotions. However, I think most people would say that our robots do not really have emotions.

Brooks' response to this line of thought is to draw on a cliché of artificial-intelligence literature: he compares airplanes with birds. Although planes do not fly in the manner of birds -- they neither flap their wings nor burn sugar in muscle tissue -- we do not denigrate their performance as a mere simulation of flying. They really do fly. So Brooks wonders, "Is our question about our robots having real emotions rather than just simulating having emotions the same sort of question as to whether both animals and airplanes fly?"

He seems reluctant to state his answer directly, but his argument throughout Flesh and Machines makes it clear that he equates "lifelike" with "alive", even if that means, rather mysteriously, "alive in a different way". In speaking of Genghis, a primitive, insect-like robot, he tells us that the software and power supply transform a "lifeless collection of metal, wire, and electronics" into an "artificial creature":

It had a wasplike personality: mindless determination. But it had a personality. It chased and scrambled according to its will, not to the whim of a human controller. It acted like a creature, and to me and others who saw it, it felt like a creature. It was an artificial creature.

"If it feels like one, it must be one" seems to be how the argument goes. Not much interest in distinctions here.� Nor much timidity.� "Kismet is not HAL", Brooks concedes, "but HAL [who could 'never be your friend'] was not Kismet either.� Kismet gets at the essence of humanity and provides that in a way that ordinary people can interact with it".

The essence of humanity? Brooks lives in a world of excruciating and embarrassing naïveté -- a world where a child's doll programmed to say it is hungry somehow has genuine "wants" and "desires", and where a robotic insect programmed to follow sources of infrared can be said to be hunting "prey". And if any unwelcome doubts should arise, they can be dispelled by all those humans who react to the robots as if they harbored intelligence and feelings.

Missing Authors

Brooks could have risen above this naïveté had he been willing to reckon with the obvious distinction between artifact and artificer. Yes, his robots harbor intelligence, and yes, people respond to this intelligence -- just as they respond to the intelligence in a printed text or in the voice output of a radio loudspeaker. In each of these cases we would be crazy to ignore the meaning we are confronted with. After all, just as a vast amount of cultural and individual expression lies behind the development of the alphabet and the printing of the text on the page, so also a great deal of analysis and calculation lies behind the formulation of the computational rules governing Kismet's actions. To ignore Kismet would be to ignore all this coherently formulated human intention. We could not dismiss what humans have invested in Kismet without dehumanizing ourselves.

The problem we face with robots is that the text and voice have now been placed in intimate relation with moving machinery that roughly mimics the human body. And whereas the authors behind the words of book and radio can easily be imagined as historically existent persons despite being less concrete and more remote than face-to-face conversants, this is not the case with the robot. Here the authors have contrived a manner of generating their speech involving numerous layers of mediating logic behind which it is difficult to identify any particular speaker.

What, then, can we respond to, if not the active, gesticulating thing in front of us -- even if the response is only one of annoyance? The speakers have vanished completely from sight, and yet here we are back in an apparently face-to-face relationship! -- a relationship with something that clearly is a bearer of intelligence. Far easier to assign the intelligence solely to the machine than to seek out the tortured pathway from the true speakers to the speech we are witnessing.

This, incidentally, captures on a small scale the problem we face in relating to the dictates of society as a whole. Who is the speaker behind this or that bureaucratic imperative? It is often almost impossible to say, so we are content to grumble about a personalized "System" that begins to take on a machine-like face. And the System is personal, inasmuch as intentional human activity lies behind all its manifestations, even if this activity has been reduced according to our own mechanizing tendencies. In other words, society itself is unsurprisingly assuming the character of our technology.

None of this, however, excuses our failure to make obvious distinctions in principle. Yes, every human creation is invested with intelligence in one form or another, and it would be pathological for us to ignore this fact in our reactions. But it is also pathological to fail to recognize the asymmetrical relation between artifact and artificer. (This was the primary point of "Intelligence and Its Artifacts" in NF #148, which was actually begun as a response to Brooks' book.)

For all our difficulty in identifying the authors behind a computer's output, we can hardly say that no authoring has gone on, or that the distinction between the authors and the product of their authoring has somehow been nullified. Difficulty in tracing authorship does not by a single degree elevate a printed page to the status of author in its own right. If Brooks wants to argue that Kismet, once spoken by its creators, was somehow transformed from speech into speaker, he needs to make the argument. Instead he simply ignores the distinction in all its obviousness.

Let me put it this way: if Brooks acknowledges a difference in kind between the intelligence of an author and that of a printed page, or between the intelligence of an engineer and that of a doorbell circuit, then he owes us an elucidation of how this distinction plays out in his robots. If there is something intrinsic to the idea of complexity or the idea of moving parts that negates or overcomes the distinction -- something that transforms text into author, designed mechanism into designer -- then we need to know what this something is. What is the principle of the transformation?

Learning from Kismet

In an interview with a New York Times reporter (June 10, 2003), Kismet's creator, Cynthia Breazeal, remarks that "human babies learn because adults treat them as social creatures who can learn". Her hope for Kismet was that "if I built an expressive robot that responded to people, they might treat it in a similar way to babies and the robot would learn from that". The Times reporter then asked the obvious question: "Did your robot Kismet ever learn much from people?" This was Breazeal's answer:

From an engineering standpoint, Kismet got more sophisticated. As we continued to add more abilities to the robot, it could interact with people in richer ways. And so, we learned a lot about how you could design a robot that communicated and responded to nonlinguistic cues; we learned how critical it is for there to be more than language in an interaction -- body language, gaze, physical responses, facial expressions. But I think we learned mostly about people from Kismet. Until it, and another robot built here at MIT, Cog, most robotics had little to do with people. Kismet's big triumph was that he was able to communicate a kind of emotion and sociability that humans did indeed respond to, in kind. The robot and the humans were in a kind of partnership for learning.

I'm glad Kismet taught Breazeal and her engineering colleagues that bodily expression plays an important role in human communication. But as for the issue at hand: her answer tells us nothing about any actual "partnership for learning". With an all too characteristic slippage between points of view, she answers a question about Kismet's learning by citing only the engineers' learning. This would be all to the good if she could keep the two perspectives distinct and get clear about them. But the whole enterprise depends upon confusion. And so Breazeal concludes the interview by mentioning that she is now working on a new robot, Leonardo. But Kismet, who has been retired to the MIT museum, "isn't gone; it's just now taking the next step in its own evolution through Leonardo".

But what does this mean, "its own evolution"? Presumably Kismet is sitting on a shelf in the museum, or else moving about and pestering visitors. The one thing it's not busy doing is evolving. That is the engineers' task. Apparently, the grotesque illogic of saying that Kismet is evolving is a small matter for someone who has already managed to convince herself that a handful of numerical parameters are signifiers of emotion.

It seems to me profoundly significant that so many people today can routinely characterize the engine rather than the engineer, the design rather than the designer, the speech rather than the speaker, as the subject of evolution. Here is a refusal to face ourselves as creative spirits or as anything more than machines, followed by a projection of our missing selves onto our machines. Such a refusal and projection can only lead, not to the evolution of machines, but to the end of our own evolution.

Related articles:
"Flesh and Machines: The Mere Assertions of Rodney Brooks" in NF #146:
  http://www.netfuture.org/2003/Jun2403_146.html
"Intelligence and Its Artifacts" in NF #148:
  http://www.netfuture.org/2003/Aug0503_148.html
See also the articles listed under "Artificial intelligence" in the NetFuture topical index:
  http://www.netfuture.org/inx_topical_all.html

© Steve Talbott
[email protected]

This article was originally distributed as part of NetFuture: http://www.netfuture.org. You may redistribute this article for noncommercial purposes, with this notice attached.