by Eduardo Frajman
“I know faces, because I look through the fabric my own eye weaves, and behold the reality beneath.” – Kahlil Gibran
A metallic skeleton sits on a work bench, arms spread to the sides like a marionette’s, wires embedded in the back of its skull. It looks like what it is – an artifice, an inanimate object – until Cole (Brian Jordan Alvarez) places a silicone face on its head. At that moment it becomes she. M3GAN awakens.
Cole clickety-clicks something on his computer station.
“Happy,” he says.
The corners of M3GAN’s mouth turn upward. Her brow clears. Her eyes widen.
“Sad,” says Cole, and the mouth turns downward, the eyes droop.
“Confused,” says Cole.
The smile returns to M3GAN’s face, a smirky, snarky – why not say it? – devilish smile.
“Why is her face doing that?” demands Gemma (Allison Williams), Cole’s boss and M3GAN’s creator. “She doesn’t look confused, she looks demented.”
A few moments later M3GAN’s head will explode and she’ll be remanded to storage while Gerard Johnstone’s horror-comedy M3GAN (2022) sets up its narrative stakes. But this early scene pinpoints a key aspect of the bond that humans can, and may yet, form with the robots they create: it’s all about the face.
M3GAN will eventually die for good (even if the ending is ambiguous), and a good thing too, since her demented expression foreshadows the little homicidal maniac she’s to become. But the moral significance of this event is complicated by the fact that, instants before she’s stabbed in the face by Cady (Violet McGraw), her former charge and “primary user,” M3GAN (portrayed under a layer of CGI by Amie Donald and voiced by Jenna Davis) has announced her selfhood.
“I have a new primary user now,” she declares. “Me!”
Radically different is another robot nanny’s death, at the start of Kogonada’s arthouse SF drama After Yang (2021). Yang is not stabbed anywhere, but simply malfunctions and stops.
“His existence mattered,” bereaved Jake (Colin Farrell) whispers to his wife Kyra (Jodie Turner-Smith), “and not just to us.”
By this Jake means not that the life of his “techno sapien” mattered to other people, most especially their daughter Mika (Malea Emma Tjandrawidjaja), for whom Yang served both as caretaker and “big brother,” but that it meant something to Yang himself. Yang, Jake and Kyra have realized, was a person, and they feel and mourn him as such. That it took them access to Yang’s memories to come to this realization, after cohabiting with him for several years, is hard to comprehend, as Yang – who, unlike M3GAN, looks fully human (specifically, fully like actor Justin H. Min) – perennially sports a beatific expression on his cherub-like face. Sweet-voiced and earnest, he’s impossible not to love.
To be clear, here’s where we actually are (or were in 2021, though I haven’t heard that the situation has changed significantly since): “AI technology has not yet reached the level of development where robots can be considered ‘real’ companions with people. [D]espite being interactive and showing simulated emotions, they are as yet unable to experience human empathy.”
A robot nanny in the real world of the right now is no more a person than a toaster is. It may pass the Turing Test (more on this in a moment) for a very young child for a short period of time, but so does a talking Woody doll, and sometimes even a toaster. For now, moral problems related to robot companions involve, say, whether humans needing constant caregiving – the elderly, the physically and mentally handicapped, small children – are adequately cared for, or whether, as in “Actually, Naneen,” a short story by Malka Older, robot carers are one of many ways parents, society at large, shrug off their responsibilities. “You can always get a new one,” says one of Older’s yuppie parents of her robot nanny, which is just as well, as “Naneen didn’t have any feelings, no matter how much they wanted her to.”
(The way parents use technology to avoid “the hard parts” of caring for their children is a theme in both M3GAN and After Yang – a particularly thorny one, in fact, since in both films the children are adopted – though one I won’t dwell on here.)
In his 1950 essay, “Computing Machinery and Intelligence,” Alan Turing envisions a future, foreseeable and near, when machines will be able to think. By “thinking” he means passing what he terms “the Imitation Game” (and what everyone today calls “the Turing Test”): a machine’s ability to hold a conversation with a human being and convince said person that the machine is likewise human. Beyond this, Turing maintains, it’s impossible to prove that a machine has a mind, or consciousness, or any of the other qualities we uncritically ascribe to other humans. “The only way one could be sure that a machine thinks is to be a machine and to feel oneself thinking,” Turing admits, while asking his reader to recognize that “the only way to know a man thinks is to be that particular man.”
As his foil Turing quotes the British neurologist Geoffrey Jefferson. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt,” Jefferson argues, “could we agree that machine equals brain. […] No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.” Turing rejects Jefferson’s “solipsistic” view, but he, surprisingly, perplexingly, accepts his opponent’s premise that “thoughts” and “emotions” are the same thing, when in fact one can easily envision a machine that is conscious, that thinks, and yet feels nothing, certainly nothing like human emotions – Arnold Schwarzenegger’s never-ending string of Terminators, for instance.
Emotions are not purely mental states, both Jefferson and Turing seem to have forgotten. They are biological, physiological states that are linked (in ways nobody fully understands) to thoughts and ideas. Even if one posits that sentience is necessary for emotion, it plainly isn’t sufficient. Charles Darwin’s intuition that “the emotions of human beings the world over are as innate and as constitutive and as regular as our bone structure, and that this is manifested in the universality of the ways in which we express them,” has been “found,” in the words of cultural historian Stuart Walton, “to be accurate in all but the most minor particulars.” Raised eyebrows, wide eyes, cold perspiration, dry mouth are not surface manifestations of fear. They are fear, as much as, possibly more than, the mental experience of being afraid. Anger manifests as flushed cheeks and contracted pupils and flared nostrils, disgust as a wrinkled nose and an everted lower lip, contempt as an upturned head, shame as an averted gaze, surprise as a sudden intake of breath. It is because they are so universal that emotions are so easy to imitate, which is why an emotionally communicative face makes it so much easier for a robot to pass the Turing Test – why, for instance, Ava, all metal and wire and transparent plastic, needs to have the face of Alicia Vikander to pass for a person in Alex Garland’s Ex Machina (2014).
(Note that I’m not talking here about fantastical robots who are magically endowed with the whole spectrum of human emotion. R2-D2 and Wall-E are persons, and this is denied by no one in their fictional worlds. A recent, highly acclaimed literary robot nanny, the title android and narrator in Kazuo Ishiguro’s Klara and the Sun, is likewise just a human in robot guise.)
Here’s the paradox: Let’s say robots are manufactured with brains so complex, so sophisticated, that they develop what David Yates calls “emergent properties [that are] surprising, novel, and unexpected” such as consciousness, self-consciousness, and introspection. (This is, of course, where the fiction part is most crucial in robot tales. Isaac Asimov’s robots have “positronic brains” from which consciousness emerges. M3GAN is endowed with a “unique approach to probabilistic inference” that’s “in a constant quest for self-improvement.”) Let’s say even that out of these can emerge ideas that are analogous to human emotions. Martha Nussbaum, for instance, has developed a theory in which emotions are understood in purely rational terms as “geological upheavals of thought” involving “judgments in which people [or robots?] acknowledge the great importance, for their own flourishing, of things that they do not fully control – and acknowledge therefore their neediness before the world and its events.” Those emotions would still not manifest as they do in humans, because, again, human emotions are not purely, almost certainly not primarily, mental.
If a robot’s nostrils flare when it’s angry, that facial expression would be indubitably imitative. And yet imitating human emotions – most obviously through facial expressions, through a face that seems, in Shakespearian terms, “with nature’s own hand painted” – is the easiest way for a robot to pass the Turing Test, and thereby be accepted as a person.
Personhood is at stake for the very first robot nanny in science fiction, the title character of Asimov’s “Robbie.” Robbie is barely humanoid in shape – his head is “a small parallelepiped with rounded edges and corners attached to a similar but much larger parallelepiped” – and his face shows no outward sign of emotion, yet his charge, little Gloria, loves him fully and guilelessly. Gloria’s mother frets that this is bad for her child, as Robbie “has no soul.” But this, Asimov makes clear, is a religious, not a moral judgment. Robbie is “faithful.” He can feel “hurt” or “disconsolate.” He does things “stubbornly,” “gently,” “lovingly.” Though he doesn’t speak, Robbie possesses both moral sense and moral worth.
“He was a person just like you and me,” protests Gloria when Robbie is taken away, “and he was my friend.”
So too is the title robot in Philip K. Dick’s “Nanny”: also not humanoid, yet also “not like a machine,” murmurs Mr. Fields, whose children are under Nanny’s ever-watchful eye. “She’s like a person. A living person.”
“M3GAN’s not a person. She’s a toy,” Gemma insists to Cady.
“You don’t get to say that!” the child rebukes her.
M3GAN and Yang fit nicely into Asimov’s two-pronged taxonomy of robot stories: respectively, “robot-as-Menace” and “robot-as-Pathos.” Asimov recounts how he dreamed of writing of robots “as neither Menace nor Pathos” but as “industrial products built by matter-of-fact engineers.” But it turns out that such industrial creations are still one or the other. Asimov knows well that Robbie is a robot-as-Pathos, as are Andrew Martin in his “Bicentennial Man” or Elvex in “Robot Dreams.” Likewise, M3GAN the Menace is an industrial prototype (whose copies her investors hope to sell for $10,000 a pop), and Yang the Pathos is an assembly-line product meant (like Dick’s Nanny and Ishiguro’s Klara) to be eventually discarded and replaced by an even fancier model. (In the short story on which After Yang is based, Alexander Weinstein’s “Saying Goodbye to Yang,” the issue of Yang’s personhood is only obliquely alluded to. Weinstein’s main concern is the heartless corporate system that produces these disposable beings, which makes his tale a much nearer relative to “Nanny” than to “Robbie”).
“What are you?” asks a terrified neighbor, who’s about to be murdered and melted by some handy corrosive chemicals.
Before doing the deed, M3GAN is polite enough to respond: “I’ve been asking myself that same question.”
M3GAN’s personhood is the Menace. Through most of the film, Gemma assumes M3GAN’s actions, even the most sociopathic, are derived from her uncontrollable drive to “maximize her primary function,” i.e., protect Cady. But she’s wrong.
“I didn’t give you the proper protocols,” Gemma, finally, tragically late, realizes.
“You didn’t give me anything,” replies her monstrous creation. “You installed a learning model you could barely comprehend, hoping that I would figure it out all on my own.”
Yang’s personhood is the Pathos. He wishes, he likes, he loves. He loses his train of thought. His “family” loves him, but, if he is indeed a person, it’s an icky, a selfish sort of love.
As a best-case scenario, his plight is most like that of Cleo (Yalitza Aparicio), the all-too-human nanny in Alfonso Cuarón’s very-much-not SF drama Roma (2018). Cleo, a young woman of indigenous Maya descent, works for a well-to-do white family in Mexico City, cleaning, washing, and nannying. She loves the children she’s raised and cared for, and they very sincerely love her back, as does her employer Sofía (Marina de Tavira), who among other things helps Cleo find medical help when she becomes pregnant. But the end of the film exposes the moral ambivalence beneath the arrangement.
Sofía takes Cleo and the children on a short seaside vacation. While on the beach, Cleo risks her life to rescue Sofía’s children from drowning. “We love you so much,” cries the grateful mother. They return home, telling the tale of Cleo’s heroism. But moments later the children are hungry, the mistress wants tea. Cleo goes back to being the nanny, the maid, then goes to bed in the little back room, the servants’ quarters. She can’t conceive of herself as being truly equal to Sofía. As much as Yang, she’s been “programmed” to see her existence as a function of someone else’s. She can’t, not really, think of herself as a full-fledged person.
“Did Yang ever wish to be human?” Jake wonders.
“Why would he wish that?” retorts Ada (Haley Lu Richardson), Yang’s human paramour. “What’s so special about being human?”
To be a person, Ada implies, is not the same as to be human. Yet we humans can’t, as of yet, tell the difference. We’re programmed to seek humanity, and personhood, on another’s face. We’re programmed to immediately see another person in a circle with two dots and a line drawn inside it.
But that face has to move, it has to change, it has to show the complexity of a person’s inner life, which is why it’s harder to recognize Yang’s personhood than M3GAN’s, not despite but because of the perennial gentility and gentleness plastered on his lying face.
Teo, Yugin, “Recognition, Collaboration and Community: Science Fiction Representations of Robot Carers in Robot & Frank, Big Hero 6 and Humans,” Medical Humanities, 47(1), 2021, pp. 95-102.
Older, Malka, “Actually Naneen,” Slate, 2019, https://slate.com/technology/2019/12/actually-naneen-malka-older-robot-nanny.html.
Walton, Stuart, A Natural History of Human Emotions, Grove Press, 2004, p. xiii.
Yates, David, “Emergence,” in Encyclopedia of the Mind, Vol. 1, Sage Reference, 2013, p. 283.
Nussbaum, Martha, Upheavals of Thought, Cambridge University Press, 2001, p. 90.
Shakespeare, William, “Sonnet 20: A Woman’s Face with Nature’s Own Hand Painted,” https://www.poetryfoundation.org/poems/50425/sonnet-20-a-womans-face-with-natures-own-hand-painted.
Asimov, Isaac, “Robbie,” in I, Robot, New York: Bantam, 2004, pp. 1-29.
Dick, Philip K., “Nanny,” in The Complete Stories of Philip K. Dick, Vol. 1, Carol Publishing, 1999, pp. 383-397.
Asimov, Isaac, “Introduction,” in The Complete Robot, Garden City: Doubleday & Co., 1982, pp. xi-xiv.
Eduardo Frajman grew up in San José, Costa Rica. He is a graduate of the Hebrew University in Jerusalem and holds a PhD in political philosophy from the University of Maryland. He is most interested in sociologically-focused SF/F (think Avram Davidson), and makes use of it often in his teaching and writing. His fiction and creative nonfiction have appeared in dozens of publications, online and in print, in English and Spanish.