Netflix's Lost in Space and Artificial Friendship

Netflix

“Help...friend.” 

Those two enigmatic words spoken by the Robot of Netflix’s Lost in Space encapsulate its second season’s theme. Season 1 revolves around exploring and protecting the new friendship that Will and the Robot forge after being mysteriously stranded on an alien planet. Released on Netflix right before the start of 2020, season 2 shows the two friends finding one another after being separated at the end of the previous season.

Friendship, and teaching others the true nature of friendship itself, is at the heart of the series. A common line uttered by several characters as the newest season reaches its climax is, “That's not what friends do.” Coworkers, parents and children, and married couples struggle with this issue in every single one of the twenty episodes thus far produced. The show's core, however, is the fascinating bond between teenage Will and his giant alien Robot, as Paul Tassi for Forbes points out:

The second season dives into the robot race which is mainly made up of murderous machines, rather than the “nice robot” that is friends with Will Robinson, and is one of the most compelling storylines of the series as it continues to grow and evolve adopting human-like emotion and affection for other living things. 

My wife still laments that the Robot hasn't been given a name! Alas--he's merely called The Robot. Nevertheless, he's endeared himself to fans in both hemispheres.

Will's bond with the monosyllabic machine brings this thought to my mind: Can you be friends with an artificial intelligence?

Defining AI

Before we can approach the intriguing concept of AI friendship, we must define artificial intelligence. Ask a group of scientists for a definition of artificial intelligence and you'll get a plethora of answers. The average person will be even less precise, offering something akin to “a machine that thinks” or “a computer that acts human,” with examples ranging from Siri to Optimus Prime.

These aren't much help. 

Famed computer science pioneer Alan Turing proposed a way to detect machine intelligence, known now as the Turing Test: if a computer can answer questions convincingly enough to trick a person into thinking it's human, then it is intelligent. Nowadays, with the rise of chatbots and virtual assistants, this test sets the bar too low for actual intelligence.

Lucasfilm/Disney

The Turing Test is still an important element in AI research, but it's by no means the final word. The field typically differentiates between Weak and Strong AI, as explained by the Stanford Encyclopedia of Philosophy:

“Weak” AI seeks to build information-processing machines that appear to have the full mental repertoire of human persons. ...“Strong” AI, on the other hand, seeks to create artificial persons: machines that have all the mental powers we have, including phenomenal consciousness. By far, [Strong AI is the area] that most popular narratives affirm and explore. The recent Westworld TV series is a powerful case-in-point.

Dr. Timothy Brown likewise posits that the now-commonplace "Weak" variety of artificial intelligence should be defined as "virtual reality," meaning that the software approximates human intelligence without actualizing it — like Siri. True "Strong" AI would be Data from Star Trek or the eponymous WALL-E.

Leave aside the other philosophical implications of AI, such as personhood and ethics. In the end, the definition of AI comes down to a metaphysical question: is intelligence based on behavior, or ontology?

This is a 21st-century addition to the Mind vs. Brain problem, which ponders whether consciousness is purely chemical in nature. A Mind has a non-physical aspect, while a Brain is purely physical. If consciousness is merely a matter of complex neuroscience, then, according to some, computers of comparable sophistication are inevitable. However, if your philosophy of Mind has a metaphysical component, then Strong AI is probably not possible.

For example, an algorithm can be trained to recognize and catalog circles, but only a mind can then abstract the concept of circularity from a specific shape. The medieval thinker Thomas Aquinas said that intelligence is the act of an intellect, or, to paraphrase, what a soul is doing when it is thinking. Soul here refers metaphysically to the immaterial, intrinsic nature of a person, not a separate, wispy substance that leaves the body at death — or when sucked out by a dementor.
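The gap between recognizing circles and grasping circularity is easy to show in code. Here's a minimal sketch in Python (my own illustration, with made-up names like sample_circle and roundness, not anything from the show or the sources quoted here): the program "learns" to label circles by comparing a single statistic to a threshold, yet at no point does it hold the concept it appears to apply.

```python
# A toy "circle recognizer": pure pattern matching, no concept of circularity.
import math
import random

def sample_circle(n=100):
    """Points on a unit circle (with a little positional noise)."""
    return [(math.cos(t) + random.gauss(0, 0.02),
             math.sin(t) + random.gauss(0, 0.02))
            for t in (2 * math.pi * i / n for i in range(n))]

def sample_square(n=100):
    """Points spaced along the boundary of a unit square, centered on the origin."""
    pts = []
    for i in range(n):
        t = 4 * i / n                       # walk the perimeter, 0..4
        side, frac = int(t), t - int(t)
        x, y = [(frac, 0), (1, frac), (1 - frac, 1), (0, 1 - frac)][side]
        pts.append((x - 0.5, y - 0.5))
    return pts

def roundness(points):
    """Single feature: variance of each point's distance from the centroid.
    Near zero means 'circle-like'; larger means 'not circle-like'."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(dists) / len(dists)
    return sum((d - mean) ** 2 for d in dists) / len(dists)

# "Training": pick a threshold halfway between one labeled example of each class.
threshold = (roundness(sample_circle()) + roundness(sample_square())) / 2

def classify(points):
    """Label a shape by comparing one number to the learned threshold --
    no concept of circularity is ever represented, only a statistic."""
    return "circle" if roundness(points) < threshold else "not circle"

print(classify(sample_circle()))  # expected: circle
print(classify(sample_square()))  # expected: not circle
```

The program outputs the right labels, but whatever "understanding" there is lives entirely in the feature the programmer chose; the machine itself only compares numbers.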

Warner Bros.

This hylomorphic (being = matter + form) understanding of existence matters for AI because, without a soul-ish intellect, a robot cannot achieve consciousness; the mind cannot come solely from matter. Since self-awareness is the lynchpin of Strong AI, it remains relegated to science fiction.

CBS

Sorry, Data.  

Whether positive (C-3PO) or negative (HAL 9000), other film and television examples of artificially intelligent friends remain on the fiction side of that boundary between fiction and reality.

What is Friendship?

Now that we understand AI philosophically, we can turn to the philosophy of friendship. Don't worry, this won't be as technical as AI. 

Giphy

The concept of friendship seems so elementary that it's tempting to ignore it. However, like all things in everyday life, friendship at a deeper level is inseparable from philosophy. So, what is friendship?

Colloquially, friends are the people you enjoy spending time with while sharing activities and interests. Some have even said friends are “the family you choose.” Deep thinking on this subject goes back to the ancient Greeks. Aristotle described friendship as coming in three flavors: enjoyment, usefulness, and mutual appreciation. His views are explored by Zat Rana:

While [Aristotle] saw the value in accidental friendships based on pleasure and utility, he felt that their impermanence diminished their potential. They lacked depth and a solid foundation.

Instead, Aristotle argued for a different kind of bond: 

Things that cause friendship are: doing kindnesses; doing them unasked; and not proclaiming the fact when they are done.

Kindness. Charity. Selflessness. Friendship is beginning to sound a lot like love! That's how Aristotle saw it too; he used the Greek word phileo to refer to this type of “brotherly” love. Therefore, to have a mutually edifying friendship, both parties must be able to love.

Notice I said “both parties.” That's the key to the AI friendship question! Stanford, please explain: 

Love and friendship often get lumped together as a single topic; nonetheless, there are significant differences between them. As understood here, love is an evaluative attitude directed at particular persons as such, an attitude which we might take towards someone whether or not that love is reciprocated and whether or not we have an established relationship with her. Friendship, by contrast, is essentially a kind of relationship grounded in a particular kind of special concern each has for the other as the person she is; and whereas we must make conceptual room for the idea of unrequited love, unrequited friendship is senseless. 

Even an animal can, in limited fashion, return the affection of a person, though it may not be self-aware. Star Trek: The Next Generation circumvented this problem by having Data explain friendship in a scientific yet humanizing fashion:

As I experience certain sensory input patterns, my mental pathways become accustomed to them. The input is eventually anticipated and even missed when absent.

Data’s identity as a synthetic being with personhood was explored extensively throughout the seven-season series and its spin-off films. The newly released Star Trek: Picard series plumbs this theme even further.

Still, Starfleet aside, would a real-world machine ever say, “I missed you,” unless it was programmed to? This leads us to the crux of the debate.

Can AI & Humans Truly Be Friends?

According to Lost in Space, this question has already been answered. Not all the characters came to that conclusion easily, however. Writing for The Verge, Karen Han points this out:

Lost in Space’s AI storyline should feel familiar to anyone even remotely interested in science fiction....The way characters choose to treat artificial intelligences is often a leading indicator of how the audience is meant to perceive them, and how their characters will develop. Will... immediately refers to the robot as ‘‘him’’ instead of ‘‘it,’’ a person rather than an object. Everyone else takes some time to adjust. Will’s mother sees a tool; his father sees a threat; [and] Dr. Smith sees a weapon.

Taking both seasons into account, here are more examples of how the boy's connection to the machine grows throughout the show:

  • Will saves the Robot's life when it's still intending him harm. This act of kindness forges their bond.

  • Afterward, the Robot continually does things to save Will, Judy, Penny, and their parents. 

  • The Robot trusts Will enough to walk off a cliff even though it’s against his best interest to do so.

  • When the horse he bonded with dies, the Robot holds onto the horse's bridle as a keepsake. 

  • Like Will, the Robot even begins to use the word “friend.”

  • The Robot deprioritizes his bond with Will to save another of his kind--and Will chooses to help him do so.

Netflix

I’m so used to anthropomorphizing the Robot that even I’m calling it a “him,” just like Judy and Penny learn to do from Will.

The anthropomorphic nature of the character is part and parcel of the story's identity as science fiction. Am I splitting hairs by differentiating between actual Strong AI and the fictional Strong AI of Lost in Space? Perhaps, but narrative media often shapes societal worldviews and expectations, especially around unrealized technology.

In a real-life context, this is reminiscent of the meditations of Amin Ebrahimi Afrouzi, an AI technologist and Ph.D. candidate: 

AI agents reflect on their actions and try to maximize their rewards. But in what sense could we say that they “reflect” on their “motivation(s)” or “actions”? We cannot simply ascribe such concepts to AI without anthropomorphizing. But anthropomorphizing only enables us to talk about what AI does, and not how it comes to do it.

Is this reciprocity between Will and the Robot learned, or innate? The Robot seems to grow in its conception of friendship the more time it spends with the Robinsons. In season 1 it’s a faithful but simple-minded companion, like a dog; in season 2 it shows real agency. Penny and Will even debate the matter: “He’s changed. But that’s okay since I’ve changed too.”

Thousands of years later, Aristotle's words reverberate into the vacuum of space (Yeah, I know nothing can echo in a vacuum, but humor me, okay?): 

Friends hold a mirror up to each other; through that mirror they can see each other in ways that would not otherwise be accessible to them, and it is this mirroring that helps them improve themselves as persons. 

In fact, Lost in Space frequently references a “telepathic connection” between Will and the alien machine. Will receives images and feelings through it, and the Robot could be gaining human characteristics in return, such as this enhanced capacity for friendship--assuming it wasn't present before (season 2 makes that seem unlikely).

With the addition of more planets and giant killer robots, season 2 makes it clear that the Robot is alien technology. The nature of the alien intelligence that created the robots remains unclear, as the Netflix story has yet to reveal the origin of these seemingly sentient machines. (My statement presumes that the robots were created.) This type of AI would be inherently different from our conception of AI, leading to a host of new conundrums. Robotics professor Murray Shanahan refers to this otherworldly source of intellect as consciousness exotica:

To explore the space of possible minds is to entertain the possibility of beings far more exotic than any terrestrial species. Could the space of possible minds include beings so inscrutable that we could not tell whether they had conscious experiences at all? To deny this possibility smacks of biocentrism. ...Either a being has conscious experience or it does not, regardless of whether we can tell.

“Whether we can tell.” This is another potential inroad to the necessity of mutuality in a true friendship. C.S. Lewis echoes this truth of phileo friendship in his masterful work The Four Loves:

Every step of the common journey tests his [a friend's] mettle; and the tests are tests we fully understand because we are undergoing them ourselves. Hence, as he rings true time after time, our reliance, our respect and our admiration blossom into an Appreciative Love of a singularly robust and well-informed kind. If, at the outset, we had attended more to him and less to the thing our Friendship is “about”, we should not have come to know…him so well. You will not find the warrior, the poet, the philosopher or the Christian by staring in his eyes… better fight beside him, read with him, argue with him, pray with him.

Try doing any of those things with an AI. Doesn’t work so well. An AI can search Google or turn the lights on, but can it argue or pray with you?

Even if you deny the hylomorphic composition of body and soul, an AI will never be “Strong” enough to cultivate friendship because AI cannot develop needs or goals, as cognitive science professor Margaret Boden points out: 

Will we be able to share with our AI ‘colleagues’ in-jokes over coffee, in the banter between rival football fans, in the arguments about the news headlines, in the small triumphs of standing up to a sarcastic or bullying boss? No – because computers don’t have goals of their own. ...It makes no sense to imagine that future AI might have needs. They don’t need sociality or respect in order to work well. A program either works, or it doesn’t. ...The users and designers of AI systems – and of a future society in which AI is rampant – should remember the fundamental difference between human and artificial intelligence: one cares, the other does not.

Can you be friends with an AI?

Perhaps, but it won’t be friends with you.

AI responds to external stimuli according to its programming—it has no intellectual “soul.” Friendship is relational, and algorithms can only compute, not comprehend. The machine will not care about you the way you care about it; at best, what you'll have is unrequited friendship, which is an oxymoron.

Unless you meet this robot. Then you can totally be best buds.

Netflix

What’s your favorite on-screen robot or AI? Let me know in the comments! (As you can tell, I’m leaning towards Lt. Cmd. Data.)