“If being human is not simply a matter of being born flesh and blood, if it is instead a way of thinking, acting, and feeling, then I am hopeful that one day I will discover my own humanity.”
Thus spoke one of “Star Trek’s” most memorable (and endearing) characters, the android Data from “Star Trek: The Next Generation.” In contrast to his stoic predecessor, Mr. Spock, Data seeks to understand and experience emotions — which in the show and elsewhere are nearly always placed in opposition to logic.
Yet emotions and logic are not as opposed as they are often made out to be. On an evolutionary level they are fundamentally interdependent, a fact that has significant implications for how feelings might manifest in artificial intelligences. To understand this, we need to investigate the logic behind emotion and the emotion that guides logic.
But first: What exactly are emotions? It’s pretty much impossible to explain what they feel like in any non-self-referential form. What they are is at least an approachable question.
I once heard an excellent description of emotions as the world’s fastest conclusion-drawing machine: Your brain synthesizes a huge amount of information into an impression, a feeling. Emotions may not be the most accurate method to analyze circumstances, but they are an efficient way to assess — and react to — many situations.
Inasmuch as emotions involve integrating all kinds of information, including physical perceptions and past experiences, they are an emergent property of our neural system. Theories that brain regions are each individually responsible for generating different emotions — i.e., that one region of your brain makes you happy, another makes you angry and so on — simply don’t hold up. A more scientifically promising model is that brain regions are all networked together, like computers over the internet, and that different regions work together to produce different emotions.
Is it so far-fetched, then, to imagine that emotions might emerge in sentient A.I. even without our deliberately putting them there? While a robot may never experience physical emotional cues such as an elevated heartbeat, cold sweat or butterflies in the stomach, its mental perceptions of feeling could still exist. If emotions are indeed akin to an instantaneous summing up of inputs, then could a conscious machine be said to experience emotions as its internal processes produce new conclusions?
Depending on the inputs, the answer may not be meaningful. What it “feels like” to calculate the square root of two is not very relevant to any emotions we classically recognize as human.
A better question is: What would an A.I. do with the ability to process the kind of information that humans care about — like who is friend and foe, what is polite in a given situation and what one does to have fun?
After all, from an evolutionary standpoint, emotions are a way of motivating behavior: Fear induces prey to flee predators, affection facilitates group cohesion, happiness rewards beneficial activities. The logic behind emotions is that they push individuals to do what needs to be done to survive and reproduce. For the A.I. we eventually build, if we design their “brains” to be more like modern computer chips, we may directly program them to have a goal, similar to how the purpose of ad-blocking software is to block ads. If instead we model their “brains” on biological ones, we may use an artificial selection process whereby we continually experiment with designs for A.I. and keep only the ones that behave the way we want. In this way, we might indirectly choose characteristics (e.g., processing speed, emotional awareness) that favor a certain outcome (e.g., efficiency, fondness for humans) — a bit like how we domesticated wolves into dogs.
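To make the second approach concrete, here is a minimal sketch of such a selection loop, purely as an illustration and not a description of any real A.I. system. Everything in it is a hypothetical stand-in: the Design traits, the score_behavior function and the mutation step simply play the roles of “designs we experiment with” and “behavior we want to keep.”

```python
# Toy sketch of "artificial selection" over candidate A.I. designs:
# generate designs, score how well each behaves, keep the best,
# and refill the pool with slight variations on the survivors.
# All names and numbers here are hypothetical illustrations.

import random
from dataclasses import dataclass


@dataclass
class Design:
    processing_speed: float      # stand-in trait: how quickly the agent reacts
    fondness_for_humans: float   # stand-in trait: how cooperatively it behaves


def score_behavior(d: Design) -> float:
    """Toy stand-in for 'does this design behave the way we want?'"""
    return 0.4 * d.processing_speed + 0.6 * d.fondness_for_humans


def mutate(d: Design) -> Design:
    """Small random tweaks, like the variation breeders rely on."""
    return Design(
        processing_speed=max(0.0, d.processing_speed + random.gauss(0, 0.1)),
        fondness_for_humans=max(0.0, d.fondness_for_humans + random.gauss(0, 0.1)),
    )


def select(generations: int = 50, population: int = 20, keep: int = 5) -> Design:
    pool = [Design(random.random(), random.random()) for _ in range(population)]
    for _ in range(generations):
        # Keep only the designs that behave most like what we want...
        survivors = sorted(pool, key=score_behavior, reverse=True)[:keep]
        # ...and refill the pool with slight variations on the survivors.
        pool = survivors + [
            mutate(random.choice(survivors)) for _ in range(population - keep)
        ]
    return max(pool, key=score_behavior)


if __name__ == "__main__":
    best = select()
    print(f"Selected design: {best}, score = {score_behavior(best):.2f}")
```

The point is only that nothing in the loop programs the goal directly; it repeatedly keeps whatever happens to score well, much as breeders kept the friendliest wolves.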
In either case, we must consider what goals a rational A.I. should possess. Too specific, and the system will break down once the objectives are achieved: A robot whose job is to construct one specific building will stop working after that building is done. Too general, and too many methods become acceptable: Imagine a robot programmed as a bodyguard who concludes that the best way to protect its owner is to lock her into a room where no one can threaten her. Logic itself, after all, is just a means. To use it, you need at least a starting point (from which to trace implications: “if x, then y”) and preferably also an endpoint (to which you can scope out a reasonable path). Emotions provide us with starting points. Moral values supply the endpoints. Together, they motivate action.
Thus, when it comes to developing artificial intelligences — especially ones that draw logical, accurate conclusions, as our computers predictably do — we need to be cognizant of how emotions and values might manifest in their minds. What will motivate them to act? If we think about the characteristics we emphasize when we design them, perhaps we can piece together their emergent feelings in the same way that we explain human emotions evolutionarily, in the context of survival.
In the meantime, we’ll have to consider whether a trend toward more interactive, human-friendly software and robots like Siri and Diego-san will produce a race of machines as flawed as we are, only smarter, stronger and more powerful.
Alternatively, we may end up with the benign Data, whose programming sets him apart from his crewmates but whose struggle to be human makes him humanly relatable. We could stand to learn from his curiosity, his earnestness, his honesty — and his dedication to one day “discover [his] own humanity.”
“Until then,” he tells us, “I will continue learning, changing, growing and trying to become more than what I am.”
Contact Mindy Perkins at mindylp ‘at’ stanford.edu.