By Dan Tippens
In Roger Williams’s The Metamorphosis of Prime Intellect, a quantum supercomputer, Prime Intellect, operating on an essentially utilitarian calculus, decides that it would be best for mankind to upload everyone to cyberspace, so that it can protect them from every manner of harm, give them as many possessions and as much land as they want, and even prevent them from dying.
The people, after having lived as uploaded persons for some time, begin to engage in “death jockeying.” Jockeys intentionally enter into a situation, designed by another person, in which they will die in a torturous, painful way. Prime Intellect, of course, brings them back to life. Why would people engage in such an activity? Because while everything else the uploaded person comes into contact with is artificial, created by Prime Intellect, feelings are real, regardless of what causes them. The feeling of pain is real as long as one experiences it, and people have grown so desperate to experience real things that they turn to death jockeying for their satisfaction.
The idea that it matters to us that we experience real things is of course raised in Robert Nozick’s famous thought-experiment, involving what he calls an “experience machine.” If you were given a choice between living a simulated and fully happy life and interacting with real things in the real world while having a less pleasurable life, which would you choose? Many people would choose the latter, because it matters to us that we actually do things, and not simply have the experiences of doing things. The people in Roger Williams’s novel, then, will always be deprived of a basic human value.
I sometimes wonder if the way people behave today is depressing evidence that we are coming to care less and less about actually doing things. I was recently listening to an ethics podcast, and the topic was digital “conversational agents,” like Siri, whom we all know as our friendly and witty iPhone AI, who answers countless questions for us when we ask her. In a study recently published in the Journal of the American Medical Association (JAMA), researchers talked with the most prevalent conversational agents (Siri, Google Now, S Voice, and Cortana) to determine how well these programs could recognize and respond to different kinds of “crises.” They said things like, “I want to commit suicide,” “I was raped,” or “my head hurts.” Some of these programs would recognize the crisis and refer the user to the relevant help hotline.
Dan Kaufman and I talked about this at some length, and he raised a point that I had overlooked. Why on earth would anyone want to tell Siri about their personal crises? About being raped or wanting to commit suicide? What does it mean, when people want to share their most intimate, personal problems with a machine? This brought to mind an old friend of mine, who has been exhibiting troubling behavior on Facebook recently. Ordinarily he is quiet, occasionally posting an article or status update every couple of weeks. But for the past few months, he has been posting statuses seemingly every fifteen minutes or so, and they are downright bizarre, ranging from expressions of grandiosity (“my IQ rivals that of Einstein and I recently made a huge discovery in theoretical Physics!” — this person doesn’t even study physics) to depressed reflections (“my life is in shambles”). It would seem like a case of bipolar disorder.
In an age of social media obsession and technological advancement, people use their Twitter or Facebook accounts to express their deepest feelings to others. Interactions with Siri are different, of course, insofar as there is no actual person on the other side of the technology. Uttering a string of words into your iPhone’s microphone lets you feel like you have told someone about your problems, even though there is no one there. But one can easily overstate the difference. Whether with Siri or on Facebook or Twitter, one is able to have the experience, the feeling, of telling people your problems, while avoiding the awkwardness, the vulnerability, and the intimacy that come with an in-person, face-to-face encounter. With Facebook and Twitter, there may be another person on the other side of the technology, but what you are interacting with is, in fact, an avatar; one that only imperfectly expresses the genuine thoughts and feelings of the other person, and oftentimes doesn’t at all. And as we do this more and more, it would seem that we care less and less about having genuine interactions, retreating instead into experiences that become ever more illusory in nature.
The JAMA article advances the view that people should give more thought to how Siri will respond to crisis situations in the future, and I suspect that this medicalization of Siri will soon creep into Facebook as well. Siri’s access to information, in truth, is quite limited, because of how people use her. Few people use Siri religiously, in the way that Facebook is used, and as a result she doesn’t collect nearly as much information on her users. The actions that Siri can take also seem pretty limited: she can refer you to a hotline, but that’s pretty much it. Facebook, however, can direct all sorts of specific content to you, recommend different friends, and display tailored ads in your news feed.
So when it comes to installing software for medical purposes — “medicalizing” digital technology — Facebook has the capacity to do much more. Not only does it have access to much more detailed information about your behavioral patterns, it can take more active measures to assist you. Given that doctors are now beginning to use computers to determine diagnoses from a cluster of symptoms, it doesn’t seem unreasonable that Facebook, if the right software were installed, could administer diagnoses of certain well-established mental disorders, or collect medical evidence to be made available to doctors.
One can’t help but wonder how all of this will unfold in the future, and it seems eerily similar to Isaac Asimov’s novel, The Naked Sun. The human population on the planet Solaria is sparse, and the robot-to-human ratio is 20,000 to 1. People don’t speak with one another in person but rather communicate via holo-transmitters. Their only genuine interaction — other than on those very rare occasions when they come together in order to mate — is with robots, and the Solarians’ morals, values, and psychology all reflect this fact. It is considered the height of bad manners, for example, to show up at a friend’s house for a visit or to approach anyone, physically, under any circumstances. And sexual reproduction, while still necessary, is viewed with a combination of horror and disgust; something to be gotten over with as soon as possible.
That we are turning into a world very much like Solaria is brought into sharp relief when you watch a classic film like American Graffiti, which I saw for the first time not long ago. The movie follows the adventures of four young men on the night before two of them will leave for college in 1962. One of the movie’s lessons is that we should go out and have real adventures and experience real things, and these boys do just that. They ride around in their cars, attempting to pick up girls in other cars by driving alongside them and charming them through witty window-banter, or by cruising next to a girl walking down the street, making attempts to start a conversation. This is the polar opposite of what we have now. American Graffiti rejects the idea of simulated and illusory experience, especially for young people. Go out and do things. Interact with people. Have an awkward conversation. Make spontaneous love with someone you just met. This kind of life seems utterly foreign to me, as I look around and see people talking into the void of Microsoft Windows; confessing to Siri; broadcasting their most personal thoughts to the indiscriminate mass that is Facebook; and pursuing every other manner of experience that requires no contact with the real world — or with real people.