Will A.I. Be The Death of Us?

by Charles Justice

____

Here are a number of scenarios, most of them science fiction, that have to do with computers or machines taking over the world:

  1. An apprentice learns a magic trick for making a tool take over the work of cleaning his master’s workshop, but he can’t figure out how to shut the operation down, and the tool keeps multiplying, making what threatens to be an exponentially larger mess, until the master returns and restores order.
  2. A super-intelligent computer on a spaceship bound for Jupiter starts systematically killing the human occupants when it decides on its own that the humans are hampering the mission.
  3. An ancient race of super-intelligent beings, called the “Krell,” create a planetary supercomputer that realizes every wish of the inhabitants, leading to their self-destruction by “monsters from the Id”.
  4. A technological advance shared on the internet leads to a “singularity,” a super-intelligent global computer mind that is self-aware and hostile to human interests. A war ensues between robots and humans.
  5. A conglomeration of robots and computers called “Cylons” attack and destroy most of human civilization, but a remnant of human survivors in a flotilla of spaceships manages to escape and sets out to search for the fabled original planet earth.
  6. An unhinged authoritarian presidential candidate in earth’s reigning superpower gets elected and then sustains substantial fanatical support from his followers by exploiting widely available social media algorithms that interact with digital device users in ways that promote and magnify extremism and conspiracy theories.

The titles of these stories are respectively: The Sorcerer’s Apprentice; 2001: A Space Odyssey; Forbidden Planet; The Terminator; Battlestar Galactica; and The Apprentice, Episode 193. Although the first and the last scenarios are not really science fiction, I believe they are the ones we should be worried about.

In the movie “2001,” the super-intelligent computer HAL 9000 was defeated by “pulling the plug.” If the singularity (The Terminator) occurred and computers started a revolt against humans, couldn’t humans terminate the computer network by turning the machines off, pulling their plugs, and swearing off using them forevermore? Computers don’t work if they are not plugged in or their batteries are taken out. It’s humans that manufacture, maintain, and repair them, and it’s humans that supply the power that keeps them running. Computers and computing equipment have global supply chains of factories behind them. How likely is it that a computer could command this supply chain to create new versions of itself, or command the humans in the supply chain to help maintain or repair it? I suppose the renegade computer could trick all the humans in the supply chain into manufacturing replicas of itself. But how long would it take for someone to figure out they were being tricked?

The real success of AI has been in specialized areas where the problems and solutions can be reasonably well defined. Examples such as natural language processing, facial recognition, self-driving cars, and augmenting human medical expertise show the remarkable power and usefulness of deep learning algorithms and neural networks. These systems run on banks of computers connected in parallel and are given access to massive amounts of data. With only minimal expert assistance from humans, the “neural networks” of systems like IBM’s Watson are able to outperform human experts because they can find patterns across millions of pages of specialized information.
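To make “learning patterns from data” concrete, here is a minimal sketch in Python of the kind of training loop at the heart of deep learning. It is a toy, a few artificial neurons learning the XOR pattern by gradient descent, and not a depiction of any real system such as Watson; but the same principle of repeatedly nudging weights to fit the data, scaled up to parallel hardware and millions of pages of text, is what powers the applications above.

```python
# A toy two-layer neural network learning XOR -- an illustration of
# the pattern-learning principle described above, not a depiction
# of any production system such as IBM's Watson.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a pattern no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0  # learning rate
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    p = sigmoid(h @ W2 + b2)   # output predictions

    # Backward pass: gradients of the squared error for each weight.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight to reduce the error.
    W2 -= lr * (h.T @ grad_p);  b2 -= lr * grad_p.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);  b1 -= lr * grad_h.sum(axis=0)

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

Scale this loop up by a factor of billions and you have the “deep learning” behind facial recognition and language processing; what does not scale up with it is any understanding of what the patterns mean.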

At the heart of the idea of a super-intelligent computer that can do everything better than humans [sometimes called Artificial General Intelligence, or “AGI”] is the notion that one day a computer will have enough intelligence to be fully autonomous. It is not clear, however, whether it would be possible [let alone desirable] to completely eliminate human control.

Autonomy is a concept we use to describe living organisms: it is the degree to which something can make decisions on its own. It’s not an all-or-nothing thing. No lifeform is autonomous of the earth, its atmosphere, or its water. There seems to be an evolutionary progression towards more autonomous organisms, from single-celled diatoms that passively float around in the ocean to large-bodied mammals that range over great distances and make complex decisions.

There is absolutely no history of machines needing to survive and reproduce on their own. Why should there be? Only humans want machines to exist. Machines themselves have never had any say in it. No one has ever made a machine and then told it: “OK, from now on, you have to make it on your own.” Machines always have human purposes built into them: they don’t decide to plug themselves into a power source or turn themselves on. They were never part of the struggle for existence that is the basis of Darwinian natural selection. Wherever one finds a machine, there is always a person behind it: conceiving, building, maintaining, repairing, and supplying power. And this is why the so-called “autonomy” of computing systems is not biological or human autonomy, nor really autonomy at all as we understand it, but simply the capacity to function and learn intermittently [and finitely] without human supervision.

We tend to project our own sense of autonomy onto natural processes like the weather, and onto geographic features like mountains, rivers, and oceans. The storm is “brutal and merciless.” The volcano is “angry.” The mountain “looms menacingly.” We engage in the same kind of projection with our own creations, especially computers and other machines. R2-D2 and C-3PO from Star Wars are two almost universally recognizable fictional instances of this type of projection.

It may be more appropriate to use a biological metaphor here and say that computers play a role similar to that of the cells that make up the body of a multicellular organism. Computers are machines that can perform many functions without human supervision, but they are not autonomous. All the cells of the body perform functions for the body; they are kept in a temperature-controlled environment; they live bathed in nutrient fluid, so they don’t have to go out and look for food; and they don’t decide what to do, but rather are instructed by hormones and other chemical messengers to change their functioning when the body needs them to.

One of 18th-century philosopher Immanuel Kant’s deepest insights was that there is an intrinsic connection between morality and human autonomy. When animals make choices and decisions, it is “according to the dictates of nature.” Humans, by agreeing to limit their own behavior through moral rules, open up a world of creative choices not available to animals. Our creativity, however, comes at the price of responsibility and accountability. By all of us doing our “duty” in upholding and enforcing moral rules, we make possible the widespread trust and cooperation that form the background of human society and sustain our unlimited creativity.

Battlestar Galactica and The Terminator are really stories that project our history of slavery onto science fiction scenarios about intelligent robots. What if the slaves seize the means to overthrow their masters? This has always been a real fear of slave-owning classes, such as in the antebellum South, so it is still in our collective memory. The robots in these fictions are a metaphor for treating people as means rather than as autonomous beings, and the stories speculate about the repercussions of treating a whole people this way.

The story of The Sorcerer’s Apprentice, however, is a warning to all of us. We can build machines that have the ability to make decisions that affect humans, but the computers will just automatically maximize some mathematical function, with no thought of the consequences. Computing systems always need human supervision. As Daniel Dennett argues in “The Singularity, An Urban Legend?”:

“The real danger is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.” And he adds, ominously: “We are on the verge of abdicating…control to artificial agents that can’t think, prematurely putting civilization on autopilot.” [1]

If we look at my one scenario that wasn’t fiction, namely the weaponizing of social media, it is apparent that the whole problem, as Dennett foresaw, was the absence of human supervision. The algorithms on Facebook and YouTube were set in motion, white supremacists and gullible QAnon believers found each other and multiplied, and no one in charge intervened until it was too late. Now, as a result of Trump’s election and four years of his disastrous presidency, we have a dangerously unstable situation in the United States, with one of the two main political parties embracing conspiracy theories and actively working towards cancelling democracy. Now that we know better what can happen, we need to ensure there is proper legal oversight of social media platforms, and the same goes for all AI applications. As Dennett wryly points out, “computers have no skin in the game.” They cannot be held accountable for decisions; only humans can.
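Dennett’s point can be made concrete with a deliberately crude, entirely hypothetical sketch in Python of an engagement-maximizing feed ranker. The posts and numbers below are invented, and no real platform is this simple, but the failure mode is visible: one mathematical score gets maximized, and nothing in the objective distinguishes a pothole report from a conspiracy theory.

```python
# A hypothetical "clueless machine": a feed ranker that maximizes a
# single engagement score with no notion of truth, harm, or
# consequence. Invented data; not any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    views: int

    @property
    def engagement(self) -> float:
        # The machine's entire "mind": one number to maximize.
        return self.clicks / self.views if self.views else 0.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Outrage tends to click best, so it rises to the top -- not
    # because the machine "wants" that, but because nothing in the
    # objective says otherwise. No human supervision happens here.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

feed = rank_feed([
    Post("Local council fixes pothole", clicks=40, views=2000),
    Post("THEY don't want you to know THIS", clicks=900, views=3000),
    Post("School board meeting minutes", clicks=10, views=1500),
])
for post in feed:
    print(f"{post.engagement:.2f}  {post.title}")
```

The remedy Dennett and I are pointing to is not smarter ranking code but a human, legally accountable, standing between a loop like this and the public.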

Charles Justice is a retired nurse with a bachelor’s degree in economics, living in Prince Rupert, British Columbia. Before Covid, he was a percussionist in a community band and ran a drum-circle for seniors at a nursing home. He blogs at:

https://philosopherjustice.blogspot.com/

https://rupertjeremiah.blogspot.com/

https://earthjustice.blogspot.com/

Notes

[1] Dennett, Daniel, “The Singularity, an Urban Legend?”, Edge.org, 2015

9 comments

  1. Interesting article. I think AI is a major threat to our future. A more immediate threat is bioterrorism by state actors, groups, or, most likely, a suicidal lone wolf. The technology for creating viruses that can kill 50% or more of those infected, while leaving them asymptomatic long enough to spread the virus widely before they sicken and die, will be readily available and relatively easily accessed by a lone wolf or group, probably within the next 10 or at most 20 years. Some state actors might be able to create such a virus today but are unlikely to do so. I think that would send us into a “Mad Max” way of living within weeks, or certainly months, since our modern western societies would collapse. There is a great podcast just released by Sam Harris interviewing Rob Reid, a podcaster, author, and tech investor who interviews thought leaders on his podcasts. You can hear much of Reid’s talk as a non-subscriber here: https://samharris.org/podcast/

    1. Already listened to the podcast which Sam is offering in its entirety for non-subscribers, as a public service. Reid is a remarkable man and I hope he has commensurate influence with the powers that be.

  2. The issue was best encapsulated, as I see it, by Jaron Lanier. What matters is whether or not we consent to make ourselves stupid enough to descend to the machine’s level.

  3. Justice’s and Dennett’s articles seem wholly concerned with the negative near-term possibilities, some of which we are already experiencing in everyday life due to social media, not an autonomous AI going astray to our peril. Though their concerns are noteworthy and cautionary, they do not do justice to the greater threat of the software overlords of the more distant future. The true stuff of the AI boogeyman.

    All the wonderful examples that Justice parades upfront, with the exception of the Sorcerer’s Apprentice, AKA the paperclip-making AI machine, envision the eventual genesis of a true singularity, where software will not only equal human cognitive ability but surpass it; one can only imagine an ape trying to outwit a human by pulling his plug. It is too easy to dismiss these futuristic Luddite fantasies as science fiction, but less than a hundred years ago, today’s cellphones were the bread and butter of SciFi writers.

    Could AI self-replicate and author its own new programming, and in other more sinister ways incorporate a Darwinian imperative to improve and survive at all costs? It is interesting that Justice, a nurse, uses the metaphor of machines as contained and useful cells in the body without extending the metaphor to the possibility of cancer.

    I only single out the paranoia associated with the dime-store robot minds of the future because that is the sexy topic. The perhaps more mundane and immediate concern, as Justice and Dennett point out, should be “us” and how we use and become ever more dependent on our dumb machines. I think most thinking people understand the problem, going back to the printing press, but are bedeviled by how to control that which amplifies our sacred values of free speech and expression. The Chinese have no such qualms and are doing what we fear, to their authoritarian advantage. That is what they are offering to the world as the alternative to our messy democracy.

    1. Good point about the possibility of cancer. The cancer cell is an example of a cell that stops functioning for the body and hijacks the body’s resources in order to endlessly replicate, eventually killing its host. Again, it is not a real type of autonomy; it’s a parasitic existence. Perhaps there is a science fiction story out there that likens a computer to a cancer cell, but no computer that I know of replicates itself. Note that no computer virus was ever spontaneously produced by a computer. Every virus on the internet was put there by a malicious human.

      1. As has been said, the “dream” of every cell is to be two cells; the dream of many computer/software designers is a recursive self-improvement program. A concern for the future, not the more immediate issues you rightfully discuss.

  4. I would say by far the biggest risk of AI and machines more generally is that they will almost entirely eliminate unskilled and lower-skilled labor. Since there are millions of people who will not be capable of joining the higher level “information economy,” this will lead to massive unemployment and will produce, politically, a generation of demagogues that will make Trump look like Mr. Rogers.

  5. The overarching limitation of apocalyptic concern about AI (as the article implies but does not state) is that it is simplistic. It represents an incomplete formulation of consciousness. The very name implies that we are trying to mimic natural intelligence. The worry that AI will become an autonomous, complete consciousness capable of independent function (like us) implies that natural (human) consciousness is a function of intelligence alone. This is laughable.

    Human consciousness is not simply intelligence. It involves mental functions well beyond intelligence, such as intuition, emotion, and empathy, all unified with intelligence into a whole that has capabilities well beyond simple intelligence. This whole has been selected through billions of years of life on earth to be functional. AI can’t possibly compete with that.

    You hear so much these days about artificial intelligence but nothing ever about artificial intuition or artificial consciousness. I wonder why that is?
