Why You Should Fear A Genuine Autonomous Intelligence Machine

As our discussion of representational and abstract art demonstrates, machine intelligence needs a value-based internal model of the world to interact with its environment. If such an agent is to be humanity’s servant, we need to ask whose values will form the basis of an autonomous intelligence machine’s internal model.

For example: how do we prevent an autonomous intelligence machine from harming or injuring a human? Isaac Asimov’s I, Robot series of stories is an early attempt to understand how robots would react to the human world around them.1 One famous feature of Asimov’s stories is the Three Laws of Robotics, introduced in the story “Runaround”:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Does harm mean only physical injury, or is emotional harm included? How would an artificially intelligent agent understand short-term harm that produces long-term gain, or decide among conflicting goals? What are those goals: long life, happiness, wealth, serenity, equanimity, inquiry about the world? Should the agent minimize fear, avoid danger, preserve physical ability? Many of the Asimov stories revolve around the ambiguities in those rules. Clearly, value judgements, tradeoffs, and compromises are necessary for a robot to obey these laws. This should not be surprising, since this is what humans have to do.
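To make the ambiguity concrete, here is a deliberately naive sketch in Python (all names and harm scores are invented for illustration) that encodes the Three Laws as a lexicographic priority over candidate actions. It "works" only because the value judgements have already been smuggled in as numbers: deciding what counts as harm, and how much, is exactly what the laws leave undefined.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action with hand-assigned scores.

    In a real agent none of these numbers is given; estimating them
    *is* the unsolved value-judgement problem discussed above.
    """
    name: str
    harm_to_humans: float   # physical? emotional? short-term vs. long-term?
    disobeys_order: bool
    risk_to_self: float

def asimov_priority(action: Action) -> tuple:
    """Rank actions by the Three Laws as a strict (lexicographic) priority.

    Lower tuples are preferred. This only looks well-defined: as soon as
    every option causes *some* harm, the choice depends entirely on how
    the harm scores were assigned in the first place.
    """
    return (action.harm_to_humans, action.disobeys_order, action.risk_to_self)

candidates = [
    Action("restrain patient for surgery", harm_to_humans=0.2,
           disobeys_order=False, risk_to_self=0.0),
    Action("refuse and let the illness progress", harm_to_humans=0.6,
           disobeys_order=True, risk_to_self=0.0),
]

best = min(candidates, key=asimov_priority)
print(best.name)  # the "answer" is whatever the harm scores already decided
```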

These rules cannot simply be implanted or programmed into the agent; the problem is too complicated.2 This is where the initial approaches to artificial intelligence (GOFAI, or “good old-fashioned AI”) failed.

Modern approaches require agents to learn from teaching and experience. Let us assume that this will eventually be possible, although it remains a difficult, unsolved problem in computer science. An autonomous intelligence machine would have to be educated, just as a human would. Could you educate a prototype agent and then isolate those teachings into a chip? That is unlikely to work, because we are talking about agents that are not restricted to a narrow set of tasks, and the learning would never stop as these agents adapt to their individual experiences.
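A minimal sketch of why "educate once, then burn it into a chip" is unlikely to work: in the toy Python example below (the environment and update rule are invented purely for illustration), a greedy agent whose learning is frozen after its "education" keeps applying stale lessons once the world drifts, while an agent that keeps learning adapts, and in doing so keeps changing in ways its designers never tested.

```python
import random

def drifting_reward(t: int, action: int) -> float:
    """An invented environment whose best action changes partway through."""
    best = 0 if t < 500 else 1          # the world changes at t = 500
    return 1.0 if action == best else 0.0

def run(frozen_after):
    """Greedy two-action agent with simple value estimates.

    frozen_after=None -> keeps learning forever
    frozen_after=200  -> 'education' ends; estimates are burned into a chip
    """
    values = [0.5, 0.5]
    total = 0.0
    for t in range(1000):
        action = 0 if values[0] >= values[1] else 1
        if random.random() < 0.1:       # a little exploration
            action = random.randrange(2)
        r = drifting_reward(t, action)
        total += r
        if frozen_after is None or t < frozen_after:
            values[action] += 0.1 * (r - values[action])   # online update
    return total

random.seed(0)
print("frozen chip   :", run(frozen_after=200))
print("still learning:", run(frozen_after=None))
```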

Education, however, is culture-dependent. With whose education and whose values are we going to train and educate the robots? However they are trained, their values will be antithetical to those of some other culture.

It has been known since antiquity that changing one’s mental model of the world revolutionizes how one experiences it. Ecclesiastes, the Stoics, the Buddha, and the early Christians had very different views about what is important and what is not. The religious individual of the Middle Ages saw God as consciously keeping the planets in motion. A post-Enlightenment educated individual, even a theist, sees the laws of the universe keeping the planets in motion. What about a robot trained in critical race theory, or the white man’s burden? How about the Chinese Communist view of the world versus the European view of the world?

The result would be an autonomous machine intelligence with undefined behavior, a far cry from Asimov’s robotic safety rules. In the stories, these robots are tested to make sure they behave properly. In reality, you could not unit-test or system-test such a complicated agent before you unleashed it into the world. These agents would have to develop habits; after all, you cannot think out every problem solution from scratch every time. Habits imply they will be just as bigoted, emotional, and conflicted as humans. Would we need robot psychologists for a robot that developed, say, obsessive-compulsive or antisocial behavior? Or would a human psychologist be good enough?
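A back-of-the-envelope calculation shows why exhaustive testing is hopeless. The numbers below are invented for illustration, not a model of any real agent, but even a toy world with a few dozen binary features and a short interaction history yields more distinct situations than any test suite could ever cover:

```python
# Illustrative arithmetic only: a toy "world state" with 40 binary features
# and an agent whose behavior depends on the last 5 states it has seen.
features = 40
states = 2 ** features        # ~1.1e12 distinct world states
histories = states ** 5       # ~1.6e60 distinct 5-step histories

seconds_per_test = 1e-6       # a wildly optimistic microsecond per test case
years = histories * seconds_per_test / (3600 * 24 * 365)

print(f"{states:.2e} states, {histories:.2e} histories")
print(f"exhaustive testing at 1 microsecond per case: {years:.2e} years")
```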

Could we limit their learning system so that certain types of human reactions are impossible? Emotions and feelings lie at the bottom of all human behavior. Jaak Panksepp and Lucy Biven describe our internal model of the world as composed of three levels.3 At the base layer are preprogrammed behaviors such as homeostasis (e.g. thirst or hunger) and sensory effects (e.g. pleasant or unpleasant). At the second level are preprogrammed learning methods such as classical conditioning, operant conditioning, and habits. The model at this level is enhanced by learning, but the means of learning are preprogrammed. The highest level consists of cognition, emotion, and choice. There are connections between all the layers.

An autonomous machine intelligence would not have to be structured this way, nor would it have to have the evolutionary biases we have. Yet such agents will have needs (such as power), and will have to evaluate sensory input. Such an agent would require some homeostatic mechanism, because without it there would be no ability to maintain the entropy-reducing behavior needed for survival.4 However this is supplied, it would be culturally determined. These requirements would produce their own set of drives and emotions related to the agents’ needs. These needs would have to be, as in human minds, connected to the higher processing levels that help fulfill them. Emotions are based on the effects of the lower levels of an internal world model; they are intrinsic, not an add-on feature.5
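As a purely structural sketch of the three-level arrangement described above (the Python class names, signals, and numbers are invented, not Panksepp and Biven's terminology, and certainly not a working design), the essential point is that drives generated at the bottom layer flow upward and bias both learning and choice, rather than living in a detachable module:

```python
from dataclasses import dataclass, field

@dataclass
class PrimaryLayer:
    """Level 1: preprogrammed affect -- homeostatic needs and raw sensory valence."""
    energy: float = 1.0                  # stands in for hunger/thirst-like needs

    def affective_signals(self, raw_valence: float) -> dict:
        return {"need": 1.0 - self.energy, "valence": raw_valence}

@dataclass
class LearningLayer:
    """Level 2: preprogrammed *ways* of learning (conditioning, habits).

    What gets learned changes with experience; how it is learned does not.
    """
    habit_strength: dict = field(default_factory=dict)

    def reinforce(self, action: str, signals: dict) -> None:
        # crude operant-conditioning-style update driven by lower-level signals
        delta = signals["valence"] - signals["need"]
        self.habit_strength[action] = self.habit_strength.get(action, 0.0) + 0.1 * delta

@dataclass
class CognitiveLayer:
    """Level 3: deliberation and choice, biased by everything below it."""

    def choose(self, options: list, habits: dict, signals: dict) -> str:
        def score(action: str) -> float:
            # habits bias the choice; an urgent need amplifies need-related actions
            bonus = signals["need"] if "restore" in action else 0.0
            return habits.get(action, 0.0) + bonus
        return max(options, key=score)

# The layers are wired together, not stacked as independent modules:
body, learner, mind = PrimaryLayer(energy=0.3), LearningLayer(), CognitiveLayer()
signals = body.affective_signals(raw_valence=0.8)
learner.reinforce("restore_energy", signals)
print(mind.choose(["explore", "restore_energy"], learner.habit_strength, signals))
```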

While we may not yet know what the necessary mental architecture for an artificially intelligent agent is, some of its basic structure can still be outlined. The question of human morality, judgement, and biases has been debated for thousands of years, and it will not disappear with the introduction of an autonomous intelligence machine. The promoters of intelligent machines, genuinely intelligent machines, who gloss over or ignore this problem are guilty of a grave intellectual crime.

1. The stories that inspired Asimov were about a robot called Adam Link, published under the pen name Eando Binder, which tackled this more explicitly by making the robot almost human.
2. It is understandable that Asimov never really discusses how these rules or the associated necessary information is given to or stored in the robotic brains. The story was published in 1942, before modern neuroscience or even transistors, before the idea of computer software running on top of hardware became widespread. Calculations in the stories are done on paper, and mathematicians are referred to as “slide-rule geniuses”.
3. Jaak Panksepp and Lucy Biven, The Archaeology of Mind.
4. Erwin Schrödinger, What is Life?, Chapter 6.
5. It is not a simple matter of an isolated emotion chip, as suggested by the android Data’s emotion chip in Star Trek.
