From the Machine: Data, Ultron, and Indominus rex

The theme of artificial intelligence keeps cropping up in things I read and watch, enough that I’m compelled to write a post about it. In the last week and a half:

  1. I saw Ex Machina. It’s a film wherein a young computer programmer goes to spend a week at the estate of an eccentric genius (the creator of the world’s most used search engine). There, he participates in a Turing test with a robot programmed to look and act like a human woman, resulting in unfortunate consequences.
  2. I saw the full preview for Jurassic World. It appears as though the scientists at Jurassic World thought it would be a good idea to create a dinosaur with an advanced level of intelligence, the Indominus rex, which goes on a rampage and starts killing for sport.
  3. I saw Age of Ultron (twice). Ultron’s goal is to save the planet by destroying human life as we know it and replacing it with intelligent metal beings—the next evolution of man!
  4. I re-read an article in which Stephen Hawking and Elon Musk were quoted as being worried about A.I. and another in which Bill Gates echoed the sentiment.
I’m sorry, Dave.

Regarding #4, Stephen Hawking said recently: “The development of full artificial intelligence could spell the end of the human race,” and, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Elon Musk referred to A.I. as “our biggest existential threat.” Bill Gates “[doesn’t] understand why some people are not concerned” with the threat superintelligence poses.

This all sounds an awful lot like a fear that robots will take over while human beings can still survive on Earth.[1] Sure, the human species will eventually die out. We’re already intent on creating a planetary environment that won’t be able to sustain human life. But will we be superseded by artificial beings that we create?

The thing that I can’t quite wrap my mind around is that in all of this discourse, the common theme is that advanced artificial intelligence is somehow inherently dangerous. A.I. will be intent on harming humanity in some way—purposefully superseding it or even destroying it.

But why?
