From the Machine: Data, Ultron, and Indominus rex

The theme of artificial intelligence keeps cropping up in things I read and watch, enough that I’m compelled to write a post about it. In the last week and a half:

  1. I saw Ex Machina. It’s a film wherein a young computer programmer goes to spend a week at the estate of an eccentric genius (the creator of the world’s most used search engine). There, he participates in a Turing test with a robot programmed to look and act like a human woman, resulting in unfortunate consequences.
  2. I saw the full preview for Jurassic World. It appears that the scientists at Jurassic World thought it would be a good idea to create a dinosaur with an advanced level of intelligence, the Indominus rex, which goes on a rampage and starts killing for sport.
  3. I saw Age of Ultron (twice). Ultron’s goal is to save the planet by destroying human life as we know it and replacing it with intelligent metal beings—the next evolution of man!
  4. I re-read an article in which Stephen Hawking and Elon Musk were quoted as being worried about A.I. and another in which Bill Gates echoed the sentiment.
I’m sorry, Dave.

Regarding #4, Stephen Hawking said recently: “The development of full artificial intelligence could spell the end of the human race,” and, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Elon Musk referred to A.I. as “our biggest existential threat.” Bill Gates “[doesn’t] understand why some people are not concerned” about the threat superintelligence poses.

This all sounds an awful lot like a fear that robots will take over while human beings can still survive on Earth.[1] Sure, the human species will eventually die out. We’re already intent on creating a planetary environment that won’t be able to sustain human life. But will we be superseded by artificial beings that we create?

The thing that I can’t quite wrap my mind around is that in all of this discourse, the common theme is that advanced artificial intelligence is somehow inherently dangerous. A.I. will be intent on harming humanity in some way, purposefully superseding it or even destroying it.

But why?

Let me go back to Aristotle for a moment. Aristotle (among others, Aristotle’s just my go-to) thought that all things had a telos (a purpose, a nature, an end). In a very big nutshell, the telos of man is rational activity, and this has been echoed in Western thought ever since.

Of course, we’re a living species set on survival just like any other living species. We also invent things like love and happiness and capitalism, so we have additional aims. We have the ability to ask questions about our telos. But the point is, in the Western world, our existence has something to do with the use of our intelligence.

This raises a question, then: if we frame things in terms of having some end, what would the telos of artificial intelligence be? Well, for the human programmer, one end is for it to be indistinguishable from human intellect. But what would it be for the A.I.?

Artificial intelligence is more intelligent than man: it calculates faster, forgets nothing, assesses the possibilities of a situation more quickly, makes better predictions based on probabilities, and, importantly, doesn’t die.

Artificial intelligence gets to determine its own telos, doesn’t it? Man did. Why do we assume that A.I.’s end will be power? Why domination and takeover? Would robots really rise up against their creators?

These viewpoints all utilize a particular idea of what intelligence is—gathering information, understanding, and learning. But to truly mimic a human consciousness, you would have to mimic all the other dirty, gritty, ugly things that affect our thinking. Pesky things like feelings and empathy. But also pesky things like context, history, and shared social experience.

Spot and Data. Can you feel the love?

I’m willing to allow that human emotion could be programmed and learned. Maybe Data learns to love his cat and really feels a bond with it. Cultivating empathy also seems to be a learned skill. But even if we ignore these human traits and go with a stricter sense of rationality, all of the above discourse implies that morality and ethics, empathy and kindness, cooperation and camaraderie are not features of intelligence. Do we get along with each other, to the extent that we do, only because we’re social creatures who need other people to help us survive?

The A.I. of the future won’t be social in the same way we are. Ultron is one thing. Ultron is like the Borg. Robots won’t have a natural instinct to procreate and propagate their species, because they aren’t a species. They don’t grow or spread (or assimilate) the same way we organic beings do.

Man has dominated nature, killing less intelligent fauna for their pelts and tusks and meat. Killing flora for the space it takes up.

Would A.I. do the same to us? Why the inevitability of this threat? Does absolute rationality result in absolute power? Why do we think the highly intelligent Indominus rex would kill for sport? Why do we think artificial intelligence will act like men?

Aha! We have the beginning of an answer.

For all of man’s supposed rationality, he rarely seems willing to acknowledge that he is never actually ruled by rationality alone, or solely by the type of intelligence that exists in a machine. We value rationality, so we think we are rational.

These aforementioned accounts of A.I. don’t seem to consider that perhaps artificial intelligence would end in a compassionate being.

These accounts don’t consider that maybe artificial intelligence, built in man’s image of rationality and intelligence, would do what man cannot and reach the telos of pure contemplation—thought thinking thought—and just leave humans alone.

But isn’t it possible that A.I. will end up more like the Samantha operating system from Her?

It’s like I’m reading a book… and it’s a book I deeply love. But I’m reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you… and the words of our story… but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world. It’s where everything else is that I didn’t even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can’t live in your book any more.

Of course, we don’t know. We’re the feeble ones here. Maybe I’ll be killed in the Great Robot War of 2045. But I can’t help thinking that this fear comes from man’s ego and not man’s so-called intelligence.

[1] I feel like this should go without saying, but I’m being (mostly) facetious here. I know that’s not what Hawking is getting at. At least, I really hope it’s not.
