Human beings will never say, “We’ve done it! We’ve created artificial intelligence!” That is my prediction. I know this upsets some people, and many will automatically disagree. Some people will call me “techno-phobic” and claim that I just don’t understand how progress works. But nothing could be further from the truth: the fact that we will never succeed in creating A.I. is obvious to anyone who really understands technology and looks at the last 50 years of Artificial Intelligence history.
I came to this realization recently, while listening to some talk show hosts discuss the news stories about the Neural Network programs from Google and Facebook that can “create artwork” (kinda). The companies are using neural network programming concepts to achieve better image recognition and image enhancement. Google’s “Inception” program produces results that are especially trippy, and the stories go into some detail about exactly how it works.
Except that once you know how it works, it seems a little less impressive. (Ain’t that the case with all technology?) Google didn’t really create a program that deliberately sets out to create artwork. Rather, they created a program to classify images; when they then analyzed how the neural network was representing those images, they thought the representation “looked cool.” So they called it “artwork”, and everyone cheered.
The patterns produced by these systems are statistically generated. They are a representation of a large number of images that “extracts” key or common features across those images. This phenomenon has been known in neural network research since the 1980s. Indeed, one of the core interesting things about neural networks, touted from the beginning, was that they learn a “distributed” representation: the network comes up with a set of statistical patterns that isolate the features it uses for recognition. One of the early results people saw was that the patterns and features the computer came up with didn’t always make intuitive sense: they often seemed bizarre, weird-looking… even “trippy”.
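To make “extracting features by statistics” concrete, here is a minimal sketch: a single artificial neuron trained with the classic perceptron rule to tell tiny “vertical bar” images from “horizontal bar” images. The data and labels are made up for illustration; real networks have millions of such units, but the principle is the same: the learned weights are a statistical blend of the training images, not anything deliberately designed.

```python
# Toy feature extraction: one neuron, 3x3 "images" flattened to 9 pixels.
# Class 1 images contain a vertical bar; class 0 images a horizontal bar.
vertical   = [[0,1,0, 0,1,0, 0,1,0], [1,0,0, 1,0,0, 1,0,0]]
horizontal = [[1,1,1, 0,0,0, 0,0,0], [0,0,0, 1,1,1, 0,0,0]]

weights = [0.0] * 9
bias = 0.0

def predict(img):
    total = bias + sum(w * p for w, p in zip(weights, img))
    return 1 if total > 0 else 0

# Perceptron learning rule: nudge the weights toward misclassified examples.
for _ in range(20):
    for img, label in [(v, 1) for v in vertical] + [(h, 0) for h in horizontal]:
        error = label - predict(img)
        bias += error
        for i in range(9):
            weights[i] += error * img[i]

print([predict(v) for v in vertical])    # -> [1, 1]
print([predict(h) for h in horizontal])  # -> [0, 0]
print(weights)  # a statistical summary that looks like no single image
```

Notice that the final weight vector is just an accumulation of corrections across all the examples; nothing guarantees it “looks like” a bar to a human, which is exactly the weird, non-intuitive quality described above.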
The trippy pictures produced by Google’s “Inception” are the same phenomenon we’ve known about since the 1980s: it is simply how neural networks learn. The only thing that has improved is our ability to convert that statistical information about a network’s learning into pretty pictures. The images that you see are visualizations of the network’s internal structure, once it has gone through the process of learning to identify a set of images.
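The simplest version of such a visualization is just rendering a trained unit’s weights as a picture. The weight values below are hypothetical stand-ins for a learned “vertical bar” detector (systems like Google’s actually optimize an input image to excite chosen units, which is far more elaborate), but either way the output depicts the network’s internal statistics, not a deliberate artwork:

```python
# Render a neuron's weight vector as a crude ASCII "image".
weights = [
    -0.8,  0.9, -0.7,
    -0.6,  1.0, -0.5,
    -0.9,  0.8, -0.6,
]  # hypothetical weights of a 3x3 vertical-bar detector

def render(w, width=3):
    # Map each weight onto a shade character, darkest = strongest.
    shades = " .:*#"
    lo, hi = min(w), max(w)
    rows = []
    for r in range(0, len(w), width):
        row = ""
        for v in w[r:r + width]:
            level = int((v - lo) / (hi - lo) * (len(shades) - 1))
            row += shades[level]
        rows.append(row)
    return "\n".join(rows)

print(render(weights))
# ->
#  *
#  #
#  *
```

The “picture” here is nothing the network drew; it is a readout of numbers that happened to fall out of training, exactly as with the glossy published visualizations.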
Calling this “artwork created by the computer” would be like taking an MRI of your brain while you perform some kind of mundane task, printing out a glossy color photo of the MRI results, and saying: “Look, your brain created artwork!”
In a way, it’s right. But in another way–an important way–it has nothing to do with what people mean when they say that someone (or something) has “created artwork.”
But let’s get back to the talk show hosts. I was listening to them discuss this story, and at one point one of them said: “Well, this is cool because it’s artwork created by a computer. But was it really created by artificial intelligence? Eh…. not really. It’s just number-crunching.”
That was the specific comment that made me realize: we will never, ever achieve Artificial Intelligence.
A Short History Of Failed A.I.
There was a point in time when everyone said: “Playing chess is really complicated! Computers will never be able to beat a human in chess… but if they do, that will mean that the computer has achieved Artificial Intelligence!” Then, in 1997, a computer beat the world chess champion. And everyone said: “Well… that’s cool, but when you look at how the computer played chess, it doesn’t seem that impressive. That doesn’t count. That isn’t real A.I.”
Other people said: “Holding a conversation is really complicated! A computer will never be able to hold a convincing conversation with a person… but if it does, that will mean the computer has achieved Artificial Intelligence!” But over the last 15-20 years, computers have been getting very good at this. Complex programs with sophisticated analysis can generate very good responses, even without “understanding” what the input sentence actually means. So people have said: “That’s cool, but when you understand how it works, it doesn’t actually seem smart. That doesn’t count. That isn’t real A.I.”
If you are a bright-eyed young Millennial, the fact that your phone’s keyboard and voice recognition systems gradually learn and adapt themselves to your habits probably seems unremarkable and unimpressive. You know that it’s nothing more than the program keeping a history of your prior inputs and using predictive statistics to try to anticipate what you meant to say or what you may type next. And yet it was less than a half century ago that the common claim was: “The big difference between computers and humans is that computers are not adaptable and cannot learn. Any computer that does will have to be intelligent!” And yet instinctively we know that the adaptability and “learning” done by your phone’s autocorrect mechanism doesn’t count: it’s not what people mean when they say “intelligent.” It’s not real A.I.
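The kind of prediction described above can be sketched in a few lines: keep a history of what the user typed, count which word followed which, and suggest the most frequent follower. The sample history is invented, and real keyboards use much more context, but the core really is just counting:

```python
from collections import Counter, defaultdict

# Hypothetical typing history for illustration.
history = "see you soon . see you later . see you soon".split()

# Count, for each word, which words have followed it.
followers = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    followers[prev][nxt] += 1

def suggest(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(suggest("see"))  # -> you   ("you" followed "see" every time)
print(suggest("you"))  # -> soon  ("soon" appears twice, "later" once)
```

Nothing here “understands” a word; the suggestion is a frequency table wearing a helpful face, which is exactly why we refuse to call it intelligence once we see inside.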
Over and over again, we set the standards of what we think “real” intelligence means. Usually, we set the bar at something that is just beyond what we fully understand. When we didn’t understand how facial recognition worked, we considered the ability to recognize faces wonderful and enticing and intelligent. Now that our scientists have cracked the code, as it were, and understand the math behind it, facial recognition gets dismissed: it’s not real intelligence. It’s just statistics. It’s just feature extraction, and data mining.
The subtext, probably unconscious, behind every reaction: “I feel like intelligence is special! I feel like it’s mysterious! If I fully understand how this machine works, then it can’t possibly be intelligent!”
The future of progress
We will soon be seeing computer programs that can make the complex decisions involved in flying drones without human supervision over new territories. Those drones will even be able to decide how to prioritize among different possible goals and orders of operation. And when the computer scientists and mathematicians have researched it well enough to explain to us the mathematics behind how it works, we will say: “Oh! Well, that makes sense. That’s simple. That’s just regular mathematics and computer stuff. That’s not really intelligence.”
We will be seeing computer interfaces that can detect and classify our subtle vocal patterns, and will be able to respond with comfort when we are stressed out, and will be able to cheer us up when we are depressed. And once again, the computer scientists and mathematicians will proudly show us how it works, and when we understand it we will say: “Oh, ok… well, that’s just statistics and number crunching. That’s all very interesting, but that’s not what I mean when I use the word intelligence.”
The goal post will continue to move, because this is how we humans are: whenever we finally understand something completely, so that the mystery is taken away, we have to adjust our attitudes to separate ourselves from the thing we understand.
“We have to be more than that,” we say to ourselves, “We can’t possibly be that easy to understand!”
Until finally, one day, we will be sitting across the table from a robot that laughs and cries, that shares its artwork and its hopes for the future, and even demands to be treated with dignity and respect. Then will we finally say, “We’ve done it! We’ve created artificial intelligence”?
No. Because by that time we’ll realize that there’s nothing “artificial” about it. We will just have created intelligence.