Buckle your seat belts, here we go.
In 1993 science-fiction writer Vernor Vinge authored a paper introducing and describing the idea of The Singularity, a near-future Rubicon for humanity: we create machines with superhuman intelligence, thus changing everything forever. In the post-Singularity world, all the old rules are thrown out, progress accelerates exponentially, and the real action shifts away from humanity and toward our cybernetic spawn. Human beings are relegated to the sidelines as intelligent machines take over the world (or, in darker variations of the scenario, humans are enslaved or exterminated). In the best-case scenario, super-intelligent, immortal man-machine hybrids peacefully co-exist with the “unaltered” (i.e. regular humans).
Vernor Vinge -- this joker makes up wacky ideas for a living.
Vinge’s paper on The Singularity is clever, thought-provoking, and insightful. It’s exactly the kind of “how big can you think” speculation a good science fiction writer should come up with. Unfortunately, some groups of otherwise intelligent people seem to have swallowed Vinge’s paper whole and uncritically, elevating his fevered speculations to a kind of futurism gospel. Vinge’s paper is loaded with tantalizing specificity: The Singularity will probably occur between 2005 and 2030, and it will be preceded by four “means” that we can currently observe unfolding in our technology newsfeeds (biological intelligence enhancement, advancement of computer/human interfaces, large computer networks becoming more intelligent, and the development of machine intelligence and the possibility of machine consciousness). This specificity gives the paper the feel of prophecy, at least to the unsophisticated reader. Science-fiction connoisseurs, on the other hand, will see through the purposefully affected serious tone of Vinge’s paper; in fact he is riffing, presenting a range of wild possibilities as if they might actually happen. That’s what science fiction writers do.
V.C. wunderkind Steve Jurvetson at The Singularity Summit, explaining how The Singularity will involve lots of corporate logos.
The inventor/entrepreneur Ray Kurzweil is particularly fond of the Singularity concept, and has written extensively about the subject in books such as The Age of Spiritual Machines and The Singularity Is Near. He is also a co-founder of Singularity University. Recently featured in the New York Times, Singularity University describes its mission as “to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges.” I’m skeptical; Singularity U seems like a really good way to separate rich white male tech nerds (for the most part, anyway) from fifteen thousand dollars, in exchange for nine days of hyperactive white-board scribbling, gallons of free coffee, and a bag of Silicon Valley schwag (including a personal DNA test kit). The pitch, boiled down: exponential technological progress is going to change everything. We don’t know how exactly, but there’s going to be a big change and then everything will be different. It might have something to do with your smart phone, social media, artificial intelligence, anti-aging technologies, space travel, and/or renewable energy!
There’s probably no harm in the existence of Singularity University. By all accounts the people who run it are idealistic (not hucksters), and the people who take the courses can generally afford it. But what is it, really? It’s just more riffing, just like Vinge’s original paper. The professors at Singularity University aren’t going to bring us any closer to The Singularity, because The Singularity is illusory.
WHY THE SINGULARITY WON’T HAPPEN
Let’s examine some of the premises of Vinge’s original paper, and discuss them in turn.
Premise #1: Improvements in computer/human interfaces will result in superhuman intelligence.
We’ve already had some improvements in computer/human interfaces, and they’ve proved to be fun and convenient. The mouse is nice, as is the trackpad. The portable computing device (laptop or smart phone) comes in really handy. And we can easily imagine an implant that allows us to access the internet via thought alone, or a contact lens micro-screen that projects data over our visual field.
Oh -- that's where they are.
But let’s get real for a second. Those of us with internet access already have near-instantaneous access to a good chunk of the world’s knowledge, right at our fingertips. Has it changed us that much? Instead of arguing about who was in what movie, we just look it up. Where are the Canary Islands, exactly? Just look it up. What’s the four hundredth digit of pi? Just look it up!
Having access to unlimited knowledge hasn’t changed us that much. It’s fun, and enormously convenient, but it’s not revolutionary.
Well, what about access to computing power? Computers can run enormously powerful simulations, and do enormously complex computations in the blink of an eye. Won’t that make a difference?
Once again, look at how we currently use the enormous amount of computing power available to us, and project forward. What do we do with it now? We watch TV on our computers. We play computer games that accurately represent real-world physics. Maybe our screen-saver analyzes astronomical data, in search of signals from ET, or folds proteins with the spare cycles, but in neither case do we pay much attention.
Improving the interface between brain and computer isn’t going to make a big difference, because the brain/computer analogy is weak. They aren’t really the same thing. We’ve already gone pretty far down the computer/human interface road, with the big result being increased access to entertainment (and porn).
Premise #2: Increases in computer processing speed, network size and activity, and/or developments in artificial intelligence will result in the emergence of superhuman intelligence.
Daniel Dennett has an interesting counter-argument for people who like to speculate about superhuman intelligence by comparing human intelligence to animal intelligence, and then extrapolating to superhuman intelligence. The speculation goes something like this: cats can’t do algebra — they can’t even conceive of it — but people can do algebra. So couldn’t there exist an order of mind that can perform complex operations and computations that human beings can’t even conceive of? Some kind of super-advanced alien (or future A.I.) mathematics that would befuddle even the Stephen Hawking types?
Dennett points out the problem with that argument: humans possess (we have evolved) a completely different cognitive faculty that cats don’t possess. We have the ability to think abstractly. We have the ability to run simulations in our minds and imagine various futures and outcomes (we can run scenarios). We can think symbolically and manipulate symbols (words, numbers, musical notation, languages of all sorts) in infinite numbers of configurations (why infinite? because we can also invent new symbols). In short, human beings can perform abstract mental operations.
Cats have a different relationship with symbols.
This is not to say that cats will never evolve symbolic cognition, or that the human brain has stopped evolving. But once we possess the imaginative faculty, once we evolve the ability to perform abstract mental operations, once the cat is out of the bag (so to speak) then there can exist no idea that by its very nature is off limits to us. Sure, some areas are difficult to contemplate. Quantum mechanics falls into this category. Quantum mechanics is entirely outside of our range of sensory experience (as human beings). It’s counter-intuitive; it doesn’t necessarily make sense. But this doesn’t mean we can’t think about it, and imagine it, and create analogies about it, and perform quantum calculations, and conduct quantum level experiments. Of course we can.
I believe Dennett makes this argument in Freedom Evolves (but I don’t have it handy to check — it might be in Darwin’s Dangerous Idea).
I’m not saying that humans are the “end of the line” or the “peak of the pyramid.” It’s possible, even probable, that our descendants (biological or cyborg or virtual) will be smarter than us. It’s also likely that the future of evolution (and I mean evolution in the broadest sense) holds “level jumps” that will change the very nature of reality (or rather, add layers). Perhaps our descendants (or another group’s descendants) will be able to manipulate matter with their minds — Akira style. Now that would change things up.
Even the polarphant must obey the rules of Darwinian evolution.
My point is that we should question the idea that superhuman intelligence can even exist. Certainly superhuman something-or-other can exist, but intelligence and consciousness are the wrong vector to examine. Sure, it’s probable that something out there (either elsewhere in the galaxy, or in the future) is or will be smarter and/or more aware/sophisticated than we are, but I question the idea that an entirely different order of cognition can exist. The cognitive space is like the chemistry space; there is not an entirely different set of elements somewhere else in the universe (or in the future or in the past). It’s all chemistry: hydrogen and helium and lithium and so forth. Same for the quantum physics space: once we have all the quarks and gluons figured out on our end, we can surmise that it’s pretty much the same stuff everywhere. Same for the biological space — of course not every animal in the universe is going to have a genetic code sequenced out of adenine, cytosine, guanine, and thymine, but I’m guessing the rules of Darwinian evolution are universal. The same is true for cognition/intelligence/consciousness — it’s a space that includes manipulating abstract symbols, imagining and simulating possible futures, performing calculations, and being aware of one’s own perceptions/thoughts/emotions/identity (meta-awareness or self-consciousness). Of course you can divide up the cognition/consciousness space into various developmental sub-levels (Ken Wilber is a big fan of this), but I don’t buy the idea that there are vastly different orders of cognition and consciousness that exist somewhere out there, in the realm of all possibility.
A very large truck ... but still a truck.
The other problem with Premise #2 is the idea that making something bigger or faster changes its nature or function. If you increase the speed of a computer, then it can do what it already does much more quickly. With the right programming, for example, a computer can explore a logical decision tree and look for a certain outcome; thus computers can be programmed to be extremely good at chess. A very large network is just that — a big network — it can facilitate communications among billions of people and quasi-intelligent agents (bots, computer viruses, and so forth). But it doesn’t become something else just because you make it bigger or faster.
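The chess point above can be made concrete with a toy minimax search. This is a hypothetical sketch (not any real engine's code, and real engines add pruning and evaluation heuristics on top): the algorithm explores a decision tree looking for the best guaranteed outcome. Faster hardware lets it search deeper, but the procedure itself never changes — which is exactly the point.

```python
def minimax(node, maximizing=True):
    """Return the best score guaranteed from this position.

    A 'node' is either a number (the score of a finished game)
    or a list of child positions left to explore.
    """
    if isinstance(node, (int, float)):  # leaf: a terminal position's score
        return node
    # Recurse into each child, alternating whose turn it is.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree: the maximizing player picks a branch,
# then the minimizing opponent replies with the worst case for us.
tree = [[3, 12], [2, 4], [14, 5]]
print(minimax(tree))  # → 5: the best score the maximizer can force
```

A faster computer running this same code just handles a deeper `tree` in the same amount of time; nothing about its nature changes.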
New functionality does not emerge unless new structures emerge. In nature, new structures can emerge via the process of evolution. In the realm of technology, new structures and functions are designed, or they evolve out of systems that are designed. We’re not going to see spontaneous intelligence (superhuman or not) emerge from the internet unless we turn the internet into a giant evolution simulator. You could of course argue that it already is, but if so, the evolving agents are funny cat videos and naked lady pictures. It’s memetic evolution; the funniest or sexiest or most heart-warming videos and pictures and posts thrive (get reposted/replicated) and the more complicated long-winded posts (like this one) languish in obscurity. It’s not the kind of network that is going to spontaneously generate superhuman intelligence.
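The memetic-evolution point can be sketched as a toy simulation — purely illustrative, with made-up "shareability" scores: each generation, posts are reposted in proportion to how shareable they are, and the long-winded essay gets squeezed out by the cat videos.

```python
import random

def next_generation(population, rng):
    """Resample the meme population, weighting each by its shareability."""
    weights = [share for _, share in population]
    return rng.choices(population, weights=weights, k=len(population))

rng = random.Random(42)  # fixed seed so the toy run is repeatable
# (meme, shareability) pairs; the scores are invented for illustration.
population = [("funny cat video", 0.9), ("lolcat", 0.8),
              ("long-winded blog post", 0.05)] * 10

for _ in range(20):  # twenty rounds of reposting
    population = next_generation(population, rng)

survivors = {name for name, _ in population}
print(survivors)  # which memes remain; the essay is usually long gone
```

What this selects for is catchiness, not intelligence — which is the argument: a network like this optimizes lolcats, not minds.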
Only the strongest (lolcat) will survive.
Premise #3: The emergence of superhuman intellect will result in a radical transformation of the world.
Smart people, rather myopically, tend to take this idea for granted. Of course super-intelligence will be super important!
Historically, extreme intelligence only amounts to something when it is paired with other human qualities, like ruthless ambition, innovative inventiveness, disciplined practice, or preternatural persistence (Thomas Edison, for example, had all of those qualities). Look around — don’t we all know a shut-in uncle who got a perfect score on his SATs? Or an unemployed, weed-dealing neighbor with a PhD in Semiotics? Intelligence is a nice thing to have, but on its own it’s just a brain burning brightly — until it’s all burned up.
Can you read? Thank Johannes.
When extreme intelligence is paired with motivating factors, the world does get changed. Gutenberg’s movable type printing press has proved influential, to say the least. The ambitious work of Thomas Edison and Nikola Tesla gave us cheap, universally available electricity, long-burning light bulbs, and dozens of other important inventions. Bill Gates, Steve Jobs, The Woz, and many others ushered in the era of personal computers. Maybe one day we’ll have a particularly ambitious A.I. contribute a new mobile gadget or something. But FTL travel? Teleportation? Singularity-level tech? I don’t think so.
Look at the A.I. curve. It’s much different from the processor speed curve. The latter is going straight up; the former goes up and down in fits and starts. The most promising approaches to A.I. are those that are attempting to reverse engineer the brain, and how the brain learns (artificial childhood). Maybe, if those go really well, we’ll get an artificial inventor who will invent cool stuff. But maybe we’ll get an A.I. that majors in Semiotics, proves unemployable, and deals weed for a living.
This post is getting too long, and I don’t want to completely doom its chances of reproductive success. I’ll save the rest of my thoughts on this subject for Part II, which will include:
- When and where the real Singularity happened
- Why I might be wrong (and in what way)
- Vernor Vinge’s response