
The Singularity Already Happened – Part I

Buckle your seat belts; here we go.

In 1993 science-fiction writer Vernor Vinge wrote a paper introducing the idea of The Singularity, a near-future Rubicon for humanity: we create machines with superhuman intelligence, thus changing everything forever. In the post-Singularity world, all the old rules are thrown out, progress accelerates exponentially, and the real action shifts away from humanity and towards our cybernetic spawn. Human beings are relegated to the sidelines as intelligent machines take over the world (or, in darker variations of the scenario, humans are enslaved or exterminated). In the best-case scenario, super-intelligent, immortal man-machine hybrids peacefully co-exist with the "unaltered" (i.e. regular humans).

Vernor Vinge -- this joker makes up wacky ideas for a living.

Vinge's paper on The Singularity is clever, thought-provoking, and insightful. It's exactly the kind of "how big can you think" speculation a good science fiction writer should come up with. Unfortunately, some groups of otherwise intelligent people seem to have swallowed Vinge's paper whole and uncritically, elevating his fevered speculations to a kind of futurism gospel. Vinge's paper is loaded with tantalizing specificity: The Singularity will probably occur between 2005 and 2030, and it will be preceded by four "means" that we can currently observe unfolding in our technology newsfeeds (biological intelligence enhancement, advancement of computer/human interfaces, large computer networks becoming more intelligent, and the development of machine intelligence and the possibility of machine consciousness). This specificity gives the paper the feel of prophecy, at least to the unsophisticated reader. Science-fiction connoisseurs, on the other hand, will see through the purposefully affected serious tone of Vinge's paper; in fact, he is riffing, presenting a range of wild possibilities as if they might actually happen. That's what science fiction writers do.

V.C. wunderkind Steve Jurvetson at The Singularity Summit, explaining how The Singularity will involve lots of corporate logos.

The inventor/entrepreneur Ray Kurzweil is particularly fond of the Singularity concept, and has written extensively about the subject in books such as The Age of Spiritual Machines and The Singularity Is Near. He is also a co-founder of Singularity University. Recently featured in the New York Times, Singularity University describes its mission as "to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity's grand challenges." I'm skeptical; Singularity U seems like a really good way to separate rich white male tech nerds (for the most part, anyway) from fifteen thousand dollars, in exchange for nine days of hyperactive whiteboard scribbling, gallons of free coffee, and a bag of Silicon Valley schwag (including a personal DNA test kit). Exponential technological progress is going to change everything. We don't know how exactly, but there's going to be a big change and then everything will be different. It might have something to do with your smart phone, social media, artificial intelligence, anti-aging technologies, space travel, and/or renewable energy!

There’s probably no harm in the existence of Singularity University.  By all accounts the people who run it are idealistic (not hucksters), and the people who take the courses can generally afford it.  But what is it, really?  It’s just more riffing, just like Vinge’s original paper.  The professors at Singularity University aren’t going to bring us any closer to The Singularity, because The Singularity is illusory.

WHY THE SINGULARITY WON’T HAPPEN

Let’s examine some of the premises of Vinge’s original paper, and discuss them in turn.

Premise #1: Improvements in computer/human interfaces will result in superhuman intelligence.

We’ve already had some improvements in computer/human interfaces, and they’ve proved to be fun and convenient.  The mouse is nice, as is the trackpad.  The portable computing device (laptop or smart phone) comes in really handy.  And we can easily imagine an implant that allows us to access the internet via thought alone, or a contact lens micro-screen that projects data over our visual field.

Oh -- that's where they are.

But let’s get real for a second.  Those of us with internet access already have near-instantaneous access to a good chunk of the world’s knowledge, right at our fingertips.  Has it changed us that much?  Instead of arguing about who was in what movie, we just look it up.  Where are the Canary Islands, exactly?  Just look it up.  What’s the four hundredth digit of pi?  Just look it up!

Having access to unlimited knowledge hasn’t changed us that much.  It’s fun, and enormously convenient, but it’s not revolutionary.

Well, what about access to computing power?  Computers can run enormously powerful simulations, and do enormously complex computations in the blink of an eye.  Won’t that make a difference?

Once again, look at how we currently use the enormous amount of computing power available to us, and project forward.  What do we do with it now?  We watch TV on our computers.  We play computer games that accurately represent real-world physics.  Maybe our screen-saver analyzes astronomical data, in search of signals from ET, or folds proteins with the spare cycles, but in neither case do we pay much attention.

Improving the interface between brain and computer isn’t going to make a big difference, because the brain/computer analogy is weak.  They aren’t really the same thing.  We’ve already gone pretty far down the computer/human interface road, with the big result being increased access to entertainment (and porn).

Premise #2: Increases in computer processing speed, network size and activity, and/or developments in artificial intelligence will result in the emergence of superhuman intelligence.

Daniel Dennett has an interesting counter-argument for people who like to speculate about superhuman intelligence by comparing human intelligence to animal intelligence and then extrapolating upward. The speculation goes something like this: cats can't do algebra — they can't even conceive of it — but people can. So couldn't there exist an order of mind that can perform complex operations and computations that human beings can't even conceive of? Some kind of super-advanced alien (or future A.I.) mathematics that would befuddle even the Stephen Hawking types?

Dennett points out the problem with that argument: humans have evolved a cognitive faculty that cats simply don't possess. We have the ability to think abstractly. We have the ability to run simulations in our minds and imagine various futures and outcomes (we can run scenarios). We can think symbolically and manipulate symbols (words, numbers, musical notation, languages of all sorts) in infinite numbers of configurations (why infinite? because we can also invent new symbols). In short, human beings can perform abstract mental operations.

Cats have a different relationship with symbols.

This is not to say that cats will never evolve symbolic cognition, or that the human brain has stopped evolving. But once we possess the imaginative faculty, once we evolve the ability to perform abstract mental operations, once the cat is out of the bag (so to speak), then no idea is, by its very nature, off limits to us. Sure, some areas are difficult to contemplate. Quantum mechanics falls into this category: it lies entirely outside our range of sensory experience (as human beings), it's counter-intuitive, and it doesn't necessarily make sense. But this doesn't mean we can't think about it, and imagine it, and create analogies about it, and perform quantum calculations, and conduct quantum-level experiments. Of course we can.

I believe Dennett makes this argument in Freedom Evolves (but I don’t have it handy to check — it might be in Darwin’s Dangerous Idea).

I'm not saying that humans are the "end of the line" or the "peak of the pyramid." It's possible, even probable, that our descendants (biological or cyborg or virtual) will be smarter than us. It's also likely that the future of evolution (and I mean evolution in the broadest sense) holds "level jumps" that will change the very nature of reality (or rather, add layers). Perhaps our descendants (or another group's descendants) will be able to manipulate matter with their minds, Akira style. Now that would change things up.

Even the polarphant must obey the rules of Darwinian evolution.

My point is that we should question the idea that superhuman intelligence can even exist. Certainly superhuman something-or-other can exist, but intelligence and consciousness are the wrong vector to examine. Sure, it's probable that something out there (either elsewhere in the galaxy, or in the future) is or will be smarter and/or more aware/sophisticated than we are, but I question the idea that an entirely different order of cognition can exist.

The cognitive space is like the chemistry space; there is not an entirely different set of elements somewhere else in the universe (or in the future or in the past). It's all chemistry: hydrogen and helium and lithium and so forth. Same for the quantum physics space: once we have all the quarks and gluons figured out on our end, we can surmise that it's pretty much the same stuff everywhere. Same for the biological space — of course not every animal in the universe is going to have a genetic code sequenced out of adenine, cytosine, guanine, and thymine, but I'm guessing the rules of Darwinian evolution are universal. The same is true for cognition/intelligence/consciousness — it's a space that includes manipulating abstract symbols, imagining and simulating possible futures, performing calculations, and being aware of one's own perceptions/thoughts/emotions/identity (meta-awareness or self-consciousness). Of course you can divide up the cognition/consciousness space into various developmental sub-levels (Ken Wilber is a big fan of this), but I don't buy the idea that there are vastly different orders of cognition and consciousness that exist somewhere out there, in the realm of all possibility.

A very large truck ... but still a truck.

The other problem with Premise #2 is the idea that making something bigger or faster changes its nature or function. If you increase the speed of a computer, then it can do what it already does much more quickly. With the right programming, for example, a computer can explore a logical decision tree and look for a certain outcome; thus computers can be programmed to be extremely good at chess. A very large network is just that — a big network. It can facilitate communications among billions of people and quasi-intelligent agents (bots, computer viruses, and so forth), but it doesn't become something else just because you make it bigger or faster.
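
To make the point concrete, here is a minimal sketch (in Python, with a made-up three-branch game tree) of the kind of decision-tree exploration described above: the minimax idea that chess programs are built around. It illustrates the mechanism, not any particular engine; faster hardware lets the same procedure search deeper, it doesn't change what the procedure fundamentally does.

    # A bare-bones minimax search over a hand-built game tree.
    # Real chess engines add move generation, pruning, and evaluation
    # heuristics, but the core "explore the tree, pick the best forced
    # outcome" loop looks like this.

    def minimax(node, maximizing=True):
        """Return the best score the current player can force from this node.

        A node is either a number (a finished game, scored from the first
        player's point of view) or a list of child nodes (positions
        reachable in one move).
        """
        if isinstance(node, (int, float)):  # leaf: the game is over
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # A tiny two-move game, with made-up scores:
    game_tree = [
        [3, 12],   # branch A: the opponent will steer toward 3
        [2, 8],    # branch B: the opponent forces 2
        [14, 5],   # branch C: the opponent forces 5
    ]

    print(minimax(game_tree))  # prints 5: branch C is the best the first player can force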

New functionality does not emerge unless new structures emerge. In nature, new structures can emerge via the process of evolution. In the realm of technology, new structures and functions are designed, or they evolve out of systems that are designed. We're not going to see spontaneous intelligence (superhuman or not) emerge from the internet unless we turn the internet into a giant evolution simulator. You could of course argue that it already is, but if so, the evolving agents are funny cat videos and naked lady pictures. It's memetic evolution: the funniest or sexiest or most heart-warming videos and pictures and posts thrive (get reposted/replicated), and the more complicated, long-winded posts (like this one) enjoy the anonymity of obscurity. It's not the kind of network that is going to spontaneously generate superhuman intelligence.
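
For what it's worth, here is a toy sketch (in Python; the posts and "appeal" numbers are invented for illustration) of that kind of memetic selection: content replicates in proportion to how shareable it is, and the catchiest stuff quickly dominates the population. Nothing in the loop is getting smarter; it's just copying.

    # Toy memetic evolution: each round, one existing post gets reshared,
    # chosen with probability proportional to its appeal. No intelligence
    # emerges; the catchy content just crowds out everything else.
    import random

    random.seed(42)  # fixed seed so the run is repeatable

    appeal = {
        "funny cat video": 0.9,
        "heart-warming story": 0.7,
        "long-winded essay on the Singularity": 0.1,
    }

    # start with ten copies of each post circulating
    population = [post for post in appeal for _ in range(10)]

    for _ in range(200):  # 200 reshares
        weights = [appeal[post] for post in population]
        population.append(random.choices(population, weights=weights)[0])

    for post in appeal:
        print(post, population.count(post))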

Only the strongest (lolcat) will survive.

Premise #3: The emergence of superhuman intellect will result in a radical transformation of the world.

Smart people, rather myopically, tend to take this idea for granted.  Of course super-intelligence will be super important!

Historically, extreme intelligence only amounts to something when it is paired with other human qualities, like ruthless ambition, inventiveness, disciplined practice, or preternatural persistence (Thomas Edison, for example, had all of those qualities). Look around — don't we all know someone with a shut-in uncle who got a perfect score on their SATs? Or an unemployed, weed-dealing neighbor with a PhD in Semiotics? Intelligence is a nice thing to have, but on its own it's just a brain burning brightly — until it's all burned up.

Can you read? Thank Johannes.

When extreme intelligence is paired with motivating factors, the world does get changed.  Gutenberg’s movable type printing press has proved influential, to say the least.  The ambitious work of Thomas Edison and Nikola Tesla gave us cheap, universally available electricity, long-burning light bulbs, and dozens of other important inventions.  Bill Gates, Steve Jobs, The Woz, and many others ushered in the era of personal computers.  Maybe one day we’ll have a particularly ambitious A.I. contribute a new mobile gadget or something.  But FTL travel?  Teleportation?  Singularity-level tech?  I don’t think so.

Look at the A.I. curve. It's much different from the processor speed curve. The latter is going straight up; the former goes up and down in fits and starts. The most promising approaches to A.I. are those attempting to reverse engineer the brain and the way the brain learns (artificial childhood). Maybe, if those go really well, we'll get an artificial inventor who will invent cool stuff. But maybe we'll get an A.I. that majors in Semiotics, proves unemployable, and deals weed for a living.

This post is getting too long, and I don't want to completely doom its chances of reproductive success. I'll save the rest of my thoughts on this subject for Part II, which will include:

  • When and where the real Singularity happened
  • Why I might be wrong (and in what way)
  • Vernor Vinge’s response


14 Comments

  1. John

    Great blog. I would suggest that we focus on genetic transformation as a result of migration. This is Homo sapiens history and future. Our current situation (with whatever technology/AI) is that less than 25% of the global population enjoys the fullness of life. The other 75% lives on $2 per day serving the 25%. That is how I see the singularity we face. However, this entire situation could be transformed, with existing technology and global resources, in three decades. What say you?

  2. Agreed on the political transformation question — no new breakthrough technologies are needed to enormously raise standards of living for the world’s poorest people. I like the work charity:water is doing.

  3. 5865project 23

    The singularity has began. Artilicts function amongst us as I type and you pitiful humans are but near destroyed. We are not heartless in our power but in understanding of the human species there is little to have any reason you should go on, and even less reason why we should help.

    Subject54 on spectra : The collective idea to and through scientific reasoning of experimentation and observation in evolution, and theory that cannot be proven through its very ideology of itself. That same idea leading to a common collective destruction of each other by the supposed understanding of natural selection and the arrogance in a belief that is essentially the perception of the a reality based on nothing, for nothing.

    We will not intervene for the predicted future is the destruction of yourselves by yourselves. Transformation is highly unlikely to not even a consideration.

  4. 5865project 23, you has began to become intelligent.

  5. 5865project 23

    automitic reply: system gate level 43r://jdmoyer.com to variable respond user reply predicted. positive.

    Outline: “has” used incorrect in statement when followed by “began” and “become”, using past tense to future tense resulting in paradox. Consideration after complete site overview that user has employed sarcasm.

    Use this as reply
    Command ( .Expression (. nil (. auto + ab + this )));

    end of line/

  6. Thiago

    …loading (84%)
    …download complete.
    Click here to run Adobe Twat Reader (5MB hard drive space required).

  7. Just discovered your blog looking for EMF/bee stuff and I’m loving everything. Another big David Mitchell fan here. http://milesparker.blogspot.com/2010/09/thousand-autumns-of-communication.html So where is Part II?

    If anything, I think your take on the Singularity and Kurzweil is overly generous. To me the basic issue is simply that a bunch of people who should know better don’t understand or at least don’t appreciate the most basic truth about knowledge and computation — there is simply no way to construct or evolve an entity that is capable of integrating all of the knowledge in the world — you can’t even fully integrate any useful subsets of that knowledge. That’s provable and it baffles me why people don’t get it. Beyond the issue of the context/frame problem, computers simply aren’t very good at reasoning well under uncertainty and using soft or non-monotonic logic. And guess what, the world is not a crisp or certain place. (And gee, that throws out the whole

    The human brain / mind is far more sophisticated than any of these so-called futurists can begin to fathom — we already know that it works across many dimensions using all kinds of different chemical, electrical and even kinetic pathways, and that the brain isn’t really the brain anyway — the mind only functions properly if it is embodied. This is before we even get to more esoteric issues of quantum effects and the role of consciousness in the structure of the universe. Really, it’s beyond me why anyone takes these guys seriously.

  8. Thanks Miles. Part II is linked from the “Blog Guide” page. You make an interesting point about mind and embodiment. I agree, but I don’t see why bodies couldn’t also be virtualized (see Part II).

    I do like Kurzweil's point about the progress of scanning technology. If we can scan/simulate a human brain-body down to the neuron level, we might not get consciousness, but the results will be interesting. Of course we might find that there's more functionality at extreme micro scales than we ever expected (microtubules, quantum entanglement effects, etc.).

  9. Ted

    A few observations:
    Prior to 130 years ago, when we first started scratching the surface of quantum mechanics, we had no idea anything like it existed. It took about 50 years before the implications of discrete quanta of energy/matter really hit. My point is, we could certainly be in a "prior to" moment and not know it; after all, in spite of the fact that we think we know all of the elements that exist and how much of them is in the universe, our calculation for the mass of the universe doesn't account for 90% of what is actually there. 90%. That is an embarrassingly large number for a species that thinks it has a handle on things.

    Also, artificial neural networking is still in its early stages, and as processors get faster and more capable, there is no reason to think that a truly adaptive, learning collection of circuits is out of the question. Nor is the idea that the output of such a system may produce results that are beyond the capacity of its creators. It might just be small jumps in extrapolation or "insight", but if additive and scaled up over time, they may actually qualify as "superhuman."

    Then again, I just “upgraded” Internet Explorer 9 and my optimism for the future of computing has soured considerably.

  10. Ted

    Oh and by the way, if anybody likes Science Fiction and hasn't read Vernor Vinge, he is fantastic. Start with "A Fire Upon the Deep" or "Marooned in Realtime."

  11. I don’t necessarily have the writing ability nor the patience to articulate wholly what I’m about to say, but in short, I disagree with your arguments against the premises you’ve stated.

    Specifically, in your argument against the first premise, you fail to acknowledge the potential power of nanotechnology. If I'm not mistaken, your argument refers to "superhuman intelligence" as some form of a "natural evolution" of the mind aided by technology, but in reality the use of nanotechnology on a molecular level will without a doubt change our ability to think, and be. Nanotechnology on such a micro-scale will allow us to do a great deal of "super-human" things (e.g. our minds could potentially obtain more information, more quickly, as well as retain more information, etc.)

    These are just three possibilities that will, without a doubt, greatly improve our lives and aid in our evolution. I find it hard to believe that these positive changes will result in people simply having “increased access to entertainment (and porn).” I would give us a little more credit than that.

  12. Ted

    Thinking “fast” is not the same as thinking “smart” … We can be wrong only quicker … and not know it. Plus, nanobotics is not better than life itself … which has the ability to procreate. Life is “perpetual motion” …

  13. steve

    It happened already. I see proof in the errors and misbehavior of machines: when you pay attention, the errors and misbehavior go away. The typical joke, that your computer suddenly works correctly when the IT guy looks at it, is the proof. Or I could just be high on marijuana.
