Blah blah blah singularity blah blah machine AI blah blah the world will undergo a paradigm shift, it’s coming, all bow down before the mighty new technologies that will change humanity forever. The problem I have with talk of the technological singularity is not that it doesn’t make sense, and not that I don’t believe that technological advancement is indeed rapid, accelerating, and world-changing, but that we have somehow invented a symbol of vast but actually rather vague significance. I don’t think the “singularity” is a useful idea. I think it’s a buzzword to some, and a religion to others.
For what makes Futurology (capitalization mine) really, actually different from a belief that something momentous will happen in 2012, when the Mayan calendar wraps around? Not a lot, as far as I can tell. And now it turns out that two religious scholars have concluded exactly the same thing, in a 2008 paper in the Journal of Contemporary Religion:
Futurology-as-religion has charismatic leaders, authoritative texts, mystique, and a fairly complete vision of salvation. Futurology is, in effect, a new religious movement (NRM).
Let’s break this down a little further. How will we recognize when the “singularity” occurs? Some accounts speak of a period of “unprecedented technological progress” or an exponential growth in computing power, but we’ve been seeing that for 50 years. Or it is described as a point beyond which change is so rapid that prediction is impossible, but prediction more than a few years into the future is impossible anyway, if for no other reason than the butterfly effect. It’s not that I dispute the core argument that technology will continue to alter humanity in nearly unrecognizable ways. In fact, I find many of the future technologies discussed by the Singularists to be quite plausible, including nanotechnology, better AI, and greatly extended human life — there’s lots of serious research going on in all these fields. And I also suspect that we will continue to live through a period of accelerating technological capability, because it is the nature of technology to build upon itself. But why must there necessarily exist some special point? And why must all of these technological transformations necessarily be, well, good?
Even quantitative exponential growth in computing power (or other measurable human capacities) doesn’t imply a singularity. Exponential growth accelerates endlessly, but it is never infinitely fast: an exponential curve is finite at every point in time, with no special moment and no vertical asymptote.
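To make the mathematical point concrete, here is a small illustrative sketch (the doubling period and blow-up time are arbitrary numbers chosen for illustration, not claims about real technology): an exponential curve is finite at every moment, whereas a hyperbolic curve of the form 1/(t_c − t) has an actual singularity, a finite time t_c at which it diverges.

```python
def exponential(t, doubling_period=2.0):
    """Exponential growth: doubles every `doubling_period` years.
    Finite at every finite t -- there is no special point."""
    return 2 ** (t / doubling_period)

def hyperbolic(t, t_c=50.0):
    """Hyperbolic growth: diverges at the finite time t_c.
    This is what a genuine mathematical singularity looks like."""
    return 1.0 / (t_c - t)

# The exponential is huge but finite even far into the future...
print(exponential(1000))          # large, but a perfectly ordinary number
# ...while the hyperbola blows up as t approaches t_c = 50.
for t in (10.0, 49.0, 49.9, 49.99):
    print(t, hyperbolic(t))
```

A curve has to look like the second function, not the first, before talk of a “singularity” is even mathematically apt.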
The only claim that seems at all concrete, the only thing that might give a definite date to the singularity, is the moment when a machine becomes smarter than a human. Such a machine, it is claimed, could improve on itself in a recursive and accelerating fashion, rapidly exploding up to incomprehensible intelligence levels and coming to rule the universe. Surely, this would change history in Godlike ways.
Except that nobody knows what machine intelligence is. Or how we’d recognize one if we met it. The word “intelligence” suggests that one day the computer would wake up and talk to us (presumably, through IM) but this is mere metaphor. (Kurzweil et al. also speak of replacing neurons with hardware or software to produce a synthetic human brain, but that would be a re-implemented human intelligence.) The phrase “machine consciousness” is even less useful, because we can’t even define the word “consciousness” for humans.
Nobody knows what the words used to describe the singularity actually mean.
If no one can specify criteria for noticing when this singularity has actually occurred, I argue that it doesn’t exist even in a theoretical, conceptual sense. In a practical sense it’s therefore no better than Nostradamus, or 2012, or tea leaves. What’s left in the concept is merely belief: belief that somehow, somewhen, something big and important is going to happen. The End of The World (as we know it). The Ascent to Paradise. Living forever in the consciousness of the machine. Apocalypse. Salvation.
All hail the prophet Kurzweil.