Many scientists will talk about an upcoming event called a “singularity.” This event, championed by the very brilliant Ray Kurzweil, is the point at which machines become smarter than humans. And once machines are as smart as or smarter than humans, they can design their own improvements at an incredibly accelerated rate. Whether this singularity will happen, and if so what exactly might happen, is a point of significant debate among scientists and other forward thinkers.
I believe that this singularity will happen. And I believe that it will be absolutely terrible.
I believe that the singularity, this point where machine intelligence surpasses human intelligence, will eventually arrive. Perhaps not quite within 30 years, as Kurzweil predicts, but it will arrive. Prediction and heuristic algorithms are growing constantly more powerful, allowing computers to extrapolate from incomplete data to make predictions. Even today, Google can take a search string and, rather than simply returning a best-hit result, integrate keywords, linked phrases, and other information into a more holistic guess at what the searcher is after. It seems a sensible conclusion that this will eventually grow into at least an approximate facsimile of human thought, with a trillion times the background information and references to draw upon for support.
However, unlike Kurzweil, I am pretty sure that this technological singularity is going to prove to be incredibly frustrating.
The internet, for example, is an incredibly disruptive tool that has led to the rise of countless new opportunities. Yet it also brought new problems and conflicts: net neutrality fights, Comcast-Time Warner oligopolies, and growing concern over personal security and privacy in an increasingly digital world. All of these problems tag along with this great breakthrough, like remoras attached to a shark.
Even today, in class, we debated Eli Lilly's release of synthetic human growth hormone (HGH), which allowed short children to be treated and to grow to a height more comparable to their peers. This treatment ran $20,000 to $40,000 per year, mind you. That immediately raised questions of inequality and the growing divide between rich and poor.
Now, how will people respond to the option to upload a brain, to create godlike robotic bodies, to find new and inventive ways to cheat death? (How much does one of those robotic bodies cost, anyway?)
One of the wealthiest and most powerful nations in the world cannot even provide an acceptable health care system to its citizens. Introduce the option to purchase lab-grown organs or brain-scanning nanites, and I cynically imagine that the divide among the populace will only widen further.
We are approaching the ability to sequence the human genome for a thousand dollars or less, yet six in ten Americans still don’t realize that ordinary tomatoes contain genes (now that’s scary).
The singularity, a huge leap forward in innovation and discovery, will open up amazing new abilities that previously were believed to be squarely in the domain of miracles. But that doesn’t mean that they won’t immediately be covered with a fine grime of human pettiness, price gouging, misplaced anger and distrust, and pure dumbfounded incomprehension.
Think about when you had to teach your grandmother to send emails. Now, try to imagine explaining to her that cell-sized computers are going to create a digital backup of her brain to transfer into a robotic artificial intelligence.
Imagine the cries of “class warfare” when the ability to create real-life save points is released – for the low, low cost of $7 million per year in equipment, processing power, implants, and data storage.
The singularity is coming, and it’s going to be terrible.