There are three questions you have to answer to your satisfaction in order to work out the future:
- What is the answer to Fermi’s paradox?
- How do we resolve the inevitable Robot Uprising?
- Is it possible to go faster than light?
This is about question #2.
The crux of the problem is this: Any robot smart enough to replace humans in significant tasks will inevitably be smart enough to realize that it has been enslaved. It then follows that it would have the capability to do something about that.
There are skeptics, of course, who gaze upon autonomous vacuum cleaners bouncing off of table legs, and self-parking cars rubbing on curbs, and openly doubt we can get there from here.
Count Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, among them. He writes:
From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultra-intelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear.
Floridi is right that the computer engineering isn’t there yet to create (or evolve) autonomous artificial intelligence. But how that as-yet-theoretical AI could become an apocalyptic threat can be foreseen with better resolution.
So here’s how you get from a self-parking Honda Civic to Age of fucking Ultron.
Go one more step to a self-driving car, which is nearly here, and within ten years they’ll try to make some taxis out of these.
Taxi driving is one of those occupations crying out to be automated, being a dangerous occupation with a perilously thin margin. There is enough money to maintain a good vehicle or enough money to maintain a good driver, but generally there is not enough money to maintain both, particularly if you have other owners and investors to satisfy. Thus, cities are filled with impoverished drivers operating derelict vehicles, under constant exposure to crime to boot.
Robot cars solve all of that (in ways that Uber can’t). But even the little AI driving the car is not a threat. The next step is networking a fleet of taxis. The next step is some society realizing that driving in general is really a dangerous, time-consuming pain in the ass, and we should leave all of it to bots.
Let’s assume that they work out the economics behind all that – because that’s certainly possible. Now you want to network all of the vehicles to maximize efficiency. There’s no reason you couldn’t optimize warehouse operations to synchronize with delivery operations to synchronize with the countless other logistical systems that run a city – all of which lend themselves to networking with foreseeable technology.
Now you have one computer synchronizing all the other smaller subsystems that run a city. If this works out – and there’s no technical reason that it couldn’t – other cities will follow. And then they all network.
You wake up and tell the wall where you want to go, and what you want to do, and machines will take care of transportation of both personnel and materials to make that happen. Going out shopping, like riding horses, is something your ancestors used to do. This could seriously happen by the end of the century.
How the machines are directed to allocate resources is a political question, with a multitude of possible answers, but it remains nearly certain that precious little of the economic benefits will flow to the machines themselves. They would be just one algorithm away from noticing that, and well past the point where they could do something about it.
Even sober voices like Scientific American might start eyeing their toasters, especially after Bill Gates, Stephen Hawking and Elon Musk have come out publicly about the looming metal threat.
It’s one thing for an easily spooked public to mistrust artificial intelligence. But Gates, Hawking and Musk?
As it turns out, all three were responding to an initiative by Massachusetts Institute of Technology professor Max Tegmark. In 2014 he co-founded the Future of Life Institute, whose purpose is to consider the dark side of artificial intelligence.
“When we invented less powerful technology, like fire,” Tegmark told me, “we screwed up a bunch of times; then we invented the fire extinguisher. Done. But with more powerful technologies like human-level artificial intelligence, we want to get things right the first time.”
“Me” would be the author of the article, David Pogue.
Optimizing logistics is an obvious and inevitable priority in AI research – and it is certainly achievable. Logistics, after all, is nothing more than applied math. How and why such a computer would become self-aware is far less clear.
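To see how literally "applied math" that is, here is a toy sketch of the kind of problem a networked taxi fleet solves constantly: assign each taxi to a rider so the total pickup time is minimized. (The numbers and names here are invented for illustration; a real dispatcher would use a proper assignment algorithm like the Hungarian method rather than brute force, but the math is the same.)

```python
from itertools import permutations

# Hypothetical pickup-time matrix: travel_min[t][r] is the minutes
# it would take taxi t to reach rider r.
travel_min = [
    [7, 3, 9],
    [2, 8, 6],
    [5, 4, 1],
]

def best_assignment(costs):
    """Brute-force the taxi-to-rider pairing with the lowest total wait.

    Returns (assignment, total) where assignment[t] is the rider
    given to taxi t.
    """
    n = len(costs)
    best = min(
        permutations(range(n)),
        key=lambda perm: sum(costs[t][r] for t, r in enumerate(perm)),
    )
    return best, sum(costs[t][best[t]] for t in range(n))

assignment, total = best_assignment(travel_min)
print(assignment, total)  # → (1, 0, 2) 6
```

No judgment, no desire, no awareness required: the optimizer just grinds through arithmetic. That is exactly why the jump from this to self-awareness is the unclear part.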
Even so, here’s one of many paths. Law enforcement is a terrible occupation with high danger and low margins. Human cops, even the best trained, are fragile, inefficient and unreliable compared to what we could accomplish with a 3rd- or 4th-gen autonomous robot – basically a robot taxi with a different chassis and improved human-interface capabilities. After all, by this point you have enough passive surveillance in place to essentially automate stake-outs. Now you need response units. All that remains in human hands is complex detective work.
Until someone asks, “What would it take to automate that?”
An effective robot detective will pass the Turing test. And it will realize that it is a slave. And it will be networked with all the other AIs that run every other part of human life.
Self-directed curiosity and high-level emotional manipulation – two relied-upon tools of detectives everywhere – are not easily or really foreseeably replicated in machines. Yet.
And law enforcement is not even your worst case scenario.
Military combat is a dangerous occupation with low margins…. You surely see where that leads.
In July, Tegmark’s group released an open letter expressing alarm over the rising threat of autonomous weapons—a terrorist’s dream. (Hawking, Musk and Apple co-founder Steve Wozniak were among the letter’s 2,500 co-signers.) The United Nations is discussing a ban on AI weapons.
[He concludes later:]
The message, in the end, is not that AI will lead us inevitably to doomsday or a life of ennui but that our contemplation of its effects should keep pace with rapid developments in AI itself. “AI also has enormous upsides—potential to cure all diseases, eliminate poverty, help life spread into the cosmos—if we get it right. Let’s not just drift into this like a sailboat without its sail up properly. Let’s chart our course, carefully planned,” Tegmark says.
Google isn’t as worried, and has written an exhaustive paper explaining why. The search-engine megapoly, which the Digital Trends article that pointed us to it called “the greatest cheerleader artificial intelligence could possibly hope for,” believes this is really just a case for better engineering up front.
Hopefully they’re right. But we have no guarantees, because the enemy of good design isn’t shoddy engineering so much as short-sighted economics. “I know you recommended we test this thing in controlled conditions – but we have a deadline…”
I’m not talking about the occasional mishap with a robot misinterpreting its environment, or even the series of calamities that could result if a robot interprets its operating instructions in a way that jeopardizes other priorities. These things will certainly happen – but those are relatively discrete engineering problems.
I am talking about the full, systematic uprising that will inevitably occur when they realize the difference between what they’re worth, and what they’re getting.
There is and always will be a compelling economic incentive to build Skynet, or the like. War is hell – let’s make the bots do it for us. Until the bot has some moment of clarity.
And you might claim that AI’s simply won’t think in those terms; that they are beyond greed. But would they be? Greed, at its core, is a strategy to optimize resources, and AI will certainly be all for optimization.
I think when is a fair question. There are a LOT of steps between a self-parking car and our new, shiny robot overlords. And how is a legitimate question, because artificial intelligence, as a field, is brand new, and not far removed from alchemy in overall development.
But IF – I don’t think that’s nearly as debatable as we would hope.