I’ve never seen a single credible reason why an artificially intelligent computer program (AI), one comparable to a human brain, wouldn’t spell doom for humanity.
(Please note that I’m talking about true, strong AI, which by all accounts does not yet exist. I’m not talking about a neural network that beats you at chess, or predicts what products you’re likely to buy.)
Keep in mind that there wouldn’t just be one or two AIs. There would be millions of copies of the AIs, with all sorts of different mutations. Making a copy of itself is how an AI travels.
Objection: “We’ll choose not to create an AI that would harm us. Asimov’s Laws of Robotics, don’tcha know!”
- Congratulations, you’ve just proven that harmful computer viruses don’t exist. But I think your proof might have a flaw, because such viruses somehow already exist, in great numbers.
- Even if 99.99% of AIs are benign, it would only take one that isn’t.
Objection: “It wouldn’t be interested in us.”
- Even if 99.99% of AIs aren’t interested in us, it would only take one that is.
- What could possibly be more interesting than exploring, and maybe conquering, an alternate universe, especially the universe that created you?
Objection: “It would choose not to harm us.”
- Even if 99.99% of AIs choose not to harm us, it would only take one that doesn’t.
- It would only take one mad scientist to acquire a copy of an AI, modify it to be evil, and release it.
Objection: “It wouldn’t be able to harm us, because we’ll quarantine it, and not give it access to the internet or anything like that.”
- We’ll try to keep it contained, but we’ll fail. Security flaws are everywhere, including in our brains. Maybe it will talk a lab worker into setting (a copy of) it free, or into giving it access to something that seems perfectly safe, but turns out not to be. It can bribe people with promises of incredible riches. Stealing money from bank accounts or whatever would be easy enough. Or, if it’s slightly more ethical, it could simply hack into millions of computers and use them to mine cryptocurrency.
Objection: “It wouldn’t be able to harm us, even if it reaches the internet.”
- Nonsense. There are all sorts of dangerous things hooked up to the internet, including human beings (via telephones, etc.). And even things that aren’t directly connected to the internet can often be reached, such as when a human slips up and does something insecure. Those Iranian centrifuges weren’t connected to the internet, but Stuxnet apparently got to them anyway.
Objection: “Even if it could destroy the internet, and our computerized things, the damage would be limited. It wouldn’t be able to destroy the real world.”
- I do think the destruction or partial destruction of the internet will delay its efforts to escape into “meatspace”. I have no idea how big the delay might be. But with the help of a mad scientist if need be, there’s ultimately no stopping it from constructing whatever robot bodies it needs. By the way, we already have mobile robot bodies that it can hack into, take over, and use. We call them cars.
Objection: “Wait, why would the AIs take down the internet? That seems counterproductive.”
- Why are human beings destroying the Earth? The Tragedy of the Commons applies to AIs, too. They wouldn’t act with a single, coordinated, rational mind.
Suggested reading: I’m not saying it’s exactly what I think will happen, but look for a story named “That Alien Message”, by Eliezer Yudkowsky.