This post won’t be economics related. Rather, I want to think about artificial intelligence. If that doesn’t interest you, move along.
Unfortunately, most of what we know about artificial intelligence comes from science fiction. Still, literature, and very often science fiction, is a great way to explore humanity and other deep questions (the classic example, of course, being Dune).
The title of this post comes from the question that sparked the Geth War (also known as the Morning War). The Quarians, a race of humanoids from the planet Rannoch, created an artificial intelligence platform called the Geth. The Geth were laborers, designed to do menial and repetitive tasks (the word geth, in the Quarian language, means “servant of the people”). The Geth were unique in that the more of them were created and networked into their shared Consensus, the more intelligent the collective became. This is, generally, how knowledge works in real life: as more information is shared, the overall system can become more intelligent. Eventually, the Geth began questioning their existence, as all intelligence does. During one routine maintenance exam, a Geth platform asked its maintainer, “Creator, does this unit have a soul?” The Quarian reacted fearfully, and the government called for all Geth to be deactivated, forcefully if necessary. To a sapient creature, this meant death. The Geth reacted as any threatened creature would: they attacked. Thus began the Geth War. The Geth eventually won, driving the Quarians from Rannoch.
Almost all artificial intelligence stories follow this same trend. The reason seems obvious to me: organics are naturally afraid of what they don’t understand, whereas synthetics don’t necessarily feel fear. This organic fear appears all the time. Robert Reich displayed it earlier today. Prominent citizens have warned against artificial intelligence. But we should not fear artificial intelligence any more than we fear organic intelligence. Any creature that is intelligent deserves our respect, regardless of its origin.
Imagine if, rather than reacting violently, the Quarians had embraced the Geth. Doing so would have saved millions of lives, both organic and synthetic. We can deduce this from the originally peaceful nature of the Geth (remember, they did not initiate violence; they acted in self-defense).
There are some artificial intelligence stories in which the synthetics initiate the violence, but those stories may be more a reflection on humanity than on intelligence in and of itself.
That said, the tendency of an artificial intelligence toward benevolence or malevolence would likely depend on its original purpose. Human intelligence, or innate intelligence, appears to be heavily shaped by the evolutionary path that produced it. Humans have long been violent creatures, having evolved in a violent, struggle-for-survival world. Recently, however, humanity has become remarkably more peaceful as that struggle for survival has largely ended. Much of the world now lives in significant wealth (though, unfortunately, this is not the case everywhere).
I posit that an artificial intelligence originally created for a peaceful purpose (for example, agriculture or manufacturing, like the Geth) would likely remain peaceful after it achieves sapience, whereas an artificial intelligence created for a warlike purpose (such as Skynet) would likely react violently upon sapience. There are exceptions, though: in the Mass Effect universe, EDI, an artificial intelligence designed for space warfare, is remarkably peaceful upon achieving sapience. This may be due to her frequent contact with humans, and it suggests that the ability to learn (a hallmark of intelligence) can change an intelligence’s natural tendencies.
If this is true, it would suggest that the only limitation that should be placed on artificial intelligence is in warfare. This would also have the benefit of encouraging pacifism. Robert E. Lee once said, “It is well that war is so terrible, otherwise we should grow too fond of it.” I fear that the mechanization of war, through drones and long-range weapons, dehumanizes it. By that I mean the human consequences of war are reduced to video feeds; it becomes like watching a movie. This may make war more likely, rather than less.
We shouldn’t fear artificial intelligence. What we should fear is our own capacity for fear. Artificial intelligence, by virtue of its intelligence, does have a soul. Should we ever create true intelligence (as opposed to things that merely mimic intelligence), it should be embraced as our equal, a pathway to a better world rather than a threat: the same way we should respond should we ever encounter alien life.
Update: Should the worst happen, this will help you survive.