This post diverges from my usual topic of theology to cover a philosophical question of importance to modern society.
The current discussion on the dangers of AI, as found in several recent news stories and commentaries, is way off base. The commentators propose, first of all, that AI will develop to the point of sentience, so that AI systems are making their own decisions. The danger, they say, is that the AI may decide to harm humanity or do away with us altogether.
In an interview with the BBC, Dr. Stephen Hawking made several comments along these lines:
“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate…. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Elon Musk issued a similar warning. In response to one Twitter user, he said that World War 3 “may be initiated not by the country leaders, but one of the AI’s if it decides that a preemptive strike is the most probable path to victory.” [CNBC]
And in any case, if the war were not initiated by an AI, the race for AI superiority could be the root cause of World War 3, he claimed: “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo (in my opinion).” [CNBC]
Musk was not specific as to why an AI race would trigger a war. Some researchers have suggested that an arms race in AI weapons could lead to war, and that such weapons, once deployed, would be difficult to turn off; they could get out of control and cause even greater harm.
But all of the above warnings rest on an unproven assumption: that AI can eventually reach a point of sentience, whereby the machine can reason abstractly and make real decisions, rather than simply follow its programming. There is no proof from science that such a point can be reached. Computers have become more and more powerful; the cellphone in your pocket outperforms most supercomputers from the year 2000. And yet there is no indication that sentience is developing, so sentience does not appear to be the result of gradual increases in computing power. It may be that consciousness or sentience is analogous to the halting problem: a type of problem that no computer can solve, no matter how powerful. This unproven assumption is the linchpin; remove it, and all of the above warnings fall apart.
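For readers unfamiliar with the halting problem mentioned above, Turing's classic diagonal argument can be sketched in a few lines of Python. The function names `halts` and `contrary` are illustrative, not from any real library; the point is that no correct, all-purpose `halts` can exist:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True if program(argument) would halt.
    The argument shows no such function can actually be written."""
    raise NotImplementedError("no total, correct halting oracle exists")

def contrary(program):
    """Does the opposite of whatever the oracle predicts about it."""
    if halts(program, program):
        while True:      # loop forever if the oracle says "it halts"
            pass
    return "halted"      # halt if the oracle says "it loops"

# Asking halts(contrary, contrary) produces a contradiction either way:
# if it answers True, contrary(contrary) loops; if False, it halts.
# Therefore the assumed oracle cannot exist.
```

This does not prove that sentience is unreachable by computers; it merely illustrates the kind of in-principle impossibility the author is suggesting, where more computing power cannot help.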
Another problem with the above theories is that, even if it were possible, it may take many years for AI systems to actually become sentient. These are unlikely, long-term dangers, if they can happen at all.
Real Short-Term Danger
The current state of AI is far from sentience. But deep learning and artificial neural networks are already available. Couple this computing power with the vast amount of data (“big data”) available about any population that makes frequent use of mobile phones, the internet, and various connected devices, and new capabilities arise.
What if AI were used to advise a politician, to shape his political positions, speeches, and decisions? Would he gain a significant advantage over his opponents? Perhaps he could not only correctly read public opinion, so as to win votes, but also influence it. This capability may already be at hand, though I don’t believe it has been used yet.
What if a socio-political group, whose views are in the minority in modern culture, were to use the same approach to influence public opinion, over the course of a decade or more, to raise their view to majority status? Public opinion has clearly been influenced, very substantially, over the past few decades by the news and entertainment industries. What if an organization raised the money to attempt to control the future direction of that influence?
Eric Horvitz, managing director of Microsoft Research, has said that “over a quarter of all attention and resources” at Microsoft Research are focused on artificial intelligence. [Washington Post]
What if a large company, not necessarily Microsoft, were to use AI — in its current state — to analyze the marketplace and the response of the public to products and advertising, so as to gain an advantage over other businesses? One approach would be to give the people what they want (as the saying goes). But another approach would be to influence the public so that they want what you are selling. Companies using AI to guide the development of products, as well as their marketing, would outcompete companies that failed to do so.
Now suppose your company or organization has money, but lacks the resources to buy and run the current generation of deep-learning computer systems. It won’t be long, and this is inevitable rather than speculative, before companies offer their AI services to anyone who can pay. The main problem with political campaigns in the near future might not be Political Action Committees or soft money, but the use of AI by some politicians and some political special-interest groups.
And, unfortunately, it may be possible for totalitarian governments to use AI, deep learning, big data, etc., all for the purpose of subjugating the population. They could use this power to influence public opinion, to root out opposition groups, and to find new ways to control the people. And this danger does not depend on AI reaching a point of consciousness or sentience. It’s theoretically possible with the current state of the art and science of AI.
Please take a look at this list of my books and booklets, and see if any topic interests you.