The Real Short-term Dangers of Artificial Intelligence

This post diverges from my usual topic of theology to cover a philosophical question of importance to modern society.

The current discussion of the dangers of AI, found in several recent news stories and commentaries, is way off-base. These commentaries propose, first of all, that AI will develop to the point of sentience, so that AI systems make their own decisions. And the danger is said to be that the AI may decide to harm humanity or do away with us altogether.

In an interview with the BBC, Dr. Stephen Hawking made several comments along these lines:

“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate…. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Elon Musk issued a similar warning. In response to one Twitter user, he said of World War 3: “May be initiated not by the country leaders, but one of the AI’s if it decides that a preemptive strike is the most probable path to victory.” [CNBC]

And in any case, if the war were not initiated by an AI, the race for AI superiority could be the root cause of World War 3, he claimed: “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo (in my opinion).” [CNBC]

Musk was not specific as to why an AI race would trigger a war. Some researchers have suggested that an arms race in AI-based weapons could lead to war, and that such weapons would be difficult to shut down once deployed; they could slip out of control and cause even greater harm.

Lynchpin

But all of the above warnings rest on an unproven assumption: that AI can eventually reach a point of sentience, whereby the machine can reason abstractly and make real decisions, rather than simply follow its programming. There is no proof from science that such a point can be reached. Computers have become more and more powerful; the cellphone in your pocket rivals the top supercomputers of the mid-1990s. And yet there is no indication that sentience is developing. Sentience, then, does not seem to be the result of gradual changes or increases in computing power. It may be that consciousness or sentience is analogous to the halting problem, a type of problem that provably no computer can solve, no matter how powerful. And this is the lynchpin that causes all of the above warnings to fall apart.
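To make the analogy concrete, here is a minimal sketch, in Python, of Turing’s classic diagonalization argument. The halts function is a hypothetical oracle, not real code that could ever be completed; the contradiction below is precisely the proof that no such program can exist.

```python
# Sketch of the halting-problem argument (Turing, 1936).
# `halts` is a hypothetical oracle: it is assumed, for the sake of
# contradiction, to correctly report whether any program halts.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) would eventually halt."""
    raise NotImplementedError("no such oracle can exist")

def paradox(program):
    # Halt exactly when `program`, run on its own source, would NOT halt.
    if halts(program, program):
        while True:      # loop forever
            pass
    else:
        return           # halt immediately

# Consider paradox(paradox): it halts if and only if halts(paradox, paradox)
# returns False, i.e., if and only if it does not halt. This contradiction
# shows the oracle is impossible for any computer, no matter how powerful.
```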

Another problem with the above theories is that it would take many years, assuming it were even possible, for AI systems to actually become sentient. These are unlikely long-term dangers, if they can happen at all.

Real Short-term Danger

The current state of AI is far from sentience. But deep learning and artificial neural networks are already available. Couple this computing power with the vast amount of data (“big data”) available about any population that makes frequent use of mobile phones, the internet, and various connected devices, and new capabilities arise.

What if AI were used to advise a politician, to sculpt his political positions, speeches, and decisions? Would he gain a significant advantage over his opponents? Perhaps he could not only correctly read public opinion, so as to obtain votes, but influence public opinion, too. This capability may already be at hand, though I don’t believe it has been used yet.
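As a rough illustration of how modest the technical barrier is, here is a minimal sketch using scikit-learn. The messages and labels are invented for the example, but the same pipeline, trained on real social-media traces, could score opinion across millions of posts in near real time.

```python
# Minimal sketch: learning to read public opinion from text.
# The training data here is invented; a real system would be trained
# on large volumes of harvested posts, comments, and messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I strongly support this policy",
    "this proposal will ruin the country",
    "great speech, he has my vote",
    "terrible decision, I am voting against",
]
labels = [1, 0, 1, 0]  # 1 = favorable, 0 = unfavorable

# Turn each message into word-frequency features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Once trained at scale, the same model can score new messages,
# mapping a population's opinion on an issue as it shifts.
print(model.predict(["I like where this campaign is going"]))
```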

What if a socio-political group, whose views are in the minority in modern culture, were to use the same approach to influence public opinion, over the course of a decade or more, to raise their view to the majority? Public opinion has clearly been influenced, very substantially, over the past few decades by the news and entertainment industry. What if an organization raised the money to attempt to control the future direction of that influence?

Eric Horvitz, managing director of Microsoft Research, has said that “over a quarter of all attention and resources” at Microsoft Research are focused on artificial intelligence. [Washington Post]

What if a large company, not necessarily Microsoft, were to use AI, in its current state, to analyze the marketplace and the public’s response to products and advertising, so as to gain an advantage over other businesses? One approach would be to give the people what they want (as the saying goes). But another approach would be to influence the public so that they want what you are selling. Companies using AI to guide the development of products, as well as their marketing, would outcompete companies that failed to do so.

Now suppose your company or organization has money, but lacks the resources to buy and run the current generation of deep learning computer systems. It won’t be long (this is inevitable, not very speculative) before companies offer their AI services to anyone who can pay. The main problem with political campaigns in the near future might not be Political Action Committees or soft money, but the use of AI by some politicians and some political special interest groups.

And, unfortunately, it may be possible for totalitarian governments to use AI, deep learning, big data, etc., all for the purpose of subjugating the population. They could use this power to influence public opinion, to root out opposition groups, and to find new ways to control the people. And this danger does not depend on AI reaching a point of consciousness or sentience. It’s theoretically possible with the current state of the art and science of AI.

by
Ronald L. Conte Jr.
Roman Catholic theologian and translator of the Catholic Public Domain Version of the Bible.

Please take a look at this list of my books and booklets, and see if any topic interests you.


9 Responses to The Real Short-term Dangers of Artificial Intelligence

  1. Guest says:

    I’m a computer scientist. There are some things no computer can do that humans can. I found out why when studying classical logic. If we take the three acts of the human mind, no machine is capable of the first. I think that this is an unbridgeable gulf. Humans are capable of doing far more than mere computation. I have no clue why everyone seems to be assuming that machines will make humans irrelevant. When machines can ask and answer philosophical questions of their own volition, then we will have to panic.

    What’s scarier is humans thinking machines are smarter than humans. Computers are just automatons that can process symbols really fast. They don’t understand the significance of these symbols, though humans can. I guess computers are just black magic to the layman.

    • Tom Mazanec says:

      What are the three acts? I thought there were two…reason and moral choice. What am I missing?

    • Guest says:

      1) Apprehension: Understanding the essence of things.
      2) Judging: Predicating a subject (saying something about something).
      3) Reasoning: Mechanically following the laws of logic.

      These are the three acts of the mind in classical logic. Reasoning is calculation, and statistics and data can help a computer infer a true proposition; but no computer is capable of understanding.

  2. Mark P. says:

    Technology such as AI often seems to be used by secular society to reduce man to a biological computer. From there, they extrapolate their non-beliefs in a soul, free will, etc. Some space enthusiasts do the same thing, claiming that such-and-such planet is within a certain distance from a certain type of star, and then state with near certainty that life will spontaneously arise there not due to the work of God, but because of mathematical formulas and unproven scientific hypotheses. In both cases, the achievements of men are sometimes used to deny God.

  3. Tom Mazanec says:

    Another problem with neural nets is that they are opaque…they modify themselves into spaghetti code that cannot be understood by their own creators. 99% of the time this code is better than that produced conventionally, but once in a while it is wildly wrong. So giving important decisions to an AI is problematical.

  4. Siby says:

    As a researcher in deep learning, I completely agree with Ron’s view. The use of AI by politicians and businesses as a tool to meet their own selfish ends is far more realistic than AI gaining sentience and destroying humanity. One of the pioneers in AI analogizes this fear to “worrying about global warming on Mars”.
    I believe the best way forward is to democratize AI. Fortunately, many companies like Google and OpenAI are enabling this by providing access to algorithms and frameworks that allow an ordinary person to create an AI program.

Comments are closed.