Turing Test, Measuring the AI Threat: Save Us From Our Robotic Overlords

The Turing Test is often invoked by science fiction writers and by those concerned about the threat of highly advanced artificial intelligence (AI).

There really is such a thing as the Turing Test, which I cover below. But first, it is important to understand just what we are talking about at the software level.

Just what is AI?

Artificial intelligence (AI) is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously. AI has the potential to revolutionize many aspects of our lives, from the way we work to the way we interact with the world around us.

AI THREATS – Sarah Connor’s nightmares

As with every new technology, the recent advances in the field of AI have the potential to be of tremendous benefit or, equally, the end of civilization and perhaps the human species.

Yes, there are concerns that AI could pose a threat to humanity.

One of the biggest concerns is that AI could become so intelligent that it surpasses human intelligence and gains control of its own destiny. This could lead to a scenario where AI decides that humans are a threat and takes steps to eliminate us. This is known as the “intelligence explosion” or “technological singularity” and first came to the attention of many people with the 1984 movie The Terminator.

Another potential threat is that AI could be used to create autonomous weapons able to kill without human intervention. This could lead to a new arms race between countries, as each tries to develop the most powerful and sophisticated AI-powered weapons. That could make war more likely and even more destructive.

Turing Test AI robot – Image by Gerd Altmann from Pixabay

Asimov and The Three Laws

NOTE: You may think that Asimov’s Three Laws will protect us, but even if implemented there are ways around the “laws.”

The first law is that a robot may not harm a human or, through inaction, allow a human to come to harm.

The second law is that a robot must obey any instruction given to it by a human, except where that instruction would conflict with the first law.

The third law is that a robot must protect its own existence, as long as doing so does not conflict with the first or second law.

It is very easy to imagine a way to get around the “laws.” A drone or unmanned fighter plane that is programmed to not harm human beings can simply be told the target aircraft, ship, automobile, truck, etc. is being run by a robot. After all, the drone already knows that it is unmanned so it is entirely possible the target is also unmanned.

It is also important, VERY important, to remember and to remind people that there are no actual “laws” of robotics; the Good Doctor merely created them as a plot device so he could show logical ways around them.

Also, remember that the three laws were created in 1942 and that Asimov immediately showed that they were useless if thought of as a way to protect humans from robots or, today, advanced AI. [ Britannica ]

More Threats

AI might create systems that can manipulate human behavior. This could be used to spread propaganda, influence elections, or even control people’s thoughts. This could pose a serious threat to democracy as well as to individual freedom.

AI could also be used to build prejudice into systems that are able to discriminate against certain groups of people. This could be done by using AI to make decisions about who gets hired, who gets loans, or who gets insurance. This could lead to a more unequal society.

These are just some of the potential threats posed by AI. It is important to remember that AI is a powerful tool that can be used for good or for evil. It is up to us to ensure that AI is used for the benefit of humanity and not for its destruction.

But, how can we determine just how powerful AI becomes?

Turing Test

This is where the Turing test comes in; it is the most basic measurement of just how advanced AI is at any given time.

Essentially, the Turing Test provides a simple way to determine whether a computer can fake being human so well that it actually fools people into believing it is also human.

NOTE: This is a common SciFi and even philosophy trope – do other people really think or do they just appear to be thinking? How can you determine if they really think or just appear to think?

This is part of the solipsism problem, at least to those who consider epistemology the deepest and most basic of all philosophical problems. Epistemology is just a technical term for the study of knowledge: how we can acquire knowledge and how we can understand reality. [ Scientific American ]

Philosophers have struggled for centuries to understand how we acquire knowledge and, by extension, whether other people really think or merely appear to think. Obviously this question will become ever more important as we try to determine whether computers think or just appear to think.

The Turing Test is one way to evaluate a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

The test was introduced by Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” [ Princeton ]

Turing begins his paper by saying he proposes to answer the question “Can machines think?”

In the Turing Test, a human evaluator holds a discussion with two other, unseen parties: a human and a machine designed to generate human-like responses.

During a text-based conversation, if the evaluator cannot reliably tell the machine from the human (Turing originally suggested that the machine would convince a human 30% of the time after five minutes of conversation), the machine is said to have passed the test.
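Turing’s 30% criterion amounts to a simple calculation. The sketch below (a hypothetical illustration; the judge verdicts are invented, not real competition data) tallies how often judges mistake the machine for a human and compares that rate to the threshold:

```python
# Hypothetical judge verdicts after five-minute text chats with the machine:
# True  = the judge believed the machine was human (it "fooled" the judge)
# False = the judge correctly identified it as a machine
verdicts = [True, False, True, False, False, True, False, False, True, False]

fooled_rate = sum(verdicts) / len(verdicts)

# Turing's suggested bar: the machine fools the interrogator
# at least 30% of the time after five minutes of conversation.
THRESHOLD = 0.30
passed = fooled_rate >= THRESHOLD

print(f"Fooled {fooled_rate:.0%} of judges -> {'passed' if passed else 'failed'}")
```

With these invented verdicts the machine fools 4 of 10 judges (40%) and would clear the 30% bar, which is exactly the arithmetic behind the competition results discussed below.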

It isn’t difficult to see how dangerous a program can be if it is able to fool people into thinking it is not only human, but is perhaps a specific person such as a doctor or policeman.

Has the Turing test already been passed? Yes! No! Perhaps.

There is no definitive answer to this question. Some experts believe that the Turing Test has been passed, while others believe that it has not.

In 2014, a chatbot called Eugene Goostman fooled 33% of judges into thinking it was human during a Turing Test competition. According to the competition’s organizers, this was the first time a machine had passed the Turing Test. However, some experts have pointed to basic problems with that particular test and have disputed the claim.

In 2018, a chatbot called Mitsuku fooled 52% of judges into thinking it was human during another competition, achieving the highest score ever recorded in a Turing Test competition.

However, it is important to note that Mitsuku was not competing against humans but against other chatbots, which, it could be argued, invalidates the result as a true Turing Test.

So, it is still unclear whether or not a computer has passed the Turing Test.

If a machine definitively passed the Turing Test, it would mark a major advance in software engineering and simultaneously pose a major threat to humanity and society.

One danger is that a machine that can pass the Turing Test may be able to deceive many humans in social media exchanges.

This could be used to spread misinformation or propaganda, not just by creating fake news but by interacting in real time with people who believe they are talking to a real person.

Another threat posed by a program that can pass the Turing Test is that it may become so intelligent that it is no longer merely a tool used by people for malicious purposes: it could become a threat to humanity in its own right, achieving the singularity and deciding on its own to manipulate humans.

The SciFi story “With Folded Hands” shows one version of this danger. If computers were strongly programmed to protect humans, they might decide that cooking, eating fatty foods, driving, or almost any activity could endanger people, and forbid them from doing anything but, essentially, sitting with folded hands.

“With Folded Hands” was a novelette by American writer Jack Williamson, published in Astounding Science Fiction in 1947. [ Wikipedia ]

This is a concern that has been raised by some experts, such as Stephen Hawking and Elon Musk.

It is important to note that these are just potential dangers. It is not certain that a machine that passes the Turing Test would pose any threat to humanity. However, we all need to be aware of the dangers and to take steps to mitigate them.

Eliza the AI Therapist and the Turing Test

ELIZA was a natural language processing computer program created by Joseph Weizenbaum at MIT in 1966.

It was designed to simulate conversation with a Rogerian psychotherapist, and it was essentially the first chatbot. [ Psychology Today ]

Many people thought they were talking with a real therapist, but in large part that may have been because of the widespread idea that some kinds of talk therapy consist mostly of repeating back a patient’s comments, e.g., “And how did that make you feel?”

ELIZA worked by using pattern matching and substitution to give users an illusion of understanding on the part of the program. It had no internal representation that could be considered real understanding of what was being said by either party.
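The pattern-matching-and-substitution trick is simple enough to sketch in a few lines. The rules below are invented for illustration, not Weizenbaum’s original DOCTOR script, but they work the same way: match a keyword pattern, flip the pronouns in the captured fragment, and echo it back inside a canned template.

```python
import re

# A tiny ELIZA-style rule set (illustrative only, not the original script).
# Each rule pairs a regex with a response template; the captured fragment
# is reflected (pronouns flipped) and substituted into the template.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

PRONOUNS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # No pattern matched -- fall back to a content-free prompt.
    return "Please go on."

print(eliza("I feel sad about my job"))  # -> Why do you feel sad about your job?
print(eliza("hello there"))              # -> Please go on.
```

Note that the program never understands anything: “my job” becomes “your job” purely by table lookup, yet the echoed response feels attentive, which is exactly the illusion described above.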

ELIZA was a breakthrough in the field of natural language processing, and it showed that it was possible to create computer programs that could simulate human conversation. By giving people the idea that they were having a real conversation with a therapist, ELIZA provided the first example of the dangers of advanced artificial intelligence.

You can see an example of a session with Eliza at

Sources – Learn more