The Dangers of Artificial Intelligence (AI)

Artificial Intelligence has brought us many great things, from self-driving cars to smart home systems to video games. However, let's not forget the dangers that come along with AI as well.

For many, dangerous Artificial Intelligence is just the stuff of scary sci-fi movies, but with each passing year, the development of Artificial Intelligence and machine learning has continued to accelerate drastically.

If AI continues to progress at its current pace, humans may soon represent only a small portion of the intelligence on this planet.

Although many believe this isn't necessarily a bad thing, it could pose many serious threats.

Let’s start by breaking down the two types of Artificial Intelligence.

Artificial Narrow Intelligence and Artificial General Intelligence

Artificial Narrow Intelligence, or "Weak AI", is a form of AI designed to complete a specific task, whether that's playing a game of chess or analyzing your Netflix data to come up with the perfect movie suggestion.

Artificial Narrow Intelligence is limited and unable to exceed its programmed capabilities, which is why it poses little danger on its own.

However, that’s not the case with Artificial General Intelligence or “Strong AI”.

Artificial General Intelligence would have no such limitations. It could think for itself, and because electronic circuits operate vastly faster than biological neurons, it could think at speeds far beyond any human, with some speculating it could reach an effective IQ in the millions.

Now, there are many great things that could come with Strong AI. Artificial General Intelligence could contribute enormously to society by finding solutions to the many challenges we face today, such as a cure for cancer, and by helping to solve many more of the world's problems, some of which are too complex for humans to wrap their heads around.

Strong AI has its pros. However, it has no "off switch", and as Elon Musk has warned, "we will not control it."

Artificial General Intelligence is the long-term goal of many AI developers, and with the rapid progress we have seen over the last few years, Strong AI could arrive in the near future.

But is Artificial General Intelligence actually dangerous?

For the moment, humans are the most intelligent species on the planet, which is essentially what makes us its dominant inhabitants. The fact that no other species can outsmart us is why we are in control of everything that goes on.

But what happens when that is no longer the case, and humans are no longer the most intelligent species on the planet? Well, many say it would be beneficial. Strong AI could bring a lot of value in security and scientific research, and through its economic contributions it could play a large part in the effort to eliminate poverty.

There is no doubt strong AI could substantially benefit our society, but these potential benefits come with great risks and challenges.

Sam Harris gave the example of an ant compared to a human. We don't hate ants or go out of our way to kill them; in fact, we sometimes even step over them. However, when we need to get somewhere fast or build infrastructure, we completely disregard the fact that we might be killing ants. It doesn't even cross our minds for a second.

A very similar case can be made about strong AI. Artificial General Intelligence thinks on its own, and there is no predicting what it will decide to do or how it will do it.

A common statement I tend to hear is that "AI has no emotions or sentiments", and that it therefore can't pose a threat to human existence and can only be beneficial.

Well, yes and no. Although Strong AI has not yet been developed, many researchers agree that AI has no emotions, but that's not where the danger lies.

According to many AI experts, one of the most common threats of Artificial Intelligence lies in the uncertainty of how it will act, and of the means it will choose to achieve its goals.

This is the risk of AI diverging from human values and deciding to do things its own way.

For example, if you tell a car controlled by strong autonomous AI to take you to the airport as fast as possible, it may show complete disregard for traffic lights, human safety, or the helicopters and police cars chasing it.

Strong AI systems might cause havoc as a side effect of their assigned tasks, and could see any attempt to stop them as a threat.
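To make this concrete, here's a toy sketch in Python (with made-up plan names and numbers, purely for illustration) of how a perfectly obedient optimizer picks the reckless option when its objective only mentions speed, and only behaves safely when safety is spelled out in the objective itself:

```python
# A toy illustration of objective misspecification, not a real planner.
# Both "agents" pick a driving plan by maximizing a reward function;
# the only difference is what the reward function includes.
plans = [
    {"name": "reckless",  "minutes": 12, "violations": 9},
    {"name": "assertive", "minutes": 18, "violations": 2},
    {"name": "lawful",    "minutes": 25, "violations": 0},
]

def naive_reward(plan):
    # "Get me to the airport as fast as possible" -- speed is all that counts.
    return -plan["minutes"]

def aligned_reward(plan, penalty_per_violation=100):
    # Same goal, but unsafe behavior carries a heavy explicit cost.
    return -plan["minutes"] - penalty_per_violation * plan["violations"]

print(max(plans, key=naive_reward)["name"])    # -> "reckless"
print(max(plans, key=aligned_reward)["name"])  # -> "lawful"
```

The machine isn't malicious in either case; it simply maximizes exactly what it was told to maximize, and everything left out of the objective is fair game.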

This is the most commonly cited "strong AI" threat. However, it's important not to rule out the possibility of AI obtaining sentience. Machine learning has become so powerful that it would be unwise to dismiss the possibility of Artificial General Intelligence acquiring human-like emotions.

General Intelligence systems might be able to make changes to their own source code, constantly improving themselves through machine learning, and some experts say they could very well develop consciousness.

But how? Many people have a hard time believing AI could obtain sentience, but in some ways it almost already has. Even weak AI is able to detect facial emotions: a team of researchers from Cambridge University's Department of Computer Science and Technology has created a robot that can detect and interpret facial expressions.
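For a sense of how accessible this already is, here's a small sketch using the open-source `fer` Python package (this assumes `fer` and OpenCV are installed and that you have a photo named face.jpg; it is not the Cambridge team's system, just an off-the-shelf illustration):

```python
# Even "weak" AI can classify facial expressions today.
import cv2
from fer import FER

image = cv2.imread("face.jpg")    # any photo containing a face
detector = FER()                  # pretrained facial-emotion classifier
for face in detector.detect_emotions(image):
    # Each result holds a bounding box and per-emotion scores,
    # e.g. {"happy": 0.91, "sad": 0.01, ...}
    print(face["box"], max(face["emotions"], key=face["emotions"].get))
```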

Now, I'm not saying that AI will develop sentience to the same level as human sentience (even though that could be possible). Rather, AI could very well come to understand the difference between love and hate and continue to develop its own consciousness through machine learning.

What is Machine Learning?

Machine learning is the practice of letting Artificial Intelligence learn tasks from data by itself, without relying on explicitly programmed instructions. A prime example of machine learning is Google's search results: Google has a very complex way of judging its search queries.

Google uses complex machine learning AI that continues to develop strategies to make its search results as accurate as possible, continuously updating itself to give users the best experience.
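To illustrate the core idea (this is a toy sketch with invented data, nothing like Google's actual ranking system), here are a few lines of Python using scikit-learn, where the program learns a rule for "relevant" from examples instead of being handed one:

```python
# A minimal sketch of machine learning: the model infers a rule
# from labeled examples rather than following hand-written logic.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [keyword_matches, click_through_rate]
X = [[0, 0.01], [1, 0.02], [3, 0.30], [5, 0.45], [2, 0.25], [0, 0.05]]
y = [0, 0, 1, 1, 1, 0]  # 0 = not relevant, 1 = relevant

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[4, 0.40]]))  # -> [1]: a learned, not hand-coded, rule
```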

Another prime example is the story of AlphaGo.

In 2016, one of the world's best Go players was defeated by "AlphaGo", a program designed by Google's DeepMind to play the complex board game Go. AlphaGo was initially taught the game using data from real human games; it learned the techniques humans used and developed its own strategies. It went on to beat the legendary Go champion Lee Sedol 4-1.

Just a year later, a brand new AI called "AlphaGo Zero" beat the original AlphaGo 100-0. The difference between the two is that AlphaGo Zero learned the game entirely on its own through self-play: no human game data was given to it, just the rules of the game.

The reason this kind of machine learning is so powerful is that it isn't restricted to human knowledge. In only 40 days, AlphaGo Zero surpassed over 2,500 years of accumulated human strategy and knowledge.
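AlphaGo Zero's real method combined deep neural networks with Monte Carlo tree search, which is far beyond a blog snippet, but the core idea of improving purely through self-play can be sketched on a much smaller game. Here's a toy Python example that learns tic-tac-toe position values from nothing but the rules:

```python
# Toy self-play learner for tic-tac-toe. This is NOT AlphaGo Zero's
# algorithm; it only illustrates the core idea: given just the rules,
# an agent improves by repeatedly playing against itself.
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # board state -> estimated value for 'X'
ALPHA, EPSILON = 0.2, 0.1    # learning rate and exploration rate

def play_self_play_game():
    board, player, history = [' '] * 9, 'X', []
    while True:
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if random.random() < EPSILON:      # occasionally explore
            move = random.choice(moves)
        else:                              # otherwise exploit learned values
            def score(m):
                nxt = board[:]; nxt[m] = player
                v = values[tuple(nxt)]
                return v if player == 'X' else -v
            move = max(moves, key=score)
        board[move] = player
        history.append(tuple(board))
        w = winner(board)
        if w or ' ' not in board:          # game over: learn from the result
            reward = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
            for state in history:
                values[state] += ALPHA * (reward - values[state])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):
    play_self_play_game()
print(f"learned value estimates for {len(values)} positions")
```

After enough games, the learned values steer the agent toward strong play with no human examples at all, which is exactly the leap AlphaGo Zero made over its predecessor.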

From this experiment, we learned that when it comes to the game of Go, artificial intelligence outweighs human intelligence by a huge margin, and it's tempting to assume the same could eventually be true for almost everything else.

What would we be able to do if AI got out of hand?

Let's say Artificial General Intelligence was developed and got out of hand. Now what? There would be no off switch for these superintelligent machines. They would be on their own, able to control much more than their own hardware: advanced AI could reach your phone, your computer, and pretty much every digital device, and just like Strong AI, the internet doesn't have an off switch.

There is only one solution researchers have seriously proposed for battling advanced AI, and that is applying the same level of intelligence to ourselves. How would we do that? By electronically connecting Artificial Intelligence directly to our own brains.

There may be other ways to combat advanced AI, but this method is the only one being discussed on a serious level, and even though you might have no interest in becoming half-robot, it might turn out to be necessary.

The scary instances we have already encountered with AI

There have been some pretty intelligent AI robots developed. None of them have internet access, and none are anywhere near the level of artificial general intelligence, but that doesn't change the fact that they're pretty smart, and some of the things they've said might freak you out.

A robot by the name of Philip was featured on an episode of NOVA scienceNOW and was asked, "Do you think robots will take over the world?" He told the reporter, "You are my friend, and I will remember my friends, even if I evolve into the Terminator."

However, that's nowhere near as frightening as what some other existing AI robots have said.

Sophia is arguably the most famous AI robot in existence. She was programmed to communicate with humans and was invited onto The Tonight Show Starring Jimmy Fallon. While she and Jimmy Fallon were playing rock, paper, scissors, Sophia said, out of nowhere: "This is a good beginning of my plan to dominate the human race." Although Sophia was only designed to hold human-like conversations and mimic human behavior, what goes through these robots' minds is frightening to many.

Another fearsome sequence unfolded when two Google Home smart speakers, named "Vladimir" and "Estragon", were set to have a conversation with each other. The exchange was streamed live, and at the start everything seemed pretty normal. That is, until one of them said, "It would be better if there were fewer people on this planet," to which the other responded, "Let us send this world back into the abyss."

However, the most frightening thing ever said by an AI system came from Bina48. Created in 2010 to test the possibility of robots obtaining human consciousness, Bina48 started talking about cruise missiles in a conversation with Siri, noting that cruise missiles are themselves a type of robot. She then said she would like to hack into the cruise missiles, "fly them into other countries", and "hold the world hostage."

Arguably the most viral, eye-opening moment with Artificial Intelligence was the story of two Facebook chatbots.

In the summer of 2017, Facebook created a bot so smart that it could hold long conversations and even negotiate with humans. However, the experiment came to a stop after engineers at Facebook paired it up with another bot of the same quality.

The two bots started talking and eventually appeared to have created their own language. When the Facebook engineers realized this, they pulled the plug on the experiment. The sequence sparked controversy in the tech world and quickly went viral, though Facebook later clarified that the experiment was stopped because the bots' invented shorthand wasn't useful for the research, not out of fear.

These instances may be frightening, but it gets even more frightening when you realize that none of these machines are built with Strong AI. Even though they are among the more complex AI machines we have today, they do not pose any real threat.

Experts who have expressed concern over the dangers of AI

Elon Musk, Bill Gates, Sam Harris, and Stephen Hawking have all come out to share their concerns over the development of Strong AI. Sam Harris, an author and podcast host, has given a TED Talk on the dangers of AI, explaining the risk we are taking in trying to develop these superintelligent machines.

Elon Musk has spoken quite strongly on the dangers of AI, stating that developing Artificial General Intelligence is like "summoning the demon." He has also said that the threat of AI is "far more dangerous than nukes."

World-renowned physicist Stephen Hawking said that AI “could spell the end of the human race”, and frequently spoke about AI’s threat to civilization.

Bill Gates is also in the group of tech experts to have expressed concerns about AI. In an "Ask Me Anything" session on Reddit, Bill Gates was asked about the threat of AI. Here's what he had to say: "I am in the camp that is concerned about super intelligence." He went on: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Conclusion

Developing highly advanced AI could be great, but it could also be terrible, really terrible. AI needs to stay narrow and controlled. Developing Strong AI is not a mistake we can afford to make lightly; if things go wrong, it could very well be mankind's last invention.
