6 Key Risks of Artificial Intelligence in 2021

Most people do not know what Artificial Intelligence is, let alone its negative effects. This lack of awareness about the potential dangers of AI for human civilization is alarming. Although the advantages of artificial intelligence are numerous and its negative effects few, the impact of those disadvantages could be devastating for civilization. Influential figures like the late physicist Stephen Hawking and tech billionaire Elon Musk have long warned about the disadvantages of artificial intelligence, but no one seems to really pay attention.

In science-fiction movies like the Terminator series, as well as TV shows like Westworld, machines have taken over the world and humans are barely surviving in their fight against them. If, as Oscar Wilde observed, “Life imitates Art far more than Art imitates Life”, then we may truly be facing an existential threat from AI.

Artificial Narrow Intelligence and Artificial General Intelligence

Artificial Intelligence is about building machines that can think and act intelligently. It includes everything from Google’s search algorithms to self-driving cars. Artificial narrow intelligence (ANI, or narrow AI) refers to a computer’s ability to perform a single task extremely well, such as crawling a webpage or playing chess. Artificial general intelligence (AGI) is when a computer program can perform any intellectual task that a human can. While narrow AI has been around for quite some time, experts in the field are ultimately working towards artificial general intelligence. DeepMind, a subsidiary of Google, is widely considered the forerunner in this race. As Elon Musk puts it, “The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating digital super intelligence.

“An AI that is vastly smarter than any human on earth and ultimately smarter than all humans on earth combined.” While the dangers posed by narrow AI, such as job automation, are comparatively modest, it is artificial general intelligence that has the potential to drive human civilization to extinction. Given the magnitude of the matter, it seems apt to understand exactly what those dangers are. Here are the six key dangers of AI.

1. Autonomous Weapons

Present-day drones are one example of autonomous weapons. Some AI experts believe AI to be even more dangerous than nuclear weapons. There is also the possibility of the two dangers merging, with an AI deciding to launch nuclear weapons without human intervention.

Such a possibility does not seem far-fetched when we consider that in 2012 the Obama Administration’s Department of Defense issued a directive on “Autonomy in Weapon Systems” stating: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Such developments have the potential to start an arms race in autonomous weapons between the world’s superpowers. And what would be its logical conclusion? Could it meet the same fate as that of nuclear weapons? Unlike nuclear weapons, AI is considerably cheaper to maintain, and its rate of development is much faster as well. An apocalyptic scenario is therefore a real possibility, in which autonomous weapons controlled by AI drive human civilization to extinction.

Elon Musk hits the nail on the head when he says that “The average person does not see killer robots going down the street. They are like ‘Man what are you talking about?’ We want to make sure we don’t have killer robots going down the street. Once they are going down the street, it is too late.”

2. Invasion of Privacy

In today’s world, cameras are everywhere, and AI’s facial recognition algorithms know who you are. Deepfake technology – a branch of AI – makes it nearly impossible to discern which images and videos are real and which are fake. Meanwhile, China’s social credit system is expected to give every one of its 1.4 billion citizens a personal score based on how well they behave: whether they jaywalk, whether they smoke in non-smoking areas, how much time they spend playing video games. Nor is this confined to one country – a whole host of companies specialize in similar technology and sell it around the world. Authoritarian regimes, which are on the rise around the world, could use such technologies to control people. This is exactly why governments around the world are rightly concerned about combating this threat.

3. Misalignment Between Our Goals and the Machine’s

It is a truth of life that all humans are different: what makes sense for one person may not make sense for another, and human behavior is largely shaped by the incentives available. When goals conflict, it is the party with more power and authority that decides. An AI vastly superior in intelligence to humans could conclude that its goals differ from those of humanity – a likely outcome, given how different the conclusions that different people reach already are. It could then simply decide to wipe out humanity if that suited its ends.

Musk says that, “AI does not have to be evil to destroy humanity. If AI has a goal and humanity just happens to be on the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings. It is just like if we are building a road and an ant hill happens to be on the way. We don’t hate ants. We are just building a road and so goodbye ant hill.” He further adds that, “I don’t think anyone realizes how quickly artificial intelligence is advancing. Particularly if [the machine is] involved in recursive self-improvement … and its utility function is something that’s detrimental to humanity, then it will have a very bad effect… If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans.”

4. Social Manipulation

The rise of social media means that new channels of social manipulation grow by the day. The manipulation of voters in the 2016 US election is a fitting example. Cambridge Analytica – a British political consulting firm – acquired Facebook users’ data without the users’ permission, though with Facebook’s. It supplied that data to the Trump election campaign, which used it to craft messaging that would appeal most strongly to individual voters.

Similarly, if a superintelligent AI were to gain access to users’ social media data, think of the ways it could manipulate their minds to its own advantage.

Elon Musk is right when he points out the dangers that Google’s DeepMind poses. He says that, “The DeepMind system can win at any game. It can already beat all the original Atari games. It is superhuman. It plays the games at super speed, in less than a minute. DeepMind’s AI has administrator-level access to Google’s servers to optimize energy usage at the data centers.

“However, this could be an unintentional Trojan horse. DeepMind has to have complete control of the data centers, which means that with a little software update, AI could take over the entire Google system to do anything. They can look at all your data and do anything.”

5. Discrimination and AI Bias

Machines can collect, track, and analyze vast amounts of data about us, and it is possible for them to use that information against us. It is not outside the realm of possibility for an insurance company to tell you that you are uninsurable based on the number of times you were caught on camera talking on your phone. A job offer could be withheld from you because of your “social credit score.”
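To make the insurance example above concrete, here is a purely hypothetical sketch of what such an automated eligibility rule might look like. The function names, scoring weights, and threshold are all invented for illustration; no real insurer's model is being described.

```python
# Hypothetical illustration only: an automated eligibility rule that
# penalizes people based on surveillance-derived data. The weights and
# threshold below are arbitrary, invented values.

def insurability_score(phone_sightings: int, jaywalking_sightings: int) -> float:
    """Toy score: starts at 1.0 and drops with each camera sighting."""
    score = 1.0
    score -= 0.15 * phone_sightings       # heavier penalty per phone sighting
    score -= 0.05 * jaywalking_sightings  # lighter penalty per jaywalking sighting
    return max(score, 0.0)

def is_insurable(score: float, threshold: float = 0.5) -> bool:
    # A hard cutoff like this bakes the designer's bias directly into the
    # decision: anyone below the threshold is rejected, with no human review.
    return score >= threshold

print(is_insurable(insurability_score(4, 2)))  # score ≈ 0.3, below 0.5 → False
```

The point of the sketch is that once such a rule is deployed, the discrimination is automatic and opaque: the rejected applicant never learns which camera sightings, or which arbitrary weights, decided their fate.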

6. Job Automation

Job automation is considered one of the most immediate risks of artificial intelligence. More and more tasks will be automated and performed by machines; the rise of self-driving cars and of robots assembling automobile parts in factories is plain to see. This is especially true for predictable and repetitive tasks, where disruption is already underway. Another example is the robots operating in Amazon warehouses: packages that humans used to move from one place to the next are now moved by robots. This means lost jobs for humans, and with the loss of livelihood, social tensions can rise, giving way to hatred, crime, and violence.

How to mitigate the threat?

Government regulation of the misuse of AI is one possible course of action. Governments worldwide must also keep tech firms from becoming too powerful in order to prevent misuse of AI technology.

Elon Musk offers another reasonable solution when he says that, “I think it is very important that AI must not be other. It must be us. And I could be wrong about what I am saying…I am certainly open to ideas if anybody can suggest a path that is better but I think we are really going to have to either merge with AI or be left behind.” His company Neuralink offers a practical step towards making AI our friend rather than our enemy, by making it possible to implant a computer chip in the human brain.
