Artificial Intelligence (AI) is benefiting society in almost every way possible, from ordinary citizens to the armed forces. AI-driven devices are becoming cognitive enough to aid people in times of need. For example, last year, Alexa alerted the police in New Mexico after it heard its owner threaten his girlfriend.
However, AI can pose a great risk to society if it ends up in the wrong hands. Due to this, researchers all over the world are working collaboratively to frame policies and safety rules that ensure the verification, validity, security and control of AI.
AI systems can self-learn. But will they be able to make the right decisions based on their social impact? For example, if you command your driverless car to take you to your destination as quickly as possible, how will the machine interpret that instruction? Will it exceed the speed limit and endanger people in its way?
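One way to see why this matters is to make the safety constraints explicit in code rather than leaving them to the learned objective. Below is a minimal, hypothetical sketch in Python (the function, parameters and limits are invented for illustration, not taken from any real autonomous-driving stack): the goal of arriving quickly is always subordinated to hard legal and safety limits.

```python
# Hypothetical sketch: a planner that optimises travel time but never
# lets a learned "desired speed" override hard safety constraints.

def choose_speed(desired_speed_kmh: float,
                 legal_limit_kmh: float,
                 safe_limit_kmh: float) -> float:
    """Return the speed the vehicle will actually drive at.

    desired_speed_kmh -- what the optimiser wants (e.g. to minimise time)
    legal_limit_kmh   -- posted speed limit for the current road
    safe_limit_kmh    -- limit derived from sensors (weather, pedestrians)
    """
    # The hard constraints always win over the optimisation goal.
    return min(desired_speed_kmh, legal_limit_kmh, safe_limit_kmh)

# "Take me there as quickly as possible" -> the optimiser asks for 140 km/h,
# but the constraint layer caps it at the stricter of the two limits.
print(choose_speed(140.0, legal_limit_kmh=100.0, safe_limit_kmh=60.0))  # 60.0
```

The design point is that the user's goal never overrides the constraint layer, no matter what speed the learner requests.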
Recursive self-improvement in AI systems could potentially trigger an intelligence explosion. The creation of strong AI might be the biggest event in human history, as super-intelligence could help eradicate poverty, war and disease. However, before AI becomes super-intelligent, its goals need to be aligned with ours and grounded in ethics and morals.
Like every technology, AI has its pros and cons. AI-powered machines have increased work efficiency and accuracy, reduced the cost of training and processes, and delivered many other benefits. To ensure that the risks associated with AI are mitigated, we need ethical codes and policies.
There are three scenarios in which AI might become a risk. These are:
AI programmed for cyber-attacks and crime. Criminals can automate hacking attempts using AI. To keep computer systems safe, policymakers need to work with technologists to spread awareness about possible issues with AI and prevent cyber-attacks.
AI-powered self-learning machines may fail to make the right decisions. This happens when a machine's self-learning mechanism receives a random input and takes a decision based on it. For example, a user claimed that Amazon's Alexa (digital assistant) scared him by saying, "Every time I close my eyes, all I see is people dying," without being activated. When the user asked Alexa to repeat the statement, it said that it did not understand the command.
In another incident, Alexa invited people to a party at an empty flat on its own; a simple guard against acting on such spurious input is sketched below.
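Incidents like these are usually mitigated by refusing to act on low-confidence or unsolicited input. Here is a minimal, hypothetical sketch (the wake word, threshold and function names are assumptions for illustration; real assistants are far more elaborate):

```python
# Hypothetical sketch: only act on a recognised voice command when the
# speech recogniser is confident AND an explicit wake word was heard.

WAKE_WORD = "alexa"        # assumed wake word, for illustration only
MIN_CONFIDENCE = 0.85      # assumed threshold; tuned per deployment

def should_act(transcript: str, confidence: float, wake_word_heard: bool) -> bool:
    """Reject low-confidence or unsolicited input instead of acting on it."""
    if not wake_word_heard:
        return False                 # never act without explicit activation
    if confidence < MIN_CONFIDENCE:
        return False                 # ask the user to repeat instead
    return bool(transcript.strip())  # ignore empty recognitions

# A noisy, unsolicited recognition is dropped rather than spoken aloud.
print(should_act("all I see is people dying", 0.41, wake_word_heard=False))  # False
print(should_act("play some music", 0.93, wake_word_heard=True))             # True
```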
AI develops destructive methods to achieve a goal. Even when something is developed with good intentions, it may lead to trouble. For example, McAfee researchers found that Microsoft's Cortana could be used to hack computers running Windows 10. It could deploy malicious software or reset a Windows account password. The vulnerability arises from Cortana's ability to listen for commands even when the computer is locked.

Here are some incidents where AI machines were programmed for devastation.
When computers deny service. AI machines mimic the cognitive abilities of human beings and solve problems (simple or complex) using machine learning algorithms. A new, human-like denial-of-service capability is now available in bots that block users, making the service less secure. This type of cyber-attack could potentially harm companies dealing with digital services or online databases.
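A common first line of defence against such request floods is per-client rate limiting. The following is a minimal token-bucket sketch (the class name and limits are illustrative assumptions, not a production defence):

```python
# Hypothetical sketch: a per-client token bucket, a common first line of
# defence against request floods from human-like bots.

import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = capacity       # burst size allowed
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this request may proceed, False to throttle it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = {}   # one bucket per client IP

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5, capacity=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

# A bot hammering the endpoint soon starts receiving 429 responses.
responses = [handle_request("10.0.0.7") for _ in range(12)]
print(responses.count("429 Too Many Requests"))  # the burst beyond capacity is throttled
```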
Publishing fake news. These days, many fake videos of renowned people are being uploaded to the Internet, in which they appear to make defamatory statements. Such fake posts can be detrimental to the people concerned, or to the companies and governments they represent.
Manipulation of information. The malicious use of AI has three main contexts: digital, political and physical. Potential threats include drone warfare, data extraction and hacking.
Delivery of explosives. Industrial robots and collaborative robots (cobots) are increasingly being adopted to perform manual tasks like cleaning and making deliveries. These machines can be hijacked and used maliciously to carry explosives or perform other illegal tasks.
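One basic safeguard against such hijacking is to authenticate every command before the robot executes it. Below is a minimal, hypothetical sketch using an HMAC tag with a pre-shared key (the key and command strings are invented for illustration):

```python
# Hypothetical sketch: a robot controller that rejects any command not
# signed with a shared secret key, a basic defence against hijacking.

import hmac
import hashlib

SECRET_KEY = b"replace-with-a-real-provisioned-key"  # assumed pre-shared key

def sign_command(command: str) -> str:
    """Operator side: attach an HMAC-SHA256 tag to the command."""
    return hmac.new(SECRET_KEY, command.encode(), hashlib.sha256).hexdigest()

def execute_if_authentic(command: str, tag: str) -> str:
    """Robot side: run the command only if the tag verifies."""
    expected = sign_command(command)
    if not hmac.compare_digest(expected, tag):
        return "REJECTED: unauthenticated command"
    return f"EXECUTING: {command}"

tag = sign_command("deliver package to bay 3")
print(execute_if_authentic("deliver package to bay 3", tag))         # executes
print(execute_if_authentic("carry payload to crowd", "forged-tag"))  # rejected
```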