The most important trait separating human beings from other entities, whether animals or machines, is cognitive ability. Today, however, human-like intelligence is being engineered into computers. This field of technology, known as Artificial Intelligence (AI), has advanced at a rapid pace.
Conceivably, AI holds a number of advantages over its human creators. AI-driven self-driving cars can reduce vehicular accidents because they make informed and precise decisions. AI performs routine, monotonous jobs well, freeing people to focus on tasks that require greater attention. It can execute tedious work without tiring and with a reduced chance of error, and, owing to its mechanical precision and consistency, it can take on higher-risk assignments.
AI also helps people with sensory or physical impairments by offering electro-mechanical compensatory devices, allowing them to move and interact as a non-impaired individual would. AI relieves information overload, since it is affected neither by the complexity of the environment nor by the enormity of surrounding inputs; AI-powered drones, for example, can carry out their operations even under certain limiting weather conditions. There is no denying that AI enjoys a plethora of advantages over ordinary human capabilities. It has aided, and continues to aid, humanity in a variety of ways; but like every man-made creation in science, AI also has flaws, entailing certain risks and dangers.
Advances in AI have produced software known as 'deepfakes,' a 21st-century successor to Photoshop that revolves around altering genuine images, audio and video, or fabricating them outright. Data tampering is another hazard of AI: the act of feeding manipulated, malicious or erroneous data to a machine through legal or illegal channels. An unauthorized intruder could use malicious code to alter the data or the underlying program of a system, or to destroy the data, program or system entirely.
AI has also been used to conduct cyber-attacks, one of the biggest security threats. AI advancement can likewise lead to job automation, the replacement of humans with smart machines; it is predicted that machines will displace many workers in the near future, leaving them jobless and miserable. Lastly, AI machines are costly and complex to build and require extensive maintenance.
The weaponization of AI is a terrifying threat to humanity, as it is producing lethal weapons that require little to no human intervention. Russia is reportedly building a robotic army with the help of China; the formation of a robotic unit was confirmed by Russia's Defense Minister, Sergei Shoigu. The United States, China, Israel, South Korea, the United Kingdom and Russia are all developing and deploying Lethal Autonomous Weapon Systems (LAWS), sometimes called 'killer robots.' Once activated, these machines would be able to select, engage and terminate targets without human intervention. Such a scenario could change the nature of warfare in the coming years, posing grave threats to global security. Robots battling and causing chaos in a city have so far been confined to the movies, but this could become reality if advances in AI go unchecked. The main danger lies in the degree of autonomy enjoyed by machines: absolute autonomy is highly undesirable, and human intervention and control are necessary for the safety of humanity.
Oxford professor Nick Bostrom, in his book 'Superintelligence: Paths, Dangers, Strategies,' elaborates on the idea of an 'intelligence explosion.' He defines super-intelligence as 'any intellect that outperforms human cognitive capacity in almost all realms of interest.' Although he believes the rise of super-intelligence poses a significant threat to humanity, he rejects the notion that humans are powerless to prevent its negative consequences. A number of tech titans and scientists have issued warnings about AI advancement. Elon Musk, one of technology's biggest proponents, has expressed concern that AI could overtake humans in less than five years. Stephen Hawking, the world-famous physicist who relied on technology to communicate because of impaired mobility and speech, also warned that 'the development of full artificial intelligence could spell the end of the human race.' Other researchers at Oxford share this stance, and Stuart Russell, a pioneer of AI, has been warning of its dangers for years. If machine intelligence surpasses human intelligence, the world may become a dreaded place for humans. Human Rights Watch is already calling for an international treaty outlawing the development and employment of fully autonomous lethal systems. The warning sirens have begun to sound.
In view of this discussion, it can safely be concluded that the significance of AI cannot be neglected. At the same time, it is imperative that every AI system keep humans on the loop or in the loop, retaining oversight or direct control over the machine's decisions. With humans out of the loop and full control granted to machines, the resulting scenario could endanger humanity.