The Role of Emerging Technologies and the Revolution in Military Affairs in Strategic Stability: Possible Future Scenarios at the Global and Regional Levels

The first Gulf War is widely seen as the genesis of technology's decisive role in modern warfare. Prior to that, the 'Star Wars' concept (the Strategic Defense Initiative) had been developed in tandem with a kindred idea, the Revolution in Military Affairs (RMA). Barry Buzan was right in analytically foreseeing the shape of future conflicts after the fall of the Berlin Wall. His framework of security sectors, which expanded the scope of securitization theory, is playing out in contemporary times. One of its facets falls within the realm of non-traditional threats emanating from the use of digital technologies. So far, the intended purpose of nuclear weapons, to avert wars, has held. Against that backdrop lies the role of technology and the RMA, which is disrupting traditional notions of conflict and war. Academia, meanwhile, is debating what role emerging technologies will play in the dimensions of warfare. The answers remain vague, owing to the transitional phase of these technologies and the scarcity of literature on the novel subjects under discussion. Nonetheless, work is being undertaken on the epistemology of these new trends and transformations.

Nuclear studies have also seen technological revolution and innovation in contemporary times; indeed, nuclear weapons themselves are a product of the great scientific and technological innovation of the 20th century. The novel concept of Cross-Domain Deterrence (CDD) is also making inroads into the realm of nuclear politics. Although still nascent, such concepts are unsettling established notions like deterrence and strategic stability. In the contemporary era, these emerging technologies manifest in cyberspace (the Stuxnet virus directed at Iran's Natanz nuclear facility) and artificial intelligence (autonomous weapons systems and their prospective role in command and control). Some technologies can be categorized as enablers of nuclear politics while others can be regarded as destabilizers; that determination rests with those in the corridors of power.

Information is now considered the fifth dimension of warfare, alongside the traditional domains of land, sea, air, and space. Cyberspace is often used interchangeably with Information Warfare and Information Operations (IW & IO); however, the former is a subset of the latter. The example that best illustrates this is the Stuxnet attack on Iran's Natanz nuclear facility.

This was the first instance on the geopolitical landscape in which a weapon was built in the virtual battlefield, exploited the adversary's vulnerabilities in that virtual battlefield, and finally produced effects on the physical battlefield. That battlefield was none other than the nuclear politics of the Middle East, where the US and its allies were pushing for a diplomatic victory over Iran, one that ultimately emerged in the form of the Joint Comprehensive Plan of Action (JCPOA). Incursions of this kind imperil international peace and security, the maintenance of which is the mandate of the United Nations (UN). Before such an attack, everything that had been said and written about cyber warfare remained unmanifested.

The post-Stuxnet environment has opened avenues for similar attacks. Does this mean that the concept of deterrence has been hollowed out, and if so, where does strategic stability stand? Some analysts contend that cyber weapons can undermine nuclear politics and its concept of deterrence, while others disagree. The evolving scenario would pit the nuclear against the cyber; behind the trigger, though, it would still be a human who has to make the decision. Quite strikingly, this would not be the case for artificial intelligence (AI).

On September 26, 1983, the satellites and computers of the Soviet Air Defence Forces, tasked with using data to determine whether the United States was launching a nuclear attack, told the humans in charge that exactly that was happening: five US ballistic missiles were incoming, and the time for the USSR to prepare a retaliatory launch was now. The reason we are alive today is that the officer involved, Lt. Col. Stanislav Petrov, believed the computer was wrong. He was right. Had the computers been in charge, civilization as we know it would have been over.

Among the jobs that could be outsourced to decision-making computers are those of modern-day Petrovs and other humans tasked with deciding whether it is time to end humanity with a nuclear strike. In fact, some policy advisors have recently argued that this outsourcing of command and control of the nuclear arsenal must happen, because both nuclear capabilities and computing power have advanced manifold. The timeframe required to assess whether a retaliatory second strike is necessary, and then to launch it, has shrunk from the 20 or so minutes of Petrov's time to perhaps two and a half minutes.

Central to nuclear weapons policy is the concept of deterrence, or mutually assured destruction: no nation will launch a nuclear strike because doing so would ensure its own destruction. This doctrine is essentially what has kept nuclear weapons from being used over the past decades. Handing over control of nuclear weapons to AI is seen by some as a necessary adaptation to modern-day technology that would keep the Cold War-era notion of deterrence alive.

There are policy analysts who believe it is a good idea. Two of them, Adam Lowther and Curtis McGiffin, both former military officers, argue for an AI-controlled ‘fail-safe’, not unlike the ‘doomsday device’ in Stanley Kubrick’s Dr. Strangelove. Their solution is an AI that would be able to determine whether an attack had been launched and would trigger an automated response even before the first nuclear detonation.

But many more policy analysts believe that ceding control of the nuclear arsenal to AI is a tremendously bad idea that upends the entire doctrine of deterrence, which is what has averted a nuclear war to date. Relying more heavily on AI removes the human element; it removes people like Stanislav Petrov. “There’s no way to know what happens if we cede control of these systems to an artificial intelligence, but we do know the likelihood of a person like Petrov stepping in to stop the madness plummets,” Vice’s Matthew Gault wrote.

And if AI is in charge, say in the United States, other nations without that AI, such as China or Russia, might be more likely to launch a first strike or take other countermeasures, “which could increase first-strike instability and heighten the risk of deliberate, inadvertent and accidental nuclear use,” as Rafael Loss and Joseph Johnson write.

Such emerging technologies not only strain the delicate balance of strategic stability but can also bring about its collapse. Day-to-day technological innovation has taken the lead over traditional workforces and decision-making models. Will cyber ultimately be given the space to be used against nuclear weapons? The ensuing scenario would be apocalyptic; whatever survived the nuclear winter would be left fighting with stones in hand. Likewise, would AI-automated decision-making allow humans to intervene in order to de-escalate a war-like situation? That scenario, too, would be doomsday, for such automated systems might come to see human intervention as a threat to their own decisions and lock humans out of executing commands. Many movies and TV serials present us with such scenarios. However science-fictional these productions may be, they still offer a window into possible future war or conflict scenarios that should not be ignored. After all, if perception management is the precursor to action, the same holds for the delicate debate discussed above.

Syed Ali Hadi

Syed Ali Hadi is currently working as a Research Assistant at the Centre for Strategic and Contemporary Research. He is pursuing an M.Phil. in Strategic Studies at National Defence University, Islamabad.
