The Role of Artificial Intelligence and Machine Learning in Countering Terrorism in Pakistan
Artificial intelligence (AI) is a technology capable of performing complex tasks that were traditionally performed only by humans, such as problem solving, reasoning, and decision-making. Machine learning (ML), a subset of AI, focuses on algorithms that improve their performance at specific tasks by learning from data. These algorithms analyse a dataset, detect patterns within it, and draw conclusions from those patterns without the need for human intervention. Consequently, there has been a growing reliance on advanced technologies to bolster surveillance and threat-detection capabilities. An illustrative example is the Chinese government, which has leveraged AI to identify individuals deemed potential threats to national security. However, the efficacy of AI models in identifying and pre-empting terrorist activity hinges on the availability of data about individuals' behaviour, from which potential terrorists can be identified and their future activities predicted.
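To make the idea of learning from data concrete, the sketch below shows a classifier inferring a decision rule from labelled examples rather than from hand-written logic. The feature values, labels, and example profiles are entirely hypothetical and serve only to illustrate the principle.

```python
# Minimal illustration of machine learning: the model infers a decision rule
# from labelled examples instead of relying on hand-written if/else logic.
# All feature values and labels below are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: [night-time logins per week, flagged contacts, unusual travel events]
X_train = [
    [0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 0, 1],   # activity profiles labelled benign
    [9, 4, 3], [8, 5, 2], [7, 3, 4], [10, 6, 3],  # profiles analysts marked as suspicious
]
y_train = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = benign, 1 = flagged for review

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)  # the algorithm extracts the separating pattern itself

print(model.predict([[1, 0, 1], [8, 4, 2]]))  # expected output: [0 1]
```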
Following the Chinese model, Pakistan can also leverage advanced technologies, particularly artificial intelligence, to enhance national security and counter the threat of terrorism. For instance, INSIKT Intelligence, a US-based startup, uses AI-driven analysis of social media and other information to detect possible online threats. Similar AI tools could be adapted for use in Pakistan to examine online terrorist behaviour in greater detail.
AI can also be used to identify people who may be becoming radicalised online. Potential indicators of radicalisation can be spotted by analysing online activity with machine learning techniques such as natural language processing (NLP). NLP plays a key role through automated text analysis, helping to identify language, emotions, and ideas, and it can recognise subtle cues in language that may indicate a shift towards radical thinking.
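As a rough illustration of how such text analysis works, the sketch below trains a simple classifier on a handful of invented posts and scores a new post against the "indicator" class. The training snippets, labels, and pipeline choices (TF-IDF features with logistic regression) are assumptions made for demonstration; an operational system would rely on far larger, expert-curated corpora and more capable language models.

```python
# A minimal sketch of NLP-based text triage: TF-IDF features plus a linear
# classifier assign a probability that a post resembles previously labelled
# extremist-leaning text. The training snippets are invented placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "join us and punish the traitors of our cause",
    "the enemies of the faithful deserve no mercy",
    "only violence can cleanse this corrupt system",
    "great match last night, what a goal",
    "looking for a good biryani place in Lahore",
    "the new budget will affect fuel prices",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = potential indicator, 0 = benign

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

new_post = "we must punish the traitors before it is too late"
print(classifier.predict_proba([new_post])[0][1])  # probability of the 'indicator' class
```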
By constantly learning from new data, these AI systems can improve their accuracy over time, becoming more adept at distinguishing harmless expressions of opinion from indicators of extremism and radicalisation. An example of a tool created for this purpose while adhering to stringent privacy and security standards is the EU-funded Real-time Early Detection and Alert System for Online Terrorist Content (RED-Alert) project.
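A minimal sketch of this incremental learning idea, under the assumption that newly reviewed posts arrive in batches, might look as follows; the hashing features and toy texts are placeholders rather than anything RED-Alert actually uses.

```python
# Sketch of a classifier that keeps learning from new data: it is updated
# batch by batch as analysts label new posts, instead of being retrained
# from scratch. Texts and labels are invented placeholders.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
model = SGDClassifier(random_state=0)

# Initial batch of labelled examples.
texts = ["death to the unbelievers", "nice weather in Karachi today"]
labels = [1, 0]
model.partial_fit(vectorizer.transform(texts), labels, classes=[0, 1])

# Later, newly reviewed posts refine the same model without full retraining.
new_texts = ["we will strike the apostates soon", "traffic on the motorway is terrible"]
new_labels = [1, 0]
model.partial_fit(vectorizer.transform(new_texts), new_labels)

print(model.predict(vectorizer.transform(["the apostates will pay", "lovely weather today"])))
```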
The misinformation and disinformation spread by terrorists on social media pose a serious threat to national security. It is commonly assumed that terrorist groups can create an environment conducive to terrorism through the spread of false information that causes fear and uncertainty, destabilising communities and making them easier to manipulate. They also undermine trust in authorities by circulating fake news about government actions, weakening societal cohesion and making extremist narratives more appealing. For recruitment, terrorists spread distorted ideological narratives and highlight perceived injustices to attract and radicalise individuals who feel marginalised. This spread of false information helps create an environment conducive to terrorism by polarising society, increasing tensions, normalising extremist views over time, and facilitating the coordination of terrorist activities under the guise of false narratives.
Spreading such content at scale would be costly to do manually; it is therefore largely disseminated by bots, online programmes that carry out repetitive tasks. A 2017 study found that there were 140 million bots on Facebook, about 27 million on Instagram, and 23 million on Twitter. Groups such as ISIL have demonstrated proficiency in using bots to distribute propaganda on social media automatically. However, fact-checking websites such as Snopes.com can be used to verify the reliability of sources and to identify hate speech and disinformation, helping to counter the significant share of misinformation and fake news spread by terrorists.
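A deliberately simple, illustrative heuristic for the kind of bot detection this implies is sketched below; the account fields, thresholds, and example handles are all hypothetical, and real detection systems combine many more behavioural and network signals.

```python
# Illustrative bot-detection heuristic: automated accounts tend to post at
# very high, regular rates and repeat near-identical content. All thresholds
# and account data below are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float
    distinct_ratio: float   # distinct posts / total posts (low means heavy repetition)
    followers: int

def looks_automated(acc: Account) -> bool:
    """Flag accounts that post extremely often, repeat themselves, and have few followers."""
    return acc.posts_per_day > 100 and acc.distinct_ratio < 0.2 and acc.followers < 50

accounts = [
    Account("news_reader_42", posts_per_day=6, distinct_ratio=0.95, followers=340),
    Account("victory_caliph_bot", posts_per_day=480, distinct_ratio=0.05, followers=12),
]
for acc in accounts:
    print(acc.handle, "-> likely bot" if looks_automated(acc) else "-> likely human")
```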
Apart from protecting online spaces, Pakistan can leverage biometric verification systems, building on initiatives such as the Safe City Project. This initiative can be expanded to integrate biometric systems and AI-driven monitoring by installing high-definition CCTV cameras equipped with facial recognition and biometric scanning capabilities at key public locations such as pedestrian crossings, transport hubs, and crowded markets. By integrating these biometric systems with AI, the captured data can be compared against international watchlists and databases to quickly identify potential threats, enabling real-time identification and monitoring and significantly enhancing national security. In the past, Pakistan saw such an approach in practice with SKYNET, which systematically analysed metadata from the country's 55 million mobile phone users to detect terrorist activity with a reported error rate as low as 0.008%.
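The watchlist-matching step can be sketched as follows, under the assumption that a separate face-recognition model has already converted each detected face into an embedding vector; random vectors stand in for real embeddings, and the watchlist entries are hypothetical.

```python
# Minimal sketch of watchlist matching: a face detected on CCTV is reduced to
# an embedding vector (by a face-recognition model, not shown here) and
# compared against stored watchlist embeddings with cosine similarity.
# The vectors below are random stand-ins for real embeddings.
import numpy as np

rng = np.random.default_rng(0)
watchlist = {f"suspect_{i}": rng.normal(size=128) for i in range(1000)}  # hypothetical entries

def best_match(query: np.ndarray, threshold: float = 0.6):
    """Return the closest watchlist entry if its cosine similarity exceeds the threshold."""
    names = list(watchlist)
    matrix = np.stack([watchlist[n] for n in names])
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    idx = int(np.argmax(sims))
    return (names[idx], float(sims[idx])) if sims[idx] >= threshold else (None, float(sims[idx]))

camera_embedding = watchlist["suspect_7"] + rng.normal(scale=0.1, size=128)  # simulated sighting
print(best_match(camera_embedding))  # expected to match suspect_7 with similarity near 1.0
```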
In conclusion, AI and ML offer tremendous potential for combating terrorism in Pakistan through a variety of innovative approaches. Security can be significantly strengthened through biometric verification systems. By leveraging AI's capacity for reasoning and problem solving, coupled with ML's ability to learn from data, security agencies can develop predictive models of potential terrorist activities and of online radicalisation. Provided that ethical considerations are addressed and privacy and civil liberties are respected, Pakistan can thus step up its counter-terrorism efforts.