Counterterrorism officials have long likened their efforts against terrorist organizations' use of digital tools and social media platforms to a game of whack-a-mole.
Groups like the Islamic State and neo-Nazi organizations such as The Base use digital tools to covertly raise funds, obtain 3D-printed weapons, and distribute these resources among their followers.
Over time, efforts to thwart attacks and keep the upper hand over such factions have advanced as more open-source resources have become available.
Now, with artificial intelligence evolving rapidly and freely available as an app, security officials are in a race against time.
A source familiar with the U.S. government’s counterterrorism initiatives told the Guardian that several security agencies are deeply worried about how AI enhances the operational efficiency of hostile groups. The FBI declined to comment.
“Our research accurately forecast the trends we are now witnessing: terrorists are leveraging AI to accelerate their existing strategies rather than reinventing their operational frameworks,” says Adam Hadley, the founder and executive director of Tech Against Terrorism, an online counter-terrorism watchdog. He references work with the UN Counter-Terrorism Committee Executive Directorate (CTED).
“Future dangers include the potential for terrorists to use AI to rapidly develop apps and websites, essentially amplifying threats associated with existing technologies rather than introducing entirely new categories of risk.”
So far, IS and affiliated groups have begun amplifying their recruitment propaganda across diverse media formats using AI tools such as OpenAI’s ChatGPT. The risk is immediate: while entire sectors of employment brace for upheaval that stands to benefit some of the world’s wealthiest individuals, the same technology is complicating public safety.
“Consider breaking news from the Islamic State. Today, it can be converted into an audio format,” says Mustafa Ayad, executive director for Africa, the Middle East, and Asia at the Institute for Strategic Dialogue. “We’ve seen supporters establish groups to bolster these efforts, and we also have photo arrays being generated alongside them.”
Echoing Hadley, Ayad continues: “Much of AI’s impact is in enabling pre-existing methods. It also enhances their propaganda and its distribution, which is critically significant.”
The Islamic State is not merely curious about AI; it actively recognizes the technology’s potential benefits, even circulating a “Guide to AI Tools and Risks” on its encrypted channels for supporters. A recent propaganda magazine elaborates on the future of AI and the necessity of incorporating it into the group’s operations.
“It’s become crucial for everyone to understand the intricacies of AI, irrespective of their field,” the article states. “[AI] is evolving into more than just technology; it is becoming a driving force in warfare.” The writer even posits that AI services could serve as “digital advisors” and “research assistants” for any member of the organization.
In the perpetually active chat rooms where followers and recruits communicate, discussions are emerging about the ways AI could be used as a resource, though some remain cautious. One user asked whether it was safe to use ChatGPT for “explosives practices,” uncertain whether authorities were monitoring the platform; privacy concerns have grown as chatbots see wider use.
“Are there any alternatives?” another supporter asked in the same chat room. “Ensure safety.”
One participant, however, found a way to evade monitoring: rather than posting schematics and instructions outright, they described how to generate a “basic blueprint for remote vehicle prototypes using ChatGPT.” Vehicle ramming has emerged as a tactic in recent attacks by followers and operatives. In March, an IS-linked account released a video featuring AI-generated avatars presenting bomb-making tutorials, with recipes based on household materials.
Far-right entities are similarly drawn to AI, advising followers on creating disinformation memes, such as graphic content featuring Adolf Hitler.
Ayad emphasized that some of these AI-powered tools are advantageous for terrorist groups in enhancing their operational security, enabling them to communicate securely without attracting undue scrutiny.
Terrorist organizations continually adapt digital spaces to their advantage, and AI is only the latest example. Since June 2014, when IS first commanded global attention amid dramatic live-tweeted accounts of mass executions in Mosul, the group’s cyber operations have evolved significantly. After the establishment of its so-called caliphate, governments and Silicon Valley mounted an organized response to curb its online presence. Western intelligence agencies have since increasingly focused their surveillance and policing efforts on encrypted messaging applications, where items such as 3D-printed firearms can be sourced.
Nonetheless, recent cutbacks to comprehensive global counterterrorism initiatives, including at U.S. agencies, have undermined these efforts.
“The more urgent weakness lies in the deteriorating counterterrorism infrastructure,” Hadley remarked. “Standards have considerably declined as platforms and governments divert focus from this critical domain.”
Hadley is advocating for improved “content moderation” concerning AI-enabled materials, pressing companies like Meta and OpenAI to “enhance current mechanisms such as hash sharing and traditional detection methods.”
“Our vulnerabilities do not stem from new AI capabilities, but rather from the reduced resilience against established terrorist activities online,” he concluded.
Source: www.theguardian.com
