A Terrorist Tech Review in Khorasan

Rahul Pawa

In June 2025, an unlikely tech column appeared in Voice of Khorasan, the English-language online propaganda magazine of ISIS-K (Islamic State Khorasan Province). Amid the usual fare, Issue 46 featured a detailed comparison of popular AI chatbots, from OpenAI’s ChatGPT and Microsoft’s Bing AI to the privacy-focused Brave Leo and the Chinese bot dubbed “DeepSeek.” The tone resembled a consumer tech review, but the intent was deadly serious. The authors warned fellow jihadists about the risks of these tools, raising alarms about data privacy and surveillance. ChatGPT and Bing, they noted, might log user data or even expose a user’s IP address, a potential death sentence for a terrorist on the run. They cautioned that generative AI could churn out false information or even mimic someone’s writing or speaking style, making it a double-edged sword for propaganda. After weighing the platforms, the magazine gave its endorsement: Brave’s Leo AI, integrated into a private web browser and requiring no login, was deemed the safest option for terrorists seeking anonymity. In essence, ISIS-K selected the chatbot that asks the fewest questions in return, a chilling reminder that even terrorists prize privacy features.

This surreal scene, a terrorist group rating AI assistants, illustrates how terrorist organisations are eagerly probing the latest technology for their own ends. If Islamic State’s Afghan affiliate is producing in-house reviews of chatbots, it is because it sees in them potential tools for propaganda, recruitment, and even operational support. And ISIS-K is not alone. Across the ideological spectrum, violent extremists are experimenting with generative AI, ushering in a new chapter in the long history of terrorists exploiting digital media. From jihadists in remote safe houses to far-right extremist cells in Western suburbs, terrorists are testing how AI can amplify their hateful messaging and help them evade detection. It is a development that has counterterrorism officials on high alert, and for good reason.

For decades, terrorist groups have been early adopters of new media. In the late 1990s, Al-Qaeda circulated grainy VHS tapes and CD-ROMs with lectures by Osama bin Laden. By the mid-2010s, ISIS perfected the art of online propaganda: slickly edited videos, encrypted chat channels, and multilingual web magazines reached recruits worldwide at the click of a button. The Islamic State’s media operatives earned a dark reputation as “innovators” in digital terror, leveraging YouTube, Twitter, and Telegram in ways governments struggled to counter. Now, generative AI is the latest technology wave and once again, terrorists are riding it.

What’s different today is the power and accessibility of these AI tools. Modern generative AI can produce content that is startlingly realistic and tailored to an audience’s biases or emotions. This opens troubling possibilities for propaganda. Terrorist groups can now generate fake images, videos, and even interactive dialogues at scale, with minimal resources. In recent months, terrorists have used AI-created images and videos to stoke sectarian hatred and amplify conflicts. During Israel’s counterstrike on Hamas in 2023, for example, Hamas-linked propagandists circulated doctored visuals, including fabricated pictures of injured children and fake photos of Israeli soldiers in humiliating situations, to inflame public emotion and spread disinformation. These AI-manipulated images blended seamlessly into the online information ecosystem, making it harder to separate truth from fabrication in the fog of war.

ISIS and its offshoots have likewise ventured into deepfakes. Islamic State’s media affiliates reportedly published a “tech support guide” in mid-2023 instructing followers how to use generative AI tools securely while avoiding detection. Not long after, ISIS-K began unveiling AI-generated propaganda videos. Following a 2024 attack in Afghanistan, ISIS-K released a video bulletin featuring a fictitious news anchor, generated by deepfake technology, calmly reading the group’s claim of responsibility. The video looked like a normal news broadcast, complete with a professional-looking anchor, except it was entirely fabricated by AI. In another case, after an assault in Kandahar, an ISIS-K propagandist created a second deepfake “Khurasan TV” clip, this time with a Western-accented avatar as the presenter. The goal is clear: lend terrorist propaganda a veneer of credibility and polish that previously required a studio and camera crew. Instead of grainy cellphone martyr videos, we now see digital avatars delivering the jihadist message in high definition, potentially fooling viewers (and automated content filters) that would dismiss overtly violent footage. As one security analyst observed, this marks a stark upgrade from the early 2000s, when terrorist videos were rudimentary and “prioritised the message over higher production values”; today’s AI-crafted terror content can closely resemble legitimate media broadcasts.

Why are terrorist groups so keen on generative AI? The answer lies in what these tools promise: speed, scale, personalisation, and a degree of deniability. A large language model can produce terrorist propaganda texts in multiple languages almost instantaneously, allowing a group like ISIS-K or al-Qaeda to tailor messages to different ethnic or national audiences without a large translation team. AI image generators can churn out endless visuals for memes, posters, or fake “news” evidence, enabling agitators to flood social media with content that algorithmic moderation hasn’t seen before, thereby evading detection by hash-based filters that flag known terrorist photos. As Adam Hadley of Tech Against Terrorism warned, if terrorists manipulate imagery at scale with AI, it could undermine the hash-sharing databases that platforms use to automatically block violent content. In effect, generative AI offers terrorists a way to boost volume and variety in their online output, potentially staying one step ahead of content moderation efforts.

Just as importantly, AI lowers the barriers to creating sophisticated lies. Misinformation and conspiracy theories can be mass-produced with ChatGPT-like models, which excel at mimicking an authoritative tone or even an individual’s speech patterns. ISIS-K’s magazine explicitly noted this danger, that AI can “create false information or mimic specific speech patterns”, even as the group itself hopes to weaponise those abilities. A propaganda officer no longer needs fluent English or polished writing skills; a chatbot can draft a glossy screed praising ISIS or condemning “infidels” in seconds. Need a fake quote from a Western military official, or an AI-generated image of an atrocity to blame on the enemy? These can be fabricated on demand. The Islamic State’s online supporters have already experimented with AI transcription and translation to repurpose old Arabic audio into new languages. Al-Qaeda supporters have circulated AI-crafted visual posters to glamorise their cause. And on the other end of the spectrum, far-right and white supremacist groups are also sharing tips on using AI, from an online channel that distributed AI-generated racist images to far-right influencers publishing a “guide to memetic warfare” with AI-based meme generators. Across the board, terrorists see opportunity in these technologies to make their propaganda more persuasive and pervasive.

There’s another reason AI appeals to terrorists: automation. A human recruiter or propagandist has only so many hours in a day, but an AI chatbot can engage dozens of people 24/7 without needing sleep or salary. We are already seeing rudimentary terrorist chatbots pop up. Intelligence analysts have noted that AI-powered terrorist bots could be deployed to engage potential recruits in personalised conversations, probing their grievances and gradually nudging them toward radicalisation. Unlike a human operative, a chatbot can patiently imitate a mentor or comrade, analysing a person’s messages for ideological leanings or emotional vulnerabilities and adjusting its replies in real time. In one scenario, a lone individual sympathetic to jihadist ideas might stumble upon a Telegram bot that answers their questions about Islamic State ideology, supplies them with propaganda videos, and steadily encourages more extreme action, all with minimal human oversight. By the time an actual ISIS handler steps in, the recruit may already be desensitised and committed. This kind of AI-guided grooming is no longer science fiction; it’s an active area of experimentation. Islamic State cadres reportedly have even taken online AI training courses to improve their outreach. The allure is obvious: AI can multiply a terror group’s reach while keeping its manpower safer in the shadows.

The ISIS-K magazine’s chatbot comparison shines a light on another priority for terrorists in the digital age: operational security. The fact that a terrorist publication is dissecting the privacy policies of AI platforms shows how concerned these groups are about surveillance. It’s a cat-and-mouse game: jihadists know Western intelligence agencies and tech companies are tracking their online footprints, so they gravitate toward any tool that promises anonymity. Brave Leo, the magazine’s top pick, is telling in this regard. Brave’s AI assistant runs in a privacy-centric web browser, doesn’t require account registration, and claims to store no chat logs. In other words, it leaves minimal trace. By contrast, ISIS-K’s writers correctly noted that mainstream models like ChatGPT and Bing log user queries and could potentially hand them over to authorities. Even the Chinese “DeepSeek” bot was deemed too risky, given that its servers in China might be accessible to state security and had suffered known data leaks, a nonstarter for terrorists who fear surveillance. The preference for a privacy-shielded AI mirrors terrorists’ broader tech choices: encrypted messaging apps over open forums, cryptocurrency (like Monero) over traceable bank transfers, and now open-source or decentralised AI over Big Tech offerings. Indeed, in the same issue of Voice of Khorasan, ISIS-K shared a new Monero crypto wallet for donations, another nod to privacy and untraceability. The through-line is clear: whether it’s moving money or asking an AI how to say “hello” in Indonesian, these groups want to do it beyond the gaze of governments.

This pursuit of anonymity also highlights a profound irony. The very companies that pride themselves on user privacy, in this case Brave, may inadvertently become platforms of choice for violent terrorists. It’s a perilous balancing act: privacy protections are vital for activists, dissidents, and ordinary citizens, but the same features can shield terrorists. ISIS-K’s public musings on chatbot privacy read like a dark parody of Consumer Reports, yet they highlight a genuine challenge for tech providers. How do you offer privacy-rich services without becoming a safe haven for malign actors? The makers of Brave Leo likely never imagined their product would get the jihadi seal of approval in an ISIS magazine. But in the wilds of the internet, bad actors will exploit any tool that gives them an edge, from burner phones to VPNs to AI bots.

The emergence of generative AI as a terrorist tool is forcing a rethink in counterterrorism circles. Security services around the world have long grappled with terrorist use of the internet, but the scale and sophistication that AI brings to the table raise the stakes. Propaganda that once took a studio of editors to produce can now be generated by a lone operative with a laptop. Fake personas can be maintained by chatbots, making it harder for undercover agents to tell whether they are engaging with a real recruiter or an algorithm. False information can be injected into social media debates at an unprecedented volume, potentially swaying narratives before the truth can catch up. As one analysis warned, generative AI could allow terrorists to wage “intensive propaganda and disinformation (campaigns) with the click of a button,” dramatically increasing the speed and scale of influence operations. Governments and tech companies find themselves scrambling to respond to this threat. In a recent initiative, the non-profit Tech Against Terrorism partnered with Microsoft to develop AI-based detection systems that can identify and flag terrorist content produced by generative models. The hope is that AI can also be part of the solution, for example by recognising the telltale quirks of AI-generated images or text and pre-emptively filtering out propaganda barrages before they flood smaller platforms. Researchers are likewise studying how effectively AI-driven disinformation actually radicalises people, so policymakers can craft informed responses. Early academic findings suggest that AI-generated propaganda can be quite persuasive, especially if a reader is unaware it was machine-written, which rings alarm bells for those on the front lines of deradicalisation.

History shows that whenever terrorists gain a technological advantage, be it encrypted messaging, online finance, or drone weaponry, governments eventually catch up, though often after a dangerous lag. The same may prove true for AI. We are still in the early days of terrorists experimenting with ChatGPT-style tools, but the rapid adoption documented in ISIS-K’s magazines and far-right chat rooms indicates a learning curve that is quickly being overcome. Today it’s comparative reviews of chatbots; tomorrow it could be fully AI-scripted recruitment videos or autonomously generated terror manifestos flooding the web. The broader implication is that counterterrorism efforts must adapt to an era of synthetic propaganda and AI-assisted operations. This means investing in new detection technologies, updating regulations for AI platforms, and perhaps rethinking how we monitor online terrorist communities without infringing on ordinary people’s privacy. It’s a delicate balance. Overreaction could drive would-be terrorists further into dark, decentralised corners of the internet. Under-reaction could allow an onslaught of AI-fuelled radicalisation that our current systems are ill-prepared to handle.

In the end, the scene of ISIS-K’s tech editors picking their favourite chatbot is a stark snapshot of the moment: even as their caliphate lies in ruins on the ground, Islamist terrorists (like many of their counterparts across the ideological spectrum) are forging a digital caliphate in the cloud, complete with AI advisors at their side. Preventing the misuse of generative AI is now part of the broader fight against terrorism. It will require vigilance, creativity, and cooperation between tech firms, governments, and civil society. The world has seen how terrorists exploit a Twitter hashtag or a YouTube video to deadly effect; we must now anticipate how they might leverage an algorithm that can think and write at their command. The fight against terror has entered the age of artificial intelligence, and the global community cannot afford to be a step behind. The terrorists certainly aren’t.

(The author is Director of Research at the New Delhi-based think tank Centre for Integrated and Holistic Studies.)
