A Terrorist Tech Review in Khorasan
The broader implication is that counterterrorism efforts must adapt to an era of synthetic propaganda and AI-assisted operations. This means investing in new detection technologies, updating regulations for AI platforms, and perhaps rethinking how we monitor online terrorist communities without infringing on ordinary people’s privacy.

Rahul Pawa

In June 2025, an unlikely tech column appeared in Voice of Khorasan, the English-language online propaganda magazine of ISIS-K (Islamic State Khorasan Province). Amid its usual fare, Issue 46 featured a detailed comparison of popular AI chatbots, from OpenAI’s ChatGPT and Microsoft’s Bing AI to the privacy-focused Brave Leo and the Chinese bot DeepSeek. The tone resembled a consumer tech review, but the intent was deadly serious. The authors warned fellow jihadists about the risks of these tools, raising alarms about data privacy and surveillance. ChatGPT and Bing, they noted, might log user data or even expose a user’s IP address, a potential death sentence for a terrorist on the run. They cautioned that generative AI could churn out false information or even mimic someone’s writing or speaking style, making it a double-edged sword for propaganda. After weighing the platforms, the magazine gave its endorsement: Brave’s Leo AI, integrated into a private web browser and requiring no login, was deemed the safest option for terrorists seeking anonymity. In essence, ISIS-K selected the chatbot that asks the fewest questions in return, a chilling reminder that even terrorists prize privacy features.

This surreal scene, a terrorist group rating AI assistants, illustrates how terrorist organisations are eagerly probing the latest technology for their own ends. If Islamic State’s Afghan affiliate is producing in-house reviews of chatbots, it is because the group sees potential tools for propaganda, recruitment, and even operational support. And ISIS-K is not alone. Across the ideological spectrum, violent extremists are experimenting with generative AI, ushering in a new chapter in the long history of terrorists exploiting digital media. From jihadists in remote safe houses to far-left extremist cells in Western suburbs, terrorists are testing how AI can amplify their hateful messaging and help them evade detection. It’s a development that has counterterrorism officials on high alert, and for good reason.

For decades, terrorist groups have been early adopters of new media. In the late 1990s, Al-Qaeda circulated grainy VHS tapes and CD-ROMs with lectures by Osama bin Laden. By the mid-2010s, ISIS had perfected the art of online propaganda: slickly edited videos, encrypted chat channels, and multilingual web magazines reached recruits worldwide at the click of a button. The Islamic State’s media operatives earned a dark reputation as “innovators” in digital terror, leveraging YouTube, Twitter, and Telegram in ways governments struggled to counter. Now generative AI is the latest technology wave, and once again terrorists are riding it.

What’s different today is the power and accessibility of these AI tools. Modern generative AI can produce content that is startlingly realistic and tailored to an audience’s biases or emotions. This opens troubling possibilities for propaganda. Terrorist groups can now generate fake images, videos, and even interactive dialogues at scale, with minimal resources. In recent months, extremists have used AI-created images and videos to stoke sectarian hatred and amplify conflicts.
During Israel’s 2023 counteroffensive against Hamas, for example, Hamas-linked propagandists circulated doctored visuals, including fabricated pictures of injured children and fake photos of Israeli soldiers in humiliating situations, to inflame public emotion and spread disinformation. These AI-manipulated images blended seamlessly into the online information ecosystem, making it harder to separate truth from fabrication in the fog of war.

ISIS and its offshoots have likewise ventured into deepfakes. Islamic State’s media affiliates reportedly published a “tech support guide” in mid-2023 instructing followers how to use generative AI tools securely while avoiding detection. Not long after, ISIS-K began unveiling AI-generated propaganda videos. Following a 2024 attack in Afghanistan, ISIS-K released a video bulletin featuring a fictitious news anchor, generated by deepfake technology, calmly reading the group’s claims of responsibility. The video looked like a normal news broadcast, complete with a professional-looking anchor, except that it was entirely fabricated by AI. In another case, after an assault in Kandahar, an ISIS-K propagandist created a second deepfake “Khurasan TV” clip, this time with a Western-accented avatar as the presenter. The goal is clear: lend terrorist propaganda a veneer of credibility and polish that previously required a studio and camera crew. Instead of grainy cellphone martyr videos, we now see digital avatars delivering the jihadist message in high definition, potentially fooling viewers (and automated content filters) that would otherwise dismiss overtly violent footage. As one security analyst observed, this marks a stark upgrade from the early 2000s, when terrorist videos were rudimentary and “prioritised the message over higher production values”; today’s AI-crafted terror content can closely resemble legitimate media broadcasts.

Why are terrorist groups so keen on generative AI? The answer lies in what these tools promise: speed, scale, personalisation, and a degree of deniability. A large language model can produce propaganda texts in multiple languages almost instantaneously, allowing a group like ISIS-K or al-Qaeda to tailor messages to different ethnic or national audiences without a large translation team. AI image generators can churn out endless visuals for memes, posters, or fake “news” proof, enabling agitators to flood social media with content that algorithmic moderation hasn’t seen before, thereby evading detection by hash-based filters that flag known terrorist photos. As Adam Hadley of Tech Against Terrorism warned, if terrorists manipulate imagery at scale with AI, it could undermine the hash-sharing databases that platforms use to automatically block violent content (a simplified sketch of this mechanism appears at the end of this section). In effect, generative AI offers terrorists a way to boost the volume and variety of their online output, potentially staying one step ahead of content moderation efforts.

Just as importantly, AI lowers the barriers to creating sophisticated lies. Misinformation and conspiracy theories can be mass-produced with ChatGPT-like models, which excel at mimicking an authoritative tone or even an individual’s speech patterns. ISIS-K’s magazine explicitly flagged this danger, noting that AI can “create false information or mimic specific speech patterns”.
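To make Hadley’s warning concrete, here is a minimal, illustrative sketch of how perceptual hash matching of the kind used in hash-sharing databases works. Everything in it is invented for illustration: the 4x4 pixel grids stand in for real video frames, the average-hash function is a textbook simplification of production algorithms such as PDQ or PhotoDNA, and the match threshold is arbitrary. The point survives the simplification: a lightly edited copy of a known image hashes close to the database entry and is caught, while a freshly generated image produces an unrelated hash and passes.

```python
# Minimal sketch (not any platform's real implementation) of perceptual
# "average hash" matching: an image is reduced to a tiny grid, each cell
# becomes one bit (brighter or darker than the grid's mean), and two
# images "match" when the Hamming distance between their bit strings is
# small enough.

def average_hash(pixels: list[list[int]]) -> int:
    """Bit i is 1 if pixel i is brighter than the grid's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bit positions where the two hashes differ."""
    return bin(a ^ b).count("1")

# 4x4 grayscale stand-ins; real systems hash far larger grids of real frames.
known = [[10, 20, 200, 210], [15, 25, 205, 215],
         [12, 22, 202, 212], [18, 28, 208, 218]]
retouched = [[12, 21, 198, 209], [14, 27, 204, 216],   # lightly edited copy
             [11, 24, 201, 210], [19, 26, 207, 217]]
generated = [[200, 10, 30, 220], [40, 190, 15, 60],    # brand-new image
             [90, 5, 250, 35], [25, 210, 45, 180]]

db_hash = average_hash(known)  # the entry in the shared hash database
for name, img in [("retouched", retouched), ("generated", generated)]:
    d = hamming(db_hash, average_hash(img))
    print(name, "distance:", d, "-> blocked" if d <= 2 else "-> passes")
```

Run as written, the retouched copy lands at distance 0 and is blocked, while the generated image lands at distance 8 and sails through. That asymmetry is the crux: hash-sharing is built to catch re-uploads and near-duplicates of known material, so propaganda that is newly generated at scale never appears in the database in the first place.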