Terrorist Financing in the Age of Large Language Models
This research briefing by Jason Blazakis examines how advances in artificial intelligence (AI) and large language models (LLMs) could be exploited to support terrorist financing activity. The report argues that LLMs could act as powerful “force multipliers” by lowering the barriers to persuasion, coordination and financial deception, and that these technologies risk reshaping the economics of terrorist fundraising by enabling scalable, personalised and culturally tailored appeals at unprecedented speed.
The report assesses how AI-enabled tools could be used to generate fundraising narratives and outreach materials, as well as to facilitate fraud, cyber theft and the concealment of proceeds. It compares how leading LLM providers, including OpenAI, Google and Anthropic, address terrorism and illicit finance in their published policies, highlighting notable differences in regulatory specificity and enforcement approaches. To test whether these policies translate into practice, the author conducted limited baseline prompt testing across the three companies’ flagship models, examining whether they refused overt requests related to terrorist fundraising and money laundering.