Italian secret service warns of ‘automated radicalisation’ through generative AI
The Italian secret service has warned of “automated radicalisation” of individuals through the use of generative artificial intelligence that eliminates the need for human interaction.
In its annual report for 2025 to the Italian parliament, the secret service said AI-driven algorithms could identify vulnerable individuals and target them with tailor-made propaganda.
The secret service said “adaptive propaganda” would scale up the radical content over time in response to feedback from the subject.
“This tailor-made content that targets vulnerable people and adapts over time increases the success rate of the recruitment process,” says the report, published last week.
The report states that terrorist propaganda increased throughout 2025 compared with the previous year, with jihadist content riding on the tensions emanating from various conflict hotspots.
The Gaza crisis, instability in Iran, the fragility of Syria after Bashar al-Assad’s overthrow, and the expansion of terrorist groups in Africa and Afghanistan all contributed to a higher level of terrorist threats.
In a section dedicated to artificial intelligence, the Italian secret service warned that AI can increasingly be used by terrorists to recruit members, create and disseminate propaganda, choose targets to attack, and execute terrorist acts.
The report cites the case of teenagers in Italy who were discovered using AI-powered searches to learn how to build a rudimentary bomb.
It also mentions the arrest in Sweden in August 2025 of an 18-year-old, ideologically aligned with the Islamic State, who used a virtual assistant to select Stockholm’s Festival of Culture as his target.
The secret service said the interaction between lone wolves and artificial intelligence constituted a new risk that has to be mitigated.
AI could also be used to generate high-quality graphic material, including ever more realistic deepfakes, in different languages and adapted to different religious, social and geographical contexts. The secret service said the speed with which AI can generate propaganda, and the ease with which it can be disseminated online, are challenges that have to be addressed.
The report also mentions the use of generative AI to create fundraising campaigns linked to false humanitarian projects with donations finding their way into the pockets of terrorist organisations.
Furthermore, the use of AI-supported autonomous drone swarms to carry out terrorist attacks could result in more fatalities and increase the possibility of terrorists operating remotely.
However, the secret service said that just as AI could accelerate threats, it could also be used by law enforcement agencies to fight back.
The secret service said AI was a useful tool for identifying extremist content online and analysing social network activity more rapidly than previously possible. This allows law enforcement agencies to track down radicalised people, identify them and their roles within their respective groups, understand the flow of radical content and home in on the targets of possible attacks. Proactively, AI can also be used to redirect individuals seeking radical content online to alternative sites that present a counter-narrative.
But the secret service also warned that AI should be used within an ethical framework, so as not to breach human rights, including privacy, and to prevent the stigmatisation of certain groups through prejudices built into AI models during their learning phase.
