Europol sends out ChatGPT warning on phishing, misinformation and cybercrime

AI tool that has taken the world by storm could be used by criminals to refine phishing stratagems or create authentic-sounding propaganda

The European police agency Europol has warned against the possible abuse of chatbots such as ChatGPT, the AI-powered ‘Large Language Model’ (LLM) that has taken the world by storm.

In a report published Monday, Europol sent out a stark warning that criminals will be able to use ChatGPT to create credible phishing stratagems, with more refined techniques to steal personal and sensitive data, as well as to commit cybercrime and to spread propaganda and disinformation.

Europol said ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. “The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.”

The police agency also said that apart from fraud and social engineering, ChatGPT excelled at producing authentic-sounding text at speed and scale. “This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.”

Europol also said that in addition to generating human-like language, ChatGPT was capable of producing code in a number of different programming languages. “For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code. As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.”

The Europol Innovation Lab organised a number of workshops on how criminals can abuse LLMs such as ChatGPT, as well as how these tools may assist investigators in their daily work. “As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook,” Europol said.

The Europol report aims to raise awareness about the potential misuse of LLMs, to open a dialogue with artificial intelligence (AI) companies to help them build in better safeguards, and to promote the development of safe and trustworthy AI systems.

A large language model is a type of AI system that can process, manipulate, and generate text. Training an LLM involves feeding it large amounts of data, such as books, articles and websites, so that it can learn the patterns and connections between words to generate new content.

ChatGPT is an LLM that was developed by OpenAI and released to the wider public as part of a research preview in November 2022.

The current publicly accessible model underlying ChatGPT is capable of processing and generating human-like text in response to user prompts. Specifically, the model can answer questions on a variety of topics, translate text, engage in conversational exchanges (‘chatting’), generate new content, and produce functional code.