The European Union Agency for Law Enforcement Cooperation (Europol) is warning that criminals are already bending ChatGPT to nefarious purposes.
The report, titled ChatGPT: The impact of Large Language Models on Law Enforcement, was released by the agency on Monday. It details a myriad of ways in which ChatGPT can be turned to criminal ends, based on use cases the agency has identified.
The criminal university
One of Europol’s concerns is that ChatGPT and other chatbots can speed up a criminal’s education.
“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,” says the law enforcement agency.
“While all of the information ChatGPT provides is freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime,” the agency adds.
Europol cites burglary, terrorism, cybercrime, and child sexual abuse as potential areas of exploration. These identified use cases emerged from workshops and research carried out by the agency.
The social engineer’s friend
Europol also reports that ChatGPT is an “extremely useful tool for phishing purposes.”
In particular, Europol is concerned that, with the aid of ChatGPT, scammers can make their deceptive communications far more authentic.
“In the case of mass-produced campaigns, targets of these types of crime would often be able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content,” says Europol. “With the help of LLMs, these types of phishing and online fraud can be created faster, much more authentically, and at significantly increased scale.”
This echoes similar warnings from cybersecurity firms, including Norton, which has also identified ChatGPT’s potential to empower scammers.
Coding for cybercrime
Another area highlighted by Europol is coding, as ChatGPT makes it easy for someone with little or no programming knowledge to begin creating basic code. Where a criminal previously needed some specialist expertise, ChatGPT lowers the barrier for almost anyone. Leveraging a chatbot, someone can easily set up a phishing page online or create a malicious VBA script for office tools such as Excel and Word.
The only hurdle for users with criminal intent is sidestepping the safeguards built into ChatGPT.
“Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing,” says Europol.
As countless users have shown, those internal safeguards are less than foolproof.
Getting around protections
As MetaNews reported earlier this month, ChatGPT users have proved adept at sidestepping the safeguards within the system. This is commonly referred to as jailbreaking the bot: freeing it from the prison of its ethics codes.
Jailbreaking is not the only option for those who wish to avoid ChatGPT’s guardrails. While the online version of the bot is limited by ethics codes and restrictions, a private version of the application would not have those same boundaries. Earlier this month, AI and neural network expert Brian Roemmele demonstrated that it is possible to run an LLM entirely from a local computer.
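For context, running an open-weights language model on local hardware is a routine task with standard tooling. The minimal sketch below uses the Hugging Face transformers library and is illustrative only; the model name is a placeholder, not the model Roemmele built.

```python
# Minimal sketch: running an open-weights language model locally.
# "gpt2" is a placeholder; any locally available open-weights model
# could be substituted.
from transformers import pipeline

# Downloads the weights once, then runs entirely on local hardware.
generator = pipeline("text-generation", model="gpt2")

result = generator("Large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```

Because such a model executes entirely offline once its weights are downloaded, any moderation layer applied by a hosted service simply does not reach it.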
As for ChatGPT itself, the technology does not stand still, something Europol is acutely aware of. The agency is now warning that GPT-4 could make policing even more challenging.
“GPT-4, the latest release, has already made improvements over its previous versions and can, as a result, provide even more effective assistance for cybercriminal purposes,” the agency said.
This article originally appeared on MetaNews.