Hackers have started using WormGPT to aid in phishing attacks

Tomcat

SlashNext has released a study showing that attackers use generative artificial intelligence technologies to prepare and implement phishing attacks and distribute malware.

They use OpenAI's ChatGPT and the WormGPT cybercrime tool. The latter is based on the GPT-J language model, which was developed in 2021. WormGPT offers unlimited character support, chat memory retention, and code formatting capabilities. According to the researchers, WormGPT can exhibit “strategic cunning” in orchestrating sophisticated phishing attacks. It was trained on datasets associated with malicious activity, but the author does not disclose what this data is.

In one experiment, the researchers had WormGPT generate an email intended to trick an account manager into paying a fraudulent invoice. The neural network produced text that was quite convincing.

ChatGPT, meanwhile, generates human-like text from input prompts, and cybercriminals can use it to automate the creation of highly persuasive, personalized emails. These texts can also be translated into different languages, even ones the hackers themselves do not know.

Cybercriminals also share jailbreaks for the ChatGPT interface. These are specialized prompts designed to manipulate the neural network into revealing sensitive information, creating inappropriate content, or even executing malicious code.

SlashNext said that the spread of neural networks lowers both the barrier to entry and the costs for attackers. At the same time, attacks can be “more precise” than before: if hackers don't succeed on the first try, they can simply try again with different content.

The researchers called on companies to develop regularly updated training programs aimed at countering AI-enhanced attacks. Such programs should educate employees about the nature of the threats, how AI is used, and the tactics attackers employ. In addition, implementing enhanced email screening measures may help: for example, security systems can flag messages containing certain keywords (“urgent”, “confidential”, or “electronic transfer”).
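The keyword screening idea above can be sketched in a few lines. This is a minimal illustration, not SlashNext's actual system: the keyword list, function name, and matching logic are all assumptions for demonstration purposes; a real mail filter would combine many more signals.

```python
# Minimal sketch of keyword-based email flagging (illustrative only).
# The keyword list mirrors the examples in the article; a production
# filter would use many additional signals beyond simple substring matches.

PHISHING_KEYWORDS = {"urgent", "confidential", "electronic transfer"}

def flag_message(subject: str, body: str) -> list[str]:
    """Return the suspicious keywords found in an email's subject or body."""
    text = f"{subject} {body}".lower()
    return sorted(kw for kw in PHISHING_KEYWORDS if kw in text)

if __name__ == "__main__":
    hits = flag_message(
        "Urgent: invoice payment",
        "Please complete the electronic transfer today. Keep this confidential.",
    )
    print(hits)  # -> ['confidential', 'electronic transfer', 'urgent']
```

A filter like this is trivially evaded (synonyms, misspellings, images), which is exactly why the researchers pair it with employee training rather than relying on screening alone.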

Back in January, cybersecurity experts at Check Point Research found that hackers were already using ChatGPT to write malicious code and phishing emails. Moreover, some of them had virtually no programming experience.