509 ChatGPT discussions on shadow forums in a month. Is this the beginning of an alarming trend?

Attackers are actively experimenting with AI tools for criminal purposes.

Kaspersky Digital Footprint Intelligence specialists have released a report titled "Innovations in the Shadows: How Attackers Are Experimenting with AI". It analyzes darknet posts about attackers' use of ChatGPT and other similar solutions based on large language models. In total, 2,890 such posts were found on shadow forums and in specialized Telegram channels in 2023. Activity peaked in April, when experts recorded 509 messages.

Discussions can be divided into several areas:

1. Jailbreaks are special sets of commands that unlock additional capabilities in chatbots, including for illegal activities. Over the year, 249 offers to sell jailbreaks were identified. According to the company, jailbreaks can also be used legitimately to improve services based on language models.

2. Using chatbots for malware. Participants in shadow forums actively exchanged ways of using language models to improve malware and increase the effectiveness of cyberattacks in general. For example, experts found a post describing software for malware operators that used neural networks to keep the operator safe and automatically rotate the cover domains in use. In another discussion, the posts' authors suggested using ChatGPT to generate polymorphic malicious code, which is extremely difficult to detect and analyze.

3. Open-source tools based on chatbots, including for pentesting. Malefactors are also showing interest in these. For example, one open-source utility built on a generative model and freely available on the Internet is designed to obfuscate code. Obfuscation helps code evade detection by monitoring systems and security solutions, so such utilities can be used by both pentesters and intruders. Kaspersky Lab specialists found a darknet post in which the authors share this utility and explain how it can be used for malicious purposes.

4. Alternative versions of ChatGPT. Amid the popularity of smart chatbots, other projects have attracted great interest: WormGPT, XXXGPT, FraudGPT. These are language models positioned as alternatives to ChatGPT with additional functionality and removed restrictions. Although most such solutions can be used legitimately, attackers can also use them, for example, to write malware. However, increased attention to such offers does not always play into their creators' hands: WormGPT was shut down in August 2023 amid widespread fears that it could pose a threat. Moreover, many of the sites and ads discovered by the company's specialists offering access to WormGPT were essentially scams. Notably, the sheer number of such offers forced the project's authors to start warning about possible fraud even before the shutdown.

Another danger for both users and companies is the sale of stolen accounts for the paid version of ChatGPT. In one example, a forum participant gives away accounts for free that were supposedly obtained from malware log files. The account data is collected from infected users' devices.

In addition to hacked accounts, automatically created accounts for the free version of the model are also commonly sold. Attackers register on the platform using automated means and fake or temporary data. These accounts have a limit on the number of API requests and are sold in packages, which saves time and lets the buyer switch immediately to a new account as soon as the previous one stops working, for example, after being banned for malicious or suspicious activity.

In conclusion, Kaspersky Lab experts emphasize that although many of the solutions reviewed do not pose a serious threat at the moment, technology is developing rapidly. There is a real risk that language models will soon enable more complex and dangerous attacks.
 