Generative AI didn't live up to cybercriminals' expectations

ChatGPT and its analogs have proven to be of little use in real-world attack scenarios, serving mainly to waste attackers' valuable time.

Despite concerns among security researchers and law enforcement agencies in some countries about the malicious use of ChatGPT and similar LLMs, a study of cybercrime forums conducted by Sophos shows that many attackers remain skeptical of chatbots and neural networks. Those who do try them tend to struggle to get the desired result, and most criminals rarely bother at all.

Sophos researchers found several LLMs on underground forums claiming capabilities similar to those of WormGPT and FraudGPT, which we covered this summer. Among such models marketed for professional cybercrime are EvilGPT, DarkGPT, PentesterGPT, and others. However, the experts noted clear skepticism toward some of them: among other things, the authors of these models were accused of fraud and of failing to deliver the advertised capabilities.

The skepticism is reinforced by claims that GPT technology itself is overrated, overhyped by the media, and wholly unsuitable for generating usable malware or devising advanced fraud tactics. Even if "criminal neural networks" can help their users a little, that is clearly not enough to carry out a comprehensive, well-planned attack.

Cybercriminals also have other concerns about LLM-generated code, including operational security issues and the risk of detection by antivirus and EDR systems.

As for practical applications, most ideas and concepts remain at the level of discussion and theory. There are only a few examples of LLMs being used successfully to generate malware and attack tools, and even those only as proofs of concept (PoC).

At the same time, some forum posts report effective use of LLMs for tasks not directly related to cybercrime, such as routine coding, generating test data, and porting libraries to other languages. However, ordinary ChatGPT handles these tasks just as well.

Less experienced cybercriminals show some interest in using GPT models to generate malware, but they are often unable to bypass the models' restrictions or to understand errors in the resulting code.

In general, at least on the forums the researchers studied, LLMs are not yet a major topic of discussion or a particularly active market compared with other cybercrime products and services.

Most forum participants continue their day-to-day cybercrime activities, only occasionally experimenting with generative AI. However, the number of GPT-branded services the researchers discovered suggests this market is growing, and more and more attackers may soon begin adding LLM-based components to their offerings.

Ultimately, the Sophos study shows that many participants in cybercrime wrestle with the same concerns about LLMs as everyone else, including accuracy, privacy, and applicability to real-world scenarios.

These doubts clearly do not stop every cybercriminal from using LLMs, but most have so far adopted a wait-and-see stance until the technology matures further.
 