Hackers master artificial intelligence: the new FraudGPT excels at writing phishing emails

What can an advanced chatbot create in the hands of cybercriminals?

The emergence of generative AI models has radically changed the cyberthreat landscape. A recent analysis of darknet forum activity by the Netenrich research team points to the appearance and spread of the FraudGPT service among cybercriminals: a chatbot powered by artificial intelligence.

FraudGPT was created exclusively for malicious purposes. Its capabilities include writing phishing emails, hacking websites, stealing bank card data, and more. Access to the service is currently sold on various black markets, as well as through the author's Telegram channel.


Screenshots demonstrating FraudGPT in action

As the promotional materials show, an attacker can compose an email that, with a high degree of probability, will induce the recipient to click a malicious link. This is critical for staging phishing or BEC (Business Email Compromise) attacks.

A FraudGPT subscription costs $200 per month or $1,700 per year, and the full list of the malicious chatbot's capabilities includes:
  • writing malicious code;
  • creating undetectable malware;
  • finding vulnerabilities;
  • creating phishing pages;
  • writing fraudulent emails;
  • finding data leaks;
  • hacking tutorials;
  • finding sites suitable for card theft.

Oddly enough, such a malicious chatbot is neither radically new nor one of a kind. Just at the beginning of this month, advertisements for another AI chatbot called WormGPT, which we also covered on this site, were widely circulated on darknet forums.

Although ChatGPT and other AI systems are usually built with ethical restrictions, it is possible to repurpose them for unrestricted use. Attackers exploit this, and even profit from it by selling access to their creations to other criminals.

The appearance of FraudGPT and similar fraudulent tools is an alarming signal about the danger of artificial intelligence abuse. So far it is mainly a matter of phishing, that is, the initial stage of an attack. The real concern is that, over time, such chatbots may learn to carry out the entire attack cycle from start to finish in fully automatic mode. In speed and methodology, artificial intelligence will then have no equal.
 
Experts have discovered an AI tool for criminals called FraudGPT

A new AI tool for cybercriminals called FraudGPT is gaining popularity on the darknet and in Telegram channels. According to researchers from Netenrich, it can create phishing emails, hacking tools, carding resources, and more.

FraudGPT has been advertised as an alternative to ChatGPT for cybercriminals since at least July 22, 2023. A subscription to the tool costs about $200 per month, $1,000 for six months, or $1,700 for a year.

It is not yet known which LLM this chatbot is built on, the researchers say. The seller claims that the tool already has more than three thousand confirmed sales and reviews.

Netenrich experts say that FraudGPT can be used to write malicious code, create undetectable malware, and search for leaks and vulnerabilities.

According to the researchers, we should expect criminals to find new ways to expand their capabilities with such tools. While companies build ChatGPT and other AI tools in compliance with ethical standards, hackers can use the same technologies without observing any of those standards, Netenrich notes.
 