Carding
What can an advanced AI chatbot in the hands of cybercriminals create?
The emergence of generative AI models has radically changed the cyber threat landscape. A recent analysis of darknet forum activity by the Netenrich research team points to the appearance and spread of a service called FraudGPT among cybercriminals: a chatbot powered by artificial intelligence.
FraudGPT was created exclusively for malicious purposes. Its capabilities include writing phishing emails, hacking websites, stealing bank card data, and more. Access to the service is currently sold on various black markets, as well as through the author's Telegram channel.
Screenshots demonstrating FraudGPT at work
As the promotional materials show, an attacker can generate an email that will, with high probability, persuade the recipient to follow a malicious link. This is critical for phishing and BEC (business email compromise) attacks.
A FraudGPT subscription costs $200 per month or $1,700 per year, and the full list of the malicious chatbot's capabilities includes the following:
- writing malicious code;
- creating undetectable malware;
- finding vulnerabilities;
- creating phishing pages;
- writing scam letters;
- finding data leaks;
- tutorials on hacking;
- finding cardable sites (sites suitable for card fraud).
Oddly enough, such a malicious chatbot is not radically new or one of a kind. Just at the beginning of this month, advertisements for another AI chatbot, WormGPT, which we also covered on this site, were being widely distributed on dark web forums.
Although ChatGPT and other AI systems are usually built with ethical restrictions, they can be repurposed to operate without them. This is exactly what attackers do, and they even profit from it by selling access to their creations to other criminals.
The appearance of FraudGPT and similar fraudulent tools is an alarming signal about the danger of AI abuse. So far it is mostly about phishing, that is, the initial stage of an attack. The main concern is that, over time, chatbots may learn to carry out the entire attack cycle from start to finish in fully automatic mode. In speed and methodology, artificial intelligence would then have no equal.
