Carding. Useful neurons?

chushpan

Hello, Anonymous. You haven't heard from me in a while. It's time to talk about a pressing issue:

DarkGPT? FraudGPT? Evil-GPT? Worm-GPT?

Across the vast expanses of the darknet, more and more tools are appearing that self-train on material from dark and carding forums. And, as it seems to many today, they can help workers with many pressing problems: writing phishing letters, building phishing sites, writing malicious code, and whatever else your heart desires.

Is it really so easy today to do dirty work with neural networks? In this article I will try to answer that question, both for my students and for newly arrived subscribers.

It is no surprise that ChatGPT itself has a long list of banned words and topics to stop attackers. In response, there are ways to jailbreak this neuron. The Internet is already full of examples of everyone's favorite assistant generating business ideas for selling not-the-most-honest goods, explaining how to build a bomb at home, or even how to crack an enemy's password.

I am not saying that all of the above is a hoax. If you work on the bot hard enough, you can extract the necessary information from it, given the time and the desire.

People in the dark have already gone a step further and built assistants with a deliberately criminal bias.

For example:

FraudGPT is a tool for creating phishing SMS messages to attack bank clients, as well as phishing web pages, phishing letters, and malicious code. FraudGPT can also search for hacker sites, leaks, vulnerabilities, etc. The tool is offered on a subscription model and initially cost $200 per month (or $1,700 as a one-time payment for the year). The price has since dropped to $90 per month and $700 per year.

The tool looks suitable for the modern worker. It saves a lot of time on writing the same letters over and over, but it takes just as much time to learn how to work with the neuron and how to phrase a precise request.

If you don't know how to craft the SMS message needed for an attack, the neuron looks like a must-have. But how hard is it really to write an SMS yourself and fit it to the bank's template?

Let me explain: to receive the SMS you will need a virtual SIM, docs to create an account, a proxy, and a VM/RDP/antidetect. And I repeat, all of that just to obtain the bank's SMS template itself.

Seen that way, FraudGPT doesn't look so useless anymore, right? :)

(This is not an advertisement. But if you need keys and a link to neurons, please contact me in PM)

WormGPT is a tool for generating BEC (business email compromise) attacks over email, trained on a non-public dataset associated with malicious activity (and without ChatGPT's restrictions on character counts, etc.). The monthly cost is $100, the annual cost is $500, and a "private version" costs $5,000.

DarkBERT is a tool created in South Korea, trained on darknet forums and leaks and built on the RoBERTa language model; it sells for $110 per month.

DarkBARD is a service based on Google's Bard language model, priced at $100 per month.

XXXGPT is a tool that supposedly allows you to write malicious code for botnets, RATs, keyloggers, ransomware, stealers, and malware for ATMs and POS terminals.

Wolf GPT is pitched as a ChatGPT alternative that offers complete confidentiality and can generate malicious code and advanced phishing attacks, but despite the advertising it does not appear to actually exist. Most likely, someone decided to cash in on the hype around the malicious use of language models and simply deceive the deceivers. This happens quite often, so always take a responsible approach when looking for tools for work, be it cards, drops, or even neurons.

If anything comes up, you can always contact me. Carding training for 25% of profits is still on offer. It's a chance to avoid the so-called rookie mistakes that all beginning workers make.

Let's return to the topic of neurons. A natural question arises: how do you protect yourself from all this? There are no dedicated defenses against artificial intelligence used for malicious purposes, so it is worth focusing on the standard methods of protection:

1) tools that protect email from phishing and malicious code

2) monitoring of visited sites and requests to domains

3) sender and recipient verification mechanisms on mail services: SPF, DKIM, DMARC

4) enhanced endpoint protection with EDR/XDR

5) user training.
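Point 3 is easy to check for yourself. A domain publishes its DMARC policy as a TXT record at `_dmarc.<domain>`; below is a minimal sketch, in Python, of parsing such a record to see what the domain tells receivers to do with mail that fails SPF/DKIM checks (the example record string is illustrative, not a real lookup):

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")  # split on the first '=' only
        tags[key.strip()] = value.strip()
    return tags

# Hypothetical record; in practice you would fetch the TXT record
# for _dmarc.example.com via DNS.
record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100"
policy = parse_dmarc(record)

# p=reject tells receiving servers to refuse mail that fails
# SPF/DKIM alignment; p=none means "monitor only".
print(policy["p"])  # reject
```

A domain whose policy is `p=none` (or that has no DMARC record at all) is far easier to spoof in phishing letters than one with `p=reject`.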

The average person can find these rules but will never stick to them. You, however, are not a hamster, so be kind enough to know each of these rules of self-defense. Know your enemy by sight.

Malicious artificial intelligence is not a new attack method; it is just a tool that automates and diversifies attacks. So by building effective protection, a user is simultaneously protected from everything bad built on AI. Against already-protected users, attackers would have to invent genuinely new methods, and that requires experience, knowledge, and, most importantly, action.

Good luck to everyone, strength and confidence in the future. Amen

PS As for Evil-GPT: it turned out to be a fake; nothing of the sort exists, only scams built around the name :)
 