Neural Networks, Fraudsters, and “Mammoths”: How Artificial Intelligence is Changing Cyber Fraud

Remember how as children we were taught not to talk to strangers? In 2024, this wisdom has acquired a new meaning: now you can’t even be sure that you are talking to a familiar person and not their neural network double.

According to The Wall Street Journal, the number of deepfake scams in fintech grew by 700% in 2023 compared to 2022. At the same time, fraudsters are making ever greater use of chatbots, expanding their reach.

We invited Alexey Malakhov, editor of the Financial Security section of T-Zh, author and host of the Scheme podcast, to discuss how the arsenal of cyber fraudsters is changing under the influence of AI.

The expert will talk about new fraud schemes, large-scale neural network scams and the future in the spirit of Cyberpunk 2077. We will also talk about the prospects of identity verification technologies in a world where “trust but verify” has turned into “don’t trust and double-check.”


I first encountered neural networks in 2013 at the institute. During lab work, my classmates and I played with a simple neural network, teaching it to recognize handwritten digits. At the time, I thought it was a fun toy that would hardly find serious application.

Ten years later, my colleagues from the podcast "Scheme" and I made a neural-network copy of my voice as an experiment. We had a whole season of the podcast ready, which gave us hours of studio recordings to train the model on. After eight hours of additional training, the neural network began to speak Russian, albeit with a strong English accent and creepy machine intonations.

"I get it! It won't be long before someone can be fooled like that," I thought, and was wrong again. Six months later, neural networks are successfully used to steal identities and scam people, and I devote entire podcast episodes to such schemes.

How Fraudsters Use Neural Networks​

Let me start with the key question: have fundamentally new types of scams appeared with the development of neural networks? Probably not, but neural networks have upgraded the old ones. They have allowed fraudsters to:
  • automate fraud;
  • increase the scale, and with it the profitability, of previously unprofitable schemes;
  • make the deception more realistic and convincing.

Chatbots are responsible for automation and scaling. The easiest way to use them for fraud is to have a large language model write Nigerian-style letters. Give the neural network some context, think through the prompts, and in a moment you have a personalized letter with which to hook a gullible "client."

However, such tricks are just the beginning. While less experienced cyber-scammers churn out ineffective “chain letters,” tech-savvy scammers deploy robots that do almost all the dirty work for them.

Automating Scams with Chatbots

There are many schemes where chatbots can be used. For example, the victim receives an SMS with a password for an account on a crypto exchange, as if someone had accidentally entered the wrong number during registration. The victim follows the link, enters the password, and sees a tidy sum. Rejoicing at this "luck," the victim asks the support service to withdraw the money. The chatbot replies that the victim must first transfer some ether to confirm the wallet. Of course, the exchange account turns out to be fake, and all the crypto goes straight to the scammers.

Another common scenario involves fake groups offering discounts.

Potential victims receive messages like: "5 thousand rubles toward purchases on Wildberries for subscribing to us on WhatsApp." A person subscribes to the group, then a bot asks them to enter a phone number and "a confirmation code to receive the certificate." In pursuit of the discount, the victim doesn't even notice that they are actually entering a two-factor authentication code, and as a result their WhatsApp account is hijacked.

Bots are also used to automate romance and dating schemes. A classic example: a pretty girl in a chat on a dating site writes to a guy that she lives somewhere in London and knows a banker ready to invest in cryptocurrency; supposedly, within a week this will double his money.

Or even simpler: a hottie invites the victim on a date, sends a link to a movie theater website, and asks them to buy the ticket for the seat next to hers. But there is no ticket, and no movie theater either. And if the victim complains to the fake support service, they are asked to pay again, with a promise to refund the money a little later.

While this kind of correspondence used to be handled by humans, today it is increasingly done by chatbots, which dramatically widens the fraud "sales funnel": a flesh-and-blood scammer can chat with a couple of potential victims at once, while a bot can handle thousands.

True, by automating the deception, scammers lose some credibility. Hard-coded bots get lost if you deviate significantly from their embedded script, and LLM agents are easily derailed by the simplest prompt injection.



Deepfake Scams

Another new and extremely dangerous tool in the cybercriminals’ arsenal is deepfakes. Not only individuals, but entire corporations are becoming victims of schemes using this technology.

Mass deception​

Some of the most high-profile cases involve celebrity impersonation. Media personalities constantly appear in the news, on podcasts and shows, so scammers end up with plenty of voice recordings and video that can be used to recreate a person's likeness.

In 2023, for example, my colleagues and I were preparing a podcast episode about how the identity of the well-known entrepreneur Dmitry Matskevich was "stolen."

One morning, he was bombarded with questions via email and messengers about whether it was worth investing in his new project. Dmitry was amazed and couldn't understand what they were talking about. Then someone sent him a link to a long video in which he "in person" presented a smart algorithm yielding 360% per annum. The scammers had found a recording of Dmitry's real speech, made a deepfake, and begun luring in victims willing to invest their hard-earned money. It all looked pretty cringeworthy, but many people bought it.

Screenshot from that same fake video

It is becoming increasingly difficult to recognize such fakes. I recently visited Kazakhstan, where I saw with my own eyes a fake video featuring the country's president. The head of state announced that every citizen was entitled to a share of the revenue from gas production. The link under the video led to a fake website where a "tax" had to be paid to receive the payout. The scheme itself is as old as the hills, but the president's voice and appearance were copied very convincingly.

What's more, a stream of a conference at which Elon Musk allegedly spoke recently made it to the top of YouTube. Strolling casually around the stage, he talked about the launch of a new cryptocurrency project. The video suggested scanning a QR code, going to a site, and sending bitcoins; the neural-network Musk promised to double the amount sent, with a gift to boot. YouTube's moderators caught on too late: by the time the video was removed, it had hung at the top for several hours, and a considerable sum had already accumulated in the scammers' wallet.

Screenshot of a fake Elon Musk broadcast. The phishing site is still up, so we've hidden the QR code and link

Classic scam with a hint of deepfake​

As a rule, individuals are attacked using deepfakes of relatives, friends, and acquaintances. Since the days of ICQ there have been scams along the lines of: "urgently lend me money until tomorrow, just transfer it to this card, because mine has been blocked." It would seem that no one would fall for a 30-year-old trick, but when the victim receives an audio message or even a video clip of a familiar person excitedly asking for help, their hand reaches for the wallet.

At the editorial office we like to show the example of a girl who read Tatyana's letter to Onegin on camera. The scammers re-dubbed the video so that she appeared to be asking for 50 thousand rubles to be transferred to a card number.

Another popular scheme: a fake friend sends a video or audio message asking the victim to vote for them in a competition, then shares a link to a phishing site. There, the victim is asked to log in via a messenger, which of course means entering a login and password or a confirmation code. Voila: the fraudster gains access to their correspondence.

Attacks on companies​

By deceiving individuals, fraudsters can cause serious harm to an entire company. In one known case, an employee of a large company was invited to a video conference attended by several dozen deepfakes of his colleagues. Most of them were there as extras and simply kept silent, but some engaged in dialogue, discussing work matters. At some point, the deepfake of a director casually instructed the employee to transfer $25 million to an account at a Hong Kong bank, which he did.

The fight between shield and sword, or how not to become a "mammoth"

(In scammer slang, a "mammoth" is the victim.) To successfully counter such attacks, companies will have to modernize their organizational culture. It is important to move away from management-by-phone-call and build processes so that employees do not just silently carry out orders, but can verify their authenticity and, most importantly, are neither embarrassed nor too lazy to do so.

There are also technological solutions, and banks are currently leading the way. To combat deepfakes, financial institutions use Liveness technology: checking that the sound and image come from a live person.

Algorithms analyze what is inaccessible to the human ear: unnatural sinusoidal components and repeating sound patterns. In the case of video, they check how a person moves, smiles, and expresses emotions; suspicious artifacts, glare, and blur are detected, and thousands of other parameters are assessed. So far, even the most realistic deepfakes do not fool banking systems.
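
To give a feel for what such checks might compute, here is a toy sketch in Python. It estimates two of the audio signals mentioned above: overly uniform spectra and repeating frame patterns. The feature choices are purely illustrative assumptions on my part; real banking Liveness systems rely on trained models and thousands of parameters, not two numbers.

```python
# Toy heuristics in the spirit of audio liveness checks.
# Illustrative only: real systems use trained models, not two numbers.
import numpy as np
import librosa

def audio_liveness_heuristics(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    # Spectral flatness: synthetic speech sometimes shows unnaturally
    # uniform (noise-like or tone-like) spectra in individual frames.
    flatness = librosa.feature.spectral_flatness(y=y)[0]
    # Frame-to-frame self-similarity: vocoders can leave repeating
    # spectral patterns that rarely occur in live speech.
    S = np.abs(librosa.stft(y))
    S = S / (np.linalg.norm(S, axis=0, keepdims=True) + 1e-9)
    sim = S.T @ S                        # cosine similarity of frame pairs
    np.fill_diagonal(sim, 0.0)
    repetition = float(sim.max(axis=1).mean())
    return {
        "mean_flatness": float(flatness.mean()),
        "frame_repetition": repetition,  # closer to 1.0 = more suspicious
    }

print(audio_liveness_heuristics("suspicious_voice_message.wav"))
```

On a real recording you would compare these numbers against statistics gathered from known-live speech rather than judge them in isolation.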

Unfortunately, such tools are not available to every organization, and this is unlikely to change in the foreseeable future. Fraudsters, meanwhile, will keep honing their skills on both companies and ordinary people, and it is likely the latter who will be the main victims in this eternal struggle of "shield and sword." They can only rely on their own vigilance.

I talk a lot with people far removed from information security who are just living their lives. When I tell them about neural networks, they say: "Wow, that's cool!" Most understand that someone else's face can be stolen, but they think such risks won't affect them, as if scammers are only interested in celebrities and no one would bother faking "mere mortals." Misconceptions like these can cost dearly.

I think we will all have to become a little paranoid in the near future. Standard precautions (not telling anyone your card numbers, passwords, and so on) are no longer enough. In the world of neural deepfakes, the golden rule of spy movies applies: "trust no one."

So it's worth agreeing on a code phrase with your loved ones in case of unforeseen circumstances. If you hear, say, "the code is pink monkey" (don't ask), you can be sure the message is genuine. Or try to reach the person through alternative channels, especially if the voice message or video asks you "under no circumstances to call."

And remember: neural networks still make mistakes, so watch and listen to the video carefully. If a person speaks too monotonously, rarely blinks, shows no emotion, or, on the contrary, overacts, that is reason to be wary. A blurry, pixelated, or heavily cropped picture, or differences in skin tone between the face, neck, and hands, also give away a deepfake.
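
The blink cue in particular is easy to check programmatically. Below is a rough sketch that counts blinks in a clip using MediaPipe FaceMesh and the classic eye-aspect-ratio (EAR) trick; the landmark indices and the 0.2 threshold follow common EAR tutorials and are assumptions rather than an official detection recipe. A talking-head clip with near-zero blinks per minute is a red flag, not a verdict.

```python
# Rough blink counter using MediaPipe FaceMesh and the eye-aspect-ratio
# (EAR) heuristic. Landmark indices and threshold are common tutorial
# values, assumed here for illustration.
import cv2
import mediapipe as mp
import numpy as np

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around one eye

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply on a blink.
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h + 1e-9)

def count_blinks(video_path, ear_threshold=0.2):
    mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        pts = np.array([(lm[i].x, lm[i].y) for i in LEFT_EYE])
        if eye_aspect_ratio(pts) < ear_threshold:
            if not eye_closed:
                blinks, eye_closed = blinks + 1, True
        else:
            eye_closed = False
    cap.release()
    return blinks  # humans blink roughly 15-20 times per minute

print(count_blinks("suspicious_clip.mp4"))
```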

Sometimes you need to pay attention not only to the video but also to the text. Recently there was an epidemic of fake products on Amazon whose listings had been generated by fraudsters using neural networks. But sometimes something went wrong…

Cyberpunk Future​

But all these spy games only delay the inevitable. The lion's share of Internet traffic is already generated by bots, and now they have taken on content too. If we don't find a fast and reliable way to distinguish neural networks from people, a reality in the spirit of Cyberpunk 2077 is quite possible, where the Internet has become unusable. Only there it was taken over by rogue AIs, while here LLMs will simply litter it: you go online, and there is not a single person, only fake news and a whole horde of bots trying to deceive you.

After buying Twitter, Elon Musk promised to fight bots fiercely, but bot operators are still trying to manipulate public opinion. True, sometimes the screws get tightened on neural-network APIs, and the bots slip up.

I know many on Habr don't like this topic: for so much as mentioning Internet regulation, a blogger can be pelted with tomatoes, and not without reason. In theory, states have the resources to centrally verify the authenticity of content and accounts, but quis custodiet ipsos custodes?

"Big Brother's" intervention would provoke sharp rejection from many users. Roskomnadzor (RKN) has already obliged everyone with more than 10 thousand subscribers to register in its system, but people don't feel this is being done to verify authors and protect subscribers. Given the regulator's reputation, it feels like it is about control and fines. To be honest, I myself am not sure anyone at RKN plans to use this registry as protection against fakes.

Meanwhile, fake profiles and neural-network doubles of celebrities and bloggers are already being actively created. Photos are posted to such accounts for months, advertising is sold, people are lured into various scams, and only later does it emerge that all of the profile's content was generated.

The good news is that solving this problem does not require involving Internet surveillance authorities. For example, content generated by neural networks can be labeled: OpenAI figured out how to do this in practice long ago but is in no hurry to implement the idea. After all, even if all commercial neural networks start labeling content, nothing will stop fraudsters from training their own models to work without labels, and all the companies' efforts will go down the drain.
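
As a small illustration of what consumer-side label checking could look like, here is a sketch that inspects an image's EXIF metadata for generator fingerprints. Which tags a generator fills in is my assumption, since no labeling standard has been agreed on, and stripped metadata proves nothing, which is exactly the weakness described above; the cryptographically signed provenance pursued by the Content Authenticity Initiative mentioned below is more robust.

```python
# Minimal sketch: look for provenance hints in image EXIF metadata.
# Caveats: which tags generators fill in is an assumption, and metadata
# is trivially stripped, so its absence proves nothing either way.
from PIL import Image
from PIL.ExifTags import TAGS

def provenance_hints(path):
    exif = Image.open(path).getexif()
    interesting = ("Software", "Artist", "Make", "ImageDescription")
    hints = {TAGS.get(tag_id, tag_id): str(value)
             for tag_id, value in exif.items()
             if TAGS.get(tag_id) in interesting}
    return hints or {"note": "no metadata found; proves nothing either way"}

print(provenance_hints("downloaded_photo.jpg"))
```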

However, if labeling does not work out, companies with a direct stake in content authenticity can start catching fakes themselves. The Content Authenticity Initiative, a coalition founded by Adobe, has already made some progress in this area, but it is too early to judge the real effectiveness of such associations.

Whether control over fakes on the Internet should be handed to corporations is an open question, but such a scenario looks at least no worse than one where you can only get onto the Internet through Gosuslugi, the state services portal. I don't rule out that there is a more sensible way out of the situation, one that removes the need to sacrifice something and choose between two evils.

What do you think about this? I suggest discussing in the comments what, in your opinion, the optimal solution to the problem looks like.

Source
 