Deepfake technologies: fraud, business, or just a joke?

Father

For a long time, the term "deepfake" was perceived as simply an application for swapping faces in a video. This was reinforced by the fact that the technology first gained publicity in online communities such as Reddit, and then spread to social networks, including TikTok, where it was used mostly as a joke – a way to make a funny video.

However, some users saw in the technology an opportunity for abuse: creating compromising material, fakes, and fraud based on the use of real people's faces and voices. Several factors contributed to this:
  • the surge in video communication tools during the pandemic;
  • the adoption of face or voice authentication by a number of financial organizations;
  • the improving quality of AI-generated deepfakes.
In this article, we discuss with experts the prospects of deepfake technologies, the specifics of their legitimate use, and the opportunities for fraud based on face and voice forgery.

Deepfake as a joke

Deepfake technologies first became widely known as a way to create adult videos featuring famous actors and actresses. The boom in deepfakes of this kind forced many specialized sites and social networks to ban them.

The state of California went even further and passed a separate bill allowing the creator of any explicit deepfake to be sued. This is easy to explain: even a very simple program such as Face Swap can produce a fake of high enough quality to mislead a user unfamiliar with the technology.

Alexander Gorshkov
Development Director of Iris Devices LLC

When talking about deepfakes, you always need to specify what is meant: a fake voice, photo, or video. Voice spoofing is the simplest and, as the most accessible method, is often used by scammers during phone calls. By itself, however, voice forgery is ineffective; it is mostly used together with social engineering. Voice generation is used to deceive both individuals and employees of large companies, but in any case it is not a simple undertaking and is reserved for large-scale fraud. Photo and video forgery is more complex and high-budget still, and the technologies for creating and deploying it are still being developed.

Like any coin, deepfake technology has two sides: it can be put to both positive and negative use. There will inevitably be people or companies that use it for fraud. Various technological measures will be developed to detect digital forgeries, but it will not always be possible to counter them by automated means alone. To prevent the destructive use of deepfakes, a number of countries are adopting special regulations, including administrative and criminal liability. In our country, the creation and use of deepfakes is permitted for entertainment purposes.

Deepfakes have not bypassed the biggest video hosting site, YouTube, either. Approximately 90% of them also use the faces of various celebrities. Users were especially fond of fakes featuring the face of Nicolas Cage, who became a real meme star.

On social networks, deepfake technologies are often used to give a user's face different features (via so-called masks), but they do not aim to make it resemble another person. They are purely for entertainment and parody and, as a rule, look unnatural.

Deepfake as a business

Celebrity video fakes can be used not only to create humorous content but also in business, primarily in the film industry and advertising.

In the Russian media environment, a notable example is an advertising project by Megafon in which a deepfake of Bruce Willis "took part". The project was implemented by the Russian company Deepcake.

Bruce Willis himself approved the work of the Russian specialists and praised the high quality of his digital double. The actor, of course, received a fee for the use of his image.

Roman Kores
Founder of Horum.co

There is a notion that events not written about on the Internet can be considered not to have happened, because no one will know about them. A deepfake, in this context, is a way to create the appearance on the Internet of an event that never happened. This provides ample opportunity for manipulating the audience, especially on socio-political issues, because a deepfake video typically gets far more views than the video debunking it.

But this phenomenon is also quite dangerous for businesses, as it allows an attacker to impersonate another person. The chances of success rise noticeably when a company's top management consists of older people who do not always have perfect eyesight and whose knowledge of modern IT is fairly general.

However, modern security tools can distinguish the work of AI from a real human face by a number of markers. It is reasonable to expect that these tools will be integrated systematically as the deepfake problem grows in society and the business environment.

The category of deepfake technologies can, in part, include holograms of famous personalities. As a rule, they are used for concerts by deceased music industry stars; such events have been held using holograms of Michael Jackson, 2Pac, and Viktor Tsoi. In these cases, either the music studio or the artist's relatives receive a fee for the use of the image.

Deepfake as a scam

Face-swapping programs can be used to copy the appearance not only of world stars but also of any company's top management. The main requirement is sufficient source data for training the AI, i.e. video and audio material.

Alexander Gerasimov
CISO Awillix

The use of deepfakes is indeed gaining momentum among scammers. Deepfake videos have even been used for political purposes, but mostly it is still fraud aimed at stealing money. For example, the director of a British energy company was defrauded of $243,000 by a deepfake of the head of his company requesting an urgent money transfer.

Deepfakes are also used to pass KYC (Know Your Customer) identity verification: attackers substitute someone else's face in order to register under a different identity, for example in financial services.

The use of deepfakes, leaving aside the technical work of producing a high-quality fake, is heavily tied to social engineering. You need not only the contact details of the "target" but also the speech patterns of the "original", and you must not arouse suspicion with a sudden demand to transfer a certain sum to a dubious account.

Andrey Timoshenko
Director of Strategic Business Development at Innostage

Deepfake technology is used to create high-quality images and videos that are almost indistinguishable from the original. They make it possible to feed chosen information to a specific group of decision-makers in an organization or state and manipulate them. Another use of deepfakes is spreading fake news and manipulating public opinion.

Thus, we have a vivid example of social-engineering attacks carried out with the help of AI. The human factor also plays a role in them. These attacks pose the highest risks for organizations because they can ultimately have a devastating impact.

The introduction of remote identification contributes to the growth of cybercrime in various areas, primarily the financial sector. With the help of a deepfake, one can forge the voice and image of the right person, defeat the identification system, and gain control over a bank account.

Worth highlighting separately are deepfakes aimed at passing identity verification in a system. Using the face alone as biometric data, without anti-spoofing mechanisms, is risky, as more than one successful bypass of such systems has demonstrated.

At the moment, the best protection is to abandon such recognition systems in favor of "classic" solutions: password- or fingerprint-based security is much more mature, better studied, and far cheaper.

Results

Deepfake technologies have already entered our lives and will inevitably develop, because they can serve as a legitimate source of fairly high income in several areas at once.

The development of these technologies will inevitably make them less resource-intensive: they will become cheaper and simpler. Along with this, the relevance of deepfakes as a fraud tool will also grow.

Alexey Drozd
Head of the Information Security Department at SearchInform

First, the entry threshold for creating high-quality deepfakes has to come down: today their production requires technology, time, computing power, and expertise. Second, fraudsters need the right conditions to succeed. For example, if banks start issuing loans solely through video confirmation of intent, the number of deepfakes will grow, and they may well be successful.

For example, in China two scammers deceived the tax service's facial recognition system with deepfakes for several years and stole $76 million. Their goal was to create shell companies. To do this, the attackers bought people's photos and personal data on the online black market and turned the images into videos using deepfake applications.

The criminals' next step was to buy modified smartphones that could simulate the operation of the front camera: instead of enabling it, the phone fed the prepared deepfake video to the tax service. The facial recognition system accepted the faces as real, which is why the scheme worked for years.

But there is also a flip side to this process: the more popular deepfake technologies become, the more aware the public will be of them. It is much harder to deceive people who assume the person contacting them may be a fraudster.

At the same time, security tools will inevitably evolve. A program can detect a fake either from individual markers (the behavior of the facial contour, the frequency of blinks) or from a combination of them.
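The blink-frequency marker mentioned above can be sketched in a few lines. This is a minimal illustration, not a production detector: it assumes you already have a per-frame eye-openness score (e.g. an eye aspect ratio from a facial landmark detector), and the function names and thresholds are illustrative assumptions.

```python
# Illustrative sketch of one deepfake "liveness" marker: blink-rate analysis.
# Input is a per-frame eye-openness value (lower = more closed), such as the
# eye aspect ratio produced by a landmark detector. Thresholds are assumptions.

def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks as open-to-closed transitions in the eye-openness series."""
    blinks = 0
    was_closed = False
    for ear in ear_series:
        closed = ear < closed_thresh
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

def looks_synthetic(ear_series, fps=30, min_bpm=4, max_bpm=40):
    """Flag a clip whose blink rate falls outside a plausible human range.

    Humans blink roughly 10-20 times per minute; early deepfakes were
    notorious for blinking far less often.
    """
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes if minutes else 0
    return not (min_bpm <= rate <= max_bpm)

# Synthetic example: 10 seconds at 30 fps with two short blinks (12 blinks/min).
frames = [0.3] * 300
for start in (100, 250):
    for i in range(start, start + 4):
        frames[i] = 0.1  # eyes closed for a few consecutive frames

print(count_blinks(frames))     # 2
print(looks_synthetic(frames))  # False: plausible human blink rate
```

A real system would combine several such markers (contour stability, head pose consistency, texture artifacts) rather than rely on any single one.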

To sum up, deepfake fraud will become commonplace and will pose the same kind of danger as phishing emails: a significant but entirely expected threat.
 