Should I be afraid of fraud using deepfakes

Lord777

The ability to create high-quality digital fakes is a serious concern, as deepfake technology can easily deceive the naked eye.

Recently, the topic of deepfakes has grown rapidly, turning from a niche area into a mainstream phenomenon. Internet users are now well aware of the striking possibilities of digital manipulation, from Channel 4's "Alternative Christmas Message" to the viral Tom Cruise videos on TikTok.
Similar ethical concerns arose earlier around voice-cloning technology. For example, Adobe Voco, a 2016 prototype for editing and generating audio that could imitate a specific person's voice from 20 minutes of their speech, was never released because of ethical concerns. That decision can hardly be called unreasonable: in 2019, the CEO of a British energy company was defrauded of 220,000 euros by a sophisticated criminal who impersonated the voice of his boss, even reproducing his slight German accent and the melody of his speech.

But do deepfakes pose the same risk in 2023?
To date, three main factors are driving the creation of deepfakes. The first and best known is novelty and entertainment. The creators of the "Alternative Christmas Message" are undoubtedly artists who have introduced a new kind of performance art.
Unfortunately, the potential of deepfake technology can also be used with malicious intent. There are serious concerns about the possibility of creating fake videos in which a person appears to say anything the creator wants.
However, the more immediate threat is the creation of fake pornographic videos, which are then used for revenge porn. According to Sensity AI statistics, such videos account for 90-95% of all deepfake videos tracked since December 2018.
Finally, the third growing trend is the use of deepfakes for fraud. Analysis by identity fraud specialists at Onfido showed that cybercriminals first began using deepfakes in attacks in 2019.

How dangerous is it?
Currently, deepfakes are not a common vector of identity fraud. However, digital manipulation does pose a threat to biometric authentication methods: a criminal can use a deepfake to impersonate someone else and take over their digital identity.
This type of cybercrime is not for amateur scammers. Creating convincing deepfakes requires both deep technical expertise and substantial computing power, unlike face morphing, which digitally alters static images and can simply be ordered online. Professional criminals have to spend considerable time developing these skills before they can even begin producing a deepfake video.
However, you should not rely on the high technical barrier to prevent the use of deepfakes in identity fraud attempts. As with any other fraud technique, "advanced" members of the cybercrime community can package the code so that everyone else can use it. This possibility makes fraudulent use of deepfake technology a real threat that companies should consider and stay ahead of.
Organizations that work with high-net-worth individuals should exercise extra caution, as they are likely to become the main targets of cybercriminals. Why? First, the criminal must be confident that the initial investment in creating a personalized video will yield a sufficiently large profit. Second, a convincing deepfake usually requires six to nine minutes of source video of the target. Attacks will therefore focus on people who have a high media profile or who regularly publish videos on social networks.

How does biometric authentication detect deepfakes?
Although deepfakes are not yet a threat we encounter regularly, video identity verification technology already uses several important methods to detect them. First, AI-based biometric analysis can determine with high accuracy whether a submitted video is fake. Techniques such as lip-movement analysis, motion analysis, and texture analysis are used to check for the user's physical presence.
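To illustrate the texture-analysis idea: synthetically generated face regions are often unnaturally smooth, so one crude proxy for fine skin texture is the variance of a discrete Laplacian over a grayscale patch. This is only an illustrative sketch, not Onfido's method; production systems use learned detectors, and the patches below are synthetic stand-ins.

```python
import random

def texture_score(gray):
    """Variance of a discrete Laplacian over a 2-D grayscale patch
    (list of lists). Higher values mean more fine detail; a score near
    zero suggests an over-smoothed, possibly synthetic region."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (-4 * gray[y][x] + gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

rng = random.Random(0)
# A noisy patch stands in for real skin; a constant patch stands in for
# an over-smoothed synthetic face region.
real_like = [[rng.gauss(0.5, 0.1) for _ in range(32)] for _ in range(32)]
fake_like = [[0.5] * 32 for _ in range(32)]
```

Here the constant "fake" patch scores exactly zero while the noisy patch scores well above it; a real detector would learn such cues from data rather than rely on a single hand-crafted statistic.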
Second, the instructions that users must follow to authenticate are randomized: for example, users are asked to look in different directions or to read out a phrase. There are thousands of possible requests, which deepfake creators simply cannot predict, and users who repeatedly respond incorrectly are flagged for additional investigation. Although a deepfake can be manipulated in real time, the video quality degrades significantly, because the heavy processing required prevents the fake from responding quickly to changes.
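The challenge-randomization step can be sketched as follows. The challenge pool and function names are hypothetical (real systems draw on far larger, parameterized pools, which is where the "thousands of possible requests" come from); the point is that challenges are drawn with a cryptographically secure source so a pre-rendered deepfake cannot anticipate them.

```python
import secrets

# Hypothetical challenge pool; production pools are much larger and
# parameterized (directions, phrases, timing).
CHALLENGES = [
    "turn head left", "turn head right", "look up", "look down",
    "blink twice", "smile", "read the phrase aloud",
]

def issue_challenge_sequence(n=3):
    """Draw n challenges with a security-grade RNG so the sequence
    cannot be predicted and a canned video cannot match it."""
    return [secrets.choice(CHALLENGES) for _ in range(n)]

def verify_responses(expected, observed):
    """Compare the issued sequence against the actions the analysis
    pipeline detected; any mismatch triggers additional review."""
    return len(expected) == len(observed) and all(
        e == o for e, o in zip(expected, observed))
```

Even this toy pool of 7 challenges yields 343 possible three-step sequences; scaling the pool and sequence length makes pre-rendering every combination infeasible.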
Finally, criminals must convince identity verification systems that the deepfake video is being captured live on a phone's camera. To do this, they must emulate a mobile phone on a computer, and identity verification software can detect the emulated device and flag the session as fraudulent.
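A minimal sketch of that device check might combine several weak signals from capture-session metadata. Every field name here is an assumption made up for illustration, not any vendor's real API; real products inspect much richer device and network telemetry.

```python
def looks_like_emulated_camera(metadata: dict) -> bool:
    """Flag capture sessions whose metadata is inconsistent with a
    genuine phone camera. All field names are hypothetical."""
    suspicious = []
    # Virtual-camera drivers often report a generic or desktop model.
    if metadata.get("device_model", "").lower() in {"generic", "virtual", "unknown"}:
        suspicious.append("device_model")
    # Real phone captures usually carry motion-sensor data (gyroscope
    # jitter, autofocus events); a screen-fed stream typically lacks it.
    if not metadata.get("has_motion_sensor_data", False):
        suspicious.append("motion_sensors")
    # A frame rate locked exactly to a screen refresh rate with zero
    # jitter hints at a re-captured or injected feed.
    if metadata.get("frame_rate") in {30.0, 60.0} and metadata.get("frame_rate_jitter", 0.0) == 0.0:
        suspicious.append("frame_timing")
    # Require two independent signals before flagging, to limit
    # false positives from any single noisy field.
    return len(suspicious) >= 2
```

The two-signal threshold is a design choice: any one field can be spoofed or missing for benign reasons, so the check only fires when multiple indicators agree.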

Deepfake is part of the fraud ecosystem
The threat of identity fraud is constantly growing. Scammers keep improving their skills and looking for new, more sophisticated ways to avoid detection. Deepfakes are just one of a wide range of emerging threats, each drawing on the different connections and competencies of professional criminals.
For example, 3D masks are a fast-growing trend, accounting for 0.3% of identity fraud cases detected by Onfido's selfie products between October 2019 and October 2020. This technique is far more accessible to scammers without serious technical skills, since stunningly realistic masks, also popular among theater enthusiasts, can be bought online. In Japan, there is also a growing number of cases in which scammers have physically altered their features with plastic surgery to impersonate someone. This is of course an extreme approach, but the operation is often much cheaper than commissioning a complex deepfake, especially for those with medical connections. Since deepfakes demand a significant level of technical specialization, criminals must either go through a time-consuming development process themselves or pay a high price to commission one.
However, even though deepfake fraud is not yet widespread, organizations should not dismiss the risk, especially companies with wealthy clients. As the recent Tom Cruise deepfake showed, professionals can create high-quality deepfake videos that make a lasting impression. As with any new fraud trend, organizations should assess this risk and prepare in advance, rather than wait for an incident before responding. As cybercriminals' appetite grows and the technical barrier falls, it is vital for organizations to consider whether their identity verification solutions can detect deepfakes.

Author: Claire Woodcock, Senior Product Manager, Machine Learning and Biometrics, Onfido.