A new era of cyber fraud has arrived, where criminals can rob a bank from the comfort of their homes.
The Financial Crimes Enforcement Network (FinCEN) has issued a warning to financial institutions about new fraud schemes involving deepfakes.
Since the beginning of 2023, FinCEN has observed a significant increase in suspicious activity reports from financial institutions describing deepfakes. Attackers are actively using generative artificial intelligence (GenAI) to forge documents and deceive identification systems, often creating fake IDs that let fraudsters bypass customer due diligence and identity verification procedures.
FinCEN draws particular attention to the use of deepfakes and generated images to bypass standard authentication methods. For example, criminals can alter or fabricate photos for fake driver's licenses, passports, and other documents, combining real and fabricated personally identifiable information (PII) to create so-called "synthetic identities." These fake profiles are then used to open accounts and conduct financial transactions.
How Financial Institutions Can Detect Fraud
To protect against threats, FinCEN highlights a number of indicators that can help financial institutions in identifying deepfake fraud:
- Inconsistencies in documents – forgeries may surface when re-examining documents submitted by clients, for example when an ID photo looks suspicious or shows obvious signs of digital manipulation.
- Identity verification issues – some customers may be unable to conclusively verify their identity or source of income. Frequent technical failures during verification can indicate an attempt to substitute pre-recorded video for a live video call.
- Abnormal account activity – FinCEN advises watching for accounts with suspicious patterns, such as many transactions in a short period or transfers to high-risk platforms like gambling sites or cryptocurrency exchanges.
- Suspicious attempts to bypass verification – attackers may try to change the communication method mid-verification, citing alleged technical issues. The use of webcam plugins can also signal a video-spoofing attempt.
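The "abnormal account activity" indicator above can be approximated with a simple transaction velocity rule. The sketch below is illustrative only: the window size and transaction threshold are hypothetical assumptions, not values from the FinCEN alert.

```python
# Hypothetical velocity rule: flag an account when any sliding time window
# of WINDOW length contains more than MAX_TXNS transactions.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # illustrative threshold, not from the alert
MAX_TXNS = 5                    # illustrative threshold, not from the alert

def flag_rapid_activity(timestamps):
    """Return True if any WINDOW-length window holds more than MAX_TXNS events."""
    ts = sorted(timestamps)
    start = 0
    for end in range(len(ts)):
        # Shrink the window from the left until it spans at most WINDOW.
        while ts[end] - ts[start] > WINDOW:
            start += 1
        if end - start + 1 > MAX_TXNS:
            return True
    return False

# Example: six transfers within five minutes trips the rule;
# six transfers spread over six hours does not.
base = datetime(2024, 1, 1, 12, 0)
print(flag_rapid_activity([base + timedelta(minutes=i) for i in range(6)]))
print(flag_rapid_activity([base + timedelta(hours=i) for i in range(6)]))
```

In practice such a rule would be one signal among many and the thresholds would be tuned per institution and product.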
FinCEN also warns that deepfakes are being used in social engineering schemes. For example, criminals can use fake voices or videos to convince company employees to transfer funds to fraudulent accounts. In one widely reported case, fraudsters imitating the voice of a top manager convinced an employee to transfer more than $25 million to accounts under their control.
Best practices for protecting against threats
FinCEN recommends that financial institutions strengthen security measures and implement advanced authentication methods to counter deepfake attacks. Such measures include:
- Multi-factor authentication (MFA) – using two or more factors to verify identity, such as one-time passwords or biometrics.
- Live verification – real-time audio and video checks to confirm the client's identity. Even though fraudsters can use tools to generate synthetic responses, such checks can reveal inconsistencies.
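The one-time-password factor mentioned above is commonly implemented as TOTP (RFC 6238). The following is a minimal sketch built only on the Python standard library, not a production implementation:

```python
# Minimal TOTP (RFC 6238): HMAC-SHA1 over the current 30-second time step.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Return the TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
# prints "94287082"
```

A server would compare the client's submitted code against the value for the current (and typically the adjacent) time step, making each code useless after roughly 30 seconds.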
FinCEN also advises regular training so that employees stay current on identifying signs of deepfakes and phishing attacks. In addition, financial institutions should assess the risks of relying on third-party identity verification providers and apply risk management processes at every stage of the relationship.
Source