Experts Reveal AI Technology of Scammers to Bypass KYC

Attackers are using AI-based software to bypass strict Know Your Customer (KYC) measures on cryptocurrency exchanges, according to a report by security firm Cato Networks.

The tool, called ProKYC, demonstrates a "new level of sophistication" in crypto fraud. It represents a significant step up from the older methods that cybercriminals used to bypass two-factor authentication and KYC.

Instead of buying fake IDs, scammers use AI-powered tools to create brand-new documents and fake videos that pass facial recognition checks.

ProKYC is specifically configured to work with crypto exchanges and financial companies whose KYC protocols involve matching a live webcam image against a government-issued document.

In the published video, the user integrates an AI-generated face into an Australian passport template. ProKYC then creates a video and photo of that person to bypass KYC on the Bybit crypto exchange.

Thanks to such tools, attackers can create new accounts on crypto exchanges, experts noted. ProKYC is available for $629 per year. It's also designed to work on payment platforms like Stripe and Revolut.

Etay Maor, chief security strategist at Cato Networks, stressed that detecting and protecting against this new type of fraud is a difficult task.

"Creating ultra-restrictive biometric authentication systems can lead to many false positives. On the other hand, weak controls are a road to fraud," he said.

Methods for detecting the use of AI tools do exist. Some rely on humans manually identifying unusually high-quality images and videos, as well as inconsistencies in facial movements.
 
Know where to get a subscription?
 
The emergence of AI-powered tools like ProKYC, as detailed in a report by security firm Cato Networks, represents a significant advancement in the sophistication of cybercriminal tactics aimed at bypassing Know Your Customer (KYC) protocols on cryptocurrency exchanges and financial platforms. Below is a comprehensive exploration of this issue, covering the technology, its implications, detection challenges, and broader context.

What is ProKYC and How Does It Work?

ProKYC is an AI-based software tool specifically designed to undermine KYC verification processes, which are critical for ensuring compliance with anti-money laundering (AML) and counter-terrorism financing regulations. KYC protocols typically require users to submit government-issued identification documents and, in many cases, undergo facial recognition checks via webcam to confirm their identity. These measures are standard on cryptocurrency exchanges like Bybit and payment platforms such as Stripe and Revolut.
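The webcam-to-document face match at the heart of these KYC flows is typically implemented by comparing face embeddings and accepting the check when they are sufficiently similar. The sketch below is a minimal illustration under stated assumptions, not any exchange's actual pipeline: the embedding vectors are toy data (a real system would produce them with a face-recognition model), and the 0.8 threshold is a hypothetical value.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(selfie_embedding, id_photo_embedding, threshold=0.8):
    """Accept the KYC face check only if the two embeddings are close enough.

    In a real system the embeddings would come from a face-recognition
    model applied to the webcam frame and the ID photo; here they are
    plain lists of floats for illustration.
    """
    return cosine_similarity(selfie_embedding, id_photo_embedding) >= threshold

# Toy example: nearly identical embeddings pass, dissimilar ones fail.
genuine = [0.11, 0.52, 0.31, 0.72]
same_person = [0.12, 0.50, 0.33, 0.70]
stranger = [0.90, -0.40, 0.05, -0.2]

print(faces_match(genuine, same_person))  # True
print(faces_match(genuine, stranger))     # False
```

Tools like ProKYC attack exactly this comparison: because the same AI-generated face appears both on the forged document and in the generated webcam video, the two embeddings match by construction.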

Unlike earlier methods of KYC evasion—such as purchasing forged IDs on dark web marketplaces or exploiting vulnerabilities in two-factor authentication (2FA)—ProKYC leverages artificial intelligence to create highly convincing counterfeit documents and media. For instance, the tool can generate entirely new identity documents, such as passports, by integrating AI-generated faces into legitimate-looking templates (e.g., an Australian passport). Additionally, ProKYC produces photos and videos that mimic real human behavior, enabling scammers to pass facial recognition systems that compare a live webcam feed to the provided ID.

A demonstration video highlighted by Cato Networks shows ProKYC in action: a user creates a fake Australian passport with an AI-generated face, then uses the tool to produce a corresponding video and photo that successfully bypass Bybit’s KYC verification. This capability allows attackers to create new accounts on crypto exchanges and financial platforms, often for illicit activities like money laundering, fraud, or unauthorized trading.

ProKYC is reportedly available for an annual subscription fee of $629, making it accessible to a wide range of cybercriminals. Its design is tailored to target platforms with KYC protocols that rely on document verification and biometric checks, extending its applicability beyond crypto exchanges to financial services like Stripe and Revolut.

Evolution of Cybercriminal Tactics

The development of tools like ProKYC marks a significant evolution in crypto-related fraud. Earlier methods of bypassing KYC were relatively rudimentary and often involved:
  • Purchasing Fake IDs: Scammers would acquire forged physical or digital IDs from dark web markets, which could be used to pass document verification but often failed under scrutiny.
  • Exploiting 2FA Weaknesses: Attackers used techniques like SIM-swapping or phishing to intercept 2FA codes, gaining access to accounts without proper identity verification.
  • Using Stolen Identities: Cybercriminals would steal real identities and attempt to pass KYC checks with legitimate but misappropriated documents.

ProKYC, however, introduces a “new level of sophistication,” as described by Cato Networks. By leveraging AI, it eliminates the need for physical forgeries or stolen data, creating entirely synthetic identities that are harder to detect. The use of AI-generated faces and videos exploits vulnerabilities in biometric systems, which often struggle to differentiate between real and artificially created media, especially when the fakes are of high quality.

Challenges in Detecting AI-Powered Fraud

Detecting the use of AI tools like ProKYC poses significant challenges for platforms and security teams. Itay Maor, chief security strategist at Cato Networks, emphasized the difficulty of balancing robust security with user experience. Key challenges include:
  1. False Positives in Biometric Systems:
    • Ultra-restrictive biometric authentication systems, designed to catch sophisticated fakes, can mistakenly flag legitimate users as fraudulent. This leads to false positives, which frustrate customers and increase operational costs for platforms as they handle appeals and manual reviews.
    • Conversely, overly lenient systems are vulnerable to exploitation, as they may fail to identify AI-generated fakes.
  2. Manual Detection Limitations:
    • Some detection methods rely on human analysts to identify signs of AI-generated content, such as unusually high-quality images or videos, or subtle inconsistencies in facial movements (e.g., unnatural blinking or lip-sync issues). However, manual review is time-consuming, costly, and not scalable for platforms processing thousands of KYC verifications daily.
    • As AI tools improve, the quality of fake media is becoming increasingly indistinguishable from real content, making manual detection less reliable.
  3. Evolving AI Capabilities:
    • AI-generated content is improving rapidly, with tools like deepfake technology producing highly realistic videos and images. This makes it harder for both human analysts and automated systems to identify fakes without advanced detection algorithms.
  4. Lack of Standardized Detection Tools:
    • While some platforms employ AI-based detection systems to identify synthetic media (e.g., by analyzing pixel-level artifacts or inconsistencies in lighting), these tools are not universally adopted, and their effectiveness varies. Smaller platforms or those with limited resources may lack access to such technology.
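One of the cues mentioned above, unnatural blinking, can be approximated automatically rather than by manual review. The sketch below is a deliberately simplified illustration: it assumes a per-frame "eye openness" score (which would come from a facial-landmark model in practice) and flags clips whose blink rate falls outside a rough human range. The thresholds are illustrative assumptions, not production values.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks as transitions from open to closed eyes.

    `eye_openness` holds one score per video frame in [0, 1]; a real
    system would derive it from eye-landmark distances.
    """
    blinks = 0
    was_closed = False
    for score in eye_openness:
        closed = score < closed_threshold
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

def looks_synthetic(eye_openness, fps=30, min_bpm=2, max_bpm=40):
    """Flag a clip whose blink rate is implausible for a live human.

    Humans typically blink roughly 10-20 times per minute; early deepfakes
    often blinked far too rarely. The bounds here are loose illustrative
    guesses, not calibrated values.
    """
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return True
    blinks_per_minute = count_blinks(eye_openness) / minutes
    return not (min_bpm <= blinks_per_minute <= max_bpm)

# A 60-second clip (1800 frames at 30 fps) with no blinks is suspicious.
no_blinks = [0.9] * 1800
print(looks_synthetic(no_blinks))  # True

# The same clip with ~15 evenly spaced blinks looks plausible.
normal = [0.9] * 1800
for i in range(0, 1800, 120):
    normal[i] = 0.1  # one closed frame per "blink"
print(looks_synthetic(normal))  # False
```

Heuristics like this are brittle on their own, which is why they are typically combined with the pixel-artifact and consistency checks described above.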

Implications for Cryptocurrency Exchanges and Financial Platforms

The rise of tools like ProKYC has serious implications for the security and integrity of cryptocurrency exchanges and financial platforms:
  • Increased Fraud Risk: By enabling scammers to create new accounts with fake identities, ProKYC facilitates activities like money laundering, pump-and-dump schemes, and unauthorized withdrawals. This undermines trust in platforms and can lead to financial losses for both users and operators.
  • Regulatory Scrutiny: Regulators, already focused on AML and KYC compliance in the crypto industry, may impose stricter requirements on exchanges if such tools become widespread. This could increase operational costs and complexity for legitimate businesses.
  • User Trust and Adoption: High-profile fraud incidents could deter users from engaging with crypto exchanges or digital payment platforms, slowing mainstream adoption of these technologies.
  • Arms Race in Security: Platforms will need to invest in advanced detection technologies, such as AI-powered anti-fraud systems, to keep pace with evolving criminal tactics. This creates a costly arms race between fraudsters and defenders.

Potential Countermeasures

To combat AI-powered KYC evasion tools, platforms and security experts can adopt several strategies:
  1. Advanced AI Detection:
    • Implement machine learning models trained to detect deepfakes and synthetic media by analyzing subtle artifacts, such as irregularities in facial movements, unnatural lighting, or inconsistencies in document fonts and layouts.
    • Use liveness detection techniques, which require users to perform specific actions (e.g., turning their head or speaking a phrase) that are harder for AI-generated videos to replicate convincingly.
  2. Multi-Layered Verification:
    • Combine biometric checks with other verification methods, such as behavioral analysis (e.g., tracking mouse movements or typing patterns) or cross-referencing user data with external databases (e.g., government records or credit bureaus).
    • Require additional proofs of identity, such as utility bills or bank statements, to make it harder for scammers to rely solely on fake documents.
  3. Continuous Monitoring:
    • Monitor account activity for suspicious patterns, such as rapid fund transfers or transactions inconsistent with a user’s profile, to flag accounts created with fraudulent identities.
    • Use real-time risk scoring to assess the likelihood of fraud during and after KYC verification.
  4. Collaboration and Information Sharing:
    • Platforms can share intelligence on emerging threats like ProKYC through industry consortia or partnerships with cybersecurity firms like Cato Networks.
    • Collaborate with regulators to establish standards for KYC verification and fraud detection in the crypto and financial sectors.
  5. User Education:
    • Educate users about the risks of sharing personal information and the importance of securing their accounts with strong passwords and 2FA.
    • Warn users about phishing attempts that could be used to steal credentials or facilitate KYC bypass schemes.
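The liveness-detection idea from point 1 can be sketched as a random challenge-response loop: the server issues an unpredictable sequence of actions and accepts only if the client performs them in order. The action names and the structure below are hypothetical; a real system would verify each action with a computer-vision model watching the live webcam feed.

```python
import random

ACTIONS = ["turn_head_left", "turn_head_right", "blink_twice", "smile"]

def issue_challenge(n=3, rng=None):
    """Pick a random, unpredictable sequence of distinct liveness actions."""
    rng = rng or random.Random()
    return rng.sample(ACTIONS, n)

def verify_liveness(challenge, observed_actions):
    """Pass only if the user performed exactly the requested actions, in order.

    `observed_actions` stands in for the output of a vision model; the key
    property is that a pre-rendered deepfake video cannot anticipate the
    randomly chosen challenge.
    """
    return observed_actions == challenge

challenge = issue_challenge(rng=random.Random(42))

# A live user who follows the prompts passes...
print(verify_liveness(challenge, list(challenge)))  # True
# ...while a canned video performing a fixed routine fails.
print(verify_liveness(challenge, ["smile", "blink_twice", "smile"]))  # False
```

The security of this scheme rests on the challenge being unpredictable and the response being verified against the live feed in real time; if an attacker's generator can render the requested actions on the fly, the defense degrades, which is why liveness checks are layered with the other measures listed above.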

Broader Context and Future Outlook

The development of ProKYC reflects a broader trend in cybercrime: the democratization of advanced AI tools. Just as legitimate businesses use AI to enhance services, criminals are leveraging the same technology to perpetrate fraud at scale. The affordability of ProKYC ($629 per year) lowers the barrier to entry, enabling even less technically skilled attackers to exploit KYC vulnerabilities.

This trend is likely to accelerate as AI technology becomes more accessible and powerful. For example, advancements in generative AI models, such as those used for deepfake creation, could make tools like ProKYC even more effective, producing fakes that are nearly impossible to detect without specialized tools. This underscores the need for proactive investment in anti-fraud technologies and regulatory frameworks that can adapt to emerging threats.

Moreover, the targeting of platforms like Stripe and Revolut suggests that AI-powered fraud is not limited to the crypto industry. As digital payment systems and fintech platforms expand, they will face similar challenges in securing their KYC processes. The financial sector as a whole must prepare for an increase in AI-driven fraud attempts.

Conclusion

ProKYC represents a significant leap in the sophistication of KYC evasion tactics, using AI to create convincing fake documents and videos that bypass biometric verification on crypto exchanges and financial platforms. Its availability for $629 per year makes it a dangerous tool in the hands of cybercriminals, enabling activities like money laundering and unauthorized account creation. Detecting such fraud is challenging due to the high quality of AI-generated media and the limitations of current detection methods. Platforms must invest in advanced AI detection, multi-layered verification, and continuous monitoring to stay ahead of these threats. As AI technology continues to evolve, the battle between fraudsters and defenders will intensify, requiring ongoing innovation and collaboration to protect the integrity of digital financial systems.

 