Carding Trends Related to Fake Digital IDs: A Detailed Analysis

Carding is a type of cybercrime involving the theft of credit card data and its use for fraudulent transactions, such as purchases, cash withdrawals, or money laundering. In recent years, fake digital IDs have become a key tool for carders, enabling them to bypass identity verification (KYC) systems and increase the scale of their attacks. Advances in technology, particularly artificial intelligence (AI), have made digital ID counterfeiting more accessible and harder to detect. This article examines, for educational purposes, the trends, mechanisms, examples, and risks associated with the use of fake digital IDs in carding.

1. Synthetic Identities

Description: Synthetic identities are fictitious identities created by combining real and fake data, such as names, addresses, Social Security numbers (SSNs), dates of birth, and photographs. AI accelerates the creation of such profiles, generating realistic data in seconds. These identities are used to open bank accounts, obtain loans, or register on platforms that require KYC.

Mechanism:
  • Data generation: AI tools such as generative models (e.g., GPT or specialized darknet services) create plausible data combinations. For example, names and addresses are taken from leaked data, while SSNs are generated algorithmically.
  • Carding integration: Synthetic identities allow carders to launder stolen funds by opening accounts for withdrawals or purchases. For example, a carder could register an account on a crypto exchange with a synthetic ID and use stolen cards to purchase cryptocurrency.
  • Examples: In 2024, synthetic identities were used to obtain $2 billion in auto loans in the US, a 105% increase over five years (TransUnion data). Losses from such fraud are projected to reach $3.3 billion in 2025.

Implications for carding:
  • Creating accounts for withdrawing funds without suspicion from banks.
  • Scalability: one carder can create hundreds of synthetic IDs for mass operations.
  • Difficulty of detection: Synthetic IDs often pass basic KYC checks because they contain partial real data.

Security: Banks and platforms are implementing behavioral analytics and data age checking (for example, how long an SSN has been in databases) to identify synthetic identities.

2. Deepfakes for biometric verification

Description: Deepfakes are AI-generated videos, images, or audio that imitate real people. They are used to bypass biometric verification, such as facial or voice recognition, required in modern KYC processes.

Mechanism:
  • Deepfake creation: Generative neural networks (e.g., DeepFaceLab, DALL·E, or commercial services like Synthesia) create realistic images or videos. Fraudsters use photos from leaks or social media to perform face swaps.
  • Carding applications: Carders upload deepfake videos instead of real selfies for verification on platforms like Binance, Coinbase, or Revolut. This allows them to open accounts or confirm transactions with stolen cards.
  • Examples: In 2023, deepfake attacks increased tenfold compared to 2022, and by 300% in 2024 (iProov data). About 24% of fake biometric verifications use deepfake technologies.

Implications for carding:
  • Bypassing advanced KYC systems, including liveness detection checks that analyze facial movements or responses to commands.
  • Fast transaction execution: deepfakes enable near-instant identity verification for purchasing cryptocurrency or goods.
  • Accessibility: deepfake creation tools have become cheaper and easier to use, even for unskilled carders.

Security: Companies are implementing advanced liveness detectors that analyze micro-movements, lighting, and skin texture. Multimodal verification (for example, a combination of face, voice, and documents) is also being used.

3. Generating fake documents using AI

Description: AI enables the creation of high-quality fake documents (passports, driver's licenses, ID cards) with the correct fonts, holograms, and QR codes. These documents look so realistic that they often pass automated checks.

Mechanism:
  • Tools: Generative AI models like Stable Diffusion or specialized darknet services (e.g., OnlyFake) create photo IDs for $10–$15. The user enters information (name, date of birth), and the AI generates the document.
  • Integration with carding: Fake IDs are used for verification on e-commerce platforms (Amazon, eBay), rental services (Airbnb), or to open bank accounts. Carders purchase goods with stolen cards, using fake IDs to confirm identity.
  • Examples: In 2024, 75% of counterfeit documents used in fraud were ID cards, and document counterfeiting rose by 42% (Sumsub data). Darknet services offer "packages" combining documents and credit card information ("fullz").

Implications for carding:
  • Large-scale return schemes: carders purchase goods using fake IDs and then process returns, keeping the goods for themselves.
  • Access to premium accounts: fake IDs let carders create accounts on paid services and fund them with stolen cards.
  • Automation: AI allows for the generation of hundreds of documents in a short period of time, increasing the volume of attacks.

Security: Companies are implementing OCR (optical character recognition) with AI to analyze documents for artifacts and also check image metadata.
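
One piece of the document screening described above, metadata analysis, can be sketched as follows. The blocklist entries and field names are illustrative assumptions; real systems combine many more signals, and absent or stripped metadata alone is never proof of forgery:

```python
# Editing / generation tools whose fingerprints in image metadata are a
# red flag on a supposedly camera-captured ID photo (illustrative list).
SUSPICIOUS_SOFTWARE = ("photoshop", "gimp", "stable diffusion",
                       "midjourney", "dall")

def screen_document_metadata(exif: dict[str, str]) -> list[str]:
    """Return reasons a document image looks manipulated, based on
    already-extracted EXIF-style metadata fields.

    A genuine phone capture normally carries camera make/model tags;
    edited or AI-generated files often lack them or name an editor.
    """
    reasons = []
    software = exif.get("Software", "").lower()
    if any(tool in software for tool in SUSPICIOUS_SOFTWARE):
        reasons.append(f"edited with: {exif['Software']}")
    if "Make" not in exif and "Model" not in exif:
        reasons.append("no camera make/model tags (possibly generated)")
    return reasons

print(screen_document_metadata({"Software": "Adobe Photoshop 25.0"}))
print(screen_document_metadata({"Make": "Apple", "Model": "iPhone 15"}))  # []
```

Because metadata is trivially strippable, a hit here is treated as one weak signal to be combined with pixel-level artifact analysis and OCR consistency checks, not as a verdict.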

4. Video injection attacks

Description: This method injects fake video directly into the data stream, bypassing the device's camera. These attacks are more difficult to detect because they mimic a real video stream.

Mechanism:
  • Technique: Fraudsters use software to spoof video feeds (for example, through virtual cameras or modified drivers). The deepfake video is presented as a "live" camera feed.
  • Carding Application: Used to bypass KYC on financial platforms that require video verification. For example, a carder can confirm a $10,000 transaction with a stolen card by spoofing the video feed.
  • Examples: In 2025, one deepfake attack is predicted to occur every 5 minutes, and 42.5% of financial fraud is related to AI (IDnow data). In 2024, video injections accounted for 15% of biometric attacks.

Implications for carding:
  • Instant approval of transactions at banks or crypto exchanges.
  • High efficiency: injection attacks are more difficult to detect than static deepfakes.
  • Availability of tools: injection programs are sold on the darknet for $50–$200.

Security: Platforms implement cryptographic verification of video streams and analysis of data transmission delays to detect substitution.
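
The cryptographic stream verification mentioned above can be illustrated conceptually: if a trusted capture path signs each frame with a device-held key, an injected virtual-camera feed cannot produce valid tags. This is a simplified sketch, not a real protocol; production deployments rely on hardware-backed keys and device attestation, never a constant embedded in code:

```python
import hashlib
import hmac

# Stand-in for a per-device key provisioned in a secure element.
DEVICE_KEY = b"per-device-secret-from-secure-element"

def sign_frame(frame_bytes: bytes, seq: int) -> bytes:
    # Binding the sequence number into the MAC prevents replaying
    # previously signed frames out of order.
    msg = seq.to_bytes(8, "big") + frame_bytes
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, seq: int, tag: bytes) -> bool:
    # compare_digest avoids timing side channels when checking the tag.
    return hmac.compare_digest(sign_frame(frame_bytes, seq), tag)

frame = b"\x00" * 64            # stand-in for encoded frame data
tag = sign_frame(frame, seq=1)
print(verify_frame(frame, 1, tag))        # True: genuine frame
print(verify_frame(b"deepfake", 1, tag))  # False: injected frame
```

This is why the article notes transmission-delay analysis as a complementary defense: signing raises the bar for injection, while timing analysis catches streams that were pre-rendered rather than captured live.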

5. Monetization through the darknet

Description: Fake digital identities are actively sold on darknet forums and through Telegram channels, integrating with the carding ecosystem. This includes "fullz" (complete data packages: name, address, SSN, card details) and fake ID creation services.

Mechanism:
  • Market: Darknet marketplaces (such as Genesis Market, seized by law enforcement in 2023) sell fake IDs, deepfake services, and stolen card data. Prices range from $15 per ID to $500 for a full package with video verification.
  • Carding Connection: Carders purchase fake IDs for schemes such as SIM swapping (intercepting phone numbers to access bank accounts), cashouts (withdrawing money through crypto exchanges), or bulk purchases.
  • Examples: The X platform saw an increase in mentions of fake IDs being sold for crypto exchanges and bank accounts in 2024. Deepfake attacks increased by 704% from 2023 to 2024 (Sensity data).

Implications for carding:
  • Scalability: one carder can manage dozens of accounts using purchased IDs.
  • Anonymity: Darknet platforms provide privacy by making tracking difficult.
  • Complex schemes: Fake IDs are combined with stolen cards for multi-stage attacks (e.g. purchase → refund → withdrawal).

Security: Law enforcement agencies are stepping up monitoring of the darknet, and companies are using AI to analyze transaction patterns and identify suspicious accounts.
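
The transaction-pattern analysis mentioned above often starts with simple velocity rules. A minimal sketch, assuming events keyed by a shared attribute such as a device fingerprint or payment card (the threshold and window are illustrative):

```python
from collections import defaultdict
from datetime import datetime, timedelta

class VelocityMonitor:
    """Flag a shared attribute (IP, device fingerprint, card number)
    that appears across too many new accounts in a short window: a
    common signature of one operator running many purchased IDs.
    """
    def __init__(self, max_events: int = 3,
                 window: timedelta = timedelta(hours=24)):
        self.max_events = max_events
        self.window = window
        self.events: dict[str, list[datetime]] = defaultdict(list)

    def record(self, key: str, ts: datetime) -> bool:
        """Record an event; return True if the key is now suspicious."""
        cutoff = ts - self.window
        recent = [t for t in self.events[key] if t > cutoff]
        recent.append(ts)
        self.events[key] = recent            # drop expired events
        return len(recent) > self.max_events

mon = VelocityMonitor()
t0 = datetime(2025, 1, 1, 12, 0)
for i in range(4):
    flagged = mon.record("device-abc123", t0 + timedelta(minutes=10 * i))
print(flagged)  # True: four signups from one device in thirty minutes
```

Velocity rules are deliberately cheap: they run on every event and route only the flagged keys to heavier checks such as manual review or document re-verification.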

Global context and statistics

  • Fraud on the rise: In 2024, 92% of companies reported experiencing fraud related to fake IDs (Experian data). The identity verification market is expected to grow to $86 billion by 2025, but fraudsters are outpacing defenses.
  • AI as a driver: Generative AI models (e.g., Veo 3, Stable Diffusion) have made ID forgery accessible even to novices. In 2024, 60% of fake documents were created using AI (Onfido data).
  • Losses: In 2023, global losses from carding amounted to $43 billion, a significant portion of which is related to fake IDs (Nilson Report). This is projected to rise to $50 billion by 2025.

Counteraction and recommendations

To combat carding enabled by counterfeit digital IDs, companies and users can take the following steps:
  1. KYC Improvement:
    • Using multimodal biometrics (face + voice + behavioral data).
    • Liveness detection with micro-movement and texture analysis.
  2. Data analysis:
    • Checking the "age" of data (for example, how long the SSN has been used in databases).
    • Behavioral analytics to detect anomalies (e.g. mass account creation).
  3. AI technologies:
    • Using AI detectors to analyze deepfakes (for example, tools from iProov or Sensity).
    • Automated analysis of document metadata to detect counterfeits.
  4. User education:
    • Informing users about the risks of personal data exposure (for example, posting a passport photo on social media).
    • Encouraging users to review suspicious transactions and bank notifications.
  5. Regulation:
    • Strengthening legislation on cybercrime and darknet monitoring.
    • Cooperation between banks and platforms and law enforcement agencies.

Conclusion

Fake digital identities have become a powerful tool in carders' arsenals thanks to the availability of AI and weaknesses in verification systems. Synthetic identities, deepfakes, fake documents, and video injections allow fraudsters to bypass KYC, scale attacks, and launder money. Further growth in such schemes is expected in 2025, fueled by advances in technology and darknet markets. Protecting against them requires comprehensive measures, from advanced biometrics to data analysis and user awareness. This analysis highlights the ongoing technological race between fraudsters and security systems, in which AI serves as both the threat and a key part of the defense.

If you have any additional questions or require a more in-depth analysis of a specific aspect, please let us know!