Intelligence agencies warn that in the era of generative technologies, no one can be trusted.
US intelligence and security agencies, including the National Security Agency, the FBI, and CISA, have published a report warning that the threat from "synthetic media," or deepfakes, has grown significantly in recent years. Synthetic media are artificially generated texts, videos, and audio recordings that are increasingly difficult to distinguish from the real thing.
According to the agencies, scammers and spies often use deepfakes to gain access to corporate systems by impersonating company employees or deceiving customers. The main targets are military personnel, government employees, first responders, and critical infrastructure and defense enterprises.
Two notable cases were recorded in May 2023. In one, attackers imitated the voice and appearance of a company's CEO on a WhatsApp call, even managing to recreate the interior of his room.
In another, the perpetrators combined fake audio, video, and text messages to impersonate one of a company's executives. Communication began on WhatsApp and then moved to a video conference on Microsoft Teams. "The connection quality turned out to be very poor, so the attacker suggested switching to a text format and began to insist on a transfer of funds," the report says. "At this point, the victim began to suspect that something was wrong and broke off the conversation."
The agencies also cite Eurasia Group's list of top political and economic risks for 2023, in which generative artificial intelligence ranks third. According to that report, advances in AI could undermine social trust and strengthen the hand of authoritarian regimes.
Synthetic media created with generative technologies are a convenient tool for spreading disinformation in the political and social spheres.
To protect against deepfake threats, the agencies recommend that companies deploy tools that detect media forgeries and verify content in real time. Cybersecurity teams should also develop a detailed incident response plan that covers different attack scenarios and rehearse it regularly in training exercises.
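The report does not prescribe specific tooling, so as one hedged illustration of the "verify in real time" idea, the Python sketch below checks a received media file against an out-of-band manifest of trusted SHA-256 digests. The file name, manifest format, and digest value are hypothetical; real provenance schemes such as C2PA content credentials go further by cryptographically signing metadata rather than relying on a flat hash list.

```python
# Illustrative sketch only (not from the agencies' report): reject media
# whose digest does not match a manifest distributed over a trusted channel.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 16) -> str:
    """Stream the file through SHA-256 so large videos fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_trusted(path: Path, manifest: dict[str, str]) -> bool:
    """Return True only if the file's digest matches the trusted manifest."""
    expected = manifest.get(path.name)
    return expected is not None and expected == sha256_of(path)

if __name__ == "__main__":
    # Hypothetical manifest an organization might publish out of band.
    trusted = {
        "ceo_statement.mp4":
            "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }
    sample = Path("ceo_statement.mp4")
    if sample.exists():
        print("authentic" if is_trusted(sample, trusted)
              else "REJECT: digest mismatch, possible forgery")
```

A digest check of this kind only confirms that a file is the one the organization published; it cannot flag a convincing deepfake delivered through a live call, which is why the report pairs detection tooling with rehearsed response procedures.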