
Your voice is safe: the AntiFake system will not allow AI to fake human speech

The arms race against deepfakes is getting tougher.

A team of scientists from Washington University in St. Louis has developed AntiFake, a program that reliably protects the human voice from imitation by artificial intelligence. The technology is especially relevant today, as deepfakes become an increasingly popular tool among attackers: videos and audio in which people appear to say or do things they never actually said or did.

One of the most advanced capabilities of generative AI systems is the ability to reproduce a human voice from even a short recording. Such forgeries can be used to fabricate compromising material on public figures, politicians, or simply on one's acquaintances.

There have been cases in which people received phone calls where a bot impersonated a friend or relative. The victim was asked to send money to a third-party account, supposedly because of an urgent situation.

Cloned voices can also be used to bypass security systems based on voice recognition. While programs that detect whether speech is authentic have existed before, AntiFake is one of the first systems that prevents fakes from being created in the first place.

The principle behind AntiFake is to slightly distort the original sound recording in such a way that the changes are imperceptible to humans but confuse artificial intelligence systems, preventing them from producing a convincing imitation.
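The core idea can be sketched in a few lines. This is only an illustration, not AntiFake's actual algorithm: the real system optimizes the perturbation adversarially against speech-synthesis models, whereas here random noise stands in for that direction. The function name `perturb_waveform` and the budget value are assumptions for the sketch.

```python
import numpy as np

def perturb_waveform(audio, epsilon=0.002, seed=0):
    """Add a small bounded perturbation to an audio waveform.

    epsilon is the maximum per-sample change (an L-infinity budget),
    kept tiny relative to full scale [-1, 1] so the result sounds
    unchanged to a human listener. A real system like AntiFake would
    compute the perturbation via gradients of a voice-cloning model;
    random noise is used here purely for illustration.
    """
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-epsilon, epsilon, size=audio.shape)
    return np.clip(audio + delta, -1.0, 1.0)

# Demo: one second of a 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
protected = perturb_waveform(clean)

# The change stays within the budget, so the signal-to-noise ratio
# is high: the recording is audibly unchanged to a person.
noise = protected - clean
snr_db = 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))
```

The design point is the constraint, not the noise itself: as long as every sample moves by at most `epsilon`, a listener hears the original voice, while a model trained on raw waveforms can be pushed off its expected input distribution.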

"We applied methods that were previously used for criminal purposes, but now we have directed them to protect users," explains Ning Zhang, project manager.

"The original audio signal is slightly distorted, enough to make it sound natural to human ears, but completely different for a car." Even if a fake is created based on the recording protected by AntiFake, the AI will not be able to reliably reproduce the speaker's speech.

Tests have shown that AntiFake succeeds in more than 95% of cases, an important step forward for information security.

The developers are confident that their methods will serve as a reliable shield against attackers armed with the latest generative AI. "It is difficult to predict how speech synthesis systems will develop further, but I think our approach of turning the adversary's own methods against it will remain effective," Ning Zhang concluded.
 