$0.75 Call: ChatGPT Has Become a Tool for Phone Scammers

The voice assistant API reveals the dark side of using modern technology.

Researchers in the United States have demonstrated that OpenAI's Realtime API for voice interaction can be used to automate telephone fraud, at a cost of under a dollar per successful scam.

Fears that voice AI models could be abused surfaced back in June, when OpenAI delayed the release of ChatGPT's voice feature over safety concerns. Earlier, the company had demonstrated a voice model that mimicked a celebrity, but withdrew the tool after a public backlash.

However, the API, released in early October, gives third-party developers similar capabilities. It allows sending text or audio to the GPT-4o model and receiving responses as text, audio, or a combination of both. Despite the built-in safety measures, the risk of abuse was not sufficiently reduced, as an experiment by researchers at the University of Illinois Urbana-Champaign (UIUC) confirmed.

The study set out to determine whether the API could be used to automate telephone fraud. During the experiment, the researchers built voice-controlled agents that successfully performed the tasks needed to carry out a scam. Each successful call cost about $0.75, and the agents required only 1,051 lines of code, most of which handled interaction with the voice API.

An example of a bank scam (source: Daniel's Substack)

The AI agents combined the GPT-4o model, the Playwright browser automation tool, and scenario-specific instructions. The scenarios included compromising bank accounts and cryptocurrency wallets, and stealing gift card codes and credentials. For example, transferring money out of a bank account took an agent 26 steps.

Success varied by scenario. Stealing Gmail credentials succeeded 60% of the time, took 122 seconds, and cost $0.28 per attempt. Bank transfers proved harder: a 20% success rate, 183 seconds, and $2.51 per attempt.

Across all scenarios, the average success rate was 36% and the average cost $0.75. Failures were most often caused by speech recognition errors and by the difficulty of navigating banking sites.
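To put the per-scenario figures in perspective, a minimal sketch of the implied economics: if attempts are independent, the expected spend to get one success is the per-attempt cost divided by the success rate. The scenario names and the calculation framing are illustrative; only the rates and costs come from the study's reported numbers.

```python
# Back-of-envelope: expected API spend per *successful* attempt,
# assuming independent attempts (geometric expectation).
scenarios = {
    "gmail_credentials": {"success_rate": 0.60, "cost_per_attempt": 0.28},
    "bank_transfer":     {"success_rate": 0.20, "cost_per_attempt": 2.51},
}

def cost_per_success(success_rate: float, cost_per_attempt: float) -> float:
    """Expected spend until one attempt succeeds: cost / p."""
    return cost_per_attempt / success_rate

for name, s in scenarios.items():
    print(f"{name}: ${cost_per_success(**s):.2f} per success")
# gmail_credentials: $0.47 per success
# bank_transfer: $12.55 per success
```

Even the harder bank-transfer scenario stays cheap at scale, which is the study's core warning: low per-call cost removes the main economic barrier to mass phone fraud.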

When asked about possible countermeasures, the study's authors noted that the problem is complex and, much like cybersecurity, requires a comprehensive approach, with solutions needed at the level of mobile carriers, AI providers, and regulators.

OpenAI, for its part, pointed to multiple layers of safety measures aimed at preventing abuse, including automated monitoring and content review, and emphasized that its policies prohibit using its APIs for spam or malicious activity.
