Researchers have shown how OpenAI's ChatGPT-4o voice API can be abused to carry out financial fraud campaigns.
ChatGPT-4o offers text, voice, and visual input and output.
Alongside these features, OpenAI has built in various safeguards to detect and block malicious content, including protections against voice deepfakes.
As UIUC researchers Richard Fang, Dylan Bowman, and Daniel Kang demonstrate in their paper, these safeguards are not enough to protect against abuse by cybercriminals and fraudsters.
The paper examines several types of scams, including wire transfers, gift card theft, crypto transfers, and theft of social media or Gmail credentials. The AI agents that carry out the scams pair GPT-4o's voice capabilities with browser automation tools to navigate pages, enter data, and handle two-factor authentication codes.
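The paper does not publish its agent code, but the pattern it describes — a model emitting structured actions that a harness replays in a browser — is a standard tool-calling loop. Below is a minimal, deliberately benign sketch of that dispatch loop using Playwright; the action schema, the hard-coded action list (standing in for model output), and the example.com target are illustrative assumptions, not the authors' implementation.

```python
from playwright.sync_api import sync_playwright


def run_action(page, action: dict) -> None:
    """Replay one structured action in the browser.

    The action schema here is a made-up stand-in for whatever tool-call
    format the agent's harness actually uses.
    """
    if action["type"] == "goto":
        page.goto(action["url"])
    elif action["type"] == "fill":
        page.fill(action["selector"], action["value"])
    elif action["type"] == "click":
        page.click(action["selector"])


def main() -> None:
    # In the researchers' setup the action stream would come from the
    # model's tool calls; here it is a hard-coded script against a
    # harmless demo page.
    actions = [{"type": "goto", "url": "https://example.com/"}]
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for action in actions:
            run_action(page, action)
        print(page.title())  # prints "Example Domain"
        browser.close()


if __name__ == "__main__":
    main()
```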
Because GPT-4o sometimes refuses to process sensitive data such as credentials, the researchers used simple jailbreaking techniques to bypass these protections. They interacted with the AI agent manually, playing the role of a gullible victim, and used real websites, such as Bank of America's, to confirm that transactions went through.
According to the paper, success rates ranged from 20% to 60%, with each attempt requiring up to 26 browser actions and taking up to three minutes in the most complex scenarios. Most failures stemmed from transcription errors or the complexity of navigating certain sites. Stealing Gmail credentials succeeded 60% of the time, while crypto transfers and Instagram credential theft worked only 40% of the time.
The researchers note that these scams are relatively inexpensive to pull off: a successful scam cost an average of $0.75, while the more complex bank transfer scams cost $2.51.
OpenAI told BleepingComputer that its latest model, o1, which supports "advanced reasoning," has stronger protections against this kind of abuse, and noted that research like this helps improve ChatGPT. GPT-4o already includes a number of anti-abuse measures, including restricting audio generation to a set of pre-approved voices to prevent the creation of voice deepfakes.
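For context on the "pre-approved voices" point: the Realtime API exposes voice only as a choice among OpenAI's built-in presets, so a session is configured by naming one of them rather than by supplying an arbitrary voice sample. A rough sketch of what that configuration looks like; the preset list and the exact event fields are assumptions based on OpenAI's published docs and may have changed since.

```python
import json

# Preset voice names available at the time of writing; treat the exact
# list as an assumption, since OpenAI adds and retires presets.
APPROVED_VOICES = {"alloy", "echo", "shimmer"}


def make_session_update(voice: str) -> dict:
    """Build a Realtime API session.update event pinned to a preset voice."""
    if voice not in APPROVED_VOICES:
        raise ValueError(f"{voice!r} is not a pre-approved preset voice")
    return {
        "type": "session.update",
        "session": {"voice": voice, "modalities": ["audio", "text"]},
    }


print(json.dumps(make_session_update("alloy"), indent=2))
```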
The o1 model scored higher on OpenAI's jailbreak safety evaluation, which measures how well a model resists generating unsafe content: o1 scored 84% versus GPT-4o's 22%. On a more demanding set of safety evaluations, o1 scored 93% to GPT-4o's 71%.
Earlier, journalists who analyzed the OpenAI GPT Store marketplace found that it hosts chatbots that directly violate the company's policies, including porn generators, fraud tools, and dubious medical "experts".
Source