Hackers are counting on the popular chatbot in vain.
OpenAI, the developer of the popular chatbot ChatGPT, has published a report on an unusual (and even amusing) side effect of using AI for malicious purposes. It turns out that attackers' recent attempts to use ChatGPT have only helped investigators: the system revealed a great deal of valuable information about their intentions and methods.
In the study, OpenAI identified 20 cases of misuse of its products, among them the development of more sophisticated malware and the creation of fake posts on social networks. The report attributes these actions to groups from countries including China, Iran, and Israel.
One of the most revealing cases involves the SweetSpecter group, which allegedly operates out of China. The hackers used ChatGPT prompts to prepare a phishing campaign targeting government employees and staff at OpenAI itself. Posing as ordinary users, they claimed in their emails to have run into technical problems on the platform. The email attachment contained SugarGh0st RAT, a remote access trojan capable of quickly seizing control of an infected computer.
By tracking SweetSpecter's ChatGPT prompts, OpenAI was able to detect the group's first known attack on an American company that also works on AI solutions. The hackers asked the chatbot about topics likely to interest government officials and about ways to bypass attachment-filtering systems.
Another case involved the CyberAv3ngers group, allegedly tied to the Iranian military. The attackers, known for their destructive attacks on infrastructure in the United States, Ireland, and Israel, focused on collecting data about programmable logic controllers (PLCs) and on finding default login and password combinations to break into them. Some of the queries indicated an interest in facilities in Jordan and Central Europe. Analysis of the group's chatbot queries revealed additional technologies and programs it could use in future operations.
The activities of the Iranian hacker group STORM-0817 were disrupted in a similar way. The company caught the group's first experiments with AI models and gained unique insight into the infrastructure and capabilities it was developing. For example, chatbot queries revealed that the group was testing code to scrape Instagram profile data on an Iranian journalist critical of the country's government. Beyond ChatGPT, the criminals also tried to use OpenAI's DALL-E image generator.
The report also says that actors in other countries, including Iran and Israel, have used ChatGPT to create misinformation on social media and fake articles on websites.
Although the OpenAI report documents attempts to abuse AI tools, its authors seem to downplay the potential harm from chatbot misuse: the company repeatedly notes that its models did not give attackers any capabilities they could not have obtained from other public sources.
Indeed, the more attackers rely on AI tools, the easier they are to identify and neutralize. As an example, OpenAI cites a case of election interference this summer. Since the beginning of 2024, attackers had been trying to use AI to create content capable of influencing the outcome of the presidential race, and attempts to generate fake news, social media posts, and other materials were recorded. After OpenAI blocked the operation's access to its systems in early June, the associated social media accounts went silent for the entire critical period.
The company has blocked many more networks engaged in spreading fakes. One was based in Rwanda, where fake accounts were created and content about local elections was published. In August, the accounts of an Iranian group generating articles about the U.S. election and the Gaza conflict were discovered. Operations targeting elections in India and the European Parliament were also recorded. None of these campaigns attracted significant attention.
As its models improve, OpenAI plans to teach ChatGPT to analyze the malicious attachments that criminals send to company employees during phishing attacks. The developers consider this a significant step forward in countering cyber threats.
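To make the general idea concrete, here is a minimal sketch of LLM-assisted phishing triage using OpenAI's public Python API. It is an illustration only: the report does not describe OpenAI's internal tooling, and the model name, prompt, and `triage_email` helper are all assumptions for this example.

```python
# A minimal sketch: asking an LLM to triage a suspicious inbound email
# before a human analyst looks at it. Illustrative only -- not the
# attachment-analysis capability described in the report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_email(subject: str, body: str, attachment_names: list[str]) -> str:
    """Return the model's phishing risk assessment for an inbound email."""
    prompt = (
        "You are a security analyst. Assess the phishing risk of this email.\n"
        f"Subject: {subject}\n"
        f"Body: {body}\n"
        f"Attachments: {', '.join(attachment_names)}\n"
        "Reply with a risk level (low/medium/high) and a one-sentence reason."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Lure modeled on the SweetSpecter pattern: a "user" reporting
    # technical problems, with a payload hidden in the attachment.
    print(triage_email(
        subject="Problems with your platform",
        body="Hi, I hit an error using your service, details attached.",
        attachment_names=["error_report.zip"],  # hypothetical filename
    ))
```

In practice such a check would run on metadata and extracted text rather than opening the attachment itself, and would feed a human review queue rather than making blocking decisions on its own.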
Source