A new tactic for hackers or a simple security check?
JFrog experts have discovered at least 100 malicious AI models on the popular open Hugging Face platform.
Hugging Face allows AI and machine learning researchers to publish their developments and share them with the community. Tens of thousands of models for natural language processing, computer vision, and other tasks are available in the service.
It turns out that some of the published models contain malicious code. In particular, the researchers found models capable of installing backdoors: hidden remote access channels that let attackers take control of the victim's machine.
One of the most dangerous examples was a recently uploaded PyTorch model from the user "baller423", which has since been removed. It embedded a malicious payload that establishes a reverse connection to a specified remote host (210.117.212.93).
To mask the malicious code, the attackers abused the "__reduce__" method of Python's pickle module. It allows arbitrary commands to run when a PyTorch file is loaded, hiding them inside the serialization process, so detection systems did not flag the trick.
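To see why this is hard to spot, here is a minimal, harmless sketch of the mechanism (the class name PayloadDemo and the echo command are illustrative stand-ins; a real implant would instead open the kind of reverse connection described above). Because torch.load relies on pickle to rebuild the objects stored in a checkpoint, loading such a model file has the same effect as calling pickle.loads directly.

```python
import os
import pickle


class PayloadDemo:
    # pickle calls __reduce__ to learn how to rebuild the object.
    # Whatever callable it returns is invoked during loading, so an
    # attacker can smuggle arbitrary commands into the byte stream.
    def __reduce__(self):
        # Harmless stand-in; a real implant would launch a reverse shell.
        return (os.system, ("echo pickle payload executed",))


# The serialized blob looks like ordinary model data...
blob = pickle.dumps(PayloadDemo())

# ...but merely deserializing it runs the embedded command.
pickle.loads(blob)
```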
Similar implants were found in models associated with a number of other IP addresses.
"I would like to emphasize that by 'malicious models' we mean exactly those that carry real dangerous loads, " the JFrog report notes.
"This number does not include false positives of the system, so we have a complete picture of the number of malicious models for PyTorch and Tensorflow on the Hugging Face platform."
According to JFrog, some of these models may have been uploaded by researchers probing the Hugging Face security system, since specialists are often rewarded for discovering vulnerabilities. Even in that case, however, publishing dangerous models is extremely risky and unacceptable, as they become available for download by all users.
To hunt for such malware, JFrog developed a dedicated scanning system tailored to the specifics of AI artifacts. It made it possible to detect the hidden implants for the first time, even though Hugging Face already applies its own security measures.
Standard security tools are not always able to detect suspicious elements hidden inside files containing AI models.
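As a rough illustration of what such a scan has to do (a simplified sketch, not JFrog's actual tooling), the snippet below walks a pickle's opcode stream with Python's standard pickletools module and flags imports that a legitimate model file should never need. A production scanner would also handle the STACK_GLOBAL opcode used by newer pickle protocols, look inside the ZIP container that torch.save produces, and maintain far richer allow and deny lists.

```python
import pickletools
import sys

# Imports a clean model checkpoint has no reason to reference.
SUSPICIOUS_GLOBALS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("builtins", "exec"),
    ("builtins", "eval"),
}


def scan_pickle(path):
    """Flag dangerous GLOBAL imports in a pickle's opcode stream."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        # The GLOBAL opcode carries "module name" as a space-separated pair.
        if opcode.name == "GLOBAL" and arg:
            module, _, name = str(arg).partition(" ")
            if (module, name) in SUSPICIOUS_GLOBALS:
                findings.append(f"{module}.{name}")
    return findings


if __name__ == "__main__":
    hits = scan_pickle(sys.argv[1])
    print("suspicious imports:", ", ".join(hits) if hits else "none")
```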
"Our findings demonstrate the potential risks of using models from unverified sources," the researchers conclude. They urge developers to exercise increased vigilance and implement additional security measures to protect the AI ecosystem from cyberattacks.