Hugging Face conversion service - a loophole for hacking AI models

A vulnerability in the Safetensors conversion service allows supply-chain compromise.

Information security company HiddenLayer has identified a vulnerability in Hugging Face's Safetensors conversion service that allows an attacker to hijack AI models uploaded by users and compromise the supply chain.

According to the HiddenLayer report, an attacker can send malicious merge requests from the conversion service to any repository on the platform, as well as hijack any models submitted through the service. By masquerading as the conversion bot, this technique opens the way to modifying any repository on the platform.

Hugging Face is a popular collaboration platform that helps users store, deploy, and train pre-trained machine learning models and datasets. Safetensors is a format developed by the company for secure storage of tensors.
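To make the contrast with pickle-based formats concrete, here is a minimal, stdlib-only sketch of the safetensors on-disk layout: an 8-byte little-endian header length, a JSON header describing each tensor, then the raw tensor bytes. The function names are illustrative, not the real `safetensors` library API; the point is that nothing in the file is executable, so merely loading it cannot run code.

```python
import json
import struct

def save_safetensors_like(tensors: dict) -> bytes:
    """Serialize {name: (dtype, shape, raw little-endian bytes)} to a
    safetensors-style blob: <8-byte header length><JSON header><data>."""
    header, buf, offset = {}, b"", 0
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(data)]}
        buf += data
        offset += len(data)
    hjson = json.dumps(header).encode("utf-8")
    return struct.pack("<Q", len(hjson)) + hjson + buf

def load_safetensors_like(blob: bytes) -> dict:
    """Parse the blob back into {name: raw bytes} -- pure data, no code."""
    (hlen,) = struct.unpack_from("<Q", blob, 0)
    header = json.loads(blob[8:8 + hlen])
    data = blob[8 + hlen:]
    return {name: data[meta["data_offsets"][0]:meta["data_offsets"][1]]
            for name, meta in header.items()}

# Round-trip a tiny float32 "tensor" of shape [2]:
raw = struct.pack("<2f", 1.0, 2.0)
blob = save_safetensors_like({"weight": ("F32", [2], raw)})
assert load_safetensors_like(blob)["weight"] == raw
```

Note that the vulnerability HiddenLayer found is not in this format itself, but in the service that converts older pickle-based files into it.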

HiddenLayer's analysis showed that a cybercriminal can use a malicious PyTorch binary file to hijack the conversion service and compromise the system hosting it. Moreover, the token of the official SFConvertbot, the bot that creates merge requests, can be stolen and used to send malicious requests to any repository on the site, allowing an attacker to tamper with models and embed backdoors in them.
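The attack works because legacy PyTorch checkpoints are pickle archives, and Python's pickle executes attacker-chosen callables during deserialization. The sketch below is a hypothetical illustration of that mechanism (not HiddenLayer's actual payload): a harmless `eval` stands in for the shell command or token exfiltration a real attacker would run when the conversion service loads the file.

```python
import pickle

class MaliciousPayload:
    """Hypothetical stand-in for a booby-trapped PyTorch .bin file.

    Unpickling calls whatever (callable, args) __reduce__ returns,
    so simply loading the file runs attacker code.
    """
    def __reduce__(self):
        # A real payload would call os.system(...) or steal the
        # service's Hugging Face token; a harmless eval shows the idea.
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # attacker code runs here, during loading
print(result)  # -> 42
```

This is why `torch.load` on untrusted checkpoints is dangerous, and why the conversion service itself becomes the target: it must deserialize user-supplied pickle files by design.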

The researchers note that an attacker can execute arbitrary code when a user tries to convert their model, while remaining invisible to the user. If the victim converts their own private repository, this can lead to theft of their Hugging Face token, access to internal models and datasets, and their possible poisoning.

The problem is compounded by the fact that any user can submit a conversion request for a public repository, which opens up the possibility of hijacking or modifying widely used models and creates a significant supply-chain risk.