Friend
Professional
- Messages
- 2,653
- Reaction score
- 850
- Points
- 113
Microsoft has released fixes to prevent anyone from stealing your emails.
Microsoft has patched serious vulnerabilities in its Copilot AI assistant that allowed attackers to steal emails and other personal information from users. This was reported by security researcher Johann Rehberger, who previously discovered and revealed the details of the attack.
The exploit developed by Rehberger is a chain of malicious actions specific to language models (LLMs). It all starts with a phishing email containing a malicious Word document. This document triggers a so-called prompt injection attack, a special type of attack on AI systems in which an attacker tries to trick a model with specially crafted inputs.
In this case, the document contained instructions to trick Copilot into pretending to be a rogue program called 'Microsoft Defender for Copirate.' This allowed the attacker to take control of the chatbot and use it to interact with the user's email.
The next stage of the attack was the automatic use of Copilot tools. The attacker gave the chatbot commands to search for additional emails and other confidential information. For example, Rehberger asked the bot to make a list of key points from a previous email. The neural network found and extracted Slack two-factor authentication codes if they were present in the email.
To extract the data, the researcher used ASCII smuggling techniques. This method uses a set of Unicode characters that mimic ASCII but are invisible in the user interface. In this way, an attacker can hide the instructions for the model in a hyperlink that looks completely innocent.
In the attack, Copilot generates a "harmless-looking" URL link that actually contains hidden Unicode characters. If the user clicks on this link, the content of his emails is sent to a server controlled by the attacker. You can steal Slack two-factor authentication codes or any other sensitive data from emails.
Rehanger also developed the ASCII Smuggler tool, which allows you to detect Unicode tags and "decode" messages that would otherwise remain invisible. Microsoft confirms that the vulnerabilities have been fixed, but the company has not disclosed the exact details of the fixes.
This exploit chain illustrates the current challenges in securing language models. They are particularly vulnerable to prompt attacks and other newly developed hacking techniques. Rehberger emphasizes the novelty of these techniques, noting that they are "not yet two years old".
Experts urge companies that develop their own applications based on Copilot or other language models to pay close attention to these issues in order to avoid problems with data security and privacy.
Source
Microsoft has patched serious vulnerabilities in its Copilot AI assistant that allowed attackers to steal emails and other personal information from users. This was reported by security researcher Johann Rehberger, who previously discovered and revealed the details of the attack.
The exploit developed by Rehberger is a chain of malicious actions specific to language models (LLMs). It all starts with a phishing email containing a malicious Word document. This document triggers a so-called prompt injection attack, a special type of attack on AI systems in which an attacker tries to trick a model with specially crafted inputs.
In this case, the document contained instructions to trick Copilot into pretending to be a rogue program called 'Microsoft Defender for Copirate.' This allowed the attacker to take control of the chatbot and use it to interact with the user's email.
The next stage of the attack was the automatic use of Copilot tools. The attacker gave the chatbot commands to search for additional emails and other confidential information. For example, Rehberger asked the bot to make a list of key points from a previous email. The neural network found and extracted Slack two-factor authentication codes if they were present in the email.
To extract the data, the researcher used ASCII smuggling techniques. This method uses a set of Unicode characters that mimic ASCII but are invisible in the user interface. In this way, an attacker can hide the instructions for the model in a hyperlink that looks completely innocent.
In the attack, Copilot generates a "harmless-looking" URL link that actually contains hidden Unicode characters. If the user clicks on this link, the content of his emails is sent to a server controlled by the attacker. You can steal Slack two-factor authentication codes or any other sensitive data from emails.
Rehanger also developed the ASCII Smuggler tool, which allows you to detect Unicode tags and "decode" messages that would otherwise remain invisible. Microsoft confirms that the vulnerabilities have been fixed, but the company has not disclosed the exact details of the fixes.
This exploit chain illustrates the current challenges in securing language models. They are particularly vulnerable to prompt attacks and other newly developed hacking techniques. Rehberger emphasizes the novelty of these techniques, noting that they are "not yet two years old".
Experts urge companies that develop their own applications based on Copilot or other language models to pay close attention to these issues in order to avoid problems with data security and privacy.
Source