The tech giant reveals a secret weapon against digital blackmail.
Microsoft has updated its policy to combat the non-consensual distribution of intimate images, focusing on the threats posed by the abuse of generative AI. In recent years, fake digital imagery such as deepfakes has grown to alarming proportions. Such material is most often used as an instrument of pressure and blackmail, and it disproportionately affects women and teenagers.
The company has published a set of recommendations for lawmakers that emphasizes the need to modernize legislation to protect victims, especially women and children. Among other steps, Microsoft is rolling out new safeguards on its own platforms to prevent the spread of such images.
One of the key innovations is a partnership with the StopNCII platform, which helps users protect their images from illegal distribution. The platform lets users create digital "fingerprints" (hashes) of their images, which are then used to detect copies online. Microsoft already applies this technology in its Bing search engine, preventing matched images from appearing in search results.
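The fingerprint-matching idea described above can be sketched as follows. This is a minimal illustration only: it uses a toy "average hash" over an 8x8 grayscale grid, whereas production systems like StopNCII rely on far more robust perceptual hashing; the function names and the distance threshold here are illustrative assumptions, not Microsoft's implementation.

```python
def average_hash(pixels):
    """Build a 64-bit fingerprint from an 8x8 grayscale grid.

    Each bit is 1 if the pixel is brighter than the image's mean.
    pixels: list of 8 rows of 8 ints in 0..255.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a near-duplicate."""
    return bin(h1 ^ h2).count("1")

# A reported image is hashed, and only the hash is shared -- never the image.
reported = [[16 * (r + c) % 256 for c in range(8)] for r in range(8)]
blocklist = {average_hash(reported)}

# A slightly altered copy (one pixel tweaked) still matches the blocklist.
altered = [row[:] for row in reported]
altered[0][0] += 5
candidate = average_hash(altered)
is_match = any(hamming_distance(candidate, h) <= 10 for h in blocklist)
print(is_match)  # True: near-duplicate detected without storing the image
```

The key design point, which the article alludes to, is privacy: a service only ever stores and compares hashes, so it can recognize copies of a reported image without ever holding the image itself.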
By the end of August, the company had taken action against 268,899 images. The cooperation with StopNCII is expected to expand, with the technology deployed across a wider range of platforms.
Microsoft also points to its general policy against the creation and distribution of intimate images without consent, which applies to both real and AI-generated images. The company strictly prohibits any threats to distribute such material and blocks it across all of its services.
Victims of non-consensual image distribution are encouraged to use Microsoft's reporting portal to request removal of such material. This applies to both photos and videos, including AI-generated content.
The company emphasizes that the problem of AI abuse concerns not only adults but also children. To that end, Microsoft works with nonprofit organizations to combat the spread of content involving the exploitation of minors and encourages users to report such incidents.
Microsoft continues to develop these initiatives in collaboration with industry leaders and experts around the world to find effective solutions and support those affected. The company also participates in new working groups aimed at developing best practices for preventing AI abuse.
In this way, Microsoft demonstrates its commitment to protecting users from harmful content, including material created with modern generative technologies, and continues to improve its existing safeguards.
Source