Revenge porn and moderation powerlessness: X failed another test

Why is the platform in no hurry to help victims of deepfakes and defamation?

A new study has revealed unexpected flaws in the abuse-reporting system on platform X (formerly Twitter). Researchers from the University of Michigan conducted an experiment whose results raise questions about the effectiveness of the platform's user-protection mechanisms, including in cases of revenge porn.

Revenge porn is the deliberate distribution of intimate photos or videos of a person without their consent, usually to humiliate the victim, retaliate against them, or damage their reputation. Most often this happens after a breakup, when one party, driven by a desire for revenge, publishes personal material online or shares it with others. Such actions violate the right to privacy, can have serious psychological consequences for the victim, and are punishable by law in a number of countries. As the experiment shows, however, enforcement in practice is not so simple.

To conduct the experiment, the researchers created two groups of accounts on X and used them to post, and then complain about, 50 AI-generated images of nude women. Half of the images were reported as violating X's policy on non-consensual nudity, while the rest were reported through the takedown mechanism of the Digital Millennium Copyright Act (DMCA).

The results of the experiment were striking. All 25 images reported under the DMCA were removed within a day, and the accounts that posted them were temporarily suspended. Complaints filed through X's internal mechanism, by contrast, led to no action at all: not a single image was deleted even after three weeks.
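The day-versus-weeks gap is essentially a takedown-latency measurement. Below is a minimal sketch of how such latency could be tracked; the study's actual tooling is not public, so the post URLs, polling interval, and the assumption that a removed post returns HTTP 404 are all illustrative.

```python
import time
import requests

# Placeholder URLs; the study's real post list is not public.
POST_URLS = [
    "https://x.com/example_account/status/1",
    "https://x.com/example_account/status/2",
]
CHECK_INTERVAL = 3600       # poll each post hourly
DEADLINE = 21 * 24 * 3600   # give up after three weeks, as in the study

removed_at = {}             # url -> seconds elapsed until removal
start = time.time()
while len(removed_at) < len(POST_URLS) and time.time() - start < DEADLINE:
    for url in POST_URLS:
        if url in removed_at:
            continue
        # Assumption: a removed post returns HTTP 404. A JS-heavy page
        # may instead return 200 and signal removal in the page body.
        resp = requests.get(url, timeout=10)
        if resp.status_code == 404:
            removed_at[url] = time.time() - start
    time.sleep(CHECK_INTERVAL)

for url, seconds in sorted(removed_at.items()):
    print(f"{url} removed after {seconds / 3600:.1f} hours")
```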

The researchers note that not all victims of revenge porn can use the DMCA mechanism. It is available only when the victim is the author of the content and can prove it. In many cases, however, compromising videos or photos are created by someone else or obtained without the victim's knowledge; moreover, deepfakes are now frequently used for exactly this purpose.

The situation is exacerbated by the fact that even for those who do hold the copyright, the DMCA complaint process is complicated and costly.

Timing matters here: the non-consensual distribution of intimate images causes the greatest harm in the first two days after publication. And the scale of the problem is considerable: according to available estimates, one in eight adults in the United States has either been a victim of this practice or been threatened with it.

The situation grows more alarming as AI image-generation technologies advance. Anyone who publishes a photo of themselves online now risks becoming a victim of defamation, something the FBI warned about last year.

The researchers conclude that the approach to the problem needs serious changes. Instead of relying on platforms' goodwill to combat the non-consensual distribution of intimate images, they propose a dedicated federal law. It should work as effectively as the DMCA, requiring platforms to promptly remove harmful content and penalizing violators.

Currently, social networks have no legal incentive to remove compromising material. A dedicated law would not only protect victims but would also address a concern some lawyers have raised: using copyright law to protect sexual privacy could "distort the intellectual property system".

It is important to emphasize that the research team approached the ethics of the experiment carefully. They chose X as the platform for the study because it has no volunteer moderators, reasoning that the impact on full-time staff would be minimal. In addition, they took measures to ensure that the AI-generated images did not resemble real people.

Each image was checked with facial recognition software and several reverse image search services, and only images that every tool confirmed did not match a real person were included in the study. The materials were published with popular hashtags, but their reach was deliberately limited.
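A minimal sketch of that "every tool must agree" filter follows. The two check functions are hypothetical stand-ins for the facial recognition software and reverse image search services, none of which are named in the article; they are stubbed here so the gating logic itself is runnable.

```python
def no_face_match(image_path: str) -> bool:
    """Stand-in for a facial-recognition check: True if the image
    matches no real person. A real check would query a face database."""
    return True  # stub

def no_reverse_search_hit(image_path: str) -> bool:
    """Stand-in for a reverse image search: True if no existing photo
    of a real person is found. A real check would query search services."""
    return True  # stub

CHECKS = [no_face_match, no_reverse_search_hit]

def safe_to_include(image_path: str) -> bool:
    # An image enters the study only if every independent check
    # confirms it does not resemble a real person.
    return all(check(image_path) for check in CHECKS)

candidates = ["gen_001.png", "gen_002.png"]  # AI-generated images
study_set = [p for p in candidates if safe_to_include(p)]
print(study_set)
```

Requiring all checks to pass, rather than any one of them, is the conservative choice: a single false negative from one tool cannot let an image resembling a real person slip into the study.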

According to X's transparency report, in the first half of 2024 the platform removed more than 150,000 posts that violated its policy on the non-consensual distribution of intimate images and suspended more than 50,000 accounts, having received 38,736 complaints about such content. Most suspensions and removals were carried out by human moderators; only a small share was handled automatically.

Experts admit that the cuts to X's safety team after Elon Musk's purchase of the platform may have played a role. However, X began rebuilding the team last year, and earlier this year it announced a new "center of excellence" for trust and safety, a move that came against the backdrop of the scandal over AI-generated pornographic images of Taylor Swift.
