A simple way to bypass illegal-content detection systems has been presented

Existing perceptual hashing algorithms are not reliable enough to combat the spread of illegal content.

A team of researchers at Imperial College London has presented a simple method for bypassing Apple's Child Sexual Abuse Material (CSAM) detection system.

As reported in August of this year, Apple announced CSAM, a system for scanning iPhone users' photos for illegal images. The scanning was intended to run directly on users' devices, and if suspicious content was found in a photo library, the images were to be sent to the company for verification.

CSAM caused great concern among human rights advocates, and in September Apple decided to postpone its rollout to users' devices until 2022, promising to improve the system and make its development process more transparent.

CSAM works by matching hashes of images privately shared by iOS device users against a database of hashes supplied by the US National Center for Missing and Exploited Children and other child-protection organizations. If a match is found, Apple examines the content and notifies the authorities about the distribution of child pornography.
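For readers unfamiliar with perceptual-hash matching, the sketch below shows the general principle using the open-source Python `imagehash` library: compute a 64-bit perceptual hash of a photo and compare it against a database of known hashes by Hamming distance. The hash values and the distance threshold here are made up for illustration, and Apple's actual system uses its own NeuralHash algorithm rather than `imagehash`.

```python
# Minimal sketch of generic perceptual-hash matching (not Apple's NeuralHash).
from PIL import Image
import imagehash

# Hypothetical database of 64-bit perceptual hashes of known illegal images,
# stored here as hex strings purely for illustration.
KNOWN_HASHES = {
    imagehash.hex_to_hash("d1d1b4b4f0e0c0c1"),
    imagehash.hex_to_hash("ffd8b1a09088c4c2"),
}

MATCH_THRESHOLD = 4  # maximum Hamming distance treated as a match (assumed)


def is_flagged(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))  # 64-bit DCT-based hash
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)


if __name__ == "__main__":
    print(is_flagged("photo.jpg"))
```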

However, according to the experts at Imperial College London, neither CSAM nor any other system of this type can reliably detect illegal material, and such systems are easy to circumvent. The researchers demonstrated that in 99.9% of cases it is possible to deceive illegal-image detection algorithms without perceptibly altering the image.

The idea is to apply a special filter to a photo that changes its perceptual hash: the image looks different to CSAM, although nothing changes for the human eye.
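As an illustration only, the sketch below shows the general evasion idea: keep nudging pixel values with small random noise until the perceptual hash drifts away from the original while the picture stays visually the same. The library, the distance threshold and the noise strategy are all assumptions; the researchers' actual attacks use carefully crafted filters rather than random noise, which makes them far more effective than this crude stand-in.

```python
# Illustrative-only evasion sketch: random noise may fail where the paper's
# targeted "hash filter" succeeds; it only demonstrates the principle.
import numpy as np
from PIL import Image
import imagehash

EVASION_DISTANCE = 4  # assumed Hamming distance at which matching breaks down


def evade(path: str, out_path: str) -> None:
    original = Image.open(path).convert("RGB")
    reference_hash = imagehash.phash(original)
    pixels = np.asarray(original, dtype=np.int16)
    rng = np.random.default_rng(0)

    # Gradually increase the noise amplitude until the hash has drifted far
    # enough from the original image's hash.
    for amplitude in range(1, 33):
        noise = rng.integers(-amplitude, amplitude + 1, size=pixels.shape)
        candidate = np.clip(pixels + noise, 0, 255).astype(np.uint8)
        candidate_img = Image.fromarray(candidate)
        if imagehash.phash(candidate_img) - reference_hash >= EVASION_DISTANCE:
            candidate_img.save(out_path)
            return
    raise RuntimeError("no evading perturbation found within the noise budget")
```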

The researchers presented three attacks on algorithms based on the discrete cosine transform (DCT) that can successfully modify an image's unique signature on the device and help it bypass detection.
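To make "DCT-based" concrete, here is a minimal from-scratch sketch of a pHash-style 64-bit hash: shrink the image to 32x32 grayscale, take a 2-D discrete cosine transform, keep the 8x8 block of lowest frequencies, and derive one bit per coefficient by comparing it with the median. The exact algorithms attacked in the paper may differ in details such as block size and normalisation.

```python
# From-scratch sketch of a DCT-based 64-bit perceptual hash (pHash-style).
import numpy as np
from PIL import Image
from scipy.fftpack import dct


def dct_hash(path: str) -> int:
    # 1. Reduce the image to 32x32 grayscale to discard high-frequency detail.
    img = Image.open(path).convert("L").resize((32, 32))
    pixels = np.asarray(img, dtype=np.float64)

    # 2. 2-D discrete cosine transform (rows, then columns).
    coeffs = dct(dct(pixels, axis=0, norm="ortho"), axis=1, norm="ortho")

    # 3. Keep the 8x8 block of lowest frequencies; each coefficient contributes
    #    one bit depending on whether it is above the block's median.
    low = coeffs[:8, :8]
    bits = (low > np.median(low)).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```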

The researchers also presented several ways to defend against the attacks they described. One is to use a higher detection threshold, but this leads to an increase in the number of false positives.
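A rough back-of-the-envelope calculation illustrates that trade-off, under the idealised assumption that hashes of unrelated images behave like uniformly random 64-bit strings: the probability that an innocent image falls within Hamming distance t of a given database hash grows rapidly as t is raised.

```python
# Probability that a uniformly random 64-bit hash lands within Hamming
# distance t of one fixed database hash (idealised model, not real data).
from math import comb


def false_match_probability(t: int, bits: int = 64) -> float:
    return sum(comb(bits, k) for k in range(t + 1)) / 2 ** bits


for t in (0, 4, 8, 16):
    print(t, false_match_probability(t))
```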

The second is to flag users only after the number of matched image IDs reaches a certain threshold, but this introduces probabilistic complications.
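The following sketch shows why a per-user match threshold helps, under the simplifying (and unrealistic) assumption that false matches are independent events with a fixed per-image probability; all of the numbers are illustrative, not Apple's.

```python
# Binomial-tail estimate of wrongly flagging an account that needs at least
# k matched photos out of n, each matching falsely with probability p.
from math import comb


def flag_probability(n_photos: int, p: float, k: int) -> float:
    return 1.0 - sum(
        comb(n_photos, i) * p ** i * (1 - p) ** (n_photos - i)
        for i in range(k)
    )


print(flag_probability(n_photos=10_000, p=1e-6, k=1))   # flag on any single match
print(flag_probability(n_photos=10_000, p=1e-6, k=30))  # require 30 matches
```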

Applying additional transformations to the image before calculating its perceptual hash is also unlikely to improve detection efficiency.

In some cases it would be possible to increase the hash size from 64 to 256 bits, but this would create privacy problems, as longer hashes encode more information about the image.

Overall, the study demonstrates that existing perceptual hashing algorithms are not robust enough to combat the proliferation of illegal content.
 