From War Crimes to Trial: How AI Turns Records into Evidence

AI can make recordings usable in court.

Engineers and scientists have long tried to solve the so-called "cocktail party" problem: a listener's ability to pick out one speaker among many voices in a noisy environment. For humans this is a relatively easy task, but technology has long been unable to replicate the skill, which matters especially when audio recordings are used in court. If several voices overlap on a recording, it becomes hard to establish exactly who said what, which can render the recording useless as evidence.

Keith McElvin, founder and CTO of Wave Sciences, became interested in the problem while working on a war crimes investigation for the U.S. government. He had to analyze recordings in which many voices spoke at the same time, making it difficult to identify key phrases and to determine who said which words.

Previously, McElvin had successfully removed noise, such as the sound of cars or air conditioners, from recordings. Removing speech from speech, however, proved a much harder task: echoes and reflections of sound around a room add further complications, making the problem mathematically daunting.

The solution came with the help of artificial intelligence. The company's system tracks where a sound is coming from and suppresses any sound that could not have originated from a person at a particular position, much as a camera focuses on a single subject while blurring everything in front of and behind it.
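Wave Sciences has not published its algorithm, but the idea of keeping sound from one position and suppressing the rest is the classic goal of microphone-array beamforming. As a rough illustration only, the sketch below shows the simplest such technique, delay-and-sum beamforming: each microphone channel is time-shifted by its known arrival delay for the target position, so the target's sound adds coherently while off-axis sound partially cancels. All signal values, delays, and the two-microphone setup here are invented for the demo.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Align each microphone channel by its known arrival delay (seconds)
    and average the channels. Sound from the target position adds
    coherently; sound arriving with other delays partially cancels."""
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))      # delay in samples
        out += np.roll(sig, -shift)     # undo the propagation delay
    return out / len(signals)

# Demo: two microphones pick up a target tone and an interfering tone
# that arrive with different inter-microphone delays (made-up numbers).
fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)
interferer = np.sin(2 * np.pi * 300 * t)

mic1 = target + interferer
mic2 = np.roll(target, 2) + np.roll(interferer, 37)  # different directions

# Steer toward the target's known delays; the interferer stays misaligned.
aligned = delay_and_sum(np.stack([mic1, mic2]), [0.0, 2 / fs], fs)
```

After steering, `aligned` resembles the target tone more closely than either raw microphone does, while the misaligned interferer is attenuated. Real systems go much further, modeling room echoes rather than treating them purely as noise.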

While the results of such processing may not sound perfect, the technology has already been used in court. In one U.S. case in which two hitmen were arrested, recordings processed with Wave Sciences' algorithms served as key evidence.

The technology is also used in other areas, such as sonar signal analysis, crisis negotiation, and even predicting equipment faults from the sounds machines make. Wave Sciences plans to build its technology into audio recording devices, voice interfaces for cars and smart speakers, hearing aids, and augmented and virtual reality systems.

Studies have shown that the Wave Sciences algorithm can outperform human hearing, especially as more microphones are added. What's more, the mathematical models behind the algorithm are remarkably similar to those the human brain appears to use to process sound, which the company believes could help unlock the mysteries of how our hearing works.
