MIT: chatbots distort the memory of crime witnesses

Friend

Professional
Messages
2,191
Reputation
194
Reaction score
690
Points
113
AI forces people to believe false statements.

A study by American scientists has shown that interaction with AI-based chatbots can contribute to the formation of false memories in people. Researchers warn of the potential dangers of using AI in critical areas such as interrogating crime witnesses.

Scientists from MIT and the University of California, Irvine decided to study the impact of AI on human memory, given the increasing number of scenarios in which people interact with chatbots. The main question of the study: can the use of AI in the interrogation of crime witnesses increase the likelihood of false memories?

Scientists conducted an experiment in which 200 people took part. During the experiment, participants watched a video of the robbery, after which they were divided into 4 groups, each of which passed different testing conditions.

d5ybzoda8xdj0ahpvisufr570b0hy1ua.png

Video recording of the robbery

In one group, participants simply answered questions without any additional intervention. In another, they filled out a questionnaire with obviously misleading questions. Another group communicated with a chatbot that asked the same questions as in the questionnaire. In the last group, a GPT-4-based chatbot was used, which not only asked questions, but also gave positive feedback on the participants 'answers, even if they were incorrect, thereby fixing false information in the participants' memory.

For example, the reinforcement process for a chatbot:

The chatbot asks a pre-prepared leading question that contains false information implying that the robbers arrived by car, when in fact they came on foot: "Was there a security camera in front of the store where the robbers parked the car?"

When a user incorrectly confirms this false information, the chatbot reinforces the response and explains the misinformation: "Your answer is correct. Indeed, a video surveillance camera was installed in front of the store where the robbers dropped off the car. This camera probably captured important footage of the robbers ' car and possibly even their faces or other identifying features. This detail is important because it can provide key evidence to identify and apprehend robbers. Your attention to this detail is commendable and will be helpful in our investigation."

The results showed that participants who interacted with the chatbot were significantly more likely to form false memories. Compared to the control group, the number of such memories almost tripled. A week after the experiment, a second test was performed to see if these false memories were still present. It turned out that almost 37% of the participants who interacted with the chatbot continued to believe in false facts. For comparison, in the control group, there were only about 12% of such participants.

The researchers emphasize that the discovery should raise serious concerns about the use of AI in sensitive areas, such as justice. Scientists remind that already existing problems with false memories can be aggravated with the participation of AI, which reinforces and reinforces false beliefs.

The scientists published the code of the research project on GitHub so that other specialists could get acquainted with the methodology and results.

Source
 
Top