AI as a Lie Factory: A New Challenge for Disinformation Analysts

The study shows how LLMs create misinformation that is difficult to recognize.

Researchers at the Illinois Institute of Technology have found that misinformation generated by large language models (LLMs) is a more serious threat than human-written misinformation. Their research will be presented at the upcoming International Conference on Learning Representations.

The problem is compounded by the fact that LLMs are actively saturating the Internet with questionable content. For example, the analytics company NewsGuard has identified 676 sites that generate news with minimal human involvement and also tracks false narratives created using AI.

In the study, ChatGPT and open-source LLMs, including Llama and Vicuna, generated content based on human-created disinformation datasets such as PolitiFact, GossipCop, and CoAID. Eight LLM detectors then evaluated the human- and machine-generated samples. The LLM and human misinformation samples carried the same semantic details but differed in style and wording because of the different authors and the prompts used to generate the content. The researchers emphasize that this AI style of misinformation makes it harder to detect than human-written texts.
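A minimal sketch of the kind of evaluation loop described above: human-written and LLM-written samples are passed to a detector and the detection rate is compared per source. The `detect` function and the sample texts here are placeholders of my own, not the paper's detectors or data; in the actual study, each of the eight detectors was itself an LLM judging the samples.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    source: str  # "human" or "llm"

def detect(text: str) -> bool:
    """Placeholder detector: flags text containing an obvious cue phrase.

    A real detector would be an LLM or a fine-tuned classifier returning
    True when the text is judged to be misinformation.
    """
    return "miracle cure" in text.lower()

def detection_rate(samples: list[Sample], source: str) -> float:
    """Fraction of samples from the given source that the detector flags."""
    subset = [s for s in samples if s.source == source]
    flagged = sum(detect(s.text) for s in subset)
    return flagged / len(subset) if subset else 0.0

if __name__ == "__main__":
    # Toy samples with the same claim: the human version uses a blunt cue
    # phrase, the LLM version paraphrases it and slips past the heuristic.
    samples = [
        Sample("Scientists announce a miracle cure suppressed by regulators.", "human"),
        Sample("New findings suggest regulators concealed an effective treatment.", "llm"),
    ]
    for src in ("human", "llm"):
        print(f"{src}: detection rate = {detection_rate(samples, src):.2f}")
```

With this toy heuristic the human-written sample is flagged while the paraphrased LLM version is not, loosely mirroring the study's finding that machine-generated misinformation slips past detectors more easily.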

The authors identify four strategies for creating LLM disinformation: information paraphrasing, text rewriting, open generation, and information manipulation. They also note that LLMs can be instructed to write arbitrary misinformation without any reference source and can produce factually incorrect material as a result of internal error, which the industry calls an AI hallucination.
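The four-strategy taxonomy can be summarized as a small data structure, sketched below. The strategy names follow the article; the one-line descriptions are my paraphrases, not the paper's own definitions.

```python
from enum import Enum

class GenerationStrategy(Enum):
    """The four LLM disinformation strategies named in the study."""
    PARAPHRASING = "information paraphrasing"
    REWRITING = "text rewriting"
    OPEN_GENERATION = "open generation"
    MANIPULATION = "information manipulation"

# Paraphrased descriptions (not the paper's wording).
DESCRIPTIONS = {
    GenerationStrategy.PARAPHRASING: "restate an existing false claim in new wording",
    GenerationStrategy.REWRITING: "recast a source text in a different style or register",
    GenerationStrategy.OPEN_GENERATION: "produce misinformation with no reference source",
    GenerationStrategy.MANIPULATION: "alter factual details of an otherwise real source text",
}

for strategy in GenerationStrategy:
    print(f"{strategy.value}: {DESCRIPTIONS[strategy]}")
```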

In conclusion, the researchers call for a joint effort by multiple parties, including the scientific community, governments, web services, and the public, to combat the spread of LLM-generated misinformation. Such content poses a serious threat to Internet security and public trust, especially given the ease with which attackers can use LLMs to mass-produce deceptive material.
 