
AI as a disinformation tool: the spread of the technology has driven a 1,000% increase in fakes on the Internet

AI can populate websites on its own, with no human input, and this is already leading to political embarrassments.

According to a report by NewsGuard, an organization that tracks the spread of disinformation, the number of websites publishing fake articles created with artificial intelligence has grown by more than 1,000% since May 2023. NewsGuard identified 614 such sites operating with minimal or no human oversight, up from 49 previously.

Experts note that the arrival of generative AI tools has been a godsend for both "content farms" and purveyors of false information. Spreading propaganda or false narratives about elections, wars, and natural disasters is now easier than ever.

Previously, propaganda campaigns relied on armies of low-paid workers at so-called "troll farms". Now AI allows almost anyone – whether an intelligence agency or a teenage geek – to create similar resources.

Most people do not have strong skills for critically analyzing the news, which makes this development especially dangerous. Such websites often carry generic names, such as iBusiness Day, Ireland Top News, and Daily Time Update, which at first glance look like real news sites.

NewsGuard calls these unreliable news and information websites created with the help of artificial intelligence UAIN (Unreliable AI-Generated News) sites. It emphasizes that such sites operate with minimal or no human control and publish articles written mostly or entirely by bots, rather than reported and edited by journalists in the traditional way.

For example, one AI-generated article told a fabricated story about Benjamin Netanyahu's psychiatrist, claiming that he had died and left behind a note implicating the Israeli Prime Minister. Although the psychiatrist was fictional, the story was aired on an Iranian TV show and picked up by other media outlets.

A Chinese government-run website was also identified using AI-generated text to claim that the US operates a biological weapons laboratory in Kazakhstan that infects camels in order to endanger people in China.

NewsGuard points out that part of the blame for the rise of such sites lies with brands willing to advertise almost anywhere. The revenue model of these sites is often based on programmatic advertising, in which ad-tech platforms place ads automatically without regard to the quality or content of the site, so well-known brands end up unintentionally funding them.

To check whether the news you are reading is authentic, NewsGuard recommends watching for telltale signs, such as leftover error messages or other chatbot-specific phrases, which indicate that the content was generated by AI and published without editing.
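As a rough illustration of that kind of check, here is a minimal Python sketch that scans article text for a few common chatbot boilerplate phrases. The phrase list and the flagging logic are assumptions chosen for the example, not NewsGuard's actual methodology.

```python
import re

# Assumed examples of chatbot boilerplate that sometimes leaks into
# unedited AI-generated articles; this list is illustrative only.
CHATBOT_MARKERS = [
    r"as an ai language model",
    r"i cannot fulfill this request",
    r"i'm sorry, but i can't",
    r"as of my last knowledge update",
    r"regenerate response",
]

def find_ai_markers(text: str) -> list[str]:
    """Return the boilerplate phrases found in the article text."""
    lowered = text.lower()
    return [marker for marker in CHATBOT_MARKERS if re.search(marker, lowered)]

if __name__ == "__main__":
    sample = "As an AI language model, I cannot write an article about this event..."
    hits = find_ai_markers(sample)
    if hits:
        print("Possible unedited AI-generated content, markers found:", hits)
    else:
        print("No obvious chatbot markers (which alone proves nothing).")
```

The absence of such markers does not prove an article was written by a human; this only catches the sloppiest cases NewsGuard describes, where generated text is published without any editing.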

NewsGuard expects that many reliable news sites will soon start using AI tools as well, but that they will keep effective human oversight and will not churn out hundreds or thousands of articles per day.

Particularly worrisome is the rise of fake content sites in the run-up to the presidential election, as a flood of propaganda can easily affect the chances of a particular candidate. Social media is already awash with misinformation; in response, Meta has banned political campaigns from using its generative AI products for advertising.