Elon Musk's New Chatbot risks becoming a hotbed of misinformation

Lord777

The Grok model launched by xAI may well lead the company into lawsuits over time.

Cybersecurity experts have expressed concerns about Grok, the artificial intelligence model developed by Elon Musk's xAI startup. In their view, this AI, despite its unique capabilities, could become a tool for abuse by cybercriminals and others with malicious intent.

The Grok model, which xAI says is modeled after Douglas Adams's science fiction novel "The Hitchhiker's Guide to the Galaxy," is designed to answer questions with sarcasm and humor and can even suggest questions to ask it.

However, the real "killer feature" of the new model is that it draws information about the outside world from the X platform (formerly Twitter), which automatically gives the AI up-to-date information, unlike ChatGPT, whose knowledge is still cut off at April 2022, even in the Plus version.

However appealing this option may be, researchers note that using X user data to train the model is controversial and can spread bias and inaccurate information. This is especially worrying against the backdrop of the Palestinian-Israeli conflict, which is actively discussed on the platform.

Elon Musk, known for criticizing large technology companies over their approach to AI and censorship, previously announced his intention to launch an AI focused on seeking truth and understanding the universe. However, his company's approach of training Grok on data containing clear biases and misinformation raises concerns.

AppOmni's Joseph Tucker says that while Grok may seem more "human" because of the nature of its training data, that same data increases the risks of toxicity and bias in the model. Such characteristics are common in social-network data and can influence the AI's behavior.

Grok attracts attention with its "sharp" responses, but over time this could lead to ethical and legal problems, for example if the AI is asked for information about illegal actions or sensitive topics.

Despite the convenience of access to up-to-date X data, which can give Grok an advantage in situations requiring a timely response, the risk of spreading false news and misinformation remains high.

According to Joseph Tucker, Grok also lags behind other modern AI systems, such as Google's Bard, which offer broader functionality, including web indexing and integration with data-analysis tools.