The AI assistant on your smartphone may turn out to be a traitor

Bard's developers advise users against treating generative neural networks as friends.

The integration of generative AI into the smartphone apps we use every day is rapidly gaining momentum. This was predictable: last year, ChatGPT demonstrated users' enormous appetite for such technologies, and now we are seeing a real boom in handheld AI assistants.

However, experts believe we should slow down: along with the convenience come serious security and privacy risks.

We all seem to share one weakness when it comes to generative AI chatbots. We may exercise caution when installing apps, granting permissions, choosing browsers, and sharing personal content on Facebook, but as soon as we start a conversation with a chatbot, we forget all those precautions. People feel they are talking to a helpful new friend who can be trusted with anything.

In reality, that friend is just the interface to a vast computing infrastructure, one ultimately funded by advertising and data trading.

With the announcement of Bard, which will soon be rebranded as Gemini, and a host of new apps, Google has asked all Android and iPhone users to be as careful as possible.

"Please do not mention confidential information or facts in your conversations that you do not want to share with any of the conversation reviewers. Or with Google, which can use this information to improve its own tools, services, and machine learning technologies, " the developers warn.

Fortunately, the company says that for the time being, conversations in Gemini Apps will not be used to serve targeted ads, although this may change in the future.

The real problem is that users share highly personal and confidential information with AI assistants that help them write business plans, sales presentations, or even cheat sheets for school. When a person leaves the chat, the questions they asked and the answers they received remain in internal databases, where they can potentially be viewed.

This scale of data collection and use is typical of the new generative AI sector as a whole. We need ways to assess the levels of security and privacy offered by the various companies in this space, such as Google or OpenAI.

An important difference will be whether the data is processed on the user's device or in the cloud. Apple is likely to lean on on-device processing within its smartphone apps and services, if that proves effective; the company is reportedly already running experiments comparing performance in both modes. Google, given its different infrastructure and priorities, is likely to rely more heavily on cloud computing.
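To make that distinction concrete, here is a minimal sketch in Python. All names in it are hypothetical, and no real Google or Apple API is implied; the point is only that the privacy difference is structural: in the on-device path the prompt never leaves the phone, while in the cloud path the provider necessarily receives a copy.

```python
# Illustrative sketch only. Every function and object name here is
# hypothetical; this is not a real vendor API.

def answer_on_device(prompt: str, local_model) -> str:
    # The prompt is processed entirely on the phone. Nothing is
    # transmitted, so there is no server-side copy to review or retain.
    return local_model.generate(prompt)

def answer_in_cloud(prompt: str, api_client) -> str:
    # The prompt travels over the network. The provider now holds a
    # copy that may be logged, reviewed by humans, or used for training,
    # regardless of what the user later deletes on the device.
    response = api_client.post("/v1/generate", json={"prompt": prompt})
    return response.json()["text"]
```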

Millions of Android and iPhone owners already face a difficult choice: whether or not to use new applications with integrated artificial intelligence, like Gemini. Soon, everyone will have to make that choice.

On the one hand, we cannot fully protect ourselves and our data while continuing to enjoy all the benefits of AI, even though huge progress has been made in recent years in protecting privacy on the web and in mobile applications. On the other hand, the opportunities that neural networks open up are so tempting that they are hard to give up. Each of us will have to weigh which matters more.

Google says long-term data storage can be disabled in the Gemini settings, but even then, conversations are retained in the database for 72 hours. We see similar problems in other services.
 