Ecovacs robot vacuums, already known to suffer from serious cybersecurity vulnerabilities, have been found to collect photos, videos, and voice recordings from users' homes to train the company's AI models. Ecovacs, the Chinese maker of the popular Deebot line, says users join its product improvement program voluntarily, yet when a device is connected through the smartphone app, the consent prompt does not specify what data will be collected. The details are available only through a link that does not appear on that page.
According to Ecovacs' privacy policy, the company may collect 2D and 3D maps of homes, voice recordings from device microphones, and images from device cameras for research purposes. Even after users delete such data, it can remain on Ecovacs servers and be used for further development. Company representatives confirmed that the collected data is used to train AI models, claiming the information is anonymized at the device level.
Cybersecurity experts, however, doubt Ecovacs' ability to protect such data. Researcher Dennis Giese previously identified vulnerabilities that allowed remote access to the vacuums' cameras, calling into question the security of users' private information. He also stressed that even if a company has no malicious intent, it can fall victim to industrial espionage or state-sponsored attacks.
Ecovacs, valued at an estimated $4.6 billion, promised to patch the discovered issues in its flagship models as early as November. The company also said that data collected under the product improvement program is anonymized before being sent to its servers, and that access to it is restricted by strict management protocols.
Images captured by robot vacuums have leaked before. In 2022, intimate photos taken by iRobot devices surfaced on Facebook, triggering a wave of criticism over the security of data collected by such devices. The leaks were traced to contractors hired to analyze the collected data. One of the companies involved, Scale AI, specializes in producing training data for algorithms, and its contractors had previously posted user images to social media.
Meanwhile, researchers at the Australian Robotics Centre have developed a technology that could prevent such incidents: it alters how the robot's camera works, rendering images unrecognizable to humans while preserving enough information for navigation. The approach could significantly improve privacy in future devices, but it is not yet ready for mass production.
Source