Data security remains a weak point for companies when interacting with AI.
According to a recent study by CybSafe and the National Cyber Security Alliance (NCA), 38% of employees share sensitive work information with artificial intelligence tools without their employer's permission. The practice is especially common among younger generations: 46% of Gen Z and 43% of millennial respondents admitted to sharing work data with AI without management's knowledge. For the study, CybSafe surveyed more than 7,000 people across the United States, the United Kingdom, Canada, Germany, Australia, India, and New Zealand.
The survey also found that 52% of employed respondents had received no training on the safe use of AI. Among students the figure is even higher, at 58%, and 84% of the unemployed and 83% of retirees had likewise received no training in the safe handling of AI.
Oz Alashe, CEO and founder of CybSafe, noted that the emergence of AI has created a new category of risks for information security and for business as a whole. While security professionals are aware of the threats AI poses, he said, that awareness does not always translate into the right behavior among employees.
Ronan Murphy, a member of the Irish Government's AI Advisory Council, believes that giving AI tools access to organizations' data poses the "biggest threat" ever to cybersecurity, governance, and compliance. Before embedding AI in their workflows, Murphy stresses, companies need to make sure their data is properly prepared.
About 65% of respondents expressed concern about the use of AI in cybercrime, including the creation of more convincing phishing emails. More than half (52%) believe AI will make fraud harder to detect, and 55% expect AI technologies to make it harder to use the Internet safely.
Attitudes toward companies adopting AI were split: 36% of respondents expressed high confidence in how organizations are using AI, 35% expressed low confidence, and the remaining 29% were undecided.
Only 36% of respondents believe companies ensure the impartiality of their AI technologies, while 30% doubt it. Opinions were similarly divided on recognizing AI-generated content: 36% expressed high confidence in their ability to identify such content, while 35% rated their confidence as low.
The adoption of artificial intelligence poses new challenges for companies in data security and in earning the trust of employees and the public. To reap the full benefits of AI while minimizing risk, businesses need to focus on training staff, establishing transparent AI policies, and ensuring that sensitive information is handled securely.
Source