A new era of cybersecurity: AI protected by international protocols

Brother

Professional
Messages
2,565
Reputation
3
Reaction score
363
Points
83
The combined efforts of 18 countries are designed to stop the abuse of artificial intelligence.

The United Kingdom and the United States, in collaboration with partners from 16 other countries, presented new recommendations for creating secure artificial intelligence systems.

The US Cybersecurity and Infrastructure Security Agency (CISA) emphasized that the new approach includes ensuring security based on the interests of users , radical transparency and responsibility, as well as creating organizational structures in which safe design is a priority.

The purpose of these recommendations, as the National Cyber Security Center of the United Kingdom (NCSC) added, is to increase the level of AI security and ensure the secure development, implementation and use of this technology.

The guidance builds on the U.S. government's ongoing efforts to manage AI-related risks, which includes thoroughly testing new tools before they are publicly released, having safeguards in place to prevent risks such as bias and discrimination, and privacy concerns. In addition, reliable methods are being introduced to identify materials created with the help of AI.

Companies are also required to facilitate third-party detection and reporting of vulnerabilities in their AI systems through Bug Bounty programs.

According to the NCSC, the new guidelines will help developers integrate AI security into the development process from the very beginning, which includes safe design, development, implementation and support, covering all important areas in the life cycle of AI systems. Organizations should regularly model threats to their systems, as well as ensure the security of supply chains and infrastructure.

The agencies goal is also to combat hostile attacks aimed at AI systems and machine learning, which can cause undesirable behavior of models in various ways to perform unauthorized actions and extract confidential information.

As the NCSC notes, there are many ways to achieve these effects, including query injection attacks in large language models (LLMs), or deliberate distortion of training data or user feedback, known as "data poisoning."

The security of artificial intelligence is a critical aspect for the entire modern society. New international guidelines for creating secure AI systems are definitely the right step to implement a universal standard for protecting and preventing abuse of new technologies.
 
Top