Biden creates secret AI security alliance: Who's in it and why?

More than 200 industry leaders will help set new rules for the world of AI models.

The Biden administration has announced the creation of the U.S. AI Safety Institute Consortium (AISIC), the first body of its kind dedicated to AI safety. The move follows the appointment of a director for the new U.S. AI Safety Institute (USAISI) at NIST.

The AISIC consortium brings together more than 200 companies and organizations, including major technology firms such as Google, Microsoft, and Amazon; leading developers of large language models such as OpenAI, Cohere, and Anthropic; as well as research laboratories, civil society and academic groups, state and local governments, and non-profit organizations.

According to a NIST blog post, AISIC will be the largest gathering of testing and evaluation teams ever assembled and will help lay the foundations for a new measurement science in AI safety. The consortium will operate under the auspices of USAISI and contribute to the priority actions outlined in the President's Executive Order on AI, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking of generated content.

The creation of the consortium was announced on October 31, 2023. Participation is open to any interested organization that can contribute expertise, products, data, and/or models to the consortium's activities. Selected members are required to pay an annual fee of $1,000.

According to NIST, Consortium members will contribute to the following priority activities:
  • Develop new guidelines, tools, methods, protocols, and best practices to advance industry standards for developing or deploying AI in safe, secure, and trustworthy ways;
  • Develop guidance and benchmarks for identifying and evaluating AI capabilities, with a particular focus on capabilities that could potentially cause harm;
  • Develop approaches for incorporating secure development practices for generative AI, including special considerations for dual-use foundation models, including:
      • Guidance on assessing and managing the safety, security, and trustworthiness of models, as well as privacy-preserving machine learning;
      • Guidance on ensuring the availability of testing environments;
  • Develop and ensure the availability of testing environments;
  • Develop guidance, methods, skills, and practices for successful red-teaming and privacy-preserving machine learning;
  • Develop guidance and tools for authenticating digital content;
  • Develop guidance and criteria for AI workforce skills, including risk identification and management; testing, evaluation, validation, and verification (TEVV); and domain-specific expertise;
  • Explore the complexities at the intersection of society and technology, including the science of how people understand and interact with AI in different contexts;
  • Develop guidance for understanding and managing the interdependencies among AI actors across the AI lifecycle.

Despite the November announcement of the AI Safety Institute and its accompanying Consortium, there is still little information about how the institute will operate or where its funding will come from, especially given that NIST itself is underfunded.

It is worth noting that in January, a bipartisan group of senators asked Senate appropriators to allocate $10 million to help establish USAISI, though it is not yet clear where that funding request stands. In addition, members of the House of Representatives sent a letter to NIST criticizing the agency for its lack of transparency and for failing to announce a competitive process for the planned research grants related to USAISI.