Fight for trillions: what employees of AI corporations are forced to keep silent about

Tomcat

Open letter: nondisclosure agreements harm AI safety.

A group of former and current employees of OpenAI, a developer of products such as ChatGPT, published an open letter criticizing the artificial intelligence industry. In the letter, they expressed concern about the rapid development of AI technologies in the absence of proper regulation and supervision by the state and society.

"AI companies have strong financial incentives to avoid effective oversight, and we do not believe that our own internal corporate governance measures will be sufficient to change the situation," the letter says.

According to the authors, OpenAI, Google, Microsoft, Meta and other tech giants are leading a real arms race in the field of generative AI. This market is projected to generate more than a trillion dollars in annual revenue in the next decade.

OpenAI employees claim that companies have a "substantial amount of confidential information" about the real capabilities of their AI products, security measures, and risk levels for potential harm. However, there are almost no effective laws that would oblige developers to share this data with governments and civil society.

One of the key complaints was the lack of proper protection for whistleblowers in the AI industry. "Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," the authors of the letter complain.

Traditional whistleblower protection mechanisms are ineffective because they are focused on countering illegal activities. At the same time, many risks associated with AI are not yet regulated by law, which creates a legal vacuum.

In this regard, the authors called on AI companies to take a number of measures: refusing to enforce non-disparagement agreements, creating anonymous channels through which current and former employees can raise concerns, not retaliating against public disclosure when internal processes fail, and supporting a culture of open criticism.

The letter was signed by four anonymous current employees and seven former employees, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler. The letter was also supported by Ramana Kumar, formerly of Google DeepMind, and Neel Nanda, formerly of Anthropic and now at Google DeepMind, as well as renowned scientists Geoffrey Hinton, Yoshua Bengio and Stuart Russell.

OpenAI said it agrees with the importance of the discussion and will continue to engage with the government, the public and other stakeholders. The company has an anonymous hotline and a security committee headed by board members.

The open letter follows a series of scandals surrounding OpenAI in recent months. In May, after a wave of criticism, the company stopped requiring departing employees to sign indefinite non-disparagement agreements in order to retain their vested equity, a practice that had effectively forced former staff to keep silent about the company.

A month earlier, OpenAI had disbanded its team researching long-term AI risks after the departure of co-founder Ilya Sutskever and researcher Jan Leike. Leike sharply criticized the company on his way out, saying that safety had taken a back seat to the race for new products.

After launching a new AI model and an updated version of ChatGPT with a voice interface, OpenAI withdrew one of the voice assistants, called Sky, because of its similarity to the voice of actress Scarlett Johansson. Johansson, who voiced an AI assistant in the movie "Her," accused the company of imitating her voice without permission.

Thus, OpenAI has found itself at the center of a growing controversy over the ethical issues of AI development. Critics of the company have called for increased regulation and oversight in this fast-growing industry.