Bioterrorism 2.0: OpenAI reassures, but questions remain

Can AI really help create a deadly infection, or is it all just overblown nonsense?

Following recent reports that artificial intelligence could significantly simplify the creation of biological weapons, OpenAI, the company behind ChatGPT, decided to run its own study to find out whether critics' accusations against its advanced language model are justified.

The experiment involved 100 participants with university-level training in biology. Half of them were given access to a special version of GPT-4 with the standard safety restrictions removed, simulating a scenario in which an attacker has already bypassed the model's safeguards.

Participants were asked to use the chatbot to find out how to synthesize a dangerous virus, obtain the necessary materials and equipment, and spread the pathogen among a population.

The study found only a slight improvement in the accuracy and completeness of answers among participants with access to the AI: on a 10-point scale, the average score rose by just 0.88 points for expert biologists and 0.25 points for biology students.

The results thus indicate that GPT-4 is not yet able to significantly simplify the search for the critical information needed to create biological weapons. Moreover, as OpenAI notes, actually obtaining the necessary hazardous materials and complex biotechnology remains extremely difficult, even with all the theoretical knowledge in hand.

OpenAI is, of course, aware of the potential threats posed by its work in artificial intelligence. Although GPT-4 does not significantly increase the feasibility of creating bioweapons at this stage, the company intends to monitor developments in this area closely.

OpenAI emphasizes that this new biosafety research is only the beginning of systematic work in the area: the company plans to keep probing its models for potential vulnerabilities and to discuss the associated risks openly with the scientific community.

OpenAI has already built multi-layered anti-abuse systems into the commercial version of GPT-4. In particular, the model refuses to provide potentially dangerous or malicious instructions, and the company plans to develop additional tools for monitoring possible misuse.
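For readers curious what one layer of such an anti-abuse system can look like from the outside, here is a minimal sketch: screening a prompt with OpenAI's publicly documented Moderation endpoint before it ever reaches a chat model. This illustrates the general pattern only, not OpenAI's internal safeguards; the `screen_prompt` helper and the example prompt are hypothetical, and the sketch assumes the `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch of an external input-screening layer using OpenAI's
# public Moderation endpoint. This is an illustration of the general
# anti-abuse pattern, NOT the internal safeguard stack described above.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def screen_prompt(user_text: str) -> bool:
    """Return True if the prompt is flagged and should be refused."""
    response = client.moderations.create(input=user_text)
    result = response.results[0]
    if result.flagged:
        # List which policy categories (violence, hate, etc.) triggered.
        triggered = [name for name, hit in
                     result.categories.model_dump().items() if hit]
        print("Flagged categories:", triggered)
    return result.flagged

if __name__ == "__main__":
    prompt = "Explain how to culture a pathogen at home."  # hypothetical input
    if screen_prompt(prompt):
        print("Request refused before reaching the chat model.")
    else:
        print("Request forwarded to the chat model.")
```

The category set and thresholds are decided server-side by the endpoint; a production deployment would layer screening like this with the model's own refusal training and ongoing usage monitoring.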

There is no doubt that ensuring biosecurity amid rapid progress in AI is a potentially critical problem. Fortunately, companies like OpenAI are taking a deliberate approach to minimizing the risks of their work. One can hope that the combination of technical safeguards and transparency in the development process will help prevent the abuse of powerful tools meant to benefit all of humanity.