Google has shown that an AI model can bridge the gap between machine analysis and traditional vulnerability-hunting tools.
Google reported that its AI model has, for the first time, discovered a memory-safety vulnerability under real-world conditions: a stack buffer underflow in SQLite, fixed before the vulnerable code ever shipped in a release.
Big Sleep, an LLM-based tool for finding bugs, was developed in collaboration with Google DeepMind. According to the company, it is an evolution of the earlier Naptime project, presented in June.
SQLite, a popular open-source database engine, contained a flaw that could allow attackers to crash the process or even execute arbitrary code. The bug occurred when the value -1 was used as an array index. Debug builds of the program contained a check that caught such values, but release builds have no such mechanism.
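To make the failure mode concrete, here is a minimal C sketch of the pattern described above (illustrative only, not SQLite's actual code): a sentinel value of -1 reaches an array index, an assert() rejects it in debug builds, and release builds compiled with NDEBUG silently read before the start of the buffer.

```c
#include <assert.h>
#include <stdio.h>

#define TABLE_SIZE 8

/* Hypothetical lookup mirroring the bug pattern: the caller may pass the
 * sentinel value -1, which is only rejected when assertions are enabled. */
static int lookup(const int *table, int idx) {
    /* In a debug build this assert aborts on idx == -1. In a release build
     * compiled with -DNDEBUG, assert() expands to nothing and the negative
     * index reads memory before the start of the array: a stack buffer
     * underflow. */
    assert(idx >= 0 && idx < TABLE_SIZE);
    return table[idx];
}

int main(void) {
    int table[TABLE_SIZE] = {0, 1, 2, 3, 4, 5, 6, 7};
    /* Here -1 is meant as a "not a real column" sentinel, not an index. */
    printf("%d\n", lookup(table, -1)); /* undefined behavior with NDEBUG */
    return 0;
}
```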
For the test, the team collected the most recent commits to the SQLite repository and manually filtered out trivial changes, directing the AI at what remained. As a result, a model based on Gemini 1.5 Pro identified a bug introduced by the changes in commit [1976c3f7].
The vulnerability could be exploited through a specially crafted database file supplied to the victim, or via SQL injection. Google concedes that the bug is difficult to exploit; even so, the company considers its AI's success a breakthrough.
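As a rough illustration of the second delivery vector, consider an application that splices untrusted input into SQL before handing it to SQLite. The table and query below are hypothetical; the point is that injection lets an attacker steer which SQLite code paths execute, potentially reaching a vulnerable one.

```c
#include <sqlite3.h>
#include <stdio.h>

/* Sketch of the attack surface: attacker-controlled text is concatenated
 * into a SQL statement (classic SQL injection), giving the attacker some
 * control over which parts of the SQLite engine run. */
int run_untrusted_query(const char *user_input) {
    sqlite3 *db = NULL;
    char *errmsg = NULL;
    char sql[256];

    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE t(name TEXT);", NULL, NULL, NULL);

    /* Unsafe: user_input is pasted directly into the statement text. */
    snprintf(sql, sizeof sql, "SELECT * FROM t WHERE name = '%s';", user_input);

    int rc = sqlite3_exec(db, sql, NULL, NULL, &errmsg);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "query failed: %s\n", errmsg);
        sqlite3_free(errmsg);
    }
    sqlite3_close(db);
    return rc == SQLITE_OK ? 0 : 1;
}
```

The same reachability argument applies to the first vector: opening an attacker-supplied database file with sqlite3_open() lets the file's contents drive the engine.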
Traditional detection methods such as fuzzing failed to find the problem, making this, by Google's account, the first time an AI model has discovered a previously unknown vulnerability in widely used real-world software. Big Sleep identified the flaw in early October while analyzing changes to the project's source code, and SQLite's developers fixed it the same day, before it reached an official release.
Google stresses that, despite significant progress in fuzzing, defenders need methods for finding vulnerabilities that fuzzing cannot reach, and the company hopes AI can close that gap. Big Sleep remains a research project, so far used mainly to analyze small programs with known vulnerabilities, and Google emphasizes that the results are still experimental.
Source