Vulnhuntr detects exploits faster than hackers create them.
The new Vulnhuntr tool marks a breakthrough in finding vulnerabilities in open-source projects. Developed by Protect AI, it harnesses large language models (LLMs) to detect complex multi-step vulnerabilities, including remotely exploitable zero-days.
Vulnhuntr has already shown impressive results, identifying more than a dozen zero-day vulnerabilities in a matter of hours. Among the projects where vulnerabilities were found are gpt_academic, ComfyUI, FastChat, and Ragflow.
The tool's distinguishing approach splits code into small chunks for analysis, which avoids overloading the LLM and significantly reduces false positives. Vulnhuntr analyzes the code over multiple passes, reconstructing the complete path from user input to server-side output and generating detailed reports with example exploits.
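To make the idea concrete, here is a minimal sketch of what such a chunked, multi-pass LLM analysis loop could look like in Python. It is an illustration only, not Vulnhuntr's actual code: `call_llm` is a hypothetical stand-in for a real LLM client, and the chunking strategy and prompt wording are assumptions.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client call; returns the model's answer."""
    return ""  # plug in a real client here

def split_into_chunks(source: str, max_lines: int = 80) -> List[str]:
    """Naive line-based chunking so each request stays well under the context limit."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines]) for i in range(0, len(lines), max_lines)]

def analyze(source: str, passes: int = 2) -> List[str]:
    """Analyze the code in several passes, feeding earlier findings back in so the
    model can gradually reconstruct the path from user input to the server-side sink."""
    findings: List[str] = []
    for _ in range(passes):
        for chunk in split_into_chunks(source):
            prompt = (
                "Trace user-controlled input through this code and report any "
                "vulnerability with its full call path.\n\n"
                f"Findings so far: {findings}\n\nCode:\n{chunk}"
            )
            answer = call_llm(prompt)
            if answer:
                findings.append(answer)
    return findings
```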
The focus is on high-risk vulnerabilities: LFI, AFO, RCE, XSS, SQLi, SSRF, and IDOR. Techniques such as chain-of-thought prompting and XML-structured prompts guide the LLM toward vulnerabilities, narrowing the analysis to security-critical functions in the code.
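As an illustration of XML-structured prompting, the sketch below builds a prompt that confines the model to one vulnerability class at a time. The tag names and wording are assumptions for demonstration, not Vulnhuntr's real prompts.

```python
VULN_CLASSES = ["LFI", "AFO", "RCE", "XSS", "SQLi", "SSRF", "IDOR"]

def build_prompt(vuln_class: str, code: str) -> str:
    """Wrap instructions and code in XML tags so the model can cleanly separate
    the task description from the code under review."""
    return (
        "<instructions>\n"
        f"Think step by step and report only {vuln_class} issues. "
        "Follow user input from the entry point to the dangerous sink.\n"
        "</instructions>\n"
        f"<code>\n{code}\n</code>\n"
        "<response_format>Return the call chain and a proof-of-concept exploit.</response_format>"
    )

if __name__ == "__main__":
    sample = "def read(path):\n    return open(path).read()"
    print(build_prompt("LFI", sample))
```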
So far, Vulnhuntr supports only Python, but the developers plan to extend the tool to other programming languages. Despite this limitation, it goes far beyond traditional static analyzers, offering more accurate detection and fewer false positives.
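Because the tool targets Python specifically, the standard `ast` module is one plausible way such a pipeline could pre-select security-critical functions before handing them to the LLM. The decorator heuristic below is purely an assumption for illustration, not Vulnhuntr's actual selection logic.

```python
import ast
from typing import List

def candidate_functions(source: str) -> List[str]:
    """Return the source of functions that look like network-facing entry points."""
    tree = ast.parse(source)
    results = []
    for node in ast.walk(tree):
        # Crude heuristic: decorated functions (e.g. Flask/FastAPI route handlers)
        # are likely places where user-controlled input enters the application.
        if isinstance(node, ast.FunctionDef) and node.decorator_list:
            results.append(ast.get_source_segment(source, node) or node.name)
    return results

if __name__ == "__main__":
    demo = (
        "@app.route('/read')\n"
        "def read_file(path):\n"
        "    return open(path).read()\n"
    )
    print(candidate_functions(demo))
```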
The future of vulnerability hunting looks promising. As LLMs develop, their context windows may reach millions of tokens, reducing the need for static analysis. Even so, Vulnhuntr will continue to rely on its own code parsing to minimize errors when searching for vulnerabilities.
The tool is already available on the Huntr platform, where participants can use Vulnhuntr and be rewarded for helping secure AI-powered projects. The tool can also be downloaded from GitHub.
Vulnhuntr significantly streamlines the process of identifying complex vulnerabilities, making it faster and more accurate. The tool not only strengthens the protection of individual projects but also helps secure the broader open-source AI ecosystem, supporting its growth and resilience to new threats.
Source