I have said many times that search engines come in very different flavors, and a self-respecting OSINTer is never satisfied with Google alone. Today I will tell you about an incredibly useful tool that I have mentioned on the channel before, but only in passing: code search engines.
How do they differ from their vanilla counterparts? The main difference is that they do not strip out the page's HTML code; on the contrary, they carefully preserve it so that it can be searched. This is extremely convenient and useful.
How is this used in investigations? Firstly, and most obviously, to search for specific snippets of code. An example from personal practice: we saw phishing sites stealing account credentials through a particular service that handles login via a social network page. A simple search for a snippet of that authorization code turns up a whole cluster of pages with roughly the same functionality. The same goes for snippets of "bad code" that can be used to find infected pages. Secondly, it is a way to find intersections and groups of sites via Google Ads identifiers and other shared IDs. Thirdly, it helps analyze the technology stack of various pages.
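To make the clustering idea concrete, here is a minimal sketch in Python. The fingerprint string and candidate URLs are purely hypothetical placeholders; in practice you would lift the snippet from a real phishing page and feed in candidates exported from a code search engine.

```python
import requests

# Hypothetical example: a distinctive snippet lifted from one phishing page's
# "login via social network" widget. A real investigation would use a real
# fingerprint extracted from the page source.
FINGERPRINT = '<div class="sn-login" data-provider="vk">'

# Hypothetical candidate URLs (e.g. exported from a code search engine).
CANDIDATES = [
    "https://example-phish-1.test/login",
    "https://example-phish-2.test/auth",
    "https://unrelated-site.test/",
]

def has_fingerprint(url: str, snippet: str) -> bool:
    """Fetch a page and check whether the code fingerprint appears in its HTML."""
    try:
        resp = requests.get(url, timeout=10)
        return snippet in resp.text
    except requests.RequestException:
        return False

if __name__ == "__main__":
    cluster = [url for url in CANDIDATES if has_fingerprint(url, FINGERPRINT)]
    print("Pages sharing the fingerprint:")
    for url in cluster:
        print(" -", url)
```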
There are also more prosaic tasks. For example, alternative search on GitHub, hunting for APIs and RSS feeds, spotting databases, various vulnerabilities and much, much more. It goes as far as your imagination does.
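For the GitHub part, you do not even need a third-party service: its REST API has a code search endpoint (GET /search/code). A rough sketch follows; the GITHUB_TOKEN environment variable name and the example query are my own assumptions, and code search does require an authenticated token.

```python
import os
import requests

# GitHub's REST code search endpoint: GET /search/code.
# A personal access token is required for code search; the env var name here
# is just an assumption for this sketch.
TOKEN = os.environ.get("GITHUB_TOKEN", "")

def search_code(query: str, per_page: int = 10) -> list[dict]:
    """Return repo/path pairs for files matching a code search query."""
    resp = requests.get(
        "https://api.github.com/search/code",
        params={"q": query, "per_page": per_page},
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {TOKEN}",
        },
        timeout=15,
    )
    resp.raise_for_status()
    return [
        {"repo": item["repository"]["full_name"], "path": item["path"]}
        for item in resp.json().get("items", [])
    ]

if __name__ == "__main__":
    # Hypothetical example: look for hard-coded RSS endpoints in one org's repos.
    for hit in search_code("rss.xml in:file org:github"):
        print(hit["repo"], hit["path"])
```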
What to search with? Probably the best option is PublicWWW [1]. But the damned thing is paid, and painfully so. For free you can only search the top 3 million sites. If you want to search the whole web, pay $109 per month, or 49 bucks for a single day of work. And that is still the cheapest tier! And there is no trial for the full feature set. Pure stinginess, nothing more! There is an alternative in the form of Searchcode [2], but it is aimed at source code repositories rather than websites. So there is no sane alternative; I'll have to take what I'm given.
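If you do pay up, PublicWWW can be scripted. Below is a sketch only: the export URL format, the "export" and "key" parameter names, and the Google Analytics ID used as a pivot are all assumptions on my part, so check PublicWWW's own API documentation before relying on it.

```python
import requests

# PublicWWW offers an export endpoint for paid accounts. The URL format and
# parameter names below are assumptions based on their API page; check
# https://publicwww.com for the current documentation. The key is a placeholder.
API_KEY = "YOUR_PUBLICWWW_KEY"
SNIPPET = '"UA-12345-6"'  # hypothetical Google Analytics ID used as a pivot

def publicwww_export(snippet: str, key: str) -> list[str]:
    """Ask PublicWWW for the list of sites whose HTML contains the snippet."""
    url = f"https://publicwww.com/websites/{requests.utils.quote(snippet)}/"
    resp = requests.get(url, params={"export": "urls", "key": key}, timeout=30)
    resp.raise_for_status()
    return [line.strip() for line in resp.text.splitlines() if line.strip()]

if __name__ == "__main__":
    for site in publicwww_export(SNIPPET, API_KEY):
        print(site)
```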
P.S. If you find an alternative, let me know; that would be very useful!
Best wishes to all!