A quick guide to OSINT

Learning to dig up information on anyone and anything.

Dear carders!
Today we'll talk about OSINT, and in this article we'll share some useful material in this area.
OSINT is an incredibly lucrative field: people are ready to pay $500 for a dossier on a single person. Interested? Then let's dive into the topic!

The classic OSINT methodology you'll find all over the place is simple:
1. Define the requirements: what are you looking for?
2. Gather the data.
3. Analyze the collected information.
4. Summarize the data and report. Then either repeat your search using a combination of the data you just collected, or complete your investigation and write a report.


Very often during investigations we get lost in the amount of collected data, and it is hard to know in which direction to dig. In that case, I think it helps to take a break and go back to steps 3 and 4: analyze and summarize what you have found, list what might help you pivot, and identify new (or more specific) questions that still need answers.

Tips from me:
Never give up: you may feel like you've explored every possibility for gaining information about your target. Don't give up. Take a break (an hour, or a day doing something else), then re-analyze your data and try to look at it from a different perspective.

Store evidence: information disappears online very quickly. Imagine that the person you are looking into starts to suspect something: all of their social media accounts and websites could suddenly be taken offline. So keep the evidence (screenshots, web archives), not just links to online resources. In forensics, correlating timestamps is key. When was the site created? When was the FB account created? When was the last blog post written?

There are two other methods that I find useful.
The first is a flowchart that describes a workflow for finding additional information by data type (such as an email address). For example, here's a workflow for investigating an email address:

[Flowchart: workflow for investigating an email address]

I think it's a good idea to start developing your own investigation model and gradually improve it over time with the new tricks and services you find. You can design a flowchart online for free at draw.io.
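If you prefer code to diagrams, the same investigation model can be sketched as a script. This is a minimal, hypothetical skeleton: every check_* helper below is a placeholder for whatever service you actually use, not a real API.

    # A minimal, hypothetical skeleton of an email investigation model.
    # Every check_* helper is a placeholder: wire them up to whatever
    # services you actually use (breach databases, Gravatar, and so on).

    def check_breaches(email):
        # placeholder: query a breach-notification service here
        return []

    def check_gravatar(email):
        # placeholder: look up a Gravatar profile for this address here
        return None

    def investigate_email(email):
        findings = {
            "breaches": check_breaches(email),
            "gravatar": check_gravatar(email),
        }
        # Each answer may raise new, more specific questions (steps 3 and 4
        # of the methodology above), so iterate on the findings.
        return findings

    print(investigate_email("someone@example.com"))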

The last methodology I would recommend for long-term investigations is the Analysis of Competing Hypotheses (ACH).

This methodology was developed by the CIA in the 1970s to help analysts remove bias from their analysis and carefully evaluate competing hypotheses. A toy example of an ACH matrix is sketched below.
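To make the idea concrete, here is a toy ACH matrix in Python. The hypotheses, evidence, and scores are all invented for illustration:

    # A toy Analysis of Competing Hypotheses matrix with invented data.
    # Rows are pieces of evidence, columns are hypotheses; scores are
    # +1 (consistent), 0 (neutral), -1 (inconsistent).
    hypotheses = ["H1: personal blog", "H2: phishing front"]
    evidence = {
        "Domain registered behind a privacy service": [0, 1],
        "Site reuses a known phishing kit":           [-1, 1],
        "Author posts under a verifiable real name":  [1, -1],
    }

    # Classic ACH tries to REJECT hypotheses: the one with the fewest
    # inconsistent pieces of evidence survives, not the one with the
    # most confirmations.
    for i, hypothesis in enumerate(hypotheses):
        inconsistent = sum(1 for scores in evidence.values() if scores[i] < 0)
        print(f"{hypothesis}: {inconsistent} inconsistent piece(s) of evidence")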

Prepare your OS
Before starting your investigation, there are several aspects of operational security to consider so as not to alert the people and companies you are researching. Visiting the target's personal website may give them your IP address and therefore your location, and using your personal social media account in the browser you investigate from may result in an accidental like, or one triggered automatically through a vulnerability.

I follow these guidelines when conducting investigations:
- Use a commercial VPN or Tor for all connections from your browser. Most commercial VPNs provide servers in different countries, and Tor lets you select the country of the exit node, so I choose a country that won't stand out in this context (US for US investigations, and so on); a minimal example of routing requests through Tor follows this list.
- Use a fictitious social media account to avoid accidentally de-anonymizing yourself in front of the target.
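The same principle applies to scripted collection. Here is a minimal sketch of routing Python requests through a local Tor SOCKS proxy; it assumes Tor is listening on 127.0.0.1:9050 and that requests is installed with SOCKS support (pip install requests[socks]):

    # Minimal sketch: route requests through a local Tor SOCKS proxy.
    # Assumes Tor is running on 127.0.0.1:9050 and requests[socks] is installed.
    import requests

    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved via Tor too
        "https": "socks5h://127.0.0.1:9050",
    }
    r = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)
    print("Congratulations" in r.text)   # crude check that traffic exits via Tor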

OSINT tools don't matter; what you do with those tools is much more important. Test tools, read their code, create your own tools, and so on, but make sure you understand what you are doing. The corollary is that there is no perfect toolbox. The best toolkit is the one you know, love, and master.

Chrome and plugins
I use Chrome as my investigative browser, mainly because Hunchly is only available for Chrome. I add some useful plugins to it:
- archive.is Button lets you quickly save a web page to archive.is (more on that later)
- Wayback Machine finds archived versions of a page at archive.org
- OpenSource Intelligence provides quick access to many OSINT tools
- EXIF Viewer allows you to quickly view EXIF data in images
- FireShot allows you to quickly take a screenshot

Hunchly
I love using Hunchly; it's a great tool. Hunchly is a Chrome extension that saves and signs all the web data you find during your investigation. Basically, you just hit Capture on the extension when you start your investigation, and Hunchly saves every web page you visit to a database, letting you add notes and tags to them.



Maltego
Basically, Maltego offers a graphical interface for presenting graphs, plus transformations for finding new data from the graph (for example, domains associated with an IP address from a passive DNS database). You can use the Maltego Community Edition, which limits your search coverage, but that should be enough for small investigations.

Harpoon
This tool started out as a threat-intelligence utility, but it now has many commands for OSINT.

Python
Very often you will face specific data collection and visualization tasks that cannot easily be accomplished with existing tools. In that case, you will have to write your own code. I use Python for this. Any modern programming language would work equally well, but I love Python's flexibility and the sheer number of available libraries. Justin Seitz (the author of Hunchly) is a reference on Python and OSINT; you should definitely check out his blog and his book Black Hat Python.

SpiderFoot is a reconnaissance tool that gathers information through many different modules. It has a nice web interface and generates graphs showing relationships between different types of data.
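As an illustration of the "write your own code" point above, here is a minimal one-off script that fetches a page and extracts email addresses with a regular expression; the URL is a placeholder and requests is assumed to be installed:

    # Minimal one-off OSINT script: fetch a page and extract email addresses.
    # The URL is a placeholder; requests is assumed to be installed.
    import re
    import requests

    url = "https://example.com/contact"   # placeholder target
    html = requests.get(url, timeout=30).text
    emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
    for email in sorted(emails):
        print(email)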

Recon-ng is a good CLI tool for querying different platforms, from social media to threat-intelligence services.

Buscador is a Linux virtual machine that has many different OSINT tools built into it. I always prefer to have my own systems, but this is a good way to try out new tools without having to install them one by one.

Now let's take a look at what can help you in your OSINT investigations.

Technical infrastructure
Technical infrastructure analysis is at the crossroads between hacking and OSINT, but it is definitely an important part of investigations. Here's what you should look for:

IPs and Domains: There are many different tools for this, but I find https://community.riskiq.com/ to be one of the best sources of information. Free access gives you 15 requests per day through the web interface and 15 through the API.
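For scripted lookups, here is a hedged sketch of querying passive DNS through the PassiveTotal v2 API. The endpoint, auth scheme, and response fields follow RiskIQ's public documentation at the time of writing (the service has since changed hands, so verify against current docs); the credentials are placeholders:

    # Hedged sketch: passive DNS lookup via the PassiveTotal v2 API.
    # Endpoint and auth follow RiskIQ's public docs at the time of writing;
    # verify before relying on it. USERNAME and API_KEY are placeholders.
    import requests

    USERNAME = "you@example.com"   # placeholder account email
    API_KEY = "YOUR_API_KEY"       # placeholder key

    resp = requests.get(
        "https://api.passivetotal.org/v2/dns/passive",
        auth=(USERNAME, API_KEY),
        params={"query": "example.com"},
        timeout=30,
    )
    for record in resp.json().get("results", []):
        print(record.get("resolve"), record.get("firstSeen"), record.get("lastSeen"))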

Certificates:
Censys.io is a great tool.
crt.sh is also a very good certificate database.

Scanning: It is often useful to know which services are running on an IP. You can do the scan yourself with nmap, but you can also rely on platforms that regularly scan all IPv4 addresses. The two main platforms, Censys and Shodan, each focus on different aspects (more IoT for Shodan, more TLS for Censys), so it's good to know and use both.
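If you'd rather query Shodan from code, the official shodan Python library (pip install shodan) does it in a few lines; the API key and IP below are placeholders:

    # Minimal sketch using the official shodan library (pip install shodan)
    # to list services seen on an IP. The API key and IP are placeholders.
    import shodan

    api = shodan.Shodan("YOUR_API_KEY")   # placeholder key
    host = api.host("8.8.8.8")            # placeholder IP
    for service in host.get("data", []):
        print(service["port"], service.get("product", "unknown"))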

Another source of information is Rapid7 Open Data, but you will have to download the scan files and do the research yourself.

Subdomains: There are many different ways to find a list of subdomains for a domain. PassiveTotal and BinaryEdge implement this functionality directly.
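Certificate Transparency logs give you one way to do this yourself. This sketch uses crt.sh's unofficial JSON output, which may change or rate-limit at any time:

    # Sketch: enumerate subdomains from Certificate Transparency logs via
    # crt.sh's unofficial JSON output (it may change or rate-limit).
    import requests

    domain = "example.com"   # placeholder
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=60,
    )
    names = set()
    for entry in resp.json():
        # one certificate entry can list several names, one per line
        names.update(entry["name_value"].splitlines())
    for name in sorted(names):
        print(name)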

Google Analytics and Social Media: The last really interesting piece of information is whether the same Google Analytics / AdSense ID is used across multiple websites. This method was discovered in 2015 and is well described by the Bellingcat community. To search for these connections, I mainly use SpyOnWeb and NerdyData.
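A quick way to collect these IDs from a page is a regular expression. The sketch below covers the classic UA- and AdSense pub- formats (newer GA4 "G-" IDs would need their own pattern); the URL is a placeholder:

    # Sketch: pull Google Analytics / AdSense IDs out of a page so you can
    # pivot on them in SpyOnWeb or NerdyData. Covers classic UA- and pub-
    # formats; newer GA4 "G-" IDs would need their own pattern.
    import re
    import requests

    html = requests.get("https://example.com", timeout=30).text   # placeholder URL
    ids = re.findall(r"\b(?:UA-\d{4,10}-\d{1,4}|pub-\d{10,20})\b", html)
    print(set(ids))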

Search engines
Depending on the context, you may want to use a different search engine while investigating. I mainly rely on Google and Bing (for Europe or North America), Baidu (for Asia) and Yandex (for Russia and Eastern Europe).

Images
For images, there are two things you want to know: how to find additional information about an image, and how to find similar images.

To find more information, the first step is to look at the EXIF data. EXIF data is metadata embedded in an image when it is created, and it often contains interesting information about the creation date, the camera used, sometimes GPS coordinates, and so on. To inspect it, I like the command-line utility ExifTool, but the Exif Viewer extension (for Chrome and Firefox) is pretty handy too.
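If you are scripting this in Python, the Pillow library (pip install Pillow) can read EXIF data too; photo.jpg below is a placeholder file:

    # Reading EXIF data with Pillow (pip install Pillow), as an alternative
    # to the ExifTool CLI. photo.jpg is a placeholder file.
    from PIL import Image
    from PIL.ExifTags import TAGS

    exif = Image.open("photo.jpg").getexif()
    for tag_id, value in exif.items():
        print(TAGS.get(tag_id, tag_id), value)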

To find similar images, you can use Google Images, Bing Images, or TinEye. TinEye has a convenient API, and Bing has a very useful feature that lets you search for a specific part of an image (https://www.cnet.com/news/bing-visual-search-within-images/). There is no easy way to analyze the content of an image itself and, for example, find its location. You will need to spot elements in the image that let you guess which country it may be in, then search online and compare against satellite images and Google Street View photos.

Social networks
There are many tools available for social media, but they are highly platform dependent. Here's a quick overview of some interesting tools and techniques:

Twitter: https://github.com/x0rz/tweets_analyzer is a great way to get an overview of a Twitter account's activity.

Facebook: The best resource for Facebook investigation is https://inteltechniques.com/osint/facebook.html

LinkedIn: The most useful trick I've found on LinkedIn is how to find a LinkedIn profile based on an email address

Web archives
There are several website caching platforms that can be an excellent source of information while investigating, either because a website is down or to analyze the historical development of a website. These platforms either automatically cache sites or cache a site on demand.

Search Engines: Most search engines cache the content of websites as they crawl them. This is really useful, and many pages are accessible this way, but keep in mind that you have no control over when a page was last cached (very often less than a week ago) and that the cached copy will most likely disappear soon. So if you find anything interesting there, save the cached page right away.

https://archive.org/ is a great project that aims to preserve everything published on the internet, automatically crawling web pages and saving the evolution of pages into a huge database. It is important to know that the Internet Archive removes content on demand (it did this for the stalkerware FlexiSpy, for example), so you should save the content somewhere else as well.
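The Internet Archive also exposes a public availability API that is handy for scripted checks, for example:

    # Check the Wayback Machine for the closest snapshot of a URL via its
    # public availability API.
    import requests

    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": "https://citizenlab.ca"},
        timeout=30,
    )
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    if snapshot:
        print(snapshot["url"], snapshot["timestamp"])
    else:
        print("No snapshot found")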

There are also manual caching platforms. I really like archive.today, which lets you save snapshots of web pages and search for snapshots taken by other people.

It is sometimes inconvenient to manually query all of these platforms to check if a web page has been cached or not. I have implemented a simple command in Harpoon to do this: harpoon cache https://citizenlab.ca

Collecting evidence
You will inevitably get bogged down in data that gets duplicated, web pages that change, Twitter accounts that disappear, and so on. You cannot rely on the Internet Archive alone; use other cache platforms and, if possible, local copies (a minimal sketch of this follows the list below).

- Save images, documents, etc.
- Take screenshots
- Save social media data
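Here is the minimal sketch mentioned above: it saves a timestamped local copy of a page together with its SHA-256 hash, so you can later show that the evidence has not changed (the URL is a placeholder):

    # Sketch: keep a timestamped local copy of a page plus its SHA-256 hash,
    # so you can later show the evidence has not changed. URL is a placeholder.
    import hashlib
    import pathlib
    from datetime import datetime, timezone

    import requests

    url = "https://example.com"
    body = requests.get(url, timeout=30).content
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = pathlib.Path(f"evidence_{stamp}.html")
    path.write_bytes(body)
    print(path, hashlib.sha256(body).hexdigest())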

Naturally, these are far from all the ways to find this or that piece of information.

If you want to have what you never had, you have to do what you never did.
 