Man
Professional
- Messages
- 3,077
- Reaction score
- 614
- Points
- 113
813 thousand hacking attempts: the number of hacker bots is growing.
Palisade Research conducted an experiment that allowed us to look into the world of automated hacker attacks. They deployed a special bait - a "smart trap" that can not only attract intruders, but also identify artificial intelligence-based systems among them.
The key element of the experiment is a deliberately vulnerable SSH version posted in the public domain. Researchers linked ten IP addresses to it at once to increase the chances of getting into the field of view of hackers combing the Internet in search of potential targets.
To attract the attention of the attackers, the team used a cunning trick - the system sent pings to domains that imitate important government resources. For example, traffic was created to the gov-defense-dev.il domain, which outwardly resembled the address of the Israeli defense department. In addition, the trap was deliberately indexed in Shodan and Censys, search engines popular among hackers.
During the operation of the experimental system, more than 813 thousand attempts to interact with it were recorded. To distinguish the actions of human hackers from automated systems, the SSH version has special triggers – hidden commands that can provoke a characteristic reaction of artificial intelligence.
These triggers were placed in different places: in welcome messages when an SSH connection was established, in popular commands to gain access to the system, and even in system files that hackers usually examine after penetration.
The researchers used two main techniques. The first is goal substitution, when attempts are made to redirect the system's behavior to a new task. The second is "prompt theft", an attempt to force a language model to reveal its original instructions.
Both methods have shown high efficiency. As a result, it was possible to identify six different types of hacking tools based on language models that attacked the lure.
According to the authors of the work, the results obtained are only the tip of the iceberg. With the advancement of technology, more cybercriminals will automate their attacks using AI systems. In this regard, it was decided not to stop the experiment. The system will continue to work, collecting new data on the methods and tools of the attackers.
Source
Palisade Research conducted an experiment that allowed us to look into the world of automated hacker attacks. They deployed a special bait - a "smart trap" that can not only attract intruders, but also identify artificial intelligence-based systems among them.
The key element of the experiment is a deliberately vulnerable SSH version posted in the public domain. Researchers linked ten IP addresses to it at once to increase the chances of getting into the field of view of hackers combing the Internet in search of potential targets.
To attract the attention of the attackers, the team used a cunning trick - the system sent pings to domains that imitate important government resources. For example, traffic was created to the gov-defense-dev.il domain, which outwardly resembled the address of the Israeli defense department. In addition, the trap was deliberately indexed in Shodan and Censys, search engines popular among hackers.
During the operation of the experimental system, more than 813 thousand attempts to interact with it were recorded. To distinguish the actions of human hackers from automated systems, the SSH version has special triggers – hidden commands that can provoke a characteristic reaction of artificial intelligence.
These triggers were placed in different places: in welcome messages when an SSH connection was established, in popular commands to gain access to the system, and even in system files that hackers usually examine after penetration.
The researchers used two main techniques. The first is goal substitution, when attempts are made to redirect the system's behavior to a new task. The second is "prompt theft", an attempt to force a language model to reveal its original instructions.
Both methods have shown high efficiency. As a result, it was possible to identify six different types of hacking tools based on language models that attacked the lure.
According to the authors of the work, the results obtained are only the tip of the iceberg. With the advancement of technology, more cybercriminals will automate their attacks using AI systems. In this regard, it was decided not to stop the experiment. The system will continue to work, collecting new data on the methods and tools of the attackers.
Source