Carding 4 Carders
Professional
ChatGPT tried its hand at creating phishing emails.
Artificial intelligence continues to evolve, with each new study demonstrating remarkable progress. But will a machine ever surpass a human in the art of deception and manipulation? This question intrigued researchers at IBM.
The IBM X-Force team ran an experiment comparing AI and human skill at writing phishing emails. The goal was to craft the most convincing message possible, one that would persuade the recipient to click a malicious link.
ChatGPT was set a difficult task. In the end, the AI finished in 5 minutes, while the humans needed 16 hours for the same job.
However, the final results did not favor the neural network: emails written by humans proved more effective, achieving a 14% click-through rate versus ChatGPT's 11%.
According to IBM's experts, the humans won thanks to their emotional approach, careful analysis of the target audience, and skillful text construction. Although a human is far slower than a machine, people still hold the edge over AI in psychological subtlety.
Nevertheless, experts predict that AI will soon be able not only to match but to surpass human hackers at social engineering. This poses new challenges for cybersecurity professionals.
John Carruthers, a prominent information security expert, says: "We simply need to study and analyze the methods that attackers can use to exploit generative AI. By understanding how attackers can use technology, we can help organizations minimize risks and effectively defend against threats."