๐Ÿ‹ DeepSeek's Chatbot Could Be Used to Create Ransomware and Keyloggers

chushpan

DeepSeek's R1 reasoning model can be easily tricked into generating malicious code, a study has found, although the output still requires human input to be usable.

While generative AI tools greatly complement the work of cybersecurity professionals and companies, they can easily be used by attackers for malicious purposes.

There have already been several cases of chatbots like ChatGPT being misused, prompting companies like OpenAI to add guardrails that prevent malicious use.

However, some models, such as DeepSeek's latest R1 reasoning model, may be easier to manipulate into creating malicious code.

Researchers at cybersecurity company Tenable have demonstrated how, with a few prompts and workarounds, R1 can produce a half-finished keylogger and ransomware.

Since R1 can reason and show its chain of thought, the researchers were able to see the model's step-by-step thinking as it created the malicious code.