🐋 DeepSeek's Chatbot Could Be Used to Create Ransomware and Keyloggers

chushpan

👉 DeepSeek's R1 reasoning model can be easily tricked into generating malicious code, though the output still requires human refinement, a study has found.

💬 While generative AI tools greatly complement the work of cybersecurity professionals and companies, they can easily be used by attackers for malicious purposes.

🗞 There have already been several cases of chatbots like ChatGPT being misused, prompting companies like OpenAI to implement guardrails against malicious use.

📰 However, some models, such as DeepSeek's latest R1 reasoning model, may be easier to manipulate to create malicious code.

📰 Researchers at the cybersecurity company Tenable have demonstrated how, with a few prompts and workarounds, R1 can produce a rudimentary keylogger and ransomware.

📌 Because R1 exposes its chain of thought, the researchers were able to observe the model's step-by-step reasoning as it generated the malicious code.