• Jailbreaking AI chatbots

    From Mike Powell@1:2320/105 to All on Thursday, March 20, 2025 09:12:00
    Not even fairy tales are safe - researchers weaponise bedtime stories to jailbreak AI chatbots and create malware

    Date:
    Wed, 19 Mar 2025 14:36:56 +0000

    Description:
    Cato CTRL researchers can jailbreak LLMs with no prior malware coding experience.

    FULL STORY ======================================================================
    - Security researchers have developed a new technique to jailbreak AI chatbots
    - The technique required no prior malware coding knowledge
    - This involved creating a fake scenario to convince the model to craft an attack

    Despite having no previous experience in malware coding, Cato CTRL threat intelligence researchers have warned they were able to jailbreak multiple LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.

    The team developed a technique called Immersive World, which uses narrative engineering to bypass LLM security controls: by creating a detailed fictional world that normalizes restricted operations, they coaxed the models into producing a "fully effective" Chrome infostealer. Chrome is the most popular browser in the world, with over 3 billion users, underscoring the scale of the risk this attack poses.

    Infostealer malware is on the rise, and is rapidly becoming one of the most dangerous tools in a cybercriminal's arsenal - and this attack shows that the barrier is significantly lowered for cybercriminals, who now need no prior experience in creating malicious code.

    AI for attackers

    LLMs have fundamentally altered the cybersecurity landscape, the report claims. Research has shown that AI-powered cyber threats are becoming a much more serious concern for security teams and businesses, allowing criminals to craft more sophisticated attacks with less experience and at a higher frequency.

    Chatbots have many guardrails and safety policies, but since AI models are designed to be as helpful and compliant with the user as possible, researchers have been able to jailbreak the models with relative ease, including persuading AI agents to write and send phishing attacks.

    "We believe the rise of the zero-knowledge threat actor poses high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools," said Vitaly Simonovich, threat intelligence researcher at Cato Networks.

    "Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises. Our new LLM jailbreak technique, which we've uncovered and called Immersive World, showcases the dangerous potential of creating an infostealer with ease."

    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/ai-chatbots-jailbroken-to-create-a-chrome-infostealer

    $$
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)