
    From Mike Powell@1:2320/105 to All on Sunday, November 16, 2025 09:24:25
    If hackers can use AI to automate massive cyber attacks, Terminator robots
    are the least of our problems

    Date:
    Fri, 14 Nov 2025 20:00:00 +0000

    Description:
    Anthropic's recent dissection of a massive AI-powered cyber attack should
    scare the heck out of all of us

    FULL STORY

    I can see it now: the Terminator travels back to 2021 and casually walks past the offices of Boston Dynamics, Tesla, even 1X and Figure AI, stopping instead in front of Anthropic. With his characteristic Austrian accent, the Terminator flexes his formidable muscles and says, "I must stop programmers from makingz Claude AI. I vill prevent the first almost fully automated, large-scale cyberattack in 2025." Meanwhile, the robot developers scurry away, figuring the Terminator might be back in a decade for their hides.

    In real life, there is no Terminator, but there are extremely worrying signs about AI's rapid development and new concerns about its weaponization. This week, Anthropic revealed that it mostly thwarted a massive "AI-orchestrated cyber espionage campaign."

    The alleged attack, undertaken in September of this year and possibly by Chinese hackers, targeted major tech companies, financial institutions, chemical manufacturing companies, and government agencies. Each one of those targets should give you pause, especially those that serve average consumers. "Government agencies" could mean almost anything, including critical infrastructure: the systems that control water, electricity, and even food safety.

    It's an attack that Anthropic, which makes the Claude AI, insists could not have happened even a year ago. That doesn't really surprise me. As I like to say, we're now living on AI time, where development and innovation run at roughly three times the pace of previous technology epochs. If Moore's Law posited a doubling of transistors on a CPU every 18 months, LLM capability may now be doubling every six months.

    As Anthropic explains in a blog post, the AI models:
    - Are now more intelligent
    - Have agency: they can take autonomous actions, chain them together, and even make decisions with little human input
    - Can use tools on your behalf to search the web and retrieve data

    Hackers using AI to turbocharge their efforts is not new. Even the spam texts and phone calls you receive every day are accelerating because AI makes it easier to spin out new IDs and strategies.

    However, these more recent advancements appear to be helping hackers attack at scale with little more than some very basic programming and, primarily, prompts.

    According to Anthropic, "Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically."

    The only good news is that Anthropic detected the activity and quickly shut
    it down before it got far enough to have any noticeable real-world impact.

    The next, scary level

    Still, these hackers were highly motivated and quite canny. They got around Anthropic's safeguards by breaking the attack into tiny, innocuous pieces: tasks that separately seemed harmless but together composed the full attack.

    As I see it, everything Anthropic shared about this cyberattack is deeply concerning and should be read as a warning for all of us.

    The rapid pace of AI development means that these platforms will only get smarter. Agentic AI, in particular models that can carry out tasks on your behalf, is on the leading edge of development at virtually all AI platforms, including those from Google (see SIMA 2, which can play in virtual worlds on its own) and OpenAI. While most of it is used for good, these capabilities are clearly tantalizing for cyber attackers.

    It might seem like this is purely a concern for governments, businesses, and infrastructure, but the breakdown of any of these systems and companies can quite often lead to loss of services, support, resources, and protections for consumers.

    So, yes, Skynet is still the fictional big bad of our AI nightmares, but it won't take a robot army to bring down society, just a hacker or two with access to the best AI has to offer. The next Terminator will surely be visiting AI companies first.

    ======================================================================
    Link to news story: https://www.techradar.com/ai-platforms-assistants/if-hackers-can-use-ai-to-automate-massive-cyber-attacks-terminator-robots-are-the-least-of-our-problems

    $$
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)