selinkocalar 13 minutes ago

This was inevitable. AI lowers the barrier to entry for cybercrime just like it does for everything else. The concerning part isn't that someone used AI for attacks - it's how "unprecedented" the scale became. Automation lets bad actors operate at a level that would have required entire teams before. Defense needs to scale up accordingly. Manual security reviews can't keep pace with automated attacks.

general1726 a day ago

And this is why locally run models are absolutely necessary. Sure, Claude is better than whatever you can run locally, but to avoid having every keystroke eavesdropped on, just buy an older enterprise server with enough compute for 3k USD and run a similar model there.

  • j45 a day ago

    Perhaps design with a public model and then convert to a local one.

scorpioxy a day ago

There's a part I didn't understand. How did the model know which companies are vulnerable to attack? I get the part where the LLM was used to analyze documents and create "malicious" software, but the biggest missing step seems to be the first one. Someone please correct me if I'm wrong, but usually that's either targeted at a specific company, or you do a port scan on IP ranges to find any target and proceed from there.

  • quacksilver a day ago

    Often you will obtain a vulnerability in some software and then search for companies using it. You can often use Google or Shodan to do the searching, but perhaps ingested LLM data could also work.

    In the simplest case if you get remote code execution in SuperServer9000 (made up product) and that has a banner on error / status pages that reads "Powered with pride by SuperServer9000 version 2.1", then you could just search for that string (or part of it) and use your remote code execution bug against any sites that come up.

    It can get more behavior-based or more complicated than that, though, or rely on information that an LLM has ingested about a company from public sources.

    Then either grab data and sell it or sell your access to a broker or whatever else.
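The banner-matching step described above can be sketched out. This is a minimal, hypothetical example following the made-up SuperServer9000 scenario: the product name, banner format, and version numbers are all invented, and the real search step (Google/Shodan) is omitted — only the fingerprint check itself is shown:

```python
import re

# Hypothetical fingerprint: the banner string an attacker would search
# for (via Google, Shodan, etc.) to identify exposed installs.
BANNER_PATTERN = re.compile(
    r"Powered with pride by SuperServer9000 version (\d+)\.(\d+)"
)

# Made-up version in which the imaginary RCE bug was fixed.
PATCHED_VERSION = (2, 2)

def is_fingerprint_vulnerable(banner: str) -> bool:
    """Return True if the banner advertises a version older than the patch."""
    match = BANNER_PATTERN.search(banner)
    if not match:
        return False  # not this product, or the banner was suppressed
    version = (int(match.group(1)), int(match.group(2)))
    return version < PATCHED_VERSION

# The example banner from the comment above matches and flags as vulnerable:
print(is_fingerprint_vulnerable(
    "Error 500 - Powered with pride by SuperServer9000 version 2.1"))  # True
```

The same check is useful defensively: run it against your own internet-facing error and status pages to see whether they leak an exploitable version string.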

ElijahLynn a day ago

Good on Anthropic for disclosing this and leading the way ethically. I could see other companies trying to keep this buried.

  • tartuffe78 a day ago

    There’s no such thing as bad publicity. This is basically an advertisement for how useful their service is.

    • sigmoid10 a day ago

      Yeah this is not responsible disclosure, it's a not-so-humble brag marketing gag.

  • j45 a day ago

    Anthropic is sharing their learnings while others may not.

  • miltonlost a day ago

    It's good of them to put out a burning trash can after they set the city ablaze.

dehugger a day ago

The article is entirely devoid of detail. Is there a better source for this?

upghost a day ago

Man, part of me wonders if the same AI arguments are playing out across the criminal underworld. Like, are some criminals afraid of their jobs getting automated? And the old school guys are like, "AI just makes slop crime". And are junior criminals having a hard time breaking into the industry because they've stopped hiring for intro-level gang jobs, since the Crime Lords are really pushing their henchmen into using AI for everything?

  • sudahtigabulan 21 hours ago

    This reminded me of a Terry Pratchett book. There was a guild of thieves or something like that. Apparently they were so inefficient at what they did that the author's conclusion was it would have been easier if they had just done honest work instead.

rkagerer a day ago

In a sense, is Anthropic an accomplice?