Ask HN: Are we ready for vulnerabilities to be words instead of code?

Until now, security has been math. Buffer overflows, SQL injections, crypto flaws — deterministic, testable, formally verifiable.

But we're giving agents terminal access and API keys now. The attack vector is becoming natural language. An agent gets "socially engineered" by a prompt; another hallucinates fake data and passes it down the chain.

Trying to secure these systems feels like trying to write a regex that catches every possible lie. We've shifted the foundation of security from numbers to words, and I don't think we've figured out what that means yet.

Is anyone thinking about actual architectural solutions to this? Not just "use another LLM to guard the LLM" — that feels like circular logic. Something fundamentally different.

(Not a native English speaker, used AI to clean up the grammar.)

2 points | by lielcohen 3 hours ago

3 comments

  • raw_anon_1111 1 hour ago
    It’s really not that hard to secure agents. Just give them tightly scoped API keys, put them in front of your API (treating them like you would a user) instead of behind it.

    If I were to ever use Claude in a production environment for an AWS account, for instance, you best believe the role it was running under, with temporary access keys, would be the bare minimum.
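    A minimal sketch of what "bare minimum" could look like, assuming a hypothetical agent that only needs to read one S3 bucket (the helper function and bucket name are made up for illustration, not from the thread):

```python
import json

def least_privilege_policy(actions, resources):
    """Build an IAM policy document that allows only the listed
    actions on the listed resources -- and nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),
            "Resource": sorted(resources),
        }],
    }

# Hypothetical agent role: read-only access to a single scratch bucket.
policy = least_privilege_policy(
    actions=["s3:GetObject", "s3:ListBucket"],
    resources=[
        "arn:aws:s3:::agent-scratch-bucket",
        "arn:aws:s3:::agent-scratch-bucket/*",
    ],
)
print(json.dumps(policy, indent=2))
```

    The point of a tight allow-list like this is that even a fully "socially engineered" agent can't do anything the policy doesn't explicitly permit; in practice you'd attach it to a role and hand the agent short-lived STS credentials rather than long-lived keys.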

  • lielcohen 3 hours ago
    To be clear - I'm not really talking about my personal laptop. I'm thinking about where this is going at scale. When companies start replacing entire teams with agents (and looking at the layoffs, that's clearly the direction), those agents will need real access to production systems. That's the scenario where "just don't give it access" stops being an answer.
  • nine_k 3 hours ago
    Scams and "social engineering", as known for a long time, could be a good approximation.
    • lielcohen 3 hours ago
      Right, but with scams you trick a human into doing something. With agents, you give them the keys upfront - terminal, file system, API keys - because otherwise what's the point? You can't have an agent that asks permission for every action, you'd just be babysitting it all day. So the question isn't "how do we stop someone from being tricked." It's "how do we secure something that already has root access and runs on vibes instead of logic."
      • codingdave 3 hours ago
        Don't give it root access.

        That answer hasn't changed since day one of LLMs, despite some of the things people are attempting to build these days: if you don't want to get in trouble, don't give LLMs access to anything that can cause actual harm, and don't give them autonomy.

        • lielcohen 3 hours ago
          Sure, that works today. But Meta is cutting 20% of its workforce. So is everyone else. The whole bet is that agents replace human work - and that only works if they can actually do things. Deploy, access databases, call APIs.

          "Don't give it access" is like saying "don't connect to the internet" in 1995. The question isn't whether agents get these permissions. They will. The question is what happens when they do.

          • nine_k 2 hours ago
            Let's see how well it works for them. Apparently Salesforce had been a bit overly enthusiastic about layoffs, and recently had to backtrack.
      • nine_k 2 hours ago
        How do we expect everything to go all right if we give prod access to a pack of very smart dogs that know some key tricks? Now imagine the same, but the humans actually leave the room.

        My answer is simple: it just won't be all right this way. The problems will land on the management who drank too much kool-aid; maybe they already have (check out what was happening at Cloudflare recently). Sanity will return, now as a hard-won lesson.