
Ask HN: Are we ready for vulnerabilities to be words instead of code?

3 points - yesterday at 9:07 PM


Until now, security has been math. Buffer overflows, SQL injections, crypto flaws - deterministic, testable, formally verifiable.

But we're giving agents terminal access and API keys now. The attack vector is becoming natural language. An agent gets "socially engineered" by a prompt; another hallucinates fake data and passes it down the chain.

Trying to secure these systems feels like trying to write a regex that catches every possible lie. We've shifted the foundation of security from numbers to words, and I don't think we've figured out what that means yet.

Is anyone thinking about actual architectural solutions to this? Not just "use another LLM to guard the LLM" - that feels like circular logic. Something fundamentally different.

(Not a native English speaker, used AI to clean up the grammar.)

  • raw_anon_1111

    yesterday at 11:52 PM

    It’s really not that hard to secure agents. Just give them tightly scoped API keys, put them in front of your API, and treat them like you would a user instead of something behind your API.

    If I were ever to use Claude in a production environment for an AWS account, for instance, you best believe the role it was running under, with temporary access keys, would be the bare minimum.
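
    To sketch what I mean (role ARN, bucket, and helper name are hypothetical placeholders, not a real setup): request short-lived STS credentials and cap them with an inline session policy, so even if the role allows more, the keys the agent holds can do exactly one thing.

    ```python
    # Minimal sketch: build the arguments for sts.assume_role so an agent
    # gets 15-minute credentials limited by an inline session policy.
    # The role ARN, bucket name, and this helper are illustrative only.
    import json

    # Session policy: effective permissions are the INTERSECTION of this
    # and the role's own policy, so this is a hard cap on the agent.
    AGENT_SESSION_POLICY = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-agent-bucket/inbox/*",
        }],
    }

    def build_assume_role_request(role_arn: str, session_name: str) -> dict:
        """Arguments for sts.assume_role: short-lived, tightly scoped keys."""
        return {
            "RoleArn": role_arn,
            "RoleSessionName": session_name,
            "DurationSeconds": 900,  # 15 minutes, the STS minimum
            "Policy": json.dumps(AGENT_SESSION_POLICY),
        }

    # With boto3 this would be used roughly as:
    #   creds = boto3.client("sts").assume_role(**build_assume_role_request(
    #       "arn:aws:iam::123456789012:role/agent-readonly", "agent-run-1"
    #   ))["Credentials"]
    ```

    The agent then runs with `creds` instead of any long-lived keys, and the blast radius of a prompt-injected action is whatever that one statement allows.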

    • lielcohen

      yesterday at 9:43 PM

      To be clear - I'm not really talking about my personal laptop. I'm thinking about where this is going at scale. When companies start replacing entire teams with agents (and looking at the layoffs, that's clearly the direction), those agents will need real access to production systems. That's the scenario where "just don't give it access" stops being an answer.

      • nine_k

        yesterday at 9:13 PM

        Scams and "social engineering", as known for a long time, could be a good approximation.

          • lielcohen

            yesterday at 9:21 PM

            Right, but with scams you trick a human into doing something. With agents, you give them the keys upfront - terminal, file system, API keys - because otherwise what's the point? You can't have an agent that asks permission for every action, you'd just be babysitting it all day. So the question isn't "how do we stop someone from being tricked." It's "how do we secure something that already has root access and runs on vibes instead of logic."

              • codingdave

                yesterday at 9:29 PM

                Don't give it root access.

                That answer hasn't changed since day one of LLMs, despite some of the things people are attempting to build these days: if you don't want to get in trouble, don't give LLMs access to anything that can cause actual harm, and don't give them autonomy.

                  • lielcohen

                    yesterday at 9:35 PM

                    Sure, that works today. But Meta is cutting 20% of its workforce. So is everyone else. The whole bet is that agents replace human work - and that only works if they can actually do things. Deploy, access databases, call APIs.

                    "Don't give it access" is like saying "don't connect to the internet" in 1995. The question isn't whether agents get these permissions. They will. The question is what happens when they do.

                      • nine_k

                        yesterday at 10:50 PM

                        Let's see how well it works for them. Apparently Salesforce had been a bit overly enthusiastic about layoffs, and recently had to backtrack.

                • nine_k

                  yesterday at 10:55 PM

                  How do we expect everything to go all right if we give prod access to a pack of very smart dogs that know some key tricks? And now the same question, but when the humans actually leave the room?

                  My answer is simple: it just won't be all right this way. The problems will cost the managers who drank too much kool-aid; maybe they already do (check out what was happening at Cloudflare recently). Sanity will return, now as a hard-won lesson.

          • stephenr

            today at 5:37 AM

            If at this point you (where you may be a person or a company) still think relying on spicy autocomplete is a smart decision, I can't fucking help you, and you deserve whatever bad things happen to you.

            This is akin to saying "we are fully committed to slapping together sql queries directly from request data, but I wonder if it's risky?"

            Part of security awareness is knowing when something is simply not worth the risks.
