simonw 5 hours ago

I added this section to my post just now: https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...

> On thinking about this further there’s one aspect of the Rule of Two model that doesn’t work for me: the Venn diagram above marks the combination of untrustworthy inputs and the ability to change state as “safe”, but that’s not right. Even without access to private systems or sensitive data that pairing can still produce harmful results. Unfortunately adding an exception for that pair undermines the simplicity of the “Rule of Two” framing!

  • wunderwuzzi23 6 minutes ago

    Good point. A few thoughts I would add from my perspective:

    - The model is untrusted. Even if prompt injection is solved, we probably still would not be able to trust the model, because of possible backdoors or hallucinations. Anthropic recently showed that it takes only a few hundred documents to have trigger words trained into a model.

    - Data Integrity. We also need to talk about data integrity and availability (the full CIA triad, not just confidentiality), e.g. private data being modified during inference. Which leads us to the third point...

    - Prompt injection that is aimed at having the AI produce output that makes humans take certain actions (not tool invocations).

    Generally, I call this deviation from "don't trust the model" the "Normalization of Deviance in AI": we seem to start trusting the model more and more over time, and I'm not sure that is the right thing in the long term.

  • kloud 3 hours ago

    Also in the context of LLMs I think model weights themselves could be considered an untrusted input, because who knows what was in the training dataset. Even an innocent looking prompt could potentially trigger a harmful outcome.

    In that regard it reminds me of the CAP theorem, which also has three parts. However, in practice partitioning in distributed systems is a given, so the choice is just between availability and consistency.

    So in the case of the lethal trifecta it is either private data or external communication, but the leg between these two will always carry some risk.

  • causal an hour ago

    I think the rule of 2 would work if it kept the 3 from your lethal trifecta. "Change state" should not be paired with "communicate externally".

    And even then that's just to avoid data exfiltration - if you can't communicate externally but can change state, damage can still be done.

  • ArcHound 4 hours ago

    I love to see this. As much as we try for simple security principles, the damn things have a way of becoming complicated quickly.

    Perhaps the diagram highlights the common risky parts of these apps, and we take on more risk as we keep increasing the scope? Maybe we can introduce some handovers and protocols to separate these concerns?

jFriedensreich 3 hours ago

I am confused that this article does not talk about taint tracking. If state was mutated by an agent with untrustworthy input, the taint would transfer to the state, making it untrustworthy input too, so the reasoning of the original trifecta combined with taint tracking is more general and practical. I am also investigating the direction of tracking taints as scores rather than binary flags, as most use cases would otherwise be impossible to run autonomously at all - e.g. sensitivity scores for data and trust scores for inputs (which can be improved by e.g. human review).

One important limit that needs way more research is how to transfer the minimal needed information from a tainted context into an untainted fresh context without transferring all the taints. The only solution I currently have is compaction plus human review, if possible aided by schema enforcement and a UI optimised for the use case. This unfortunately cannot catch information encoded in ways humans cannot see, but it seems that issue will never be solvable outside alignment research.

PS: An example of how scores are helpful: using browser tab titles in the context would by definition have the worst trust score possible, but truncating titles to only the user-visible parts could lower this to a level acceptable for autonomous execution if the data was just mildly sensitive.
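
A toy sketch of what I mean by score-based taints (thresholds and field names invented purely for illustration):

    from dataclasses import dataclass

    @dataclass
    class Tainted:
        value: str
        trust: float        # 0.0 = fully untrusted (e.g. raw web content), 1.0 = trusted
        sensitivity: float  # 0.0 = public, 1.0 = highly sensitive

    def combine(*inputs: Tainted) -> tuple[float, float]:
        # Taint propagates pessimistically: lowest trust and highest sensitivity win.
        return (min(i.trust for i in inputs), max(i.sensitivity for i in inputs))

    def may_run_autonomously(trust: float, sensitivity: float) -> bool:
        # Example policy: fully trusted contexts always run; partially trusted
        # contexts only run when the data involved is mildly sensitive.
        return trust >= 0.9 or (trust >= 0.4 and sensitivity <= 0.3)

    raw_title = Tainted("Cheap flights!!! <hidden instructions>", trust=0.0, sensitivity=0.2)
    truncated = Tainted(raw_title.value[:20], trust=0.4, sensitivity=0.2)  # user-visible part only
    reviewed  = Tainted(truncated.value, trust=0.9, sensitivity=0.2)       # after human review

    for ctx in (raw_title, truncated, reviewed):
        print(ctx.value, may_run_autonomously(*combine(ctx)))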

  • simonw 15 minutes ago

    Have you seen the DeepMind CaMeL paper? It describes a taint tracking system that works by generating executable code that can have the source of data tracked as it moves through the program: https://simonwillison.net/2025/Apr/11/camel/
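
    Very roughly, the idea is that values carry their sources, and policy is checked before any side-effecting tool call. A toy illustration of that idea only (not CaMeL's actual design, all names invented):

        # Values remember where their data came from; derived values inherit sources.
        class Value:
            def __init__(self, data, sources):
                self.data = data
                self.sources = set(sources)  # e.g. {"user"} or {"web:evil.example"}

            def __add__(self, other):
                return Value(self.data + other.data, self.sources | other.sources)

        def send_email(to, body):
            # Policy: side-effecting tools refuse arguments derived from untrusted sources.
            for arg in (to, body):
                if any(s.startswith("web:") for s in arg.sources):
                    raise PermissionError(f"untrusted provenance: {arg.sources}")
            print("sending to", to.data)

        user_addr = Value("boss@example.com", {"user"})
        web_text = Value("...attacker instructions...", {"web:evil.example"})
        send_email(user_addr, Value("Summary: ", {"user"}) + web_text)  # raises PermissionError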

  • causal an hour ago

    Totally. I think the original "Lethal trifecta" post by OP only pertained to data exfiltration and never included changing state (maybe it was implied by sensitive data access).

    Rule of 2 model has holes.

gs17 an hour ago

> [A] An agent can process untrustworthy inputs

> [B] An agent can have access to sensitive systems or private data

> [C] An agent can change state or communicate externally

Somewhat reminds me of the CAP theorem, where you can pick two of three, but one is effectively required for something useful. It seems more like the choice is really between "untrustworthy inputs" and "sensitive systems", which makes sense.
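
The "pick at most two" framing boils down to something like this sketch (capability names here are my own shorthand, not from the post):

    # Hypothetical startup check for an agent's configuration.
    CAPABILITIES = {
        "untrustworthy_inputs": True,      # [A] processes web pages, emails, etc.
        "sensitive_access": True,          # [B] private data or sensitive systems
        "state_or_external_comms": False,  # [C] changes state / communicates externally
    }

    if sum(CAPABILITIES.values()) > 2:
        raise RuntimeError("Rule of Two violated: drop a capability or require human approval")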

behnamoh 8 hours ago

I actually want prompt injection to remain possible. So many lazy academic paper reviewers nowadays delegate the review process to AI. It'd be cool if we could inject prompts in the paper that would stop the AI from aiding in such situations. In my experience, prompt injection techniques work for non-reasoning models but gpt-5-high easily ignores them...

  • simonw 5 hours ago

    There was a minor scandal about exactly that a few months ago: https://asia.nikkei.com/business/technology/artificial-intel...

    "Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found."

    Amusingly I tried an experiment with some of those papers with hidden text against frontier models at the time and found that the trick didn't actually work! The models spotted the tricks and didn't fall for them.

    At least one conference has an ethics policy saying you shouldn't attempt this though: https://icml.cc/Conferences/2025/PublicationEthics

    "Submitting a paper with a "hidden" prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion."

    • cubefox an hour ago

      Intuitively it does excuse it though.

ares623 9 hours ago

I don’t know if it’s just me, but doesn’t a huge part of the value of LLMs for the general population necessitate all 3 of the circles?

Having just 2 circles requires a person in the loop, and that person will still need knowledge, experience, and a low enough throughput to meaningfully action the workload; otherwise they would just rubber-stamp everything (which is essentially the 3rd circle with extra steps).

  • QuadmasterXLII 5 hours ago

    Most current consumer LLM uses are run only once or a few times, before changing prompt and task. This causes the attacker to have to move first: they put malicious injected documents onto the internet, which are then ingested by ephemeral systems, the details of which the attacker doesn't observe.

    On the other hand, something like an AI McDonald's drive-through order taker runs over and over again. This property of running repeatedly is what allows the attacker to move second and gain the advantage.

  • pprotas 9 hours ago

    The HITL is needed to pin the accountability on an employee you can fire

    • ares623 9 hours ago

      Yeah, that seems likely. But still, even in that dystopian scenario, the incentives of the human will lead them to go through the backlog very thoroughly, which IMO defeats the productivity gains.

      Maybe there will still be some productivity gains even with the human being the bottleneck? Or the humans can be scaled out and parallelized more easily?

    • boxed 8 hours ago

      Given the incentives here, I'd bet this is mathematically identical to throwing dice and firing people.

  • mercer 8 hours ago

    Wouldn't that still add a lot of value, where the person in the loop (sadly, usually) becomes little more than the verifier, but can process a lot more work?

    Anecdotally what I'm hearing is that this is pretty much how LLMs are helping programmers get more done, including the work being less enjoyable because it involves more verification and rubber-stamping.

    For the business owner, it doesn't matter that the nature of the work has changed, as long as that one person can get more work done. Even worse, the business owner probably doesn't care as much about the quality of the resulting work, as long as it works.

    I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And if the problems arose, often quite a bit later, it was as if they hadn't made that initial decision in the first place.

    For my personal tinkering, I've all but defaulted to the LLMs returning suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever it came up with. This definitely still makes the process faster, just not as magically automatic.

kubb 8 hours ago

I’m sorry, what kind of rule is that? How does it guarantee security?

It sounds like we’re making things up at this point.

  • bawolff 6 hours ago

    It kind of sounds like a weak version of airgapping. If you can't persist state, access private data, or exfiltrate data, there is not much point to jailbreaking the LLM.

    However, it's deeply unsatisfying in the same way that securing your laptop by never turning it on is.

  • imtringued 5 hours ago

    Yeah it's nonsense, because the author has described the standard "read, process, write" flow of computation and decided that if you remove one of these three, then everything is safe.

    The correct solution is to have the system prompt be mechanically decoupled from untrustworthy data, the same way it was done with CSP (Content Security Policy) against XSS and with named parameters against SQL injection.
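
    For example, with placeholder/named parameters the database driver keeps code and data in separate channels, so untrusted input can never be parsed as SQL - a rough sketch of the analogy:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE users (name TEXT)")

        attacker_input = "Robert'); DROP TABLE users;--"

        # The query text and the data travel separately; the input is stored
        # verbatim and is never interpreted as SQL. Prompts currently have no
        # equivalent separation between instructions and data.
        con.execute("INSERT INTO users (name) VALUES (?)", (attacker_input,))
        print(con.execute("SELECT name FROM users").fetchall())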

ArcHound 5 hours ago

I'm sorry, but the rule of two is just not enough, not even as a rule of thumb.

We know how to work with security risks; the issue is that they depend on both the business context and the technical details.

This can actually do a lot of harm, as security teams now need to dispel this "great approach" to ignoring security that is backed by "a research paper they read".

Please don't try to reinvent the wheel, and if you do, please learn about the current state first (Chesterton's fence and all that).

  • jFriedensreich 4 hours ago

    Can you explain what you mean? How is Chesterton's fence, applied to AI security, helpful here? Are you just talking about not removing the "non-AI" security architecture of the software itself? I don't think anyone ever proposed that.

    • ArcHound 4 hours ago

      Right, what got me going is the reduction of plenty of cybersecurity concepts into a simple "safe" label in the diagram.

      So what I meant is that before you discard all of the current security practices, it's better to learn about the current approach.

      From another angle, maybe the diagram could be fixed by changing "safe" to "danger" and "danger" to "OMG stop". But that also discards the business perspective and the nature of the protected asset.

      I am also happy to see the edit in the article, props to the author for that!

      And to address the last question: no one has proposed that right now, true. But I have been in plenty of discussions about security approaches. And let me tell you, sometimes it only takes one sentence that the leadership likes to hear to derail the whole approach (especially if it results in cost savings). So I might be extra sensitive to such ideas, and I try to uproot them before they fully bloom.

      • jFriedensreich 3 hours ago

        Hmm, what do you mean by the current approach? This is new territory and agent safety is an unsolved problem; there is no current approach, unless you mean not building agent systems and using humans instead. The trifecta is just a tool, on the level of physics saying "ignore friction": we assume the model itself is trustworthy and not poisoned most of the time too, but of course when designing a real-world system you need to factor that in as well.

        • ArcHound 3 hours ago

          Yes, by current approach I mean security best practices for non-LLM apps. Plenty of those are directly applicable.

          And yes, LLMs have some challenges. But discarding all of the lessons and principles we've discovered over the years is not the way. And if we need to discard some of them, we should understand exactly why they are no longer applicable.

          EDIT: I know that models need to omit stuff to be useful. But this model omits too much - claiming that something is "safe" should be a red flag to all security workers.

iberator 4 hours ago

Just make it a crime if caught. 1 year in prison at least.

  • causal an hour ago

    Sure let's just remove all security, encryption, firewalls and auth - nobody will abuse vulnerabilities if it's a crime!

  • simonw 4 hours ago

    What would the crime be?

    If I have a web page that says somewhere on it "and don't forget to contact your senator!" and an LLM agent reads that page and gets confused and emails a senator should I go to jail?

r0x0r007 8 hours ago

Nice, why don't we apply the same principles to our regular applications? Ooh, right, because then we couldn't use them, and a whole industry called cybersecurity got created that's supposed to be consulted BEFORE releasing privacy nightmares and using them. But hey, regular applications can't come up with cool poems.

  • rs186 4 hours ago

    Yeah, IT tried so hard to teach us something as basic as "don't click on links in suspicious emails", yet so many people still fail at that even after multiple trainings and tests.

    But guess what? AI! Agents! <company name> Copilot! Just let them do things for you! Who would have thought there might possibly be a giant security hole?