One thing that jumps out in these incidents is how quickly we shift from "package integrity" to "operator integrity." Once an LLM is in the loop (even as a helper), it's effectively acting as an operator that can influence time-critical actions: who you contact, what you run, and what you trust.
In more regulated environments we deal with this by separating advice, authority, and evidence (the receipts). The useful analogue here is to keep the model in the "propose" role, require deterministic gates for any action with side effects, and log every decision as an auditable trail.
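To make that concrete, here's a minimal sketch of the propose/gate split in Python. Everything in it is illustrative: the structured proposal format, the `gate` function, and the allowlists are hypothetical names, not any particular framework's API. The point is that the code path with side effects contains no model call, only a deterministic check plus an append-only log.

```python
import json
import logging
import subprocess
from datetime import datetime, timezone

logging.basicConfig(filename="gate_audit.log", level=logging.INFO)

# Hypothetical policy: the deterministic gate, not the model, decides.
ALLOWED_COMMANDS = {"pip", "npm"}
ALLOWED_REGISTRIES = {"pypi.org", "registry.npmjs.org"}

def gate(proposal: dict) -> bool:
    """Deterministic check on a model-proposed action. No LLM in this path."""
    return (
        proposal.get("command") in ALLOWED_COMMANDS
        and proposal.get("registry") in ALLOWED_REGISTRIES
    )

def execute(proposal: dict) -> None:
    """Run a proposal only if the gate passes; log the decision either way."""
    allowed = gate(proposal)
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "proposal": proposal,
        "decision": "allow" if allowed else "block",
    }))
    if allowed:
        subprocess.run(
            [proposal["command"], "install", proposal["package"]],
            check=True,
        )

# e.g. a structured proposal emitted by the model:
# execute({"command": "pip", "registry": "pypi.org", "package": "requests"})
```

Swap the allowlist for whatever policy engine you already run; the invariant is that the model's output is data fed *into* the gate, never the gate itself.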
I don't think this eliminates the problem (attackers will still attack), but it changes the failure mode from "the assistant talked me into doing a dangerous thing" to "the assistant suggested it and the policy/gate blocked it." That's the difference between a contained incident and a headline.