This means that the model itself does not have access to credentials in the 1Password vault. When prompted to log into AWS, for instance, the model must instead follow a separate, well-defined, permissioned flow via OAuth or a dedicated credentials broker tool. Users must see prompts they can understand, so they know exactly what access they are granting.
Additionally, raw credentials should never enter the LLM context. “Raw secrets have no place in prompts, embeddings, or fine-tuning data,” Srinivas emphasized. Sending credentials over an LLM-driven data channel is “the equivalent of typing into its prompt: ‘My API token for AWS is XXXXX, please go ahead and use it on my behalf.’”
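The broker pattern described above can be sketched as follows. This is a minimal, illustrative example, not 1Password's implementation: the class name, vault backend, and credential references are all assumptions. The key property is that the LLM tool call carries only an opaque reference, and the secret is resolved and used entirely on the broker side, so it never appears in the model's context.

```python
class CredentialsBroker:
    """Hypothetical broker: resolves credential references outside the model."""

    def __init__(self):
        # Stand-in for a real vault backend; names here are illustrative only.
        self._vault = {"aws-deploy": "AKIA-EXAMPLE-SECRET"}

    def execute(self, cred_ref: str, action: str, approved_by: str) -> str:
        # The user must explicitly approve the grant (the "prompt they can
        # understand" from the article).
        if not approved_by:
            raise PermissionError("user approval required")
        secret = self._vault[cred_ref]  # resolved here; never returned
        # ... perform `action` against the provider using `secret` ...
        _ = secret
        return f"{action}: ok"  # only the opaque outcome reaches the model


broker = CredentialsBroker()
result = broker.execute("aws-deploy", "list-buckets", approved_by="alice")
print(result)  # → list-buckets: ok — no secret material in the response
```

The agent sees `"aws-deploy"` and `"list-buckets: ok"`; at no point does the raw token cross the LLM-driven data channel.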
Auditability is also critical; all actions must leave an audit trail so that enterprises have visibility into who (human or synthetic) accessed what, and what actions took place, according to Srinivas. “There will be no hidden AI decision-making, silent escalations, or vague ‘powered by AI’ labels without explanation.”
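An audit trail of that kind might look like the structured log sketch below. The field names and the `human`/`synthetic` actor distinction follow the article's description, but the schema itself is an assumption, not a documented 1Password format.

```python
import json
from datetime import datetime, timezone


def audit_entry(actor: str, actor_type: str, resource: str, action: str) -> str:
    """Build one audit-log line: who (human or synthetic) did what, to what, when."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "actor_type": actor_type,  # "human" or "synthetic"
        "resource": resource,
        "action": action,
    }
    return json.dumps(entry)


line = audit_entry("agent-42", "synthetic", "aws:s3", "list-buckets")
print(line)
```

Emitting one such line per agent action gives enterprises the per-actor visibility Srinivas describes, with no silent escalations.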
This story originally appeared on Computerworld