As organisations move from AI assistants to more agentic workflows, the risk picture starts to change. A document is no longer just something an employee reads. It may now be something an AI system reads, interprets, and acts on. That sounds efficient, and often it is. But it also creates a new control question for GRC teams: what if the document contains instructions designed to manipulate the workflow itself?
What is it?
Prompt injection is often described as a technical issue, but that framing is now too narrow.
In simple terms, prompt injection is when content presented to an AI system tries to influence how that system behaves. Sometimes it is obvious, such as text that says “ ignore previous instruction. ” Sometimes it is more subtle, hidden inside otherwise normal looking business content.
This matters more in agentic AI than it did in earlier chatbot use cases. With a simple assistant, the damage might be limited to a bad answer. With agentic workflows, the AI may be extracting requirements, summarising documents, classifying issues, triggering tasks, or feeding downstream systems. That means untrusted content can affect not just output, but process.
This is where the shift becomes important for GRC leaders. Many organisations still think of uploaded documents as business inputs. In an agentic workflow, they should also be treated as potential attack surfaces. An RFx document, vendor response, policy submission, contract, or emailed attachment may contain content that is harmless to a human reader but problematic for an AI-driven process.
The misconception is that prompt injection is just a model problem for technical teams to solve. In practice, it is also a governance, risk, and control issue. It sits alongside data quality, access control, auditability, human oversight, and exception handling.
What does it mean from a business perspective?
This is not just a security issue. It is an operating model issue.
- Trusted business content may need to be treated as untrusted machine input. A document that looks ordinary from a human point of view may still contain instructions that influence an AI workflow.
- The risk is broader than inaccurate output. In agentic settings, the concern is not only that the AI gets something wrong. It is that the workflow itself may be nudged, redirected, or corrupted.
- Existing controls may not be enough. Traditional document review, approval, and retention controls were not designed for systems that interpret and act on content autonomously or semi-autonomously.
- Auditability becomes more important. If an AI-driven process is influenced by malicious or manipulative content, organisations need logs, flagged paths, confidence scores, and clear evidence of what happened.
- Control design needs to move closer to execution. It is not enough to have policy statements about responsible AI. The controls need to exist inside the workflow, at the points where content is ingested, interpreted, and passed downstream.
- Human oversight still matters, but it needs structure. “Human in the loop” is not helpful if the person reviewing the output has no visibility into what was flagged or why.
- This will increasingly become a GRC conversation. As organisations deploy more agents across procurement, operations, customer service, and internal processes, prompt injection will become part of control assessments, internal audit planning, and AI risk registers.
What do I do with it?
The good news is that this is manageable. But it does need to be addressed deliberately.
- Start by updating your mental model. Treat external documents and uploaded content as untrusted input when they are being processed by AI systems.
- Map where agentic workflows ingest business content. Look for places where documents, forms, emails, or attachments are being read and acted on by AI.
- Add control gates before downstream action. High-risk workflows should include checks before extracted content is used by other agents, automation steps, or decision-support processes.
- Require inspectable controls. Where possible, use mechanisms that produce reviewable outputs such as flagged fields, severity ratings, confidence scores, and audit logs.
- Define abort and escalation paths. If suspicious content is detected, the workflow should stop safely and route the case for review rather than continue by default.
- Bring GRC into design earlier. This is not something to assess only after deployment. Risk, audit, compliance, and control teams should be involved while workflows are being designed.
- Include prompt injection in AI risk assessments. If your current AI governance approach covers privacy, bias, and acceptable use but not workflow manipulation, there is a gap.
- Review human oversight for quality, not just presence. Make sure reviewers have the information they need to judge whether the workflow should proceed.
As AI systems become more agentic, the control conversation needs to become more practical. Prompt injection is not just a technical curiosity. It is a real governance and assurance issue wherever AI reads business content and acts on it.
The organisations that handle this well will not be the ones that avoid agentic AI. They will be the ones that build it with better control points, better visibility, and a healthier respect for untrusted input. If you work in GRC, now is a good time to start asking where your documents may be doing more than informing the workflow.
