From Excitement to Assurance: Making Your AI Programme Compliant by Design

Glean

Dec 1, 2025

The question that matters now

Is your AI programme creating a compliance risk—or building a competitive advantage? For many decision‑makers, the thrill of AI pilots meets the hard edge of governance: prompt injection, data residency, and auditability. The answer isn’t to slow down. It’s to design in controls so innovation and compliance move together.

Why AI goes wrong in regulated environments

Most incidents aren’t science‑fiction, they’re governance gaps:

  • Prompt injection & insecure output handling can hijack agent behaviour and leak data.

  • Unclear data residency complicates GDPR and sector duties.

  • Opaque processes erode trust with Legal, Security and Works Councils.

Principle: Treat AI like any high‑impact system—threat model it, restrict permissions, monitor it, and prove what happened.

What’s changed in 2025 (good news for compliance)

  • Stronger patterns for LLM security: OWASP’s LLM Top‑10 calls out Prompt Injection (LLM01), Insecure Output Handling (LLM02) and more—giving teams a shared checklist.

  • Risk frameworks you can adopt: NIST’s AI RMF provides a practical spine for policies, controls, and testing.

  • Real data residency options: Major vendors now offer in‑region storage/processing for ChatGPT Enterprise/Edu and Microsoft 365 Copilot, with EU/UK boundary commitments and expanding in‑country processing.

  • Management standards: ISO/IEC 42001 formalises an AI Management System—helpful when auditors ask, “What’s your documented approach?”

Three risks, and how to reduce them fast

1) Prompt injection (and friends)

The risk: Crafted inputs cause the assistant to ignore rules, exfiltrate data, or execute unsafe actions.

Defences that work:

  • Prompt partitioning (separate system policy, tools, and user input) and strip instructions from retrieved content.

  • Retrieval allow‑lists + provenance tags; restrict tool scope and run with least privilege.

  • Input/output filtering (PII, secrets, URLs, code) and content security policies for agent browsing.

  • Continuous red‑teaming using OWASP tests; log and replay attacks as unit tests.

2) Data residency & sovereignty

The risk: Data processed or stored outside approved regions creates regulatory exposure and procurement friction.

What to implement:

  • Choose in‑region at‑rest storage where available (EU/UK/US and expanded regions for ChatGPT Enterprise/Edu/API).

  • For Microsoft estates, align to the EU Data Boundary and plan for in‑country Copilot processing in the UK and other markets as it rolls out.

  • Document where prompts, outputs, embeddings and logs reside; set retention, encryption, and access reviews.

3) Transparency & auditability

The risk: If you can’t show your workings, you can’t prove compliance.

What to implement:

  • Citations and link‑back to the source of truth in every high‑stakes answer.

  • Signed logs for prompts, sources, actions and outputs; privacy‑preserving where appropriate.

  • Adopt an AI Management System (ISO/IEC 42001) to turn practice into policy.

Compliance‑by‑design in 90 days

Weeks 1–3: Baseline & policy

  • Threat model top use‑cases (OWASP LLM risks). Define data classes and allowed sources.

  • Select residency regions; set retention for prompts, outputs, and caches.

  • Draft lightweight AI policy tied to NIST AI RMF functions (Map–Measure–Manage–Govern).

Weeks 4–8: Pilot with guardrails

  • Enable in‑region storage/processing; mirror SSO/SCIM permissions.

  • Implement prompt partitioning, filtering, allow‑lists and audit logs.

  • Red‑team weekly; fix findings; create runbooks for incidents.

Weeks 9–12: Prove & scale

  • Produce an assurance pack (controls, diagrams, DPIA, records of processing).

  • Train teams on "how to check" and escalation paths.

  • Schedule quarterly control reviews; track incidents and MTTR.

The Bottom Line

You don’t have to choose between innovation and compliance. With clear guardrails, security patterns, residency choices, and audit trails, AI becomes safer and more valuable. Build trust by showing your workings and storing data where it should live.

FAQs

What exactly is prompt injection? It’s malicious input that manipulates an AI system’s behaviour (e.g., overriding rules or leaking data). Treat it like a new form of injection with layered mitigations and testing.

Can we keep AI data in the UK or EU? Yes. Eligible enterprise tiers now support in‑region storage/processing, with expanded options across multiple geographies. Confirm availability for your plan and turn it on per project/tenant.

Do we need ISO/IEC 42001? Not required, but it helps auditors and partners understand your management system for AI. It pairs well with NIST AI RMF and existing ISO 27001 controls.

Will this slow us down? No. Most controls are configuration and process. The result is fewer escalations, faster approvals, and less rework.

Ready to get the support your organisation needs to successfully use AI?

Miro Solutions Partner
Asana Platinum Solutions Partner
Notion Platinum Solutions Partner
Glean Certified Partner

Ready to get the support your organisation needs to successfully use AI?

Miro Solutions Partner
Asana Platinum Solutions Partner
Notion Platinum Solutions Partner
Glean Certified Partner

Generation
Digital

UK Office
33 Queen St,
London
EC4R 1AP
United Kingdom

Canada Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada

NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
United States

EMEA Office
Charlemont St, Saint Kevin's, Dublin,
D02 VN88,
Ireland

Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

UK Fast Growth Index UBS Logo
Financial Times FT 1000 Logo
Febe Growth 100 Logo

Company No: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy

Generation
Digital

UK Office
33 Queen St,
London
EC4R 1AP
United Kingdom

Canada Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada

NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
United States

EMEA Office
Charlemont St, Saint Kevin's, Dublin,
D02 VN88,
Ireland

Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

UK Fast Growth Index UBS Logo
Financial Times FT 1000 Logo
Febe Growth 100 Logo


Company No: 256 9431 77
Terms and Conditions
Privacy Policy
Copyright 2026


<script type="application/ld+json"> { "@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [ { "@type": "Question", "name": "What exactly is prompt injection?", "acceptedAnswer": { "@type": "Answer", "text": "Prompt injection is malicious input that manipulates an AI system to ignore rules, leak data or take unsafe actions. Treat it like an injection class: layer mitigations, monitor, and test continuously." } }, { "@type": "Question", "name": "How do we mitigate prompt injection risks?", "acceptedAnswer": { "@type": "Answer", "text": "Partition prompts (system, tools, user), strip instructions from retrieved content, restrict tools with least privilege, allow‑list retrieval sources, filter inputs/outputs for PII and secrets, and red‑team regularly using OWASP LLM tests." } }, { "@type": "Question", "name": "Can we keep AI data in the UK or EU?", "acceptedAnswer": { "@type": "Answer", "text": "Yes. Eligible enterprise tiers support in‑region storage/processing. Configure data residency per tenant or project, and document where prompts, outputs, embeddings and logs are stored, plus retention and encryption." } }, { "@type": "Question", "name": "Which frameworks help us prove compliance?", "acceptedAnswer": { "@type": "Answer", "text": "Use NIST AI RMF as the policy spine and consider ISO/IEC 42001 for an AI Management System. Map controls to existing ISO 27001 practices, keep signed logs, and enable citations for high‑stakes answers." } }, { "@type": "Question", "name": "Will adding controls slow down delivery?", "acceptedAnswer": { "@type": "Answer", "text": "No—most controls are configuration and process. With residency, permissions and audit logs in place, approvals are faster and rework drops. The net effect is quicker, safer delivery." } } ] } </script>