From Excitement to Assurance: Making Your AI Programme Compliant by Design

From Excitement to Assurance: Making Your AI Programme Compliant by Design

Glean

Dec 1, 2025

A man is sitting at a desk in a modern office, using dual monitors displaying code and a collaborative platform, illustrating the use of Glean Code Search and Writing Tools.
A man is sitting at a desk in a modern office, using dual monitors displaying code and a collaborative platform, illustrating the use of Glean Code Search and Writing Tools.

Not sure what to do next with AI?
Assess readiness, risk, and priorities in under an hour.

Not sure what to do next with AI?
Assess readiness, risk, and priorities in under an hour.

➔ Start the AI Readiness Pack

The question that matters now

Is your AI programme creating a compliance risk—or building a competitive advantage? For many decision‑makers, the thrill of AI pilots meets the hard edge of governance: prompt injection, data residency, and auditability. The answer isn’t to slow down. It’s to design in controls so innovation and compliance move together.

Why AI goes wrong in regulated environments

Most incidents aren’t science‑fiction, they’re governance gaps:

  • Prompt injection & insecure output handling can hijack agent behaviour and leak data.

  • Unclear data residency complicates GDPR and sector duties.

  • Opaque processes erode trust with Legal, Security and Works Councils.

Principle: Treat AI like any high‑impact system—threat model it, restrict permissions, monitor it, and prove what happened.

What’s changed in 2025 (good news for compliance)

  • Stronger patterns for LLM security: OWASP’s LLM Top‑10 calls out Prompt Injection (LLM01), Insecure Output Handling (LLM02) and more—giving teams a shared checklist.

  • Risk frameworks you can adopt: NIST’s AI RMF provides a practical spine for policies, controls, and testing.

  • Real data residency options: Major vendors now offer in‑region storage/processing for ChatGPT Enterprise/Edu and Microsoft 365 Copilot, with EU/UK boundary commitments and expanding in‑country processing.

  • Management standards: ISO/IEC 42001 formalises an AI Management System—helpful when auditors ask, “What’s your documented approach?”

Three risks, and how to reduce them fast

1) Prompt injection (and friends)

The risk: Crafted inputs cause the assistant to ignore rules, exfiltrate data, or execute unsafe actions.

Defences that work:

  • Prompt partitioning (separate system policy, tools, and user input) and strip instructions from retrieved content.

  • Retrieval allow‑lists + provenance tags; restrict tool scope and run with least privilege.

  • Input/output filtering (PII, secrets, URLs, code) and content security policies for agent browsing.

  • Continuous red‑teaming using OWASP tests; log and replay attacks as unit tests.

2) Data residency & sovereignty

The risk: Data processed or stored outside approved regions creates regulatory exposure and procurement friction.

What to implement:

  • Choose in‑region at‑rest storage where available (EU/UK/US and expanded regions for ChatGPT Enterprise/Edu/API).

  • For Microsoft estates, align to the EU Data Boundary and plan for in‑country Copilot processing in the UK and other markets as it rolls out.

  • Document where prompts, outputs, embeddings and logs reside; set retention, encryption, and access reviews.

3) Transparency & auditability

The risk: If you can’t show your workings, you can’t prove compliance.

What to implement:

  • Citations and link‑back to the source of truth in every high‑stakes answer.

  • Signed logs for prompts, sources, actions and outputs; privacy‑preserving where appropriate.

  • Adopt an AI Management System (ISO/IEC 42001) to turn practice into policy.

Compliance‑by‑design in 90 days

Weeks 1–3: Baseline & policy

  • Threat model top use‑cases (OWASP LLM risks). Define data classes and allowed sources.

  • Select residency regions; set retention for prompts, outputs, and caches.

  • Draft lightweight AI policy tied to NIST AI RMF functions (Map–Measure–Manage–Govern).

Weeks 4–8: Pilot with guardrails

  • Enable in‑region storage/processing; mirror SSO/SCIM permissions.

  • Implement prompt partitioning, filtering, allow‑lists and audit logs.

  • Red‑team weekly; fix findings; create runbooks for incidents.

Weeks 9–12: Prove & scale

  • Produce an assurance pack (controls, diagrams, DPIA, records of processing).

  • Train teams on "how to check" and escalation paths.

  • Schedule quarterly control reviews; track incidents and MTTR.

The Bottom Line

You don’t have to choose between innovation and compliance. With clear guardrails, security patterns, residency choices, and audit trails, AI becomes safer and more valuable. Build trust by showing your workings and storing data where it should live.

FAQs

What exactly is prompt injection? It’s malicious input that manipulates an AI system’s behaviour (e.g., overriding rules or leaking data). Treat it like a new form of injection with layered mitigations and testing.

Can we keep AI data in the UK or EU? Yes. Eligible enterprise tiers now support in‑region storage/processing, with expanded options across multiple geographies. Confirm availability for your plan and turn it on per project/tenant.

Do we need ISO/IEC 42001? Not required, but it helps auditors and partners understand your management system for AI. It pairs well with NIST AI RMF and existing ISO 27001 controls.

Will this slow us down? No. Most controls are configuration and process. The result is fewer escalations, faster approvals, and less rework.

The question that matters now

Is your AI programme creating a compliance risk—or building a competitive advantage? For many decision‑makers, the thrill of AI pilots meets the hard edge of governance: prompt injection, data residency, and auditability. The answer isn’t to slow down. It’s to design in controls so innovation and compliance move together.

Why AI goes wrong in regulated environments

Most incidents aren’t science‑fiction, they’re governance gaps:

  • Prompt injection & insecure output handling can hijack agent behaviour and leak data.

  • Unclear data residency complicates GDPR and sector duties.

  • Opaque processes erode trust with Legal, Security and Works Councils.

Principle: Treat AI like any high‑impact system—threat model it, restrict permissions, monitor it, and prove what happened.

What’s changed in 2025 (good news for compliance)

  • Stronger patterns for LLM security: OWASP’s LLM Top‑10 calls out Prompt Injection (LLM01), Insecure Output Handling (LLM02) and more—giving teams a shared checklist.

  • Risk frameworks you can adopt: NIST’s AI RMF provides a practical spine for policies, controls, and testing.

  • Real data residency options: Major vendors now offer in‑region storage/processing for ChatGPT Enterprise/Edu and Microsoft 365 Copilot, with EU/UK boundary commitments and expanding in‑country processing.

  • Management standards: ISO/IEC 42001 formalises an AI Management System—helpful when auditors ask, “What’s your documented approach?”

Three risks, and how to reduce them fast

1) Prompt injection (and friends)

The risk: Crafted inputs cause the assistant to ignore rules, exfiltrate data, or execute unsafe actions.

Defences that work:

  • Prompt partitioning (separate system policy, tools, and user input) and strip instructions from retrieved content.

  • Retrieval allow‑lists + provenance tags; restrict tool scope and run with least privilege.

  • Input/output filtering (PII, secrets, URLs, code) and content security policies for agent browsing.

  • Continuous red‑teaming using OWASP tests; log and replay attacks as unit tests.

2) Data residency & sovereignty

The risk: Data processed or stored outside approved regions creates regulatory exposure and procurement friction.

What to implement:

  • Choose in‑region at‑rest storage where available (EU/UK/US and expanded regions for ChatGPT Enterprise/Edu/API).

  • For Microsoft estates, align to the EU Data Boundary and plan for in‑country Copilot processing in the UK and other markets as it rolls out.

  • Document where prompts, outputs, embeddings and logs reside; set retention, encryption, and access reviews.

3) Transparency & auditability

The risk: If you can’t show your workings, you can’t prove compliance.

What to implement:

  • Citations and link‑back to the source of truth in every high‑stakes answer.

  • Signed logs for prompts, sources, actions and outputs; privacy‑preserving where appropriate.

  • Adopt an AI Management System (ISO/IEC 42001) to turn practice into policy.

Compliance‑by‑design in 90 days

Weeks 1–3: Baseline & policy

  • Threat model top use‑cases (OWASP LLM risks). Define data classes and allowed sources.

  • Select residency regions; set retention for prompts, outputs, and caches.

  • Draft lightweight AI policy tied to NIST AI RMF functions (Map–Measure–Manage–Govern).

Weeks 4–8: Pilot with guardrails

  • Enable in‑region storage/processing; mirror SSO/SCIM permissions.

  • Implement prompt partitioning, filtering, allow‑lists and audit logs.

  • Red‑team weekly; fix findings; create runbooks for incidents.

Weeks 9–12: Prove & scale

  • Produce an assurance pack (controls, diagrams, DPIA, records of processing).

  • Train teams on "how to check" and escalation paths.

  • Schedule quarterly control reviews; track incidents and MTTR.

The Bottom Line

You don’t have to choose between innovation and compliance. With clear guardrails, security patterns, residency choices, and audit trails, AI becomes safer and more valuable. Build trust by showing your workings and storing data where it should live.

FAQs

What exactly is prompt injection? It’s malicious input that manipulates an AI system’s behaviour (e.g., overriding rules or leaking data). Treat it like a new form of injection with layered mitigations and testing.

Can we keep AI data in the UK or EU? Yes. Eligible enterprise tiers now support in‑region storage/processing, with expanded options across multiple geographies. Confirm availability for your plan and turn it on per project/tenant.

Do we need ISO/IEC 42001? Not required, but it helps auditors and partners understand your management system for AI. It pairs well with NIST AI RMF and existing ISO 27001 controls.

Will this slow us down? No. Most controls are configuration and process. The result is fewer escalations, faster approvals, and less rework.

Get practical advice delivered to your inbox

By subscribing you consent to Generation Digital storing and processing your details in line with our privacy policy. You can read the full policy at gend.co/privacy.

Ready to get the support your organisation needs to successfully use AI?

Miro Solutions Partner
Asana Platinum Solutions Partner
Notion Platinum Solutions Partner
Glean Certified Partner

Ready to get the support your organisation needs to successfully use AI?

Miro Solutions Partner
Asana Platinum Solutions Partner
Notion Platinum Solutions Partner
Glean Certified Partner

Generation
Digital

UK Office

Generation Digital Ltd
33 Queen St,
London
EC4R 1AP
United Kingdom

Canada Office

Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canada

USA Office

Generation Digital Americas Inc
77 Sands St,
Brooklyn, NY 11201,
United States

EU Office

Generation Digital Software
Elgee Building
Dundalk
A91 X2R3
Ireland

Middle East Office

6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

UK Fast Growth Index UBS Logo
Financial Times FT 1000 Logo
Febe Growth 100 Logo (Background Removed)

Company No: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy

Generation
Digital

UK Office

Generation Digital Ltd
33 Queen St,
London
EC4R 1AP
United Kingdom

Canada Office

Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canada

USA Office

Generation Digital Americas Inc
77 Sands St,
Brooklyn, NY 11201,
United States

EU Office

Generation Digital Software
Elgee Building
Dundalk
A91 X2R3
Ireland

Middle East Office

6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

UK Fast Growth Index UBS Logo
Financial Times FT 1000 Logo
Febe Growth 100 Logo (Background Removed)


Company No: 256 9431 77
Terms and Conditions
Privacy Policy
Copyright 2026