Why This Matters

Teams accidentally paste personal, confidential, or contract-relevant data into AI prompts — creating data leakage and audit risk. Without clear policies and controls, AI adoption becomes a compliance liability. This happens across all departments: a lawyer pasting client names and case details into a public AI tool to summarize a document; an HR manager uploading employee performance data to generate a review draft; a finance analyst sharing contract terms to produce a summary. In each case the intent is productive, but the data handling is non-compliant.

GDPR requires a lawful basis for processing personal data, and sending that data to an external AI model constitutes processing. Without documented policies, approved tools, and evidence of user training, organizations cannot demonstrate compliance during an audit or investigation. ISO 27001 clauses covering information classification and information security policies apply directly to AI tool usage decisions. Establishing prompting governance before AI adoption scales is significantly easier than retrofitting it after incidents occur or regulators inquire.

Governance Framework

Prompt Policy & Guardrails

  • What may be prompted
  • What must be anonymized/redacted
  • Approved tools and environments
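The anonymization/redaction bullet above can be sketched in code. The patterns and the `redact_prompt` helper are illustrative assumptions, not part of any named product; a production setup would lean on a DLP engine rather than hand-written regular expressions:

```python
import re

# Hypothetical patterns for pre-prompt redaction; a real policy would
# rely on a DLP engine, not hand-maintained regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace each match with a placeholder so no raw PII reaches the model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_prompt("Contact jane.doe@example.com or +49 151 23456789."))
# → Contact [EMAIL] or [PHONE].
```

The point of the sketch is the workflow, not the patterns: redaction happens before the prompt leaves the approved environment, so the policy holds even when the downstream tool logs its inputs.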

Prompt Logging & Governance

  • Traceability for audits
  • Role-based access controls
  • Incident detection and response
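A minimal sketch of audit-ready prompt logging, assuming a simple append-only JSONL file (all names here are hypothetical). Storing a hash instead of the raw prompt keeps the log itself from becoming a second copy of sensitive data:

```python
import datetime
import hashlib
import json

def log_prompt_event(user: str, role: str, tool: str, prompt: str,
                     path: str = "prompt_audit.jsonl") -> dict:
    """Append one audit record per prompt; store a SHA-256 digest
    rather than the prompt text so the log holds no PII."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The hash still supports incident response: if a leaked prompt surfaces later, it can be matched against the log to establish who sent it, when, and through which tool.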

Data Classification in Prompts

  • Sensitivity labels integration
  • DLP controls for AI tools
  • Microsoft Purview alignment
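A deny-by-default mapping from sensitivity labels to approved tools can be sketched as follows. The `ALLOWED` table and `may_prompt` helper are assumptions for illustration; a real deployment would read labels from Purview/MIP document metadata rather than a hard-coded dict:

```python
# Hypothetical label-to-tool policy table; real labels would come from
# Microsoft Purview / MIP metadata, not from code.
ALLOWED = {
    "Public":       {"ChatGPT Enterprise", "Microsoft Copilot"},
    "Internal":     {"Microsoft Copilot"},
    "Confidential": set(),  # no external AI tool approved
}

def may_prompt(label: str, tool: str) -> bool:
    """Deny by default: unknown labels are treated as most restrictive."""
    return tool in ALLOWED.get(label, set())
```

Treating an unknown label as the most restrictive case mirrors how DLP policies should behave: unclassified content is blocked until someone classifies it, not waved through.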

Built for Regulated Environments

  • GDPR and ISO 27001 alignment as baseline requirements
  • NIS2 considerations for security posture
  • DLP, Information Protection, Audit Logging via Microsoft Purview
  • MIP/Sensitivity Labels integration for data classification

What You Get

AI Prompting Standard

Comprehensive policy document defining acceptable use, prohibited practices, and governance requirements. The standard specifies which data classification levels may be entered into which AI tools, which tools are approved for which use cases, and what anonymization or redaction is required before prompting with sensitive content.

Prompt Templates & Training

Ready-to-use templates for common use cases and user training materials. Templates are designed for the most frequently requested AI tasks — document summarization, draft generation, policy lookup — and are pre-cleared for use with specified data classification levels, removing the need for users to make individual compliance judgments each time.
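One way such a pre-cleared template could be represented, as a sketch in which the ids, labels, and field names are assumptions rather than excerpts from the actual deliverable:

```python
LABELS = ["Public", "Internal", "Confidential"]  # ascending sensitivity

# Illustrative pre-cleared template: the clearance decision is made once,
# at template design time, instead of by each user at each prompt.
TEMPLATE = {
    "id": "summarize-internal-doc",
    "max_label": "Internal",                  # highest label cleared for this template
    "approved_tools": ["Microsoft Copilot"],
    "prompt": (
        "Summarize the following document in five bullet points. "
        "Do not reproduce names, contact details, or contract values.\n\n{document}"
    ),
}

def render(template: dict, document: str, label: str, tool: str) -> str:
    """Refuse to render when the tool or data label falls outside the template's clearance."""
    if tool not in template["approved_tools"]:
        raise PermissionError(f"{tool} is not approved for template {template['id']}")
    if LABELS.index(label) > LABELS.index(template["max_label"]):
        raise PermissionError(f"label {label} exceeds cleared level {template['max_label']}")
    return template["prompt"].format(document=document)
```

Because the clearance lives in the template, a user who picks an approved template for an Internal document cannot accidentally route Confidential content to an external tool.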

Governance Dashboard Concept

KPIs, incident tracking, and adoption metrics for ongoing governance. The dashboard concept defines which signals to monitor through Microsoft Purview and Entra ID audit logs, what thresholds constitute a compliance event requiring review, and how to report AI usage patterns to information security leadership on a regular cadence.
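The threshold logic described above can be sketched as a small KPI roll-up over exported audit records; the field names and the threshold value are assumptions, not the Purview schema:

```python
# Hypothetical weekly roll-up over audit records (e.g. exported from the
# Microsoft Purview audit log); "outcome" and the threshold are assumed.
def weekly_kpis(events: list, blocked_threshold: int = 5) -> dict:
    """Summarize prompt activity and flag weeks that need a compliance review."""
    total = len(events)
    blocked = sum(1 for e in events if e.get("outcome") == "blocked")
    return {
        "prompts_total": total,
        "prompts_blocked": blocked,
        "block_rate": round(blocked / total, 3) if total else 0.0,
        "review_required": blocked >= blocked_threshold,
    }
```

Reporting a block *rate* alongside the raw count matters during adoption: a rising count with a falling rate usually means growth, while a rising rate signals a training or policy gap.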

Regulatory Alignment

AI prompting governance touches multiple regulatory frameworks that apply to regulated organizations operating in the EU. GDPR mandates that personal data not be transferred to AI systems without a lawful basis and appropriate safeguards. ISO 27001 requires documented policies for data handling, including AI tool usage. NIS2 introduces additional cybersecurity risk-management requirements that extend to AI system interactions.

Microsoft Purview provides the technical layer for enforcing prompting policies through sensitivity labels, DLP rules, and audit logging integrated with Microsoft 365. Organizations that establish compliant AI prompting standards reduce audit risk, demonstrate regulatory maturity, and enable AI adoption without exposing sensitive or personal data to external model providers.

When this is the right fit: AI prompting governance is the correct starting point when an organization is already using or planning to introduce AI tools — such as Microsoft Copilot, ChatGPT Enterprise, or similar — and has not yet defined which data categories may be entered into those tools, which tools are approved for which use cases, or how usage is monitored and audited. It is particularly relevant for teams in legal, HR, finance, and customer-facing roles where confidential or personal data is routinely handled.

What this doesn't replace: Prompting governance defines policies and trains users — it does not replace technical data loss prevention controls, identity and access management configuration, or AI integration architecture. A prompting policy alone cannot prevent a determined user from entering restricted data; it must be complemented by DLP rules, sensitivity label enforcement, and access controls configured at the platform level. For technical enforcement, see the GDPR-Compliant AI Integrations page.

Best fit and known limitations

Best for

Teams already using ChatGPT, Claude, or Copilot who need policies, guardrails, prompt logging, and data classification to make daily use defensibly compliant with GDPR and ISO 27001.

Not the right fit

Greenfield AI build-out without existing usage (engage AI Implementation instead); air-gapped or sovereign workloads (use the localLLM project).

Known limitations

Cloud LLMs cannot be made fully sovereign by policy alone; high-sensitivity data still benefits from on-prem inference. Guardrail effectiveness scales with the discipline of training, review, and policy enforcement after rollout.

Need AI prompting governance?

Book a session to assess your current AI usage and implement compliant policies.