When I ask companies about their AI policy, I get one of three answers: "we don't have one" (most common), "we told everyone not to use AI" (counterproductive — they're using it anyway, just secretly), or "we have a 20-page policy that nobody has read" (worse than no policy, because it creates a false sense of governance).

The companies that handle AI governance well have a one-page policy that every employee can understand, backed by technical controls that make compliance easy and violations hard.

The One-Page Framework

Section 1: Approved Tools. List every AI tool that's sanctioned for company use, including the specific tier. Be explicit.

Approved for all company work: [Company Anthropic API account], [GitHub Copilot Business via company org], [Company Cursor Teams license]. Approved for non-sensitive work only: [Perplexity Pro via company account]. Not approved for any company work: [ChatGPT free tier], [Google Gemini free tier], [Claude.ai free tier], [any personal AI accounts].

If it's not on the approved list, it's not approved. Simple.

Section 2: Data Classification for AI Use. Map your existing data classification to AI tool permissions.

Public information (published marketing materials, public documentation, open-source code): any approved AI tool. Internal information (internal wiki content, non-sensitive code, project plans): approved enterprise AI tools only. Confidential information (client data, source code with business logic, financial data, competitive analysis): approved enterprise AI tools with manager awareness. Restricted information (PII, health data, credentials, board materials, M&A activity): approved enterprise AI tools with explicit compliance sign-off per instance.
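The classification-to-permission mapping above can be expressed as policy-as-code, which makes it testable and embeddable in tooling like a gateway. The sketch below is illustrative: the tier names and extra-requirement strings are placeholders for your own taxonomy, not a standard schema.

```python
# Policy-as-code sketch: map data classifications to the AI tool tiers
# permitted to handle them. Names and tiers are placeholders -- substitute
# your organization's own taxonomy.

ALLOWED_TIERS = {
    "public": {"any_approved"},
    "internal": {"enterprise"},
    "confidential": {"enterprise"},   # plus manager awareness
    "restricted": {"enterprise"},     # plus per-instance compliance sign-off
}

EXTRA_REQUIREMENTS = {
    "confidential": "manager awareness",
    "restricted": "compliance sign-off per instance",
}

def check_use(classification: str, tool_tier: str) -> str:
    """Return a verdict for using a given tool tier on data of a classification."""
    allowed = ALLOWED_TIERS.get(classification)
    if allowed is None:
        # Fail closed: unclassified data gets the strictest treatment.
        return "unknown classification: treat as restricted"
    if tool_tier not in allowed and "any_approved" not in allowed:
        return "blocked"
    extra = EXTRA_REQUIREMENTS.get(classification)
    return f"allowed ({extra})" if extra else "allowed"

print(check_use("internal", "consumer"))      # blocked: enterprise only
print(check_use("restricted", "enterprise"))  # allowed, with sign-off noted
```

Failing closed on unknown classifications is the important design choice: ambiguity defaults to the most restrictive tier rather than the most convenient one.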

Section 3: Prohibited Actions. Short, specific, memorable.

Never paste credentials (API keys, passwords, tokens) into any AI tool. Never upload databases or spreadsheets containing customer PII. Never share board materials, financial projections, or M&A plans with AI tools — even enterprise tier — without CFO approval. Never use AI-generated code in production without human review. Never publish AI-generated content to customers without human review. Never use AI to make decisions about hiring, firing, or employee performance.

Section 4: Accountability. Who's responsible when AI is involved?

The person using the AI tool is responsible for the output. AI-generated code must pass the same review process as human-written code. AI-generated customer communications must be reviewed by a human before sending. AI-generated analysis must be validated before presenting to clients or executives.

Making It Enforceable

A policy that relies on employee memory and good intentions will fail. Layer technical controls on top.

Network controls. Block access to consumer AI tool domains on the corporate network. This isn't foolproof (employees have phones), but it creates friction and signals organizational seriousness.
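The blocklist logic itself is simple, whether it lives in a DNS filter or an egress proxy. A minimal sketch, assuming hostname-level blocking (the domains listed are examples drawn from the policy's unapproved consumer tiers):

```python
# Illustrative hostname blocklist check, as a DNS filter or egress proxy
# might apply it. Domains are examples; adjust to your own approved list.

BLOCKED_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def is_blocked(hostname: str) -> bool:
    """Block a hostname if it is, or is a subdomain of, a blocked domain."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d)
               for d in BLOCKED_DOMAINS)

print(is_blocked("www.chatgpt.com"))    # True: consumer tier is blocked
print(is_blocked("api.anthropic.com"))  # False: enterprise API stays reachable
```

Note the subdomain check: blocking only the bare domain while leaving `www.` variants open is a common gap in naive blocklists.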

Centralized AI gateway. Route all AI API calls through a company-managed proxy that logs queries, checks for sensitive data patterns (credit card numbers, SSNs, API key formats), and blocks requests that violate policy. This is the most effective technical control because it catches violations in real time.
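The pre-flight scan at the heart of such a gateway can be sketched with regular expressions. The patterns below are deliberately simplified illustrations, not production-grade detectors; a real deployment would add Luhn validation for card numbers, entropy checks for keys, and vendor-specific formats.

```python
import re

# Minimal sketch of a gateway's sensitive-data scan, assuming a regex-based
# filter. Patterns are simplified for illustration only.

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def gateway_decision(prompt: str) -> str:
    """Block requests whose prompts match a sensitive pattern."""
    hits = scan_prompt(prompt)
    if hits:
        # A real proxy would also log the hit and return an error to the caller.
        return f"BLOCKED: {', '.join(hits)}"
    return "FORWARDED"

print(gateway_decision("Summarize this doc for me"))       # FORWARDED
print(gateway_decision("key: AKIA1234567890ABCDEF"))       # BLOCKED: aws_access_key
```

Because every API call passes through one choke point, the same scan gives you both enforcement and the audit log in a single place.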

Account management. Require all AI tool access through company-managed accounts. No personal ChatGPT accounts for work use. Company accounts provide audit trails that personal accounts don't.

Code scanning. Add AI-generated code detection to your CI pipeline. Several tools can identify patterns characteristic of AI-generated code, flagging it for additional review. This doesn't prevent AI use — it ensures AI-generated code gets appropriate scrutiny.
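One lightweight way to wire this into CI, assuming your team adopts a convention of declaring AI assistance in commit messages (a hypothetical "AI-Assisted: yes" trailer, not a standard; the detection tools mentioned above use statistical signals instead), is a script that flags declared commits for enhanced review:

```python
import subprocess

# CI sketch: flag commits carrying a hypothetical "AI-Assisted: yes" trailer
# so reviewers apply extra scrutiny. The trailer convention is an assumption,
# not a git or vendor standard.

TRAILER = "ai-assisted: yes"

def parse_flagged(log_output: str) -> list[str]:
    """Parse `git log` output (entries as sha + NUL + body + SOH) for the trailer."""
    flagged = []
    for entry in log_output.split("\x01"):
        if "\x00" not in entry:
            continue
        sha, body = entry.split("\x00", 1)
        if TRAILER in body.lower():
            flagged.append(sha.strip())
    return flagged

def flagged_commits(rev_range: str = "origin/main..HEAD") -> list[str]:
    """Return short hashes of commits in the range declaring AI assistance."""
    log = subprocess.run(
        ["git", "log", "--format=%h%x00%B%x01", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_flagged(log)

if __name__ == "__main__":
    hits = flagged_commits()
    if hits:
        print("AI-assisted commits needing enhanced review:", ", ".join(hits))
```

The delimiter trick (`%x00`/`%x01`) keeps parsing robust against multi-line commit bodies, which newline-splitting would mangle.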

The Quarterly Review

AI capabilities, pricing, and policies change rapidly. A policy written today may be obsolete in six months because a consumer tool added enterprise-grade data handling, or a new AI service emerged that the policy doesn't address.

Review and update the policy quarterly. The review should cover: have any approved tools changed their data handling policies? Are there new AI tools the team wants to use that need evaluation? Have there been any policy violations, and do they indicate a gap in the policy or the controls? Are the technical controls still effective?

The review takes about two hours. It's not a bureaucratic exercise — it's how you keep the balance between productivity and protection as the AI landscape evolves.


Related: Enterprise AI vs. Consumer AI | Protecting Your IP in the AI Era | The CEO's Guide to AI Guardrails