How to Write an AI Policy for Your Team
Your team is already using AI. The question isn't whether to allow it — it's whether you'll have clear guidelines before something goes wrong. An AI policy isn't bureaucracy for its own sake. It's the difference between a team that uses AI confidently and one where people are either reckless or paralyzed by uncertainty.
Here's how to write one that actually works.
Why you need an AI policy now
If you don't have a policy, you have an implicit one: "figure it out yourself." That leads to predictable problems:
- Data leaks. Someone pastes customer data into a free-tier AI tool with no data retention guarantees.
- Quality incidents. AI-generated content goes to clients without review and contains errors or hallucinations.
- Inconsistency. One team uses AI freely; another bans it. New hires don't know what's expected.
- Legal exposure. AI-generated code or content creates IP questions nobody thought to address.
- Shadow AI. People use AI tools anyway but hide it, making it impossible to manage risk or share best practices.
A policy doesn't need to be a 40-page legal document. For most teams, 2-3 pages covering the key areas is enough to eliminate ambiguity and let people move fast with guardrails.
What to include
Approved tools
Be explicit about which AI tools are sanctioned — this is about ensuring data handling standards, not restricting choice.
- List approved tools by name (e.g., "ChatGPT Team, Claude for Work, GitHub Copilot").
- Specify approved tiers. Free tiers often use your inputs for model training. Enterprise/team tiers typically don't.
- Define the approval process for new tools. Keep it lightweight — a Slack message to IT with a link to the tool's data policy works for most teams. (A machine-readable version of the approved list is sketched below.)
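If it helps, the approved list can also live in a small machine-readable registry that onboarding docs and internal tooling read from, so there is a single source of truth. Here is a minimal Python sketch; the tool names, tiers, and use cases are placeholders, not recommendations:

```python
# approved_tools.py: a hypothetical registry of sanctioned AI tools.
# Tool names, tiers, and use cases below are placeholders; fill in your own.

APPROVED_TOOLS = {
    "chatgpt": {"tier": "Team", "use_cases": ["drafting", "brainstorming"]},
    "claude": {"tier": "Work", "use_cases": ["drafting", "code review"]},
    "copilot": {"tier": "Business", "use_cases": ["code completion"]},
}

def is_approved(tool: str, use_case: str) -> bool:
    """Check whether a tool is sanctioned for a given use case."""
    entry = APPROVED_TOOLS.get(tool.lower())
    return entry is not None and use_case in entry["use_cases"]

print(is_approved("claude", "code review"))   # True
print(is_approved("chatgpt", "code review"))  # False: request approval first
```

The point isn't automation for its own sake; it's that the policy document, the wiki page, and any tooling all reference one list instead of drifting apart.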
Data handling rules
This is the most important section. Be concrete and specific.
What can be shared with AI tools:
- Publicly available information
- Internal drafts and brainstorming content
- Anonymized or synthetic data
- Your own code (with appropriate review)
What must never be shared:
- Customer PII (names, emails, addresses, payment info)
- Credentials, API keys, passwords
- Confidential financial data, unreleased earnings
- Data covered by NDA or regulatory requirements (HIPAA, SOX, etc.)
- Proprietary source code (define what "proprietary" means in your context)
Grey areas and how to handle them:
- Internal communications — generally fine if no PII or confidential data
- Aggregate business metrics — usually fine; use judgment
- When in doubt, anonymize first, then use the tool (see the scrubbing sketch below)
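The "anonymize first" step can be partly automated. Below is a minimal pre-submission scrubbing sketch in Python; the regex patterns, names, and `scrub` function are illustrative assumptions, not a vetted PII scanner, and a real deployment would use a maintained scanning library tuned to your data. Treat it as a backstop to human judgment, not a replacement:

```python
import re

# Illustrative patterns only. These are assumptions for the sketch; a real
# deployment would use a vetted secrets/PII scanner tuned to your data.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "api_key_assignment": re.compile(r"(?:api|secret)[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> tuple[str, list[str]]:
    """Redact obvious sensitive patterns; return cleaned text and a list of hits."""
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, hits

cleaned, found = scrub("Ask jane.doe@example.com; key AKIAABCDEFGHIJKLMNOP")
print(found)    # ['email', 'aws_access_key']
print(cleaned)  # Ask [REDACTED-EMAIL]; key [REDACTED-AWS_ACCESS_KEY]
```

Even a crude check like this catches the most common mistakes before data leaves your network. The policy should still be clear that people, not the script, are accountable for what gets pasted.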
Review requirements
Define when AI-generated output needs human review before use.
- Always review: Customer-facing content, legal documents, financial reports, published code, anything with your name or company brand on it.
- Light review: Internal drafts, brainstorming outputs, code used only in development/testing.
- No review needed: Personal productivity (rewriting your own notes, summarizing for your own use, learning).
The key principle: the higher the stakes and the wider the audience, the more review is needed. AI output should be treated as a first draft from a capable but unreliable junior colleague.
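If you want the tiers to be unambiguous, the principle reduces to a small decision rule. The sketch below assumes two hypothetical inputs, audience and stakes, that your team would define for itself:

```python
def review_level(audience: str, high_stakes: bool) -> str:
    """Map audience and stakes to a review tier.

    audience: 'external', 'internal', or 'personal' (hypothetical categories).
    """
    if audience == "external" or high_stakes:
        return "full review"   # customer-facing, legal, financial, published code
    if audience == "internal":
        return "light review"  # drafts, brainstorming, dev/test code
    return "no review"         # personal notes, summaries, learning

print(review_level("external", False))  # full review
print(review_level("internal", True))   # full review: stakes override audience
print(review_level("personal", False))  # no review
```

Writing it down this explicitly forces you to decide the boundary cases, such as internal but high-stakes work, before they come up in practice.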
Prohibited uses
Some things should be off-limits regardless of context.
- Don't use AI to make hiring or firing decisions.
- Don't use AI to generate performance reviews without substantial human rewriting — it doesn't know your people.
- Don't submit AI-generated work as your own in contexts where original authorship is expected (research papers, expert testimony, some client deliverables — define which ones).
- Don't use AI to monitor or surveil employees unless explicitly disclosed and legally compliant.
- Don't rely on AI for legal, medical, or financial advice without professional verification.
Disclosure expectations
This is team-dependent; pick an approach and be explicit. One workable default:
- Always disclose to external clients when AI assisted with deliverables.
- Disclose when asked for internal work.
- No disclosure needed for personal productivity use.
Accountability
AI doesn't change who's responsible. Make this crystal clear:
- The person who submits AI-generated work is responsible for its accuracy, quality, and appropriateness.
- "The AI got it wrong" is not an acceptable explanation for errors in published work.
- Managers are responsible for ensuring their team understands and follows the policy.
Common policy mistakes
Being too restrictive
A policy that bans all AI use or makes every interaction require manager approval will be ignored. People will use AI anyway — they'll just hide it from you. Set guardrails, not roadblocks.
Being too vague
"Use AI responsibly" is not a policy. If someone can't read your policy and immediately know whether a specific action is allowed, it's too vague. Use concrete examples.
Ignoring the policy after writing it
A policy that lives in a wiki nobody reads is worse than no policy — it gives a false sense of security. Reference it in onboarding. Bring it up in team meetings. Make it findable.
Forgetting to update it
AI capabilities change fast. A policy written 12 months ago may not account for new tool features, new risks, or new opportunities. Build in a review cadence.
Not involving the team
A policy imposed from above without input breeds resentment. People who helped shape the policy follow it. People who had it dumped on them look for workarounds.
A practical framework for drafting your policy
Step 1: Audit current usage
Send a short anonymous survey: which AI tools are people using, for what tasks, what data have they shared, and what concerns do they have? It takes a day and saves you from writing a policy disconnected from how your team actually works.
Step 2: Identify your risks
A healthcare company has HIPAA concerns. A law firm has confidentiality obligations. A startup may prioritize speed over process. Map your specific risks before writing generic rules.
Step 3: Draft with input
Write a first draft, then circulate to a small group: someone from legal, a power user, a skeptic, and a manager. Incorporate feedback.
Step 4: Keep it short
If your policy is longer than 3 pages, people won't read it. Aim for 1-2 pages of clear rules plus a short FAQ.
Step 5: Announce and train
Don't just email the policy. Walk through it in a team meeting. The goal is comprehension, not compliance theater.
Sample policy outline
Here's a skeleton you can adapt. Fill in the brackets with your specifics.
[Company Name] AI Use Policy
Effective date: [Date] | Review date: [Date + 6 months]
1. Purpose
This policy provides guidelines for using AI tools at [Company Name]. Our goal is to enable productivity while protecting company data, client information, and work quality.
2. Approved tools
- [Tool 1 — e.g., ChatGPT Team] — approved for [use cases]
- [Tool 2 — e.g., Claude for Work] — approved for [use cases]
- [Tool 3 — e.g., GitHub Copilot] — approved for [use cases]
- Using unapproved tools for work tasks requires approval from [role/person]. To request approval, [process].
3. Data rules
- NEVER input into AI tools: [list — customer PII, credentials, NDA-covered data, etc.]
- OK to input: [list — public info, internal drafts, anonymized data, etc.]
- When in doubt: anonymize the data first, or ask [role/person].
4. Review requirements
- Customer-facing outputs: must be reviewed by [role] before delivery.
- Internal outputs: author is responsible for accuracy.
- Personal productivity use: no review required.
5. Prohibited uses
- [List your prohibited uses]
6. Disclosure
- External deliverables: disclose AI assistance to clients.
- Internal work: disclosure encouraged, not required.
7. Accountability
- You are responsible for the accuracy and quality of any AI-assisted work you submit.
8. Questions
Contact [person/channel] with questions about this policy.
Getting buy-in
Lead with enablement, not restriction. Frame it as "here's how to use AI confidently" rather than "here's what you can't do."
Show risks concretely. Share real examples of AI-related incidents — data leaks, hallucinated citations in legal briefs, confidential code in training data. Specific stories motivate; abstract risk doesn't.
Involve early adopters. If your power users endorse the policy, others follow. If they think it's unreasonable, it probably is.
Make compliance easy. If the approved tool is harder to access than the unapproved free one, guess which one people will use. Remove friction from the compliant path.
When to revisit
Set a review date when you publish. Every 6 months is reasonable. Check if tools have changed their data policies, assess whether new tools should be on the list, review any incidents, and get team feedback.
AI governance isn't a one-time project. But the first version doesn't need to be perfect — it needs to be clear, short, and better than the implicit policy of "wing it."