
How to Build an Ethical AI Policy for Your Charity

Most charities are already using AI informally — drafting emails, outlining reports, or summarising meetings. Without a clear policy, this can quickly become inconsistent, risky, or confusing for staff.


An ethical AI policy doesn’t have to be long. It should simply give people clarity, boundaries, and confidence so they can use AI safely and effectively.


1. What an AI policy should do

A good policy:

  • explains your organisation’s approach to AI

  • protects supporter trust and data

  • gives staff simple guidance

  • ensures humans stay responsible

It does not need to be technical or restrictive.

2. The essential components


Purpose & Principles

Open with a short statement of why your organisation is using AI and the values that guide it: trust, accessibility, and accountability.


Approved and unapproved use

List the types of tools staff can use (e.g., Copilot or other approved chat tools) and those they shouldn’t (public AI tools that store your data). Keep it general, not tool-by-tool.

Safe data handling

This is the most important section of the policy.


Make it clear that staff must not upload:

  • personal or supporter information

  • confidential documents

  • anything else you would treat as sensitive

Encourage the use of enterprise or closed versions where possible.

Human oversight

AI can assist with drafting, structure, and summaries — but humans remain responsible for accuracy, tone, and decisions.

Transparency

Include a simple statement for internal documents:

“This draft was assisted by AI and reviewed by [Name].”

This builds trust and avoids confusion.

3. Useful rules to include

Short policies work best when they offer clear, memorable rules, such as:

  1. No personal data in public AI tools.

  2. Always review AI output before using it.

  3. Keep tone aligned with organisational voice.

  4. Don’t present AI content as factual without checking.

  5. Update the policy every six months as tools change.

Simple, predictable guidelines reduce risk and encourage responsible use.

4. How to write it quickly

You can build a usable policy in a week:

  1. Gather examples of how staff already use AI.

  2. Identify risks (data, tone, accuracy).

  3. Draft the five core sections outlined above.

  4. Share it with fundraising, comms, HR, and IT for feedback.

  5. Add a 1-page “quick guide” for everyday use.

  6. Publish and introduce it at a team meeting.

Keep the tone supportive, not restrictive.

5. Why it matters

Charities rely on trust. AI can genuinely help teams save time — often several hours a week — but only when people understand how to use it safely.

A clear, ethical AI policy gives staff confidence to experiment responsibly and protects the organisation when tools evolve.

If your organisation wants help understanding the risks and opportunities of AI for charities, you’re welcome to join our next free Bitesize AI for Charities session: 👉 https://www.theaitrainingcompany.com/events/bitesize-ai-a-free-webinar-for-charities-3

 
 
 
