
How to Teach Your Child to Fact-Check AI

AI makes things up — confidently and convincingly. Here's a step-by-step guide to teaching children critical AI literacy.

April 13, 2026 · 3 min read

Why AI Lies (And Why It Sounds So Confident)

Large language models like ChatGPT and Claude generate text by predicting what words are likely to follow other words. They don't look facts up — they pattern-match from their training data. This means they can produce completely false information in a fluent, convincing, authoritative tone.
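For technically curious parents, the prediction idea can be sketched in a few lines of Python. This toy "bigram" model is an assumption for illustration only — vastly simpler than a real LLM — but it shows the core point: it counts which word tends to follow which in a small text, then always emits the most frequent continuation. It has no notion of truth, only of frequency.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# Real LLMs use neural networks trained on billions of documents, but the
# core idea is the same: predict a plausible next word, not look up a fact.
corpus = (
    "the cheetah is the fastest land animal . "
    "the cheetah can run very fast . "
    "the tortoise is a slow animal ."
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(word, length=6):
    """Repeatedly pick the most frequent next word: fluent, not factual."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output reads like plausible English fragments, because plausibility is all the model optimises for — which is exactly why fluent AI text can still be wrong.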

This is called hallucination — and it happens even in the best AI systems.

A 2024 Stanford study found that GPT-4 produced factual errors in approximately 27% of responses on specialised topics. For children who trust authoritative-sounding text, this is a serious risk.

The 3-Source Rule

Teach children this simple rule: if AI tells you a fact, confirm it in at least two other reliable sources before using it — the AI counts as one source at most, so you want three in total.

Reliable sources include:

  • Educational and government websites (.edu, .gov — note that anyone can register a .org, so judge those case by case)
  • Encyclopaedia Britannica, Wikipedia (as a starting point, with citations checked)
  • Textbooks and school resources
  • News organisations with editorial standards (BBC, Reuters, AP)

Unreliable as sole sources: other AI tools, random blogs, social media.

Step-by-Step: Teaching Fact-Checking as a Habit

Step 1: Pick a topic together

Start with something your child knows well — their favourite animal, sport, or historical figure. Ask an AI about it together and read the response.

Step 2: Spot the checkable claims

Identify the specific facts in the AI response. "The cheetah can run at 70 mph." "Marie Curie was born in Warsaw in 1867." These are checkable.

Step 3: Look them up

Use a search engine or encyclopaedia to verify 2–3 of those facts. You'll often find at least one that's slightly wrong or oversimplified.

Step 4: Discuss why it was wrong

Talk about why AI might get this wrong — not enough data, conflicting sources in training, or inherent uncertainty in the topic.

Step 5: Repeat with homework

Once children have done this exercise a few times, make it a habit: "Before you use that fact in your work, have you checked it?"

Red Flags to Teach Children

Help children spot these signs that an AI response might need extra scrutiny:

  • Very specific numbers — exact statistics are often made up or misremembered
  • Quotes from real people — AI frequently invents or misattributes quotes
  • Recent events — most AI has a knowledge cutoff and may be outdated
  • Highly specialised claims — medical, legal, or scientific details are high-risk
  • Vague citations ("studies show…") — ask for the specific study; it may not exist

Making It Fun

For younger children, turn it into a game: "Let's see if we can catch the AI making a mistake." Children who approach AI as something to be tested — rather than trusted — develop much healthier habits.

The goal isn't to make children suspicious of all information. It's to give them the tools to tell good information from bad — a skill that matters far beyond AI.