Is Your AI Tutor Lying to You? An Interview With Harry Southworth
In this article, you’ll discover:
- Why your brain trusts a chatbot even when it is wrong.
- The real reason AI lies about facts and data.
- How the internet is slowly becoming a copy of itself.
- Why students feel happier when using these tools.
- Three simple habits to spot fake answers instantly.
We have all been there. You type a difficult question into a chatbot, and it answers instantly. The sentences are perfect. The tone is confident and calm. It feels like talking to a smart friend who has read every book in the library. But there is a catch. That “friend” might be making everything up, and it does not even know it.

This is what experts call the AI Trust Paradox. We believe the computer because it sounds human, even when it is just guessing. To understand why this happens and how to fix it, we sat down with Harry Southworth, the Head of AI Development at Edubrain.ai.
He warns that while these tools are helpful, treating them like a human is a dangerous trap. When a machine speaks with total confidence, our brains tend to switch off their “doubt” filters, leaving us vulnerable to misinformation.
The Problem With “Smart” Computers
Harry Southworth has a simple way to describe modern AI. He calls it a pattern engine. It does not have a brain. It does not know the truth. It just predicts the next word in a sentence based on math. It is less like a thinker and more like a very advanced parrot.
“The mistaken assumption that a written voice means a mind is a trap. So I would put a simple mental model into classrooms and onboarding sessions. This thing is a pattern engine. It can sound empathetic. It can sound convincing. And it still might be randomly firing off guesses.”
Why does it lie? Southworth explains that if a dataset contains ten lies and only three facts about a topic, the AI will likely repeat the lie, because it weighs statistical probability, not truth. It picks the pattern that appears most often.
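To make that frequency bias concrete, here is a minimal toy sketch in Python. The statements and counts are invented for illustration, not taken from any real dataset, and a real language model is vastly more complex, but the basic tendency to echo the most common pattern is the same.

```python
from collections import Counter

# Toy "pattern engine": it answers with whatever statement it has seen most often.
# The statements and counts below are hypothetical examples.
training_snippets = (
    ["The Great Wall of China is visible from space."] * 10               # popular myth, repeated often
    + ["The Great Wall is not visible to the naked eye from orbit."] * 3  # accurate but rarer
)

def most_common_answer(snippets):
    # Pick the statement that appears most frequently, the way a purely
    # probability-driven predictor favors the most common pattern.
    answer, _count = Counter(snippets).most_common(1)[0]
    return answer

print(most_common_answer(training_snippets))
# Prints the myth, because it outnumbers the accurate statement ten to three.
```

The point is not the specific example; it is that popularity in the training data, not accuracy, decides what comes out.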
Because the bot sounds so sure of itself, students often skip the most important step: checking the facts. Southworth says we need to stop thinking of AI as an answer machine and start treating it like a rough draft or a creative spark, something to be edited, not trusted blindly.
When the Internet Copies Itself

One of the biggest risks Southworth sees is something called model collapse. This sounds like a sci-fi movie plot, but it is happening right now.
As more people use AI to write articles, essays, and posts, the internet fills up with machine-made text. Then, new AI models come along and learn from that text. It becomes a copy of a copy. The information gets jumbled, like a game of “telephone” played by robots.
“The internet starts to reproduce itself—a copy of a copy of a copy. The information becomes jumbled until everything starts to look alike. That is an incentives problem, not just a computational issue.”
This makes it harder to find original ideas. Southworth believes the solution is not to ban AI, but to build better maps: platforms that reward people for showing where their information comes from, with clear “origin labels” that help us distinguish a primary source from a synthetic mirage.
Why We Get Attached
It is not just about homework. It is about feelings. A recent report by Edubrain found that many students use chatbots to lower stress. Getting a quick answer gives a rush of dopamine, a “reward” chemical in the brain.
Surprisingly, the data showed that 25% of users view the AI as a “friend,” and 16% even treat it like a “therapist.” This creates an illusion of connection. Southworth thinks this emotional reliance is risky. If we rely on a bot for comfort, we might forget how to deal with real challenges or connect with actual humans.
“This thing is a pattern engine… It can sound empathetic. It can sound convincing. And it still might be randomly firing off guesses.”
Real human support involves empathy and understanding, neither of which AI possesses. It simply predicts the words that sound the most comforting.
How to Use AI Safely
So, should we stop using it? No. Southworth believes AI is a powerful tool if you use it the right way. He suggests three simple “habits of verification” to keep yourself safe from fake facts.
1. Ask for the Receipt. Don’t just take the answer. Ask the bot to prove it. If the chatbot states a fact, make it point to a source.
“One thing to do is to ask for the receipt. If the chatbot is stating a fact, have it point to a source. Afterward, open the source itself. If the source is unable to cite the data, consider it like a sticky note on your monitor.”
2. The Two-Pass Check. First, get the answer. Then use the AI against itself: ask the bot to act like a harsh critic.
“Pass one gets the output. Pass two is the opposite: the model is forced to think critically about the result. Ask the model to list possible errors, edge cases, and counterarguments. A bit like proofreading with a critical friend.”
3. Triangulate. Check the answer against a real book, an official document, or a different website. If two independent checks fail to agree, stop. That disagreement is valuable data.
Bonus Tip: Southworth also suggests keeping a tiny error log. Record when the AI fooled you and how. Your brain learns by finding patterns, so you might as well learn the right ones.
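If you want that log to live somewhere more durable than a sticky note, here is one minimal sketch of how it might look, assuming a plain CSV file. The file name and columns are just one possible layout; a notebook page works just as well.

```python
import csv
from datetime import date

LOG_FILE = "ai_error_log.csv"  # hypothetical file name; any spreadsheet or notebook works too

def log_ai_error(claim, what_was_wrong, how_caught):
    """Record a moment the AI fooled you and how you noticed."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), claim, what_was_wrong, how_caught])

# Example entry (invented for illustration):
log_ai_error(
    claim="Chatbot cited a 2021 study on sleep and memory",
    what_was_wrong="The study does not appear to exist",
    how_caught="Searched the journal's archive directly",
)
```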
A Vision for the Future

Harry Southworth wants to change how we see these tools. He doesn’t want AI to be a vending machine that dispenses easy A’s or polished paragraphs. He wants it to be scaffolding that helps you build your own knowledge.
“I want the next generation to feel less like a vending machine and more like a good tutor who refuses to do your homework for you.”
The goal is to use technology to make us more curious, not more passive. By asking better questions and challenging what the screen tells us, we can keep our human thinking sharp in a world full of robots. Above all, we need to build habits that treat honesty as the most important feature.

