The First Generation That May Never Need to Think for Themselves — And Why That Should Worry Us Less Than We Think
For parents of children aged 3–12. Because the protection your child needs isn’t a filter or a ban. It’s something you can build.
A child came home from school last month and told her parents about a project her class had been given. They had to research an historical figure and write a short essay. She’s 9. She sat down, opened a browser, typed a question into a chatbot, and had three paragraphs in front of her within about twelve seconds.
She looked up and said, “Done.”
She wasn’t being lazy. She wasn’t trying to cheat. She had done exactly what the tool was designed to do — she’d asked it a question and it had given her an answer. A fluent, confident, well-structured answer. Complete with dates, quotations, and a concluding paragraph that sounded like it had been written by someone who cared deeply about this historical figure.
The only problem was that two of the quotations didn’t exist. The chatbot had invented them. They sounded perfect. They were completely made up.
She had no idea. And — this is the part that stopped us — she had no reason to question them. The answer sounded right. It was written in the voice of authority. Why would a 9-year-old doubt it?
That evening, we didn’t feel afraid of AI. We felt afraid of the absence of something in that child that would have made AI safe.
The fear is real. But it’s pointed at the wrong thing.
There is a growing body of research — and an even faster-growing body of parental anxiety — about what artificial intelligence is doing to children’s minds. The headlines are alarming, and some of them are accurate.
In January 2026, the Brookings Institution published what may be the most important report on AI and children’s education to date. Their Center for Universal Education spent a year consulting hundreds of students, parents, teachers, and technologists across dozens of countries, and reviewed hundreds of research articles. Their finding: at this point, the risks of generative AI in children’s education overshadow its benefits. Students are showing measurable declines in critical thinking, creativity, and the ability to retain and apply knowledge independently.
The Brookings researchers described a pattern they called cognitive offloading — the process of delegating thinking to an external tool. This isn’t new, of course. Calculators offloaded arithmetic. Search engines offloaded memory. But generative AI, the researchers argued, has done something qualitatively different. It doesn’t just retrieve information or compute answers. It produces complete reasoning — fluent, persuasive, authoritative — and delivers it in a voice that sounds like an expert who has thought deeply about the question.
For an adult with decades of experience and a trained capacity to evaluate claims, this is useful. For a 9-year-old who has never been asked to evaluate anything — it is something else entirely.
A separate survey-based study, published in 2025 in the journal Societies by researcher Michael Gerlich at SBS Swiss Business School, found the same pattern with striking specificity. Examining over 650 participants across age groups, the study showed a significant negative correlation between AI tool usage and self-reported critical thinking scores. But the finding that should stop every parent was this: younger participants — those aged 17 to 25 — showed the highest dependence on AI tools and the lowest critical thinking scores of any group. The relationship was mediated by cognitive offloading. The more they relied on AI to do the thinking, the less they practised thinking for themselves.
And a 2025 preliminary study from researchers at MIT went further still, examining what happens inside the brain when people write with AI assistance versus without it. Using EEG to record brain activity, the researchers found that brain connectivity appeared to decrease as the level of external AI support increased. The group that wrote entirely on their own showed the strongest neural engagement. The group that relied most heavily on AI showed the weakest. The researchers used a phrase that has stayed with us: cognitive debt. The convenience of letting AI think for you today may accumulate as a cost — a weakening of the cognitive capacity you would have built if you’d done the thinking yourself. The study is preliminary and awaits replication, but the direction of the finding is consistent with everything else the research is showing.
These studies describe adults and university students. Imagine what the same pattern means for a child whose capacity to think independently hasn’t been built yet.
But here’s what the research misses
We’ve read these studies carefully, and we believe them. But we also think they’re looking at the wrong end of the problem.
The research focuses on what happens when teenagers and young adults use AI. It measures the decline in students who already had some critical thinking capacity, and then lost it through disuse. That’s alarming. But it’s not the most important question for the parent of a 5-year-old or a 7-year-old.
The more important question is: what happens to the child who never builds the capacity in the first place?
The teenager who lets AI write their essay once had the ability to structure an argument — they’re just choosing not to exercise it. The 17-year-old who accepts AI-generated information without questioning it presumably once knew how to question — they’ve just stopped practising. There’s a loss happening, and it’s measurable, and it matters.
But the child who grows up in a world where fluent, authoritative, confident answers are available for every question — before they have ever practised generating their own answer, questioning a claim, or sitting with the discomfort of not knowing — that child is not losing a capacity. They are simply never building it.
And that is a fundamentally different problem.
The Brookings researchers came close to naming this distinction. They noted that professional adults use AI as a “cognitive partner” because they bring fully developed metacognitive and critical thinking capacities to the interaction. They have the foundation to evaluate what AI gives them. Young people, the report argued, lack this foundation — and so AI becomes not a partner but a “surrogate” that replaces the thinking process itself.
What the report doesn’t trace is when that foundation is supposed to be built. The answer, for anyone who works with children’s development, is clear: it’s built between ages 3 and 9. In the years before AI is even relevant. In the small, invisible moments at home where a child either practises reaching inside for an answer — or learns to reach outward for one.
The capacity AI cannot replace
There’s a word for what a child needs in a world where everything sounds credible. It’s not “digital literacy.” It’s not “media literacy.” It’s not even “critical thinking,” though that’s closer.
The word is reasoning. Specifically: the capacity to hear something that sounds true and ask, “But is it?”
Not scepticism. Scepticism is a posture — a refusal to believe. Reasoning is a practice — a willingness to evaluate. The child who hears an AI-generated answer and thinks “that sounds right, but let me check” is exercising reasoning. The child who hears the same answer and simply absorbs it is doing what most humans do when they encounter confidence: they trust it.
This is not a flaw in the child. It’s a feature of human psychology. Daniel Kahneman’s work on cognitive bias has shown for decades that humans default to what he called System 1 thinking — fast, automatic, trusting of fluent-sounding information. The effort required to engage System 2 — slow, deliberate, evaluative — is significant. Most adults don’t do it unless they’ve been trained to, or unless something triggers their suspicion.
Now consider a child. A child has no training. A child has no accumulated experience that tells them confident-sounding things are sometimes wrong. A child encountering an AI-generated answer that is fluent, complete, and authoritative has no internal signal that says “wait — check this.” Unless someone has built that signal, deliberately, through years of practice.
The practice is not complicated. It looks like this:
An 8-year-old tells you something they heard at school. Instead of correcting it or confirming it, you ask: “That’s interesting — why do you think that’s true?” Not dismissively. Genuinely. The child pauses. They think about why they believe it. Maybe they don’t know. Maybe they say “because everyone says so.” And you ask: “Is that the same as it being true?”
That exchange — which takes forty seconds and happens at the dinner table — is the child practising the one capacity that makes them safe in a world of AI. Not safe from AI. Safe with it. The capacity to evaluate before accepting. To notice that something sounds authoritative and still ask whether it is.
Developmental scientists sometimes call this critical reasoning or evaluative thinking. It begins to emerge between ages 7 and 9 — the window when a child becomes capable of holding two ideas at once and comparing them. It does not develop automatically. It develops through practice. Through hundreds of small moments where someone asks the child not just what they think, but why they think it. Where the child’s own reasoning is treated as something worth examining — not just something to correct.
Jean Piaget described this developmental period as the shift from preoperational to concrete operational thinking — the moment when the child moves from accepting the world as it appears to being able to reason about why it appears that way. What Piaget observed in the 1950s has become, in 2026, the most urgent capacity a child can develop. Because in Piaget’s world, the information a child encountered at least came from humans with fallible, recognisable voices. In our world, it comes from machines that sound exactly like the most confident person in the room.
What we actually want to say to every parent reading this
We’ve shared the research because it matters and because you deserve to see it plainly. But we want to step back from the studies for a moment and say something simpler.
Your child is going to live in a world saturated with AI. This is not a prediction. It is already happening. By the time your 7-year-old is 17, AI will generate a significant portion of the information, entertainment, and advice they encounter daily. It will sound human. It will sound certain. It will sound helpful.
The question is not whether your child will use AI. They will. The question is whether, when they do, something inside them says: “Let me think about this before I accept it.”
That something is not an app. It is not a digital literacy course. It is not a school policy. It is a habit of mind — a reflex toward evaluation rather than absorption — that is built slowly, quietly, over years, through thousands of small interactions between a parent and a child.
And here is the part that should worry you less, not more:
You already have everything you need to build it.
You don’t need to understand how AI works. You don’t need to keep up with the latest research. You don’t need to ban anything or filter anything or monitor anything.
You need to ask your child questions they can’t find the answer to. Questions that require them to reach inside — to wonder, to reason, to evaluate, to generate something from within rather than retrieve something from without. One question a day. Five minutes. That’s the practice.
“If you could change one rule at school, what would it be — and what would happen?”
There’s no right answer. There’s no answer an AI can supply. The child has to think. And in the thinking — in the daily habit of being asked to generate rather than retrieve — they build the interior architecture that makes AI a tool rather than a master.
“Someone told you the moon is made of rock. How would you check if that’s true without asking anyone?”
Now the child is practising evaluation. Not scepticism — investigation. The habit of treating claims as things to examine rather than things to accept. This habit, practised at 8, becomes the reflex that protects them at 18 when an AI-generated article tells them something plausible and wrong.
“What’s the difference between knowing something and believing something?”
This is the kind of question that a 9-year-old can sit with for ten minutes and still not have a final answer. That’s the point. The practice is not in the answer. The practice is in the reaching.
The generation that could be the strongest
Here’s the thing we genuinely believe, and it’s the reason the title of this article ends the way it does.
This generation — the children who are 5, 6, 7, 8, 9 right now — is not destined to be the generation that never thinks for itself. They could be the generation that thinks more carefully than any generation before them.
Because no previous generation has ever needed to build this capacity so deliberately. In every previous era, the information a child encountered was limited, local, and usually verifiable. There was no machine producing unlimited confident-sounding content on every topic. There was no technology that could complete a child’s thinking before they’d even started.
This generation faces a new requirement: they must build the internal capacity to evaluate what they hear before they can trust it. And that requirement, far from being a burden, could produce the most mentally resilient, thoughtful, and discerning generation in history — if we build the right foundation in the right window.
Tina Grotzer, a research scientist at the Harvard Graduate School of Education who studies how children develop causal reasoning, has made a point that we keep returning to. Her work suggests that while AI operates computationally, human minds are capable of something qualitatively different — detecting critical distinctions, making intuitive leaps, and reasoning analogically in ways that current AI systems simply cannot. The capacity is there. It is extraordinary. But it must be developed. Through practice. Through experience. Through the kind of daily, consistent, small interactions that don’t look dramatic and aren’t measured by any test.
The children who will thrive alongside AI are not the ones who are shielded from it. They are the ones who, by the time they encounter it, have already built a mind that evaluates before it accepts. A mind that generates before it retrieves. A mind that can sit with not knowing — and find that not knowing is not a problem to solve, but a space to wonder in.
That mind is not built by a school curriculum. It is not built by a parental control app. It is not even built by limiting screen time, though that may help.
It is built at home. Five minutes a day. In the questions you ask. In the frustrations you don’t solve. In the moments you trust your child to reach inside — and they discover that something is already there.
The window is open. Use it.
If your child is between 7 and 9, the capacity for critical reasoning is just beginning to emerge. This is the age when a child can first hold two competing ideas and compare them. When they can be asked “why do you believe that?” and actually consider the question. When the habit of evaluating — rather than simply absorbing — can be practised and strengthened until it becomes a reflex.
If your child is between 9 and 12, the window is narrower but the capacity is more powerful. A 10-year-old can engage with genuinely complex reasoning. They can examine their own thinking. They can notice when something sounds too certain and ask why. This is the age when the practice compounds fastest — if someone is asking the right questions.
And if your child is between 3 and 5, you have the most time and the simplest work: protect their wonder. The child who asks “why?” about everything is already practising the first form of reasoning — the refusal to accept the world without questioning it. Don’t answer every question. Don’t redirect their curiosity toward “learning.” Let them wonder. Sit in the wonder with them. That is the foundation everything else is built on.
The world will fill your child with answers. AI will make those answers faster, more fluent, and more convincing than any previous technology. Your child will live in a world where everything sounds credible.
The question is whether they’ll have something inside them — a habit, a reflex, a practised capacity — that says:
“Wait. Let me think about that first.”
That something is built by you. At home. In the small moments. Before the world gets complicated.
A strong mind is not one that knows all the answers. It is one that knows which answers to question. That mind is built at home, five minutes at a time, in the years when it matters most.
The world fills children. Parents build them.
And a child who can reach inside? That is the child who is ready for anything.
About Neurry — Neurry builds strong minds in children ages 3–9 through daily screen-free practice between parent and child. Our free Wonder question gives your child one question every day that no AI can answer — because the answer has to come from inside. neurry.com