Several friends of mine work in higher education and keep coming back to the same observation: another PhD chapter that is technically fine – sentences in the right order, structure obediently in place – but somehow it reads like a choir where every singer has the same voice. The air is faintly scented with generative AI and that smooth airport-lounge prose where every paragraph lands safely: no turbulence, no unexpected pockets of weather, even when you secretly wish for a bit of drama.
Now, PhDs are meant to add new knowledge to the pot. That’s the deal. You stir, you taste, you throw in something no one’s tried before. Up to Master’s level, you mostly learn how to carry the pot without spilling it. You repeat what other people have cooked. You maybe add a garnish.
But a doctorate is supposed to say: “Here’s a new ingredient. It might explode. Shall we try?”
The trouble is, education hasn’t exactly been training people to reach for strange ingredients. It has mostly been training people to fill in forms in full sentences. Critical thinking – the “hang on a minute, what if this is nonsense?” part – has been the optional side salad, not the main dish.
Enter AI, stage left, with a silver platter: “Would madam care for a perfectly structured paragraph that sounds like every other paragraph in existence?”
AI in a lab coat
In the natural sciences, AI is currently wearing a lab coat and looking helpful. It chews through mountains of data, spits out patterns, and generates graphs that look very grown-up. On paper, it’s a dream: faster analysis, more simulations, fewer poor PhD students staring at spreadsheets until their eyes cross.
But data has history. It comes from messy, human contexts. It’s shaped by what got measured and what didn’t, who was counted and who was left out. Train an AI on that, and it will calmly reproduce the same old gaps and biases, just with more confidence and fancier visuals.
And because the graphs look neat, and the models hum with complicated maths, there’s a temptation to bow to them. “Well, the system says so.” That’s where critical thinking should kick the door in. “Alright, dear system, why do you say so? Who fed you? What are you blind to? And why is everyone so impressed by your confidence?”
AI the overeager student
Different building, same problem: across the road in the social sciences, arts, and humanities.
Here, AI looks less like a lab assistant and more like that overeager student who has read everything and understood… very little. It churns out literature reviews, theoretical frameworks, and “here are five themes in this novel” like it’s on commission.
AI sounds clever. And in a way it is clever. But it is also trained on what has already been said, mostly by the loudest, most published voices, and it will quite happily invent books and journal articles to fill the gaps. Minority perspectives, awkward questions, grief, rage, the stuff that doesn’t fit neatly into categories – all of that tends to get quietly averaged out.
If you’re not careful, you end up with a thesis that is technically solid and spiritually anaemic, risking nothing. The academic equivalent of beige wallpaper.
Again, this is where critical thinking should stroll in, uninvited, with muddy boots.
“Hang on. Who’s missing from this story? What does this argument assume about the world? Why does this paragraph sound like it’s been copy-pasted from a very polite robot?”
Productivity dream, thinking nightmare
Instead, what we’re doing – not just in universities, but everywhere – is rewarding speed and smoothness. Write faster. Decide faster. Publish faster. Think? Well, if there’s time.
AI slots into that world beautifully. It is a productivity dream and a reflection nightmare.
Students are not villains in this. If your entire education has told you that the goal is to produce the right shape of text on time, why wouldn’t you reach for the tool that does exactly that? If the system cares more about word counts and deadlines than about “did your brain actually wrestle with this?”, AI starts to look like a sensible life choice.
Universities, panicking, are trying to fix this with software that detects software. AI to sniff out AI. A technological snake chasing its own tail.
There are policies, declarations, serious meetings with very serious PowerPoints (really!?). Some people talk about going “back” to handwritten exams, as if a biro will save the soul of knowledge production.
It’s a bit like banning calculators and then handing everyone a smartphone. Technically principled. Practically pointless.
Because the real question isn’t “Did you use AI?”
It’s “Did you stop thinking when you did?”
Letting AI start the argument, not finish it
There are ways to use these tools that actually sharpen thinking. They’re just slower and less glamorous.
You can ask an AI to poke holes in your argument. “Be rude. What am I missing? Who would disagree with this?”
You can ask it to summarise the mainstream view on something – then go hunting for everything that sits outside that summary.
You can use it to surface patterns in data, then deliberately chase the outliers instead of the averages.
The point isn’t to obey the machine. The point is to turn it into that annoying colleague who always plays devil’s advocate.
Of course, it only works if you have some critical muscle to begin with. The willingness to push back. To say “no, that’s wrong” or “that’s too neat” instead of “thank you, copy-paste.”
And that muscle is exactly what our current way of living, learning, and working is quietly dissolving.
Critical thinking requires:
- Slowness.
- Room to doubt.
- Permission to not know for a bit.
None of that plays nicely with “move fast, ship more, optimise everything.”
Leaving AI in the airport lounge while your brain catches the flight
So here we are, building machines that can produce idea-shaped objects at speed, and raising humans who are increasingly measured by how well they keep up. Then we act shocked when no one has the time, energy, or courage to sit with a question long enough to actually add something new to the pot.
Maybe originality, in the age of AI, is not about having a completely fresh idea. Those are rare and, frankly, often slightly suspicious. Maybe it’s about where you choose to stand.
Most AI systems are trained to gravitate toward the centre – the average, the most probable next word, the dominant story. Perhaps the original move now is to step to the edges, on purpose. To use the machine to map the obvious, then go sniffing around in the places it doesn’t illuminate.
- The weird data point.
- The voice that barely appears in the corpus.
- The question that never quite gets asked.
You can let AI draft something and then argue with every paragraph. You can notice when your own writing suddenly sounds like the airport lounge and ask: “Where did I disappear in this?” And you can keep choosing, in small, stubborn ways, to do parts of the thinking yourself – not because the machine can’t, but because you don’t want to forget how.
Beyond what the AI can bluff
Back to my academic friends and the weary ritual of scrolling, marking, sighing. “I just want to hear them,” one of them says. “The student. Not the system.”
That, in the end, is the thing no tool can manufacture. A human voice, carrying the marks of having wrestled with something difficult and not entirely succeeding. A mind that has bumped into its own limits and widened, just a bit.
AI will get better and better at faking that. It will sprinkle in imperfections, craft “authentic” little asides, maybe even confess to its own limitations in a very charming way.
But critical thinking – the decision to stop, question, and care about what you’re doing with your mind – still lives in one place only.
In people.
In you, staring at a blinking cursor, choosing not just whether to ask the machine for help, but whether to pause, question its answers, and rewrite at least one paragraph in your own uncertain words before you hit send.
