The disappearance of the unclear question

By Victoria Livingstone, Johns Hopkins University
By Jeppe Klitgaard Stricker, Aalborg University
Several years before AI chatbots became publicly available, a student in one of our undergraduate courses wrote a paper on the disproven "Mozart Effect" – the theory that listening to classical music boosts intelligence, particularly spatial reasoning. It took the student a month of mapping ideas and reading articles to refine the topic to a manageable scope. She had begun the semester with a desire to research the impact of music on cognition and ended with a paper dissecting the prevalence of a disproven myth. With an AI chatbot, this student could have narrowed her focus within minutes.
The appeal of generative AI is obvious – and used intentionally, at the right phases and for the right purposes, AI can be a powerful tool for learning and teaching. But there's a catch: as students and educators, we may begin to "optimize" for what elicits useful responses from generative AI.
Without the aid of chatbots, our student searched databases, read articles, and followed bibliographic trails. When she found conflicting information in the research, she had to think critically to evaluate the sources and theories presented. Ultimately, her research led her to a nuanced conclusion that disproven theories can still have beneficial effects.
Efficiency without friction
Large language models offer more direct paths. When we prompted ChatGPT-4 to describe the Mozart Effect, the chatbot immediately returned the result that the theory was a controversial "mix of science, hype, and a bit of myth!" It offered summaries of the research without the detail – and without transparency.
Generative AI, then, would have given our student quick responses but removed much of the friction – the uncomfortable, often slow process that requires students to critically evaluate sources, draw connections, and refine ideas. AI offers quick answers, but the chatbot's output was less complex than what our student concluded. Would she have gotten there working in reverse, with AI feeding her an idea and her then looking for sources to back it up? Perhaps so, but that isn't true questioning – that's just finding sources.
Further, students often learn more from badly framed research questions than they do from good ones. Questions with obvious answers, topics too broad in scope, and intellectual dead ends can cultivate critical thinking and build intellectual stamina. In fact, good questions often begin as bad ones followed by a productive struggle. The process of grappling with an unclear question is uncomfortable. To refine research questions, students must be willing to reevaluate and revise their initial positions. It is within that discomfort – that cognitive friction – that real learning takes place.
None of this, however, aligns well with generative AI. This technology rewards the illusion of precision and clarity, and an output-driven kind of logic. The student or teacher who can produce the most optimized, constrained, and system-compatible prompt will get the best result. This is not a flaw but an integral design component. AI nudges us toward a particular kind of cognitive behavior – one that favors instrumental clarity over depth. And this nudge, subtle and unannounced, is steering students and educators toward a new set of habits and assumptions.
From questions to prompts
In the short term, this feels like a win. But over time, we risk conflating "better prompts" with "better thinking." We may begin to ascribe value not to what we want to know, but to what the machine is best at answering. In an overheated educational system optimized for consistent outputs, where grades and time are key, efficiency becomes a natural goal. But the shift is consequential: the metric of value is no longer internal – curiosity, relevance – but external, driven by the pressure to produce.
Generative AI is a phenomenal technology for helping students achieve, but it generates output without necessarily building much actual understanding along the way. For some students, generative AI can enable academic agency. For others, the opposite is true. The danger here is not primarily that we offload learning, but rather that our intellectual priorities shift without discussion or reflection. We learn implicitly to ask the kinds of questions that work well for generative AI – and we stop asking the ones that don't. And in doing so, we risk losing something essential: the capacity to ask questions that do not yield immediate or satisfying answers. In many fields of study, including literature, philosophy, and the humanities more broadly, ambiguity is not a problem to be solved. It is a resource for continuous critical thinking, renegotiating arguments, and reflection.
Generative AI, however, is not designed to dwell in ambiguity. It can mimic ambiguity, yes. It can reflect the style of a speculative essay or a philosophical dialogue. But it is optimized for resolution. For outputs. For an articulated "answer" that looks clean and complete.
This creates a subtle but powerful disincentive for ambiguity. Students may begin to avoid unclear questions, not because they lack value, but because they lack utility in an AI-mediated workflow. The fuzzy question comes to signify inefficiency in a world where the path of least resistance is increasingly privileged over complexity and deep learning.
A collaborative approach to questioning
However, rather than seeing AI-friendly questions and intellectual inquiry as mutually exclusive paths, we might consider how they can work in sequence. Students could begin with AI-optimized questions to establish a foundation, then deliberately introduce the complexity and ambiguity that deeper learning requires.
Many students struggle to come up with research questions. Using large language models as conversation partners may allow them to arrive more quickly at a topic with a manageable scope. Once they have a topic, however, they should start looking at sources (journal articles, primary documents, etc.) rather than relying only on the output of AI, which, while improving, is still unreliable in terms of accuracy and source attribution. Further, AI-generated summaries make it easy for students to gloss over material without genuinely understanding it.
Asking students to summarize research or paraphrase passages often reveals the limits of their understanding. Here, too, we return to the concept of cognitive friction. Reading closely – with the ability to draw connections between various ideas, identify inconsistencies, and evaluate methodology – is slow, uncomfortable work. Teachers can assist students by annotating texts together or breaking students into groups to discuss research. Students can then use those discussions as a basis for in-class writing.
This two-stage approach acknowledges AI's utility while teaching students to recognize its limitations. The initial AI-assisted stage provides scaffolding, while the subsequent human-led inquiry builds intellectual muscles that only resistance and struggle can develop. The key is teaching students to see AI-generated content not as an endpoint but as raw material for more sophisticated questioning.
In this way, generative AI becomes not a replacement for traditional questioning but a new tool within a broader ecology of inquiry – provided we explicitly teach the difference between optimizing a prompt and deepening a question.
Beyond the prompt
The problem we face regarding questions in education is not primarily a technological issue. It is a pedagogical and philosophical one. As educators, we must ensure that the purpose of questioning does not collapse into the mechanics of prompting. It is pivotal that students continue to value questions, not for their compatibility with AI, but for their capacity to open up new ways of thinking.
This means protecting, somehow, the kinds of questions that do not yield quick answers. Questions that are slow, layered, and unresolved. It means resisting the urge to reframe every problem in terms that make it legible to a machine. And it means teaching students to recognize when a good question is one that cannot, or should not, be answered by an algorithm.
We need to reassert the value of cognitive friction. In an age of frictionless interfaces, learning must retain its difficulty. The struggle to articulate a question, to explore a problem without a clear solution, to follow a line of thought that leads nowhere – these are not inefficiencies. They are the essential processes by which human understanding grows.
And yet, this is not a call to reject generative AI. On the contrary, generative AI models can have immense value in higher education. They can assist, extend, and even inspire our thinking. But there's a chance we will start thinking differently, and the biggest loss will be the loss that is hardest to detect. We will still have conversations, and we will still teach and learn. But something vital may be missing before we know it.
Clearly, generative AI will continue to play an ever more prominent role in higher education. But in a world with plenty of fast answers, it's more important than ever to hold on to the purpose of the question itself.
The ideas expressed here are those of the authors; they are not necessarily the official position of UNESCO and do not commit the Organization.
Victoria Livingstone, PhD, is a writer, educator, and editor. She was a professor before joining Johns Hopkins University, where she is the managing editor of a peer-reviewed journal of literary scholarship. In 2024, a piece she wrote for Time Magazine gained considerable international attention.
Jeppe Klitgaard Stricker leads the administration at Aalborg University's Department of Clinical Medicine and works as a speaker and advisor on generative AI in education. He writes a newsletter with pieces republished in various international outlets, including an upcoming collaboration with the German Goethe-Institut this summer.