Idea
Are human beings necessary in the life cycle of knowledge?

By David Ross, writer and AI consultant, Ross Consulting
A few nights ago, I spent 90 minutes online with the staff of an elite public school in Shenzhen, China. I gave an hour-long talk about generative AI and its uses in education, then engaged my audience in a 30-minute Q&A.
Near the end of our chat, one of the teachers made a passing comment that has made me think about a potential crisis in our implementation of AI in schools. I call this foreseeable crisis, with very little hyperbole, the classroom AI doom loop.
The teacher in question strode to the front of the room to grab the microphone and then explained the tremendous pressure he and his colleagues face. Students, parents, administrators, government officials, industry leaders, and the public were demanding that they increase students' AI literacy while training learners to use the tools in ethical and effective ways.
This implementation was, somehow, to occur in a learning environment bereft of guidelines, policy, laws, regulations, research, models, or proven strategies. He and his teaching colleagues were left on their own to face the most invasive disruption to education since, well, paper.
Students in his high school history class were turning in 25-page reports, perfectly written, clearly reasoned, and adorned with pages of citations. Did he really want to tell influential parents that their cherished offspring had cheated by using AI? Even if he did, could he prove it? He had no clue how to assess the work.
The teacher glared at me, the expert, and asked a two-part question: "Where do you see this going? Best-case scenario and worst-case scenario?"
Worst-case scenario: a screenplay
I, and all the other experts in this field, have answered the best-case scenario question countless times, but I had never really pondered the idea of a worst-case scenario. Three days later, I have one. I'll call it Johnny and Mr. Bennett Enter the Classroom AI Doom Loop, with apologies to the team who wrote the introduction for the classic American television program The Twilight Zone ...
You're traveling through another dimension -- a dimension not only of sight and sound but of mind, both human and artificial. A journey into a wondrous land whose boundaries are that of imagination. There's a signpost up ahead: You are now entering the Classroom AI Doom Loop ...
Mr. Bennett, who teaches 10th-grade history and is the father of a newborn, asks ChatGPT to create five standards-based writing prompts for the topic of a compare-and-contrast essay on the root causes of the French and American Revolutions. ChatGPT pops out five prompts a moment later. Mr. Bennett skims the options and quickly chooses prompt No. 2:
"Examine how economic hardship, especially unfair taxation and national debt, fueled the American and French Revolutions. How did the similarities and differences in each country's financial pressures affect the direction of revolt?"
Mr. Bennett pastes the prompt into a new assignment in Google Classroom and schedules it for release the following morning. He goes back to caring for his new baby.
Johnny sees the assignment in the morning, copies it, pastes it into Claude.ai and asks Claude to write a response in the style of a 10th-grade student. Claude generates a passable essay, which Johnny scoops off the screen and pastes into a blank email in an attempt to remove any digital watermarks. He then copies the scrubbed text and pastes it into an AI humanizer, which makes it read even more like a 10th-grader's work by eliminating the mechanical tropes of AI-generated text.
Johnny, who is thinking only about his overloaded daily schedule, doesn't even read the output from the humanizer before downloading it as a Word doc and sending it via email attachment to Essay Grader AI for automatic grading.
Essay Grader AI assesses the essay using a system-generated rubric, giving Mr. Bennett advice for improving the assignment and Johnny advice for improving his writing. The AI outputs the score back into the Google Classroom grade book, which ports that grade into the district grade book.
Mr. Bennett, consumed by his fatherly duties, should really look at the feedback from Essay Grader AI, but he doesn't have time. He supplements his salary by coaching the girls' basketball team after school. For understandable reasons, he trusts the AI to relieve him of the onerous task of grading 125 essays every week.
"We know that a scenario can be real, but who ever thought that reality could be a scenario? We exist, of course, but how, in what way? As we believe, as flesh-and-blood human beings, or are we simply parts of an AI's automated workflow? Think about it, and then ask yourself, do you live here, in this classroom, in this school, or do you live, instead, in the Classroom AI Doom Loop?"
The mechanics of the doom loop

Let鈥s follow the workflow:
- Teacher prompts AI, which generates assignment
- Teacher posts assignment in Learning Management System
- Student copies text of assignment and prompts AI to create response
- AI generates a response, which student pastes into an AI humanizer
- AI humanizer outputs revised response, which student uploads to automated AI grader
- AI grader evaluates response then posts grade/feedback to LMS
- LMS outputs grade into Student Information System
No humans were harmed in this process because humans were only ancillaries to the process. And this is today's technology. By the beginning of the next school year, agentic AIs will be able to eliminate humans from any involvement in the knowledge transmission cycle. If the last 100 years of technological innovation have taught us anything, it's that if something can be automated, it will be automated.
Is this scenario really that far-fetched? Students and parents are busy and stressed. They hate homework because they have to give up their evenings and weekends to do it or monitor it. Teachers are busy and stressed. They hate homework because they have to give up their evenings and weekends creating it and then grading it. There is no conspiracy here, but humans will all choose to use AI for similar reasons.
Wouldn鈥t it be ironic if the solution to the industrial model of knowledge transmission is in fact automation?
Do not blame the tech sector
The tech sector did not create AI so that students could cheat on history homework or teachers could relieve themselves of the drudgery of grading assignments. These are unintended consequences of AI's functionality. However, the training data and processes used to build these tools play a significant role in how a student or teacher uses them. One also can't ignore the reality of the market: Companies must create products that generate revenue as well as text and images. Form and function are close to an ideal match. Generative AI tools are large language models. Teaching and learning is an even larger language model. They fit together naturally.
In 1951, the philosopher Bertrand Russell wrote an essay entitled "Are Human Beings Necessary?" in response to the new field of cybernetics. He pondered the consequences of an automated society long before generative AI as we know it was even imagined. Russell concluded that an automated society might not necessarily require human beings to function, but that it probably wasn't a good idea.
We have come to the inflection point where we can automate most elements of knowledge transmission. I'm not sure if that is a good idea. But before we take up residence in the Classroom AI Doom Loop, we should have a serious policy discussion about the purpose of education. If a major function of education can be automated, it's probably not human enough.
I can illustrate this point by advising you to walk through the aisles of your local megamart. You will see families, almost always including a toddler sitting in the shopping cart, eyes glued to a tablet, oblivious to the chaotic bounty of capitalism that surrounds them. We have trained a generation of children to be entertained and taught by staring at a screen. Or, if you prefer to learn via screen, watch a teacher named Ema, who makes the same point much more eloquently on TikTok.
The current iteration of AI tutors is really good at teaching content to students, though, as Jeremy Knox notes, in a manner that currently breaks no new ground pedagogically. If children prefer to learn that way, let them. For thousands of years, teaching has consisted of equal parts transmission of knowledge and human development. Let the machines handle the mundane transmission of low-level knowledge.
There is not a school on earth that doesn't have a poster or plaque in the office that says something like the following: "Our mission is to develop lifelong learners who effectively communicate, critically think, collaborate, and use their creativity to become successful in college and career while improving their community as active citizens." These competencies are the components of education that humans excel at modeling, explaining, coaching, and teaching.
Let鈥s live up to those promises by letting the humans focus on human development and the array of skills that are described in frameworks as varied as t, , , , , the , and . They are all based on lived experience, and that鈥s something humans currently hold a monopoly on.
The ideas expressed here are those of the author; they are not necessarily the official position of UNESCO and do not commit the Organization.
David Ross is a writer and AI consultant and the former CEO of the Partnership for 21st Century Learning.