Faced with the meteoric rise of artificial intelligence in education, many schools and universities have banned phones in class, mainly to prevent cheating with tools such as ChatGPT. But an unexpected phenomenon is gaining momentum: teachers themselves are now being singled out for their excessive use of AI.
When teachers become students of AI
It was the New York Times that highlighted this contradiction. After an initial wave of panic over the risk of students plagiarising with generative AI, some teachers have turned to these same tools on a massive scale. The result: students are discovering that their lessons, assignments and grading are generated by artificial intelligence.
This was the case for Ella Stapleton, a student at Northeastern University, who discovered that her course materials contained stray prompts and content clearly generated by ChatGPT, sometimes riddled with errors and distorted images. Shocked, she demanded a refund of her tuition fees.
AI for teachers, but at what cost?
In a survey conducted in the United States, 35% of university lecturers now say they use AI regularly, up from 18% a year earlier. For many, these tools have become invaluable assistants: help with writing, generating course materials, automated grading, suggestions for more “empathetic” feedback, and so on.
But unease is growing among students. On platforms such as Rate My Professors, criticism is mounting: empty slides, impersonal comments, generic vocabulary, even incoherent answers. For young people who sometimes pay tens of thousands of dollars a year for their education, the impression of being taught by a machine rather than a human being feels like a betrayal.
An ethical and pedagogical divide
Some teachers, such as Paul Shovlin (Ohio University), are calling for reasoned use of AI, insisting on the importance of preserving the human connection and pedagogical judgment. Others, like Katy Pearce (University of Washington), are trying to integrate AI constructively: she has, for example, trained a chatbot on her own evaluation criteria to help students outside class hours.
At Harvard, Professor David Malan uses an AI assistant to answer his programming students' basic questions, freeing up time for richer interactions such as workshops and hackathons. Used this way, AI can in some cases improve the quality of teaching, provided it is transparent and properly supervised.
The challenge of trust
But when that use becomes excessive or is concealed, it destroys the relationship of trust. This is what happened to Rick Arrowood, also a professor at Northeastern University, who admits he used ChatGPT and other AI tools to build his course… without always checking the content or informing his students. The controversy forced him to review his practices, and the university has since introduced a clear policy: any AI-generated content must be disclosed, verified and justified.
AI in education is not an evil in itself. But opaque, unregulated use by teachers can undermine the very legitimacy of the education system. As the line between human and automated work blurs, the real challenge is not technological but pedagogical: ensuring that artificial intelligence remains a tool in the service of knowledge, not a substitute for educational effort.