Tufts professors are increasingly shifting their assessments from take-home essays to in-class exams in response to the rise of artificial intelligence and large language models.
Vickie Sullivan, a professor of political science, believes that AI usage among students has made take-home essays an unreliable method of assessing students’ learning.
“I believe that in introductory level courses with a large enrollment faculty are more likely to encounter students who will take the easy way out not only by consulting AI but who will also cut and paste from it,” Sullivan wrote in a statement to the Daily.
Nicholas Anderson, a postdoctoral scholar at Ohio State University and former lecturer in political science at Tufts, said AI undermines students’ fundamental learning strategies.
“You have to really have the experience of being uncomfortable and not knowing the answer, and really struggling with it, and maybe not knowing the answer for months or ever,” he said. “This slow, patient thinking, which is one important experience of a good liberal arts education, is undermined by AI and its instantaneous gratification.”
Although Sullivan recognizes the risks that AI usage poses to take-home essays, she believes the shift toward in-person exams in introductory-level classes could leave students unprepared for writing in advanced courses.
“I am concerned about the ability of students to learn how to write compelling analyses. It is my personal experience that the very process of writing induces deeper, more assiduous thinking,” she wrote. “It can still be taught in upper-level courses but I think students will rarely encounter its demands in lower-level courses.”
Anderson also explained that AI usage forces teachers to police their students’ writing rather than grade it from an objective standpoint.
“If ten percent of students are using ChatGPT to write this paper, that would be ten students in [a 100-student course] and that’s a lot of policing,” Anderson said. “It would put a lot of extra work and burden on my TAs, who would be looking out for this.”
AI usage has posed problems for educational institutions abroad as well. A study from the UK-based Higher Education Policy Institute found that 88% of students surveyed use ChatGPT and other AI tools for assessments, with 8% submitting AI-generated content in assessments without editing it.
Nonetheless, professors do not consider AI an entirely negative learning resource. Both Sullivan and Anderson believe there is some merit in using AI for certain educational purposes.
“In order for [AI] to be valuable, I think you have to have already reached a certain level of discernment and education to tease out what is valuable in AI and what is not,” Anderson said.
Mary Davis, senior associate vice provost for education and an associate professor in Urban & Environmental Policy & Planning as well as economics, said that the university has been working to address the rise of AI in education.
“An AI Taskforce of faculty representing the Tufts schools are working together to identify the challenges and opportunities presented by the growing use of AI in education,” she wrote in a statement to the Daily. “Taskforce recommendations in the form of guidance on AI use will be circulated later in the fall.”
The task force, an initiative of the Center for the Enhancement of Learning and Teaching, has been working to help professors learn how to use AI and combat its inappropriate usage.
The task force’s website listed links encouraging professors to “rethink assessments” and coursework, and to take steps to update course standards.
“Given the challenges that AI poses to traditional assessments, now is the time for each of us to rethink our assessment strategies to foster authentic student engagement,” wrote Carie Cardamone, associate director for STEM & Professional Schools, in one such article. “Use a variety of in-class activities, short writing assignments and quizzes to provide students with opportunities to retrieve knowledge and practice interleaving skills throughout the semester.”
Davis also said that the task force is focused on better understanding AI — both its opportunities and challenges — rather than on crafting specific rules.
“The university-wide Taskforce is developing recommendations and general guidelines on AI but these are not intended as hard and fast policies,” she wrote. “There’s no one-size-fits-all approach that’s been taken by the university.”
Theo Weller contributed reporting.