AI is here to stay at the University, but it requires proper instruction and discourse, according to Pitt faculty.
Student Government Board and the University Library System hosted a panel about AI and academia in the Hillman Library’s Latin American Room on Wednesday evening. Moderated by SGB executive board member Patrick Ryan, three Pitt faculty members said instructors have an important role in teaching students best AI practices and clarifying classroom expectations, such as when, and why, AI should not be used for schoolwork.
Elise Silva, director of policy research at Pitt Cyber, said instructors have a responsibility to better articulate the value of schoolwork they expect students to complete without AI.
“I think that really matters right now, because a lot of times, without those explanations, we find that students are like, ‘I’m just going to automate the things that don’t seem to matter to me,’” Silva said.
In an interview with The Pitt News, Jeff Aziz, teaching professor of English and assistant dean of the Dietrich School, said he notifies his students of an “expected work method” and “appropriate practices” on assignments. He told students he wanted their presentations to be original and not formulaic because he believes literature classes should act as a place for reflection rather than “instantly assimilating” knowledge, and he found that they listened.
“Why not be clear about why we do these things?” Aziz said. “Here’s a thing that I noticed as a dean — people say, ‘Why don’t my students do that?’ And I’m like, ‘I don’t know, have you asked them to do it?’”
Morgan Frank, assistant professor in the Department of Informatics and Networked Systems, said he believes professors have a responsibility to teach students how to use AI because of its increasing importance in the professional world.
“[Students] should be worried if they are not using AI,” Frank said. “Imagine we tried to graduate you, and get you on the job market, and you didn’t know how to use it. You’re going to have a way harder time, and we don’t make a big deal about that.”
Frank attributed the lack of University-wide AI teaching policies to ongoing research — because the technology is relatively new, major studies are still underway.
“There aren’t a lot of safeguards because we haven’t reached consensus. You want the answers to this question to be thought out, tested in field experiments that were carefully done, and those things are all happening now,” Frank said.
Given this perceived lack of safeguards, Frank said he believes student and professor usage of AI mimics an “arms race,” where students may be tempted to use AI for submissions and professors may be tempted to use it for assessment. Frank said he believes there should be more research on how access to AI tools impacts students’ learning.
“Where we’re going to meet in the middle in terms of evaluating students’ work autonomously or safeguarding against AI-filled submissions when necessary — it’s very unclear to me what that resolution or equilibrium will look like, or what it should look like,” Frank said.
Aziz said that in any societal shift, institutions struggle to adapt “aggressively enough to meet the moment.”
“The University needs the flexibility to change in the face of the challenges presented by the current economic and artificial intelligence scene, and institutions aren’t that flexible,” Aziz said.
Aziz commented on a finding from a recent Pitt project surveying students about AI use, which indicated some students thought their instructors had used generative AI in assessing their work. Although he was unsure whether the students were correct, the finding caused him to reflect on the relationship between instructors and students in writing assessments.
“We need to respect that student concern — that earnest work is being done by their professors and that the contract between the professor and the student is what we both expect it to be,” Aziz said.