As AI creeps into college curriculums, Pitt students are deciding whether to use it and how to do so.

As Pitt seeks to integrate AI into classrooms and research, some students are skeptical of AI models' usefulness in assignments while others are optimistic about their potential.

Danny Peelen, a junior computer science major, said he uses Claude and ChatGPT for schoolwork. Peelen uses ChatGPT to generate templates for emails and essays and Claude for his coding work, saying Claude is better engineered for coding-related tasks. 

Despite this, Peelen added that learning computer science fundamentals without the help of generative AI is crucial.

“As a software engineer, you need to have a good concept of the engineering process and how to actually build out code. In essence, that’s the whole goal of a computer science program,” Peelen said.

Liam Brem, a sophomore computer science major, is currently a teaching assistant for CS 0445: Data Structures and Algorithms. Brem, who said the class focuses on computer science fundamentals, has recently noticed students' fundamental coding skills declining.

“A lot of the code that I see other people writing, it’s not necessarily that you can tell it was AI generated right away, but when you start asking them questions about it, digging a little bit deeper, they don’t know what’s going on,” Brem said. “That’s a clear tell that it was AI generated.”

Brem has “drastically” reduced his own AI usage for coding over the last couple of weeks because he believes it inhibits his learning. An MIT study found that participants who used AI to help write essays exhibited weaker brain connectivity and memory retention.

“Since [ChatGPT] was released, the more I found myself using it, I was more productive and able to accomplish more, but I wasn’t actually becoming well-practiced in the fundamentals,” Brem said. “If you were to [have taken ChatGPT] away at one point, I probably would have been lost.”

Elise Silva, director of policy research at Pitt Cyber, worked with faculty from across Pitt’s campuses to interview students about their use of AI. The study found that a majority of Pitt students use AI in an academic setting.

Silva said she thinks the most significant findings of the study, which was sponsored by Pitt Digital, were students’ emotions about using AI, such as guilt, shame, fear, anxiety and distrust. Despite these negative feelings, Silva said, many students used AI anyway.

“Some students seemed wary of AI. But then later on in the focus group, they would disclose that they had used it when they were up against a deadline and didn’t know what else to do,” Silva said. “Even students who maybe even ethically disagree with using AI, because of how hard it is to be a student, they’re using it as a tool when they need it.”

The study found students mostly used AI in productive ways and for work they deemed less important.

“Students overwhelmingly, across departments and across disciplines, use it when they think something is not worth their while, like busy work,” Silva said. “We see that a lot in discussion posts and when students don’t understand the importance of [the work.]”

Doyle Keane, a senior psychology and neuroscience major, said he uses ChatGPT for literature searches and research analyses, which is helpful in his neuropsychology lab.

“I would say, ‘Find me articles or sources relating to damage to Brodmann’s area 44 and how that relates to Western Aphasia Battery,’ or ‘How did these scores affect this outcome?’ or ‘How does this variable affect this one?’” Keane said.

For schoolwork, Keane said he uses AI mostly for long, menial parts of assignments or when he gets “totally stuck” on a problem.

“I’ve used it to help me finish assignments when I’m totally stuck on an assignment, not making any progress and I need to finish this so I can get to bed,” Keane said. “But it’s never something where I start an assignment and open it up immediately.”

Aunomitra Mandal, a sophomore philosophy major, said she “refuses” to use AI for her schoolwork because she believes her major is inherently interdisciplinary and requires context from prior reading and analysis.

“You can’t read something [in philosophy] without reading something else — without knowing about the ancients, without knowing about the ethics, without knowing about metaphysics,” Mandal said. “No AI can analyze something for you. You have to know it for yourself.”

Shreyash Ranjan, a junior computer science major, believes that using AI in fundamental computer science classes may hurt students’ job prospects at companies that require technical interviews.

“Fundamentals are probably the very first question that you’ll get asked upon during an interview for any major tech company like Google, Meta, Amazon,” Ranjan said. “If you’re using AI through the entire class and you’re not actually picking up on anything, you’re going to get to the interview and fail.”

Ranjan, a Claude Campus Ambassador, said he believes Anthropic, the company that created Claude, spends more time than its competitors ensuring its product is safe and useful. Ranjan said some of his friends in more difficult computer science classes use the chatbot as an assistant that guides them through writing code in a Socratic style.

“They’ll go one section at a time and say ‘Here’s what I’m thinking, here’s what I’ve written — Can you help me find errors or help me get to a solution a little bit faster?’” Ranjan said.

Peelen said he uses ChatGPT to generate essay outlines rather than to write the essays themselves because current AI “can’t think” or “know what [he] wants as a writer.”

“I think people conflate the fact that ChatGPT can reproduce language to mean that it actually can think,” Peelen said. “At the end of the day, large language models model language. They know each word to produce after the other, but it actually has no cognition or actual idea of induction or deduction. It does not operate like a human brain at all.”

Mandal believes philosophy is rooted in the human experience and that machines therefore cannot effectively advance its ideas.

“AI doesn’t have experience in the real world, it can’t have sense perception, it’s not human,” Mandal said. “That takes away from its ability to analyze, especially with philosophy, because [philosophy] is so based in the human experience. That’s not something AI can replicate.”

Since AI is trained on ideas from the internet, Mandal said she believes using AI could be considered a form of plagiarism. Mandal said her philosophy professors have emphasized the importance of students brainstorming on their own.

“Teachers have always been against [plagiarism] ever since our youth — the reason for that is not just because it’s wrong to steal other ideas, but it’s how we facilitate our own ideas, our own thought processes,” Mandal said. “AI doesn’t come from nowhere. It uses all the ideas that were ever published on the internet.”