Pitt’s new attempt to integrate AI into higher education with the help of Amazon Web Services is drawing both criticism and praise.
On Oct. 22, Pitt announced its new partnership with AWS and Anthropic to create an “AI-enabled Campus of the Future.” This will include University-wide access to a chatbot called Claude for Education, the development of a “workforce readiness program” using technical training from Anthropic and the integration of Amazon Bedrock — a machine-learning platform that companies use to easily build AI applications — into PittGPT, Pitt’s own AI chatbot.
Pitt previously partnered with AWS last spring for its Cloud Innovation Center. Brady Lutsko, communications director at Pitt Digital, said the new partnership is “a natural next step in existing relationships.”
One of the main aspects of the new Pitt-AWS partnership is standardizing AI access across the University.
“The biggest distinction for Pitt is making sure every single student has access to these tools — we are leveling the playing field,” Lutsko said. “The goal is to integrate AI in a way that gives students hands-on, career-ready experience and to do it thoughtfully. For the University, true leadership means equitable access and ethical use.”
Lutsko added that the partnership will prepare students for workplaces that will use AI.
“The reality is that AI is already in the workplace and being used by students, and that presence is only going to grow,” Lutsko said.
Claude for Education, which the new partnership will give students access to, is a generative AI chatbot powered by large language models and tailored specifically to academic institutions. Lutsko said Claude for Education prompts users with “guiding questions” instead of simply giving an answer to a problem, an approach he said is more conducive to learning.
“The University sees Claude acting as a personalized coach for critical thinking, and that’s a significant benefit no matter what a student is studying,” Lutsko said. “This tool was designed to support the way students learn.”
Michael Madison, coordinator of the Pitt AI Scholar-Teacher Alliance, said the language of the partnership means Claude will not be forced upon students and faculty.
“When Pitt says it is going to be an AI-enabled University, the word ‘enabled’ is a very important word in this context,” Madison said. “What they mean, as I understand it, is that Claude for Education will be available to faculty, students and administrators. I do not think that it will be mandated or required that it be used by anybody.”
Madison said the partnership represents Pitt’s strategy in competing with other universities by promoting its students.
“I think it sends a signal to the job market that Pitt graduates are worth hiring, and Pitt is forward-thinking when it comes to AI,” Madison said.
Not all professors see Claude for Education as a net positive. Zachary Horton, an associate professor of film and media studies, said that because AI technology is continuously shifting and disrupting the economy, teaching the higher-order skill of critical thinking is more important than teaching the use of generative AI.
“Students are not going to get jobs because they’ve learned how to be a good Claude prompt engineer — that’s totally irrelevant,” Horton said. “What matters is that they can think creatively, lead and communicate effectively, synthesize information and thus be ready for a hybrid AI and human workplace.”
Next semester, Horton will teach ENGLIT 2270, “AI Cultures,” which investigates the history and culture of AI.
According to Horton, large language models — which power Claude — generate their responses from data and statistical patterns rather than understanding. LLMs “can’t tell if [they’re] wrong or right about anything,” Horton said.
“They’re mimicking the things we’ve created before as humans, but there’s no knowledge behind it, and I think that’s what’s problematic,” Horton said.
Horton called LLMs a “crisis for education” because he believes classroom writing assignments would be threatened if students relied on generative AI.
“AI can respond to assessments,” Horton said. “It can’t do it well, but it can do it plausibly.”
Some proponents of generative AI tout its ability to complete low-skill tasks, freeing up users’ time. Horton believes summarizing research papers is far from a low-skill task, making such offloading a “danger” of the technology.
“Summarization involves value judgments about what actually matters and what doesn’t, what things are connected and what aren’t, and if you’re just offloading this to LLMs, your ability to synthesize information is going to atrophy,” Horton said.
Madison said he “has a lot of sympathy” for the idea that colleges should focus solely on critical thinking rather than on securing jobs for students. However, according to Madison, this is “not a practical way to organize the entirety of a higher education institution.”
“People pay a lot of money to go to [Pitt], so it’s legitimate to expect that one of the things that the University will do is get you ready to live not just a productive life, but it will get you ready to live a life that includes a productive career,” Madison said.
Annette Vee, an associate professor of English who studies the intersection of computation and literacy, said she believes the Anthropic partnership is safer than one with a larger AI company, such as OpenAI, because Anthropic may have stricter data privacy policies. Vee thinks Claude for Education is superior to ChatGPT because it is tailored toward learning.
“Claude for Education is trying out a Socratic method — a tutoring method as a default — rather than answering questions and outsourcing the cognitive work that we need students to do,” Vee said.
In a study Vee and other Pitt faculty conducted last year on the use of generative AI in higher education, about 90% of students reported using generative AI tools, with ChatGPT the most common.
According to Vee’s findings, most students use unpaid versions of ChatGPT and Gemini. She said this is a “haphazard way” of using AI because of the weaker data protection and lower overall quality.
“I think [we should] standardize across the University one technology as much as we can,” Vee said, “[in] the same way that we standardize on Canvas.”
John Radzilowicz, director of practice and assessment at Pitt’s Teaching Center, is concerned about the University’s communication regarding the partnership. Even though the news release said the technology would be available through Canvas, Radzilowicz — who helps operate Canvas through the Teaching Center — said he received no communication about this prior to the release.
“Most of these contracts and tools that have been brought in have been done by Pitt [Digital], and we haven’t been involved in those discussions, which I think is perhaps a little concerning,” Radzilowicz said.