As artificial intelligence begins to join the classroom, faculty members are trying to balance technological assistance with human interaction.

OpenAI reports that one-third of adults aged 18-24 have used ChatGPT since its release in 2022. To keep up with the number of students using AI, Texas Tech assistant vice provost of Teaching and Learning Suzanne Tapp created the AI Resources and Guideline Committee in 2022 to assist faculty with the ins and outs of large language models.

“The first thing that we did was get familiar with AI and LLMs. We’re not computer scientists. We’re not IT specialists,” Tapp said. “In their worlds, it’s not a new concept, but in my world it is.”

Tapp said Tech does not have a universal AI policy because the university cannot account for the varying classroom styles, but the ARGC helped create a set of guidelines for professors to apply to their syllabi based on the subject, class and level.

“If my goal for students is to learn foundation-level information, I might not want students to use AI,” Tapp said. “I really want students to learn that material.”

While observing how students use ChatGPT, Tapp said the most difficult obstacle for faculty was gauging whether a student is sincerely taking in and learning the material.

“We’re seeing students use AI to write their personal reflection statements, discussion posts, to write papers, to solve problems,” Tapp said. “The problem with that is that there is no accurate way to measure the authenticity of a student’s knowledge.”

Although Tapp has recommended the use of AI detectors, she said they are often unreliable. She said professors should learn both a student’s writing style and the common mistakes an AI model can make in its writing.

“AI detectors present a lot of false positives, and they are more prone to flag a non-native speaker,” Tapp said. “One of the best strategies is not to get so caught up in catching AI, but rather change the way that you assess the product.”

Detectors are not the only tool Tapp uses; she said AI-generated work leaves a footprint behind in the metadata, which allows her to see whether a student used AI.

Although she wants protections against AI misuse, Tapp said she is not anti-AI; rather, she wants students to reflect on their own use of it and whether it affects their ability to learn at Tech and beyond.

“I don’t want to come off like I am the AI police, because I’m not. I use it every day,” Tapp said. “But some critical thinking has to happen. Why are you here? What do you want to do with your degree? What do you want it to be worth?”

When it comes to using AI in daily student life, Nathan Schober, a fourth-year university studies major from Fort Worth, said he uses LLMs like ChatGPT and Microsoft Copilot to assist with creating emails and study plans.

“I write up one or two sentences — very brief, very concise, not professional at all — and I throw it into something like Copilot,” Schober said. “I say I need to make this professional and make sure it converts it into like two or three paragraphs with the proper formats.”

When it comes to using AI to understand his course material, Schober said he will do the assignment informally at first to ensure he understands the content, then use AI to assist with cleanup.

“I’ll write basically the full three pages myself without citations, spell check, proper grammar, but with all the content I know and understand,” Schober said. “Then I ask ChatGPT to revise this and add citations in proper APA format.”

Schober said students using AI need to make sure they are using it as a tool and not a cheating device. He said teachers can offer more in-class assignments to remove the possibility of cheating.

“If you don’t know how to use it, you’re screwed, but if you rely solely on it you’re behind everyone else,” Schober said. “I would say in-person assignments are a really good way of going about making sure students know the material.”