Advancements in accessible AI tools have allowed students to generate entire essays, produce complex programs and solve difficult mathematical problems. However, AI also has a tendency to hallucinate without users ever realizing it.
This new tool has changed how students learn, and while many professors are concerned, others are adapting to its integration into their classrooms.
Every humanities class in our sample prohibited the use of AI, and the only classes that encouraged its use were two upper-division engineering classes. Overall, almost 60% of our sample allowed some use of AI in the learning process.
Many professors see AI as another tool, like calculators or spell check, and hope students will use it in a similar way. Others have adapted their courses to be resilient to AI reliance. Additionally, some professors hold discussions about AI use and encourage their students to opt out of AI altogether, citing its social and environmental harms.
UC Berkeley does not explicitly enforce a uniform policy regarding the use of generative AI tools in learning and education.
Gender and women’s studies professor Laura Nelson noted that before this summer, faculty were on their own. However, over the summer, Executive Vice Chancellor and Provost Benjamin E. Hermalin’s office sent out resources for AI use in education.
“There was a lot of discussion among faculty about how to consider AI. But, the university itself was giving very minimal, if any, direction,” Nelson said.
Despite campus’s guidance, some faculty still express concerns about the lack of clarity. Over the same summer, the Berkeley Division of the Academic Senate, which represents faculty interests on campus, shared its own academic guidance with professors. Following discussions with the Academic Senate GenAI Working Group, the senate encouraged faculty to include a clear statement about AI use in their syllabi so that students would not be inclined to cheat or misuse technology in ways that would affect their learning.
The Academic Senate provided three sample statements: “Full AI,” “Some AI” and “No AI.” Many of the classes we surveyed follow the Academic Senate’s guidance, either by directly using the template’s language or adapting the language to fit the course’s needs.
Chemistry professor Eric Neuscamman, who teaches Chemistry 1A, appreciated the flexibility given to professors to decide their own AI policies.
“There probably isn’t a one-size-fits-all policy for classes that (have various sizes and formats),” he said.
Common policies
We collected 36 AI policies by reaching out to professors and department heads.
“When people were (asked to share their policy with the Daily Cal), people said ‘I don’t want to share my policy,’” Nelson said. “I think it reflects the sense that there’s a hazard in sharing academic information in this particular political moment.”
Eighteen of the classes fall under the second policy provided by the Academic Senate, which states, “This course enables limited uses of GenAI tools but also prohibits broad use of them in cases that would be considered plagiaristic if the tool’s output had been composed by a human author.” Across these courses, instructors emphasized disclosure, responsible use and maintaining one’s voice.
For many courses, disclosure of AI use is explicitly required. Both STEM and humanities classes require students to identify how they used AI tools. Professors often ask for a citation that includes the prompts used and an explanation of how the conversation supported the student’s work and learning.
For example, the Computer Science 161 syllabus notes that students “must indicate specifically which answers/problems contain material written by AI.” Even classes that strongly discourage AI, such as Sociology 111AC, required students to disclose and be able to explain how AI was used.
Some professors note that if they believe a student is plagiarizing from AI, they “may require (a student) to complete (an examination) related to the content and skills tested in the original assessment.”
However, many professors were careful to note that AI answers are not always correct and may contain false information.
Public Policy 290, a graduate course, tells students to think of ChatGPT as a “drunk but brilliant intern.” Many classes note that students are responsible for their work and should remain aware and skeptical of AI outputs.
Syllabi commonly suggested that AI be used like tools such as Grammarly or SparkNotes, rather than as a substitute for original writing. Many courses encourage or allow students to use AI for brainstorming, summarizing, getting feedback or suggesting style improvements, especially in the context of writing.
Professors consistently warn against students losing their personal voice and intellectual ability, especially in classes where active thinking, engagement and discussion are important.
Neuscamman compared AI to search engines and peer collaboration, adding that while such resources reduce confusion, they also reduce opportunities for students to learn and develop problem-solving skills on their own.
Erika Weissinger, a Goldman School of Public Policy assistant professor, said she encourages students not to rely on AI in ways that undermine the goal of authentic engagement in her course, but rather to use it for brainstorming or revisions.
Some course policies address the societal and environmental implications of AI use. Diana Negrín da Silva, a geography lecturer, noted that the “serious negative environmental impacts” of AI led her to prohibit its use.
Reading and Composition on Topics in Ancient History and Mediterranean Archaeology, an R1B course, raises similar concerns in its syllabus, but with a darker message.
“You will be stuck at the mercy of an algorithm that is incapable of caring about you as a person or your unique perspective, made and controlled by billionaires who care even less about you and your humanity than the algorithm does,” the syllabus reads.
However, a handful of courses encourage AI use. The Bioengineering 140L syllabus encourages any AI or software tools but notes that “the student is responsible for the product, the correctness of its statements and references.”
Some classes strike a balance between permitting and restricting AI use. For example, in Information and Management Systems 153A, students are encouraged to use AI on their group project and take-home final exams, but not on assignments meant to solidify their initial learning and foundation.
On the other hand, some classes have adapted to discourage AI reliance. Bioengineering 140L professor Chris Anderson uses customized software to personalize and autograde questions so that they cannot be easily answered with AI. Other classes have moved away from take-home assignments entirely to remove the possibility of AI use.
While many classes have allowed AI use under specific conditions and warnings, others have banned it outright.
The range of AI policies is also reflected in the type and level of the class. Classes were categorized into five types: humanities (such as philosophy, education and French), science (such as chemistry and neuroscience), social science (such as public policy, geography and political economy), technology (such as electrical engineering and computer sciences or data science) and engineering (such as material science, mechanical engineering and math).
Of the sampled classes, the two that encouraged AI were upper-division engineering classes: Bioengineering 140L and Chemical Engineering 140. All of the humanities classes in our sample prohibited AI use.
Neuscamman noted that professors may need to adapt to and integrate the AI resources students are using. “It makes it more important that the course be structured in a way that it incentivizes the style of studying that leads to the type of learning that you’re trying to make happen,” he said.
Nelson believes this may still take a few more years. “I think that professors are really sincerely trying to figure it out. I do think that it’s a moment where we’re all trying to figure it out together,” she added.