David Chalmers, professor of philosophy and neural science at New York University, delivered the 40th Selfridge Lecture at the Health, Science and Technology building on Tuesday.
Titled “What We Talk to When We Talk to Language Models,” the lecture was hosted by Lehigh’s philosophy department and focused on the philosophy of artificial intelligence.
The annual Selfridge Lectures were established by Lehigh’s philosophy department to promote discourse around philosophy. The lectures are supported by a gift from the late Charles W. MacFarlane, C.E. 1876, Ph.D. Freiburg 1893, and LL.D. Lehigh 1922. Past Selfridge lecturers have included Edward Said and Daniel Dennett.
Ricki Bliss, an associate professor and chair of Lehigh's philosophy department, said the names of previous lecturers may not be widely recognized by those outside the field.
“Inviting somebody who could speak on some issue to do with consciousness, mind, AI technology (and) virtual reality would be enjoyable and engaging for the broader Lehigh community, rather than inviting somebody to talk about, say poetry,” she said.
Bliss said she proposed Chalmers as a lecturer because his areas of specialization align with research being conducted on campus. She also said she's known him since she was a Ph.D. student at the University of Melbourne.
Chalmers is also the co-director of the Center for Mind, Brain and Consciousness at New York University. Bliss said Chalmers was ultimately selected because of his impact on discussions surrounding the mind, AI and consciousness.
Chalmers said technology raises new philosophical questions.
“Technology gives us new cases to think about,” Chalmers said. “We’ve only had one case of serious intelligence before, and that was human intelligence. Now, we’re getting a new case of artificial intelligence, and we can use that as kind of a test bed for our philosophical theories.”
Attendees listen to David Chalmers speak on Tuesday. (Max Randall/B&W Staff)
Chalmers shared a screenshot of a recent ChatGPT query in which he asked the AI bot for directions from Greenwich Village in New York City to Lehigh.
He said common uses for large language models include asking for directions or searching for restaurant recommendations.
During the lecture, Chalmers discussed large language models in terms of mental concepts such as beliefs, desires and consciousness.
Chalmers said he's received multiple emails from users claiming that AI systems are conscious. He defined interlocutors as the communicative nodes in a conversation with an AI model, whether chatbot or human, and said an interlocutor could be a person, an algorithm, a hardware implementation, an illusion or something else entirely.
Chalmers also said large language models may not necessarily be conscious, but instead quasi-interpretative, meaning they may possess quasi-beliefs and quasi-desires.
However, he said large language models could become conscious in the future. He described his working hypothesis as viewing large language model interlocutors as quasi-subjects realized by the model itself.
Large language models can have different quasi-beliefs, quasi-desires and quasi-goals depending on the mode in which they're operating, which is influenced by their inputs, he said.
During the lecture, Chalmers introduced a hypothetical scenario involving a conscious work-bot and a conscious home-bot running on the same hardware. He said both bots would support independent conversations, with the work-bot only discussing work life and the home-bot only discussing home life. While they would share the same hardware, he said they would have separate memory streams.
Chalmers asked the audience whether this scenario would constitute one conscious entity or two separate ones.
He connected the example to the television series "Severance," in which characters switch identities between an "innie" and an "outie." The innie represents work life, while the outie represents home life.
Chalmers said the show mirrors John Locke’s “An Essay Concerning Human Understanding,” which explores whether a single body can sustain two separate consciousnesses.
He said if the characters in the show are considered distinct individuals, the two AI bots in the hypothetical example should also be considered separate entities because of their differing personalities.
“I think technology can help philosophy, and at the same time, philosophy can help us think about technology,” Chalmers said.
Devan Patel, ‘26, a neuroscience major at Muhlenberg College, said he attended the lecture because of his interest in Chalmers’ work. He said he admires how Chalmers approaches the hard problem of consciousness by examining ways large language models bridge biological and AI systems.
Patel said it’s valuable for philosophers and neuroscientists to engage with one another on these topics.
“It’s especially important for the coming years, with what’s going to happen with AI,” Patel said.