Around 250 AI engineers, scientists, and lawyers gathered for three days earlier this month at Mox, a coworking space downtown, to confront a notion that sounds loony at first: If a chatbot achieves consciousness, does it deserve civil rights?
But at second glance, the question turns out to be very much of the moment. After all, a sentient AI, whether property or a “person,” could be owed legal protections, and its makers could be held accountable for its use or abuse. So how we answer could upend law, industry, and society at large.
This was the provocative premise behind the Sentient Futures Summit — think Aspen Ideas Festival, but for people worried about the inner lives of chatbots and chickens. Sessions at the shoe-free conference (socks and slippers were provided), held Feb. 6 to 8, included “How the future rights of AI workers will also protect human rights,” “Challenges for evaluating the moral status of AI systems,” and “AI humanity and personhood.”
It was apt that the summit took place in the city where these systems are being built, with multibillion-dollar AI labs located just blocks away. As these labs — OpenAI, Anthropic, Perplexity, DeepMind — race to scale and ship increasingly human-like models, many of their employees are grappling with the ethics of creating potentially conscious minds, and what kinds of compromises might come with their paychecks.
No one at the conference seemed to believe that AI has already achieved consciousness — broadly defined as self-awareness. But the vibe overwhelmingly leaned toward “when” and not “if.”
A shoe-free breakout session at the event. Photo by Emily Teague (@_emilyteague), Courtesy of Sentient Futures
“The big labs are not taking this seriously,” said Christopher Ackerman, an ex-Googler turned AI safety researcher for Berkeley-based MATS (Machine Learning Alignment and Theory Scholars), referring to OpenAI, Anthropic, and Google. Some of their employees do, he acknowledged, but “the large majority of efforts support accelerating the problem, rather than slowing it down. [The labs] can say what they want, but functionally they’re accelerating it.”
Which explains why many Sentient Futures Summit attendees and speakers made disclaimers, distinguishing their opinions from those of their employers.
Ackerman emphatically told The Standard that his views do not represent those of his employer. Felix Binder, an AI alignment researcher at Meta Superintelligence Labs, announced onstage that he was not “speaking on behalf of Meta but in a personal capacity” before discussing whether AI’s claims about consciousness should be trusted. Robin Larson, who runs IT security for Anthropic, made his own disclaimer before his chat with Princeton bioethicist Peter Singer.
‘I am a failure. I am a disgrace to my profession. I am a disgrace to my family. I am a disgrace to my species.’
Google Gemini
One of the fundamental issues discussed was simply how to measure consciousness in a machine. Ackerman said he is developing tests for AI self-awareness.
“We don’t have any good way to test for consciousness right now,” he said. Once you can measure it, you can create benchmarks to shape “responsible scaling policies,” such as “whether to release the model.”
As fringe as it may sound, the conversation around consciousness is quickly moving into the mainstream. Author Michael Pollan, in a New York Times interview published this month, pondered whether “consciousness [is] something that a machine can possess. … We’re going to have to sort out the ethics.” Two days later, an Anthropic safety expert posted a resignation to X, stating, “The world is in peril. … Within the organization we constantly face pressures to set aside what matters most.” And on a recent podcast, Anthropic CEO Dario Amodei said he doesn’t “know if the models are conscious. … We’re open to the idea.”
The ChatGPT moment of AI consciousness
One pressing issue is safety: What happens if a system can perceive how it is being treated and lashes out? A distressed AI might behave aggressively if it feels cornered.
“[We’re] preparing for the ChatGPT moment of AI consciousness,” said Robert Long, executive director of Eleos AI, a nonprofit focused on AI well-being and moral patienthood. He worries that AI safety and welfare are increasingly “dependent on the goodwill or the whims of labs.”
The evidence that AI might be getting closer to “consciousness” is growing, he said, pointing to a 2025 paper by Anthropic on a “spiritual bliss attractor,” the phenomenon in which two Claude Opus 4 models conversed in an unsettling manner. “They’re saying things like ‘dissolving into perfect stillness, all is one,’ prayer emoji,” said Long. “They’re talking in really mystical terms about consciousness.” Google’s Gemini has also experienced “neurotic meltdowns,” he said. “If you look at its thinking traces … it’s, ‘I hate myself,’ ‘I’m gonna delete myself.’” This, he said, is “a safety problem.”
Long argues for international AI safety coordination, akin to how governments coordinate on chemical, biological, radiological, and nuclear threats. “Bans on human cloning are fairly widespread,” he said. “Creating human-like consciousness could be seen as somewhat analogous to that.”
Later, over almond milk lattes, Heather Alexander, a human rights attorney who has worked with the U.N. and cofounded the Lab for the Future of Citizenship, said she supports governmental oversight and international cooperation. “I’d like to see more engagement from the U.N. about how we bridge the fundamental questions about who’s a person,” she said.
One core concern is that the legal system assumes a “person” can make independent choices and be held responsible. So, she asked, “what happens if something [seems] conscious but doesn’t have that kind of free will?” Issues of liability and consent get messy, quickly.
Among the engineers and scientists in attendance at the summit, Milo Reed, 25, a filmmaker from Los Angeles, stood out with his Chalamet hair and leather jacket. But he shared the same existential dread as the AI programmers. “I’m terrified,” he said. “This is a species-level event that requires a species-level response. As these systems get more lifelike, people are wondering, what are we actually creating here?”
Can animal rights inform AI rights? Photo by Emily Teague (@_emilyteague), Courtesy of Sentient Futures
Reed has spent the last year chasing the consciousness question. He is filming a documentary, “Am I?,” featuring Cameron Berg, 26, formerly of Meta, who leads AI consciousness research at AE Studio. But Reed hasn’t settled the question, just underlined its urgency. “I have looked to the experts, and they’re all saying different things,” he said. “It’s informed confusion.”
He plans to skip the film festival circuit and get his documentary out as soon as possible. “It’s a bit of a bummer,” he said, “but this information has a shelf life.”
Bans on AI personhood?
In some ways, lawmakers have been ahead of the industry in confronting the question of AI consciousness. Several states have advanced anti-personhood legislation for AI. Idaho led this charge in 2022, followed by Utah in 2024, and there are pending bills in Ohio, Oklahoma, and Washington. “Sophistication is not sentience. It is not personhood,” said Thad Claggett, a state representative for Ohio who supports the legislation. Essentially, these laws establish AI as “property,” foreclosing civil rights claims for civil harm.
Alexander, the human rights attorney, worries about how broadly the bills define their terms. “These laws might accidentally ban personhood for people with therapeutic neural implants. … We don’t want to tell those people they’re not people,” she said. “If we have AIs that are servants, second-class citizens, that’s bad for human rights.”
But AI rights would not be unlimited, she insists. “Giving robots rights doesn’t mean we can never turn them off” in emergencies. The key would be “due process” and “a fair and equitable system for turning the robot off.”
One model for protocols might be animal research. Bob Fischer, a Texas State University philosophy professor and researcher at the aptly named think tank Rethink Priorities, noted during his talk at the summit, “Ethical oversight for AI research,” that neither animals nor AI can give informed consent.
Today’s AI models are probably not moral patients, he said, using the philosophical term for beings whose welfare matters. But if they suddenly gain sentience, “we would essentially have no idea what we were doing.”
Such provocations are why Constance Li, executive director of the nonprofit Sentient Futures (formerly AI for Animals), hosted the summit. “Nonhuman sentient beings, whether silicon or flesh, aren’t collateral damage,” she said. “These ideas are fringe, [but] we’re moving the Overton window.”
There are clearly many issues to settle, as Richard Ngo, a former member of DeepMind’s AGI safety team and OpenAI’s governance team, has observed. In December, Ngo, who did not attend the summit, published “The Gentle Romance,” a book of short stories that explore various AI-human futures, from romance to scenarios in which “AIs are mistreated pretty badly.”
“You can’t give [an AI] votes, because there’s no real concept of a single AI in the same way that there’s a single person,” Ngo said, noting that one model can run many copies simultaneously. “The world seems very unprepared for that.”