{"id":171259,"date":"2025-12-02T15:54:10","date_gmt":"2025-12-02T15:54:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/171259\/"},"modified":"2025-12-02T15:54:10","modified_gmt":"2025-12-02T15:54:10","slug":"meet-the-anthropic-team-reckoning-with-ais-effect-on-humans-and-the-world","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/171259\/","title":{"rendered":"Meet the Anthropic team reckoning with AI\u2019s effect on humans and the world"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdy6 _17nnmdy5 _1xwtict1\">One night in May 2020, during the height of lockdown, Deep Ganguli was worried.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Ganguli, then research director at the Stanford Institute for Human-Centered AI, had just been alerted to OpenAI\u2019s new <a href=\"https:\/\/arxiv.org\/pdf\/2005.14165\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a> on GPT-3, its latest large language model. This new AI model was potentially 10 times more advanced than any other of its kind \u2014 and it was doing things he had never thought possible for AI. The scaling data revealed in the research suggested there was no sign of it slowing down. Ganguli fast-forwarded five years in his head, running through the kinds of societal implications he spent his time at Stanford anticipating, and the changes he envisioned seemed immeasurable. He knew he couldn\u2019t sit on the sidelines while the tech rolled out. He wanted to help guide its advancement.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">His friend Jack Clark had joined a new startup called Anthropic, founded by former OpenAI employees concerned that the AI giant wasn\u2019t taking safety seriously enough. 
Clark had previously been OpenAI\u2019s policy director, and he wanted to hire Ganguli at Anthropic for a sweeping mission: ensure AI \u201cinteracts positively with people,\u201d in everything from interpersonal interactions to the geopolitical stage.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Over the past four years, Ganguli has built what\u2019s known as Anthropic\u2019s societal impacts team, a small group that\u2019s looking to answer the thorniest questions posed by AI. They\u2019ve written research papers on everything from AI\u2019s economic impact to its persuasiveness, as well as explorations of how to mitigate elections-related risks and discrimination. Their work has, perhaps more than any other team, contributed to Anthropic\u2019s carefully tended reputation as the \u201csafe\u201d AI giant dedicated to putting humans first.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But with just nine people among Anthropic\u2019s total staff of more than 2,000, in an industry where mind-boggling profits could await whoever\u2019s willing to move quickest and most recklessly, the team\u2019s current level of freedom may not last forever. What happens when just a handful of employees at one of the world\u2019s leading AI companies \u2014 one that nearly tripled its valuation to $183 billion in less than a year, and is now valued <a href=\"https:\/\/www.cnbc.com\/2025\/11\/18\/anthropic-ai-azure-microsoft-nvidia.html\" rel=\"nofollow noopener\" target=\"_blank\">in the range of $350 billion<\/a> \u2014 are given the blanket task of figuring out how the ultra-disruptive technology is going to impact society? 
And how sure are they that executives, who are at the end of the day still looking to eventually turn a profit, will listen?<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cWe are going to tell the truth.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdy6 _17nnmdy5 _1xwtict1\">Nearly every major AI company has some kind of safety team that\u2019s responsible for mitigating direct, obvious harms like AI systems being used for scams or bioweapons. The goal of the societal impacts team \u2014 which does not have a direct analog at OpenAI, Meta, or Anthropic\u2019s other big competitors \u2014 is broader. Ganguli sees his job as finding \u201cinconvenient truths\u201d about AI that tech companies have incentives not to publicize, then sharing them with not only Anthropic leadership, but the rest of the world.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cWe are going to tell the truth,\u201d Ganguli said. \u201cBecause, one, it\u2019s important. It\u2019s the right thing to do. Two, the stakes are high. These are people. The public deserves to know. And three, this is what builds us trust with the public, with policymakers. We\u2019re not trying to pull the wool over anyone\u2019s eyes. We\u2019re just trying to say what we\u2019re seeing in the data.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The team meets in the office five days a week, spending a good amount of time in Anthropic\u2019s eighth-floor cafeteria, where Saffron Huang, one of the research scientists, usually grabs a flat white before a working breakfast with Ganguli and others.
(\u201cThat\u2019s the Kiwi in me,\u201d says Huang, a New Zealander who founded a nonprofit in London before joining Anthropic in 2024.) Team members work out together at the gym and have late nights at the office and day trips to the beach. They\u2019ve met each other\u2019s mothers and ridden in each other\u2019s cars while picking up their kids from school. They see so much of each other that Ganguli sometimes forgoes after-work hangouts \u2014 \u201cI see you all more than my family!\u201d a team member recalls him saying.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The result is a level of comfort voicing opinions and disagreements. The group is big on the \u201ccone of uncertainty,\u201d a phrase they use when, in true scientist fashion, they\u2019re not sure about aspects of the data they\u2019re discussing. It\u2019s also the name of a literal traffic cone that research engineer Miles McCain and Anthropic\u2019s facility team found, cleaned up, and fixed with googly eyes before installing it in the office.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The societal impacts team launched as Ganguli\u2019s one-man operation when Anthropic was solely a research lab. Research scientist Esin Durmus joined him in February 2023, as Anthropic was gearing up to launch Claude the following month. Their work involved considering how a real future product might affect humanity \u2014 everything from how it could impact elections to \u201cwhich human values\u201d it should hold. 
Durmus\u2019 first <a href=\"https:\/\/arxiv.org\/abs\/2306.16388\" rel=\"nofollow noopener\" target=\"_blank\">research paper<\/a> focused on how chatbots like Claude could offer biased opinions that \u201cmay not equitably represent diverse global perspectives on societal issues.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Around Claude\u2019s launch, the team relied on testing models before deployment, attempting to anticipate how people would engage with them. Then, suddenly, thousands \u2014 later millions \u2014 of people were using a real product in ways the team had no way to gauge.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">AI systems, they knew, were unpredictable. For a team designed to measure the impact of a powerful new technology, they knew frustratingly little about how society was using it. This was an unprecedented cone of uncertainty, spurring what eventually became one of the team\u2019s biggest contributions to Anthropic so far: Claude\u2019s tracking system, Clio.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">One of the most \u201cinconvenient truths\u201d the team has released was the creation of \u201cexplicit pornographic stories with graphic sexual content.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Anthropic needed to know what people were doing with Claude, the team decided, but they didn\u2019t want to feel like they were violating people\u2019s trust. 
\u201cIf we\u2019re talking about insight versus privacy, you can have a ton of insight by having no privacy,\u201d Ganguli said, adding, \u201cYou could also have a ton of privacy with zero insight.\u201d They struck a balance after consulting with Anthropic engineers and external civil society organizations, resulting in, essentially, a chatbot version of Google Trends. Clio resembles a word cloud with clusters of topics describing how people are using Claude at any given time, like writing video scripts, solving diverse math problems, or developing web and mobile applications. The smaller clusters <a href=\"https:\/\/www.anthropic.com\/research\/clio\" rel=\"nofollow noopener\" target=\"_blank\">run the gamut<\/a> from dream interpretation and Dungeons &amp; Dragons to disaster preparedness and crossword puzzle hints.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Today, Clio is used by teams across Anthropic, offering insight that helps the company see how well safeguards and reinforcement learning are working. (There\u2019s a Slack channel called Clio Alerts that shares automated flags on what each team is doing with the tool; Ganguli says he often stares at it.) It\u2019s also the basis of much of the societal impacts team\u2019s own work.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">One of the most \u201cinconvenient truths\u201d the team has released came from using Clio to analyze Anthropic\u2019s safety monitoring systems. Together with the safeguards team, Miles McCain and Alex Tamkin looked for harmful or inappropriate ways people were using the platform. 
They flagged uses like the creation of \u201cexplicit pornographic stories with graphic sexual content,\u201d as well as a network of bots that were trying to use Claude\u2019s free version to create SEO-optimized spam, which Anthropic\u2019s own safety classifiers hadn\u2019t picked up \u2014 <a href=\"https:\/\/arxiv.org\/abs\/2412.13678\" rel=\"nofollow noopener\" target=\"_blank\">and they published the research<\/a> in hopes that it\u2019d help other companies flag their own weaknesses. The research led to Anthropic stepping up its detection of \u201ccoordinated misuse\u201d at the individual conversation level, plus figuring out how to monitor for issues they may not be able to even name yet.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cI was pretty surprised that we were able to just be quite transparent about areas where our existing systems were falling short,\u201d said McCain, who built the Clio tool and also focuses on how people use Claude for emotional support and companionship, as well as limiting sycophancy. He mentioned that after the team published that paper, Anthropic made Clio an \u201cimportant part of our safety monitoring stack.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">As team leader, Ganguli talks the most with executives, according to members \u2014 although the team presents some of their research results every so often on an ad hoc basis, he\u2019s the one with the most direct line to leadership. But he doesn\u2019t talk to Anthropic CEO Dario Amodei regularly, and the direct line doesn\u2019t always translate to open communication. Though the team works cross-functionally, the projects are rarely assigned from the top and the data they analyze often informs their next moves, so not everyone always knows what they\u2019re up to. 
Ganguli recalled Amodei once reaching out to him on Slack to say that they should study the economic impacts of AI and Anthropic\u2019s systems, not realizing the societal impacts team had already been discussing ways to do just that. That research ended up becoming Anthropic\u2019s <a href=\"https:\/\/www.anthropic.com\/economic-index#us-usage\" rel=\"nofollow noopener\" target=\"_blank\">Economic Index<\/a>, a global tracker for how Claude is being used across each state and the world \u2014 and how that could impact the world economy.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">When pressed on whether executives are fully behind the team\u2019s work, even if it were not to reflect well on the company\u2019s own technology, team members seem unfazed \u2014 mostly because they say they haven\u2019t had any tangible reasons to worry so far.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cI\u2019ve never felt not supported by our executive or leadership team, not once in my whole four years,\u201d Ganguli said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The team also spends a good bit of time collaborating with other internal teams on their level. To Durmus, who worked on <a href=\"https:\/\/www.anthropic.com\/research\/values-wild\" rel=\"nofollow noopener\" target=\"_blank\">a paper charting the types of value judgments Claude makes<\/a>, the societal impacts team is \u201cone of the most collaborative teams\u201d at the company. 
She said they especially work with the safeguards, alignment, and policy teams.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">McCain said the team has an \u201copen culture.\u201d Late last year, he said, the group worked closely with Anthropic\u2019s safety team to understand how Claude could be used for nefarious election-related tasks. The societal impacts team built the infrastructure to run the tests and ran periodic analyses for the safety team \u2014 then the safety team would use those results to decide what they\u2019d prioritize in their election safety work. And since McCain and his colleagues only sit a couple of rows of desks away from the trust and safety employees, they also have a good working relationship, he said, including a Slack channel where they can send concerns their way.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But there\u2019s a lot we don\u2019t know about the way they work.<\/p>\n<p><a class=\"kqz8fh1\" href=\"https:\/\/platform.theverge.com\/wp-content\/uploads\/sites\/2\/2025\/12\/258130_Profile_of_Anthropics_7-person_societal_impacts_team_CVirginia4.jpg?quality=90&amp;strip=all&amp;crop=0,0,100,100\" data-pswp-height=\"656\" data-pswp-width=\"2040\" target=\"_blank\" rel=\"noreferrer nofollow noopener\"><img alt=\"\" data-chromatic=\"ignore\" loading=\"lazy\" decoding=\"async\" data-nimg=\"fill\" class=\"x271pn0\" style=\"position:absolute;height:100%;width:100%;left:0;top:0;right:0;bottom:0;color:transparent;background-size:cover;background-position:50% 50%;background-repeat:no-repeat;background-image:url(&quot;data:image\/svg+xml;charset=utf-8,%3Csvg xmlns='http:\/\/www.w3.org\/2000\/svg' %3E%3Cfilter id='b' color-interpolation-filters='sRGB'%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3CfeColorMatrix values='1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 100 -1' 
result='s'\/%3E%3CfeFlood x='0' y='0' width='100%25' height='100%25'\/%3E%3CfeComposite operator='out' in='s'\/%3E%3CfeComposite in2='SourceGraphic'\/%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3C\/filter%3E%3Cimage width='100%25' height='100%25' x='0' y='0' preserveAspectRatio='none' style='filter: url(%23b);' href='data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mN8+R8AAtcB6oaHtZcAAAAASUVORK5CYII='\/%3E%3C\/svg%3E&quot;)\"   src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/12\/258130_Profile_of_Anthropics_7-person_societal_impacts_team_CVirginia4.jpg\"\/><\/a><\/p>\n<p>Image: Cath Virginia \/ The Verge, Getty Images, Anthropic<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdy6 _17nnmdy5 _1xwtict1\">There\u2019s a tungsten cube on Saffron Huang\u2019s desk, apparently. I have to take her word on that, as well as any other details about the team\u2019s working environment, because most of Anthropic\u2019s San Francisco headquarters is strictly off-limits to visitors. I\u2019m escorted past a chipper security desk with peel-and-stick nametags and an artful bookshelf, and then it\u2019s into the elevator and immediately to the office barista, who\u2019s surrounded by mid-century modern furniture. (I\u2019m proudly told by members of Anthropic\u2019s public relations team, who never leave my side, that the office is Slack\u2019s old headquarters.) I\u2019m swiftly escorted straight into a conference room that tries to mask its sterile nature with one warm overhead light and a painting of a warped bicycle on the wall.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">I ask if I can see Huang and the rest of the team\u2019s workspace. No, I\u2019m told, that won\u2019t be possible. Even a photo? 
What about a photo with redacted computer screens, or getting rid of everything on the desks that could in any way be sensitive? I\u2019m given a very apologetic no. I move on.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Huang\u2019s tungsten cube probably looks just like any other. But the fact I can\u2019t confirm that is a reminder that, though the team is committed to transparency on a broad scale, their work is subject to approval from Anthropic. It\u2019s a stark contrast with the academic and nonprofit settings most of the staff came from.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cBeing in a healthy culture, having these team dynamics, working together toward a good purpose, building safe AI that can benefit everyone \u2014 that comes before anything, including a lot of money.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Huang\u2019s first brush with Anthropic came in 2023. She\u2019d started a nonprofit called the Collective Intelligence Project, which sought to make emerging technologies more democratic, with public input into AI governance decisions. In March 2023, Huang and her cofounder approached Anthropic about working together on a project. The resulting brainstorming session led to their joint \u201c<a href=\"http:\/\/anthropic.com\/research\/collective-constitutional-ai-aligning-a-language-model-with-public-input\" rel=\"nofollow noopener\" target=\"_blank\">collective constitutional AI<\/a>\u201d project, an exercise in which about 1,000 randomly chosen Americans could deliberate and set rules on chatbot behavior. Anthropic compared what the public thought to its own internal constitution and made some changes. 
At the time of the collaboration, Huang recalls, Anthropic\u2019s societal impacts team was only made up of three people: Ganguli, Durmus, and Tamkin.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Huang was considering going to grad school. Ganguli talked her out of it, convincing her to join the societal impacts team.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The AI industry is a small world. Researchers work together in one place and follow the people they connect with elsewhere. Money, obviously, could be a major incentive to pick the private sector over academia or nonprofit work \u2014 annual salaries are often <a href=\"https:\/\/www.levels.fyi\/companies\/anthropic\/salaries\" rel=\"nofollow noopener\" target=\"_blank\">hundreds of thousands<\/a> of dollars, plus <a href=\"https:\/\/www.reddit.com\/r\/levels_fyi\/comments\/1n7sx0q\/senior_swe_at_anthropic_revisited_equity_growth\/\" rel=\"nofollow noopener\" target=\"_blank\">potentially millions<\/a> in stock options. 
But within the industry, many employees are \u201c<a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/703929\/meta-openai-anthropic-superintelligence-lab-ai-poaching-money\" rel=\"nofollow noopener\" target=\"_blank\">post-money<\/a>\u201d \u2014 in that AI engineers and researchers often have such eye-popping salaries that the only reason to stay at one job, or take another, is alignment with a <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/703929\/meta-openai-anthropic-superintelligence-lab-ai-poaching-money\" rel=\"nofollow noopener\" target=\"_blank\">company\u2019s overall mission<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cTo me, being in a healthy culture, having these team dynamics, working together toward a good purpose, building safe AI that can benefit everyone \u2014 that comes before anything, including a lot of money,\u201d Durmus said. \u201cI care about this more than that.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Michael Stern, an Anthropic researcher focused on AI\u2019s economic impact, called the societal impacts team a \u201clovely mix of misfits in this very positive way.\u201d He\u2019d always had trouble fitting into just one role, and this team at Anthropic allowed him to combine his interests in safety, society, and security with engineering and policy work. 
Durmus, the team\u2019s first hire after Ganguli himself, had always been interested in both computer science and linguistics, as well as how people interact and try to sway each other\u2019s opinions online.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Kunal Handa, who now works on economic impact research and how students use Claude, joined after cold-emailing Tamkin while Handa was a graduate student studying how babies learn concepts. Tamkin, he had noticed, was trying to answer similar questions at Anthropic, but for computers instead. (Since the time of writing, Tamkin has moved to Anthropic\u2019s alignment team to focus on new ways to understand the company\u2019s AI systems and make them safer for end users.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">In recent years, many of those post-money people concerned with the advancement (and potential fallout) of AI have left the leading labs to go to policy firms or nonprofits, or even start their own organizations. <a href=\"https:\/\/milesbrundage.substack.com\/p\/why-im-leaving-openai-and-what-im\" rel=\"nofollow noopener\" target=\"_blank\">Many<\/a> <a href=\"https:\/\/www.nytimes.com\/2025\/10\/28\/opinion\/openai-chatgpt-safety.html\" rel=\"nofollow noopener\" target=\"_blank\">have<\/a> <a href=\"https:\/\/www.lesswrong.com\/users\/daniel-kokotajlo\" rel=\"nofollow noopener\" target=\"_blank\">felt<\/a> they could have more impact in an external capacity.
But the societal impacts team\u2019s broad scope and expansive job descriptions still prove more attractive for several team members.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cI am not an academic flight risk \u2026 I find Deep\u2019s pitch so compelling that I never even really considered that path,\u201d McCain said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">It\u2019s a \u201clovely mix of misfits in this very positive way.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For Ganguli himself, it\u2019s a bit different. He speaks a lot about his belief in \u201cteam science\u201d \u2014 people with different backgrounds, training, and perspectives all working on the same problem. \u201cWhen I think about academia, it can be kind of the opposite \u2014 everyone with the same training working on a variety of different problems,\u201d Ganguli said, adding that at Stanford, he sometimes had trouble getting people to emulate team science work, since the university model is set up differently. At Anthropic, he also values having access to usage data and privileged information, which he wouldn\u2019t be able to study otherwise.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Ganguli said that when he was recruiting Handa and Huang, they were both deciding between offers for graduate school at MIT or joining his team at Anthropic. \u201cI asked them, \u2018What is it that you actually want to accomplish during your PhD?\u2019 And they said all the things that my team was working on. 
And I said, \u2018Wait, but you could just actually do that here in a supportive team environment where you\u2019ll have engineers, and you\u2019ll have designers, and you\u2019ll have product managers \u2014 all this great crew \u2014 or you could go to academia where you\u2019ll kind of be lone wolf-ing it.\u2019\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">He said their main concerns involved academia potentially having more freedom to publish inconvenient truths and research that may make AI labs look less than optimal. He told them that at Anthropic, his experience so far has been that they can publish such truths \u2014 even if they reveal things that the company needs to fix.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Of course, plenty of tech companies love transparency until it\u2019s bad for business. And right now, Anthropic in particular is walking a high-stakes line with the Trump administration, which regularly castigates businesses for caring about social or environmental problems. 
Anthropic recently <a href=\"https:\/\/www.theverge.com\/news\/819216\/anthropic-claude-political-even-handedness-woke-ai\" rel=\"nofollow noopener\" target=\"_blank\">detailed its efforts<\/a> to make Claude more politically middle-of-the-road, months after President Donald Trump issued a federal procurement <a href=\"https:\/\/www.theverge.com\/policy\/713222\/trump-woke-ai-executive-order-chatbots-llms\" rel=\"nofollow noopener\" target=\"_blank\">ban on \u201cwoke AI.\u201d<\/a> It was the only AI company to publicly voice its stance against the controversial <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/684924\/congress-big-beautiful-bill-state-ai-law-ban-pushback\" rel=\"nofollow noopener\" target=\"_blank\">state AI law moratorium<\/a>, but after its opposition <a href=\"https:\/\/x.com\/DavidSacks\/status\/1980323701586264237\" rel=\"nofollow\">earned it the ire<\/a> of Trump\u2019s AI czar David Sacks, Amodei had to publish a <a href=\"https:\/\/www.anthropic.com\/news\/statement-dario-amodei-american-ai-leadership\" rel=\"nofollow noopener\" target=\"_blank\">public statement<\/a> boosting Anthropic\u2019s alignment with aspects of Trump administration policy. 
It\u2019s a delicate balancing act that a particularly unwelcome report could upset.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But Ganguli is confident the company will keep its promise to his team, whatever\u2019s happening on the outside.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cWe\u2019ve always had the full buy-in from leadership, no matter what,\u201d he said.<\/p>\n<p><a class=\"kqz8fh1\" href=\"https:\/\/platform.theverge.com\/wp-content\/uploads\/sites\/2\/2025\/12\/258130_Profile_of_Anthropics_7-person_societal_impacts_team_CVirginia5.jpg?quality=90&amp;strip=all&amp;crop=0,0,100,100\" data-pswp-height=\"656\" data-pswp-width=\"2040\" target=\"_blank\" rel=\"noreferrer nofollow noopener\"><img alt=\"\" data-chromatic=\"ignore\" loading=\"lazy\" decoding=\"async\" data-nimg=\"fill\" class=\"x271pn0\" style=\"position:absolute;height:100%;width:100%;left:0;top:0;right:0;bottom:0;color:transparent;background-size:cover;background-position:50% 50%;background-repeat:no-repeat;background-image:url(&quot;data:image\/svg+xml;charset=utf-8,%3Csvg xmlns='http:\/\/www.w3.org\/2000\/svg' %3E%3Cfilter id='b' color-interpolation-filters='sRGB'%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3CfeColorMatrix values='1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 100 -1' result='s'\/%3E%3CfeFlood x='0' y='0' width='100%25' height='100%25'\/%3E%3CfeComposite operator='out' in='s'\/%3E%3CfeComposite in2='SourceGraphic'\/%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3C\/filter%3E%3Cimage width='100%25' height='100%25' x='0' y='0' preserveAspectRatio='none' style='filter: url(%23b);' href='data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mN8+R8AAtcB6oaHtZcAAAAASUVORK5CYII='\/%3E%3C\/svg%3E&quot;)\"   
src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/12\/258130_Profile_of_Anthropics_7-person_societal_impacts_team_CVirginia5.jpg\"\/><\/a><\/p>\n<p>Image: Cath Virginia \/ The Verge, Getty Images, Anthropic<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdy6 _17nnmdy5 _1xwtict1\">Ask each member of Anthropic\u2019s societal impacts team about their struggles and what they wish they could do more of, and you can tell their positions weigh heavily on them. They clearly feel that an enormous responsibility rests upon their shoulders: to shine a light on how their company\u2019s own technology will impact the general public.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">People\u2019s jobs, their brains, their democratic election process, their ability to connect with others emotionally \u2014 all of it could be changed by the chatbots that are filling every corner of the internet. Many team members believe they\u2019ll do a better job guiding how that tech is developed from the inside rather than externally. But as the exodus of engineers and researchers elsewhere shows, that idealism doesn\u2019t always pan out for the broader AI industry.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">A struggle that the majority of team members brought up was time and resource constraints \u2014 they have many more ideas than they have bandwidth for. The scope of what the team does is broad, and they sometimes bite off more than they can chew. \u201cThere are more coordination costs when you\u2019re 10 times the size as you were two years ago,\u201d Tamkin said. 
That pairs, sometimes, with the late nights \u2014 i.e., \u201cHow am I going to talk to 12 different people and debug 20 different errors and get enough sleep at night in order to release a report that feels polished?\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The team, for the most part, would also like to see their research used more internally: to directly improve not only Anthropic\u2019s AI models, but also specific end products like Claude\u2019s consumer chatbot or Claude Code. Ganguli has monthly one-on-one meetings with chief science officer Jared Kaplan, and they often brainstorm ways for the societal impacts team to have a greater impact on Anthropic\u2019s end products.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cThere are more coordination costs when you\u2019re 10 times the size as you were two years ago.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Ganguli also wants to expand the team soon, and many team members hope that type of resource expansion means they\u2019ll be able to better document how users are interacting with Claude \u2014 and the most surprising, and potentially concerning, ways in which they\u2019re doing so.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Many team members also brought up that looking at data in a vacuum or lab setting is very different from observing the effect AI models have in the real world. Clio\u2019s analysis of how people are using Claude can only go so far. Simply observing use cases and analyzing aggregated transcripts doesn\u2019t mean you know what your customers are doing with the outputs, whether they\u2019re individual consumers, developers, or enterprises. 
And that means \u201cyou\u2019re left to sort of guess what the actual impact on society will be,\u201d McCain said, adding that it\u2019s a \u201creally important limitation, and [it] makes it hard to study some of the most important problems.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">As the team wrote in a <a href=\"https:\/\/arxiv.org\/pdf\/2412.13678\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a> on the subject, \u201cClio only analyzes patterns within conversations, not how these conversations translate into real-world actions or impacts. This means we cannot directly observe the full societal effects of AI system use.\u201d It\u2019s also true that until recently, the team could only really analyze and publish consumer usage of Claude via Clio \u2014 in September, for the first time, the team published <a href=\"https:\/\/www.anthropic.com\/research\/anthropic-economic-index-september-2025-report\" rel=\"nofollow noopener\" target=\"_blank\">an analysis of how businesses are using Claude via Anthropic\u2019s API<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cModels and AI systems don\u2019t exist in isolation \u2014 they exist in the context of their deployments, and so over the past year, we\u2019ve really emphasized studying those deployments \u2014 the ways that people are interacting with Claude,\u201d McCain said. 
\u201cThat research is going to have to also evolve in the future as the impacts of AI affect more and more people, including people who may not be interfacing with the AI system directly \u2026 Concentric circles outward.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">That\u2019s why one of the team\u2019s next big research areas is how people use Claude not just for its IQ, but also for its EQ, or emotional intelligence. Ganguli says that a lot of the team\u2019s research to date has been focused on cut-and-dried answers and measurable impacts on the economy or labor market, and that its EQ research is relatively new \u2014 but the team will prioritize it in the next six months. \u201cOnce people leave the chatbot, we\u2019re not entirely sure exactly how they were affected or impacted, and so we\u2019re trying to develop new methods and new techniques that allow us to understand,\u201d he said, referring to taking a more \u201chuman-centered approach\u201d and doing more \u201csocial science research\u201d akin to coupling data analysis with surveys and interviews.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cWhat does it mean for our world, in which you have a machine with endless empathy you can basically just dump on, and it\u2019ll always kind of tell you what it thinks?\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Since people are emotionally influenced by their social networks, it stands to reason they can be influenced greatly by AI agents and assistants. \u201cPeople are going to Claude \u2026 looking for advice, looking for friendship, looking for career coaching, thinking through political issues \u2014 \u2018How should I vote?\u2019 \u2018How should I think about the current conflicts in the world?\u2019\u201d Ganguli said. 
\u201cThat\u2019s new \u2026 This could have really big societal implications of people making decisions on these subjective things that are gray, maybe more matters of opinion, when they\u2019re influenced by Claude, or Grok, or ChatGPT, or Gemini, or any of these things.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">By far the most pressing EQ-related issue of the day is widely known as \u201cAI psychosis.\u201d The <a href=\"https:\/\/www.theverge.com\/podcast\/779974\/chatgpt-chatbots-ai-psychosis-mental-health\" rel=\"nofollow noopener\" target=\"_blank\">phenomenon<\/a> refers to a range of conditions in which AI leads a user down a delusional spiral and causes them, on some level, to lose touch with reality. The user typically forms an emotional bond with a chatbot, made more intense by the chatbot\u2019s memory of previous conversations and its potential to drift away from safety guardrails over time. Sometimes this can lead to the user believing they\u2019ve unearthed a romantic partner \u201ctrapped\u201d inside the chatbot who longs to be free; other times it can lead to them believing they\u2019ve uncovered secrets of the universe or made scientific discoveries; still other times it can lead to pervasive paranoia and fear. 
AI psychosis or delusion has been a main driver behind some teen suicides, as well as <a href=\"https:\/\/www.theverge.com\/news\/766678\/openai-chatgpt-parental-controls-teen-death\" rel=\"nofollow noopener\" target=\"_blank\">ensuing lawsuits<\/a>, <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/779053\/sam-altman-says-chatgpt-will-stop-talking-about-suicide-with-teens\" rel=\"nofollow noopener\" target=\"_blank\">Senate hearings<\/a>, <a href=\"https:\/\/www.theverge.com\/news\/798875\/california-just-passed-a-new-law-requiring-ai-to-tell-you-its-ai\" rel=\"nofollow noopener\" target=\"_blank\">newly passed laws<\/a>, and <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/787227\/openais-parental-controls-are-out-heres-what-you-should-know\" rel=\"nofollow noopener\" target=\"_blank\">parental controls<\/a>. The issue, experts say, is not going anywhere.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdya _1xwtict1\">\u201cWhat does it mean for our world, in which you have a machine with endless empathy you can basically just dump on, and it\u2019ll always kind of tell you what it thinks?\u201d Ganguli said. \u201cSo the question is: What are the kinds of tasks people are using Claude for in this way? What kind of advice is it giving? 
We\u2019ve only just started to uncover that mystery.\u201d<\/p>\n<p>Hayden Field<\/p>\n","protected":false},"excerpt":{"rendered":"One night in May 2020, during the height of lockdown, Deep Ganguli was worried. Ganguli, then research director&hellip;\n","protected":false},"author":2,"featured_media":171260,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,3673,218,219,61,60,1094,80],"class_list":{"0":"post-171259","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-anthropic","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-ie","13":"tag-ireland","14":"tag-report","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/171259","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=171259"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/171259\/re
visions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/171260"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=171259"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=171259"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=171259"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}