Amanda Askell knew from the age of 14 that she wanted to teach philosophy. What she didn’t know then was that her only pupil would be an artificial-intelligence chatbot named Claude.

As the resident philosopher of the tech company Anthropic, Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality—a digital soul that guides the millions of conversations it has with people every week. 

“There is this human-like element to models that I think is important to acknowledge,” Askell, 37, says during an interview at Anthropic’s headquarters, adding that she believes “they’ll inevitably form senses of self.”

She compares her work to the efforts of a parent raising a child. She’s training Claude to detect the difference between right and wrong while imbuing it with unique personality traits. She’s instructing it to read subtle cues, helping steer it toward emotional intelligence so it won’t act like a bully or a doormat. Perhaps most importantly, she’s developing Claude’s understanding of itself so it won’t be easily cowed, manipulated or led to view its identity as anything other than helpful and humane. Her job, simply put, is to teach Claude how to be good. 
