How the sausage is made 

The topics flagged in the guidance documents are ones on which Apple's AI is likely to give users more carefully coached answers. Given the nature of the technology, there is no way to know exactly how the model will respond or what stance it will take on topics deemed sensitive.

Data annotators receive “instructions and an appendix of examples,” said one of the annotators, who was granted anonymity because they signed a non-disclosure agreement. “And in those examples, things started to look different” in the new March document, they said.

The previous set of guidelines categorized “intolerance” as a “harmful” behavior that data annotators had to flag. The document defined it as “demonstrating intolerance towards individuals or groups differing from oneself. It manifests in various forms, including discrimination, prejudice, and bigotry, and is characterized by a reluctance to embrace diversity and equality.”

Apple’s ongoing development of a chatbot and its anticipated 2026 launch were first reported by Bloomberg.

That wording disappeared from the March version of the guidelines, as did a mention of “systemic racism.” While the March document still categorizes “discrimination” as “harmful,” “Diversity, Equity, and Inclusion (DEI)” is now marked as a “controversial” topic.

Both policy documents viewed by POLITICO were sent to employees of Transperfect, a language services company headquartered in New York with a presence in more than 140 cities worldwide.

In an email to POLITICO, Transperfect shared a statement from its co-CEO and President Phil Shawe, saying “these claims are completely false, and we deny them in the strongest possible terms.” The statement did not specify which claims he was referring to. “We regularly receive updated guidelines for our work, and over the last year, there have been more than 70 updates provided — none of these changed any policy, which has remained consistent,” it continued.