# The AI Fluency Index

People are integrating AI tools into their daily routines at a pace that would have been difficult to predict even a year ago. But adoption alone doesn't tell us much about the impact of these tools. A further, equally important question is: as AI becomes part of everyday life, are individuals developing the skills to use it well?

Previous Anthropic Education Reports have studied how [university students](https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude) and [educators](https://www.anthropic.com/news/anthropic-education-report-how-educators-use-claude) use Claude. We found that students use it to create reports and analyze lab results; educators use it to build lesson materials and automate routine work. But knowing what people use AI for tells us little about how skillfully they use it. We wanted to explore this further, and to understand how people develop "fluency" with this technology over time.

In this report, we begin answering that question.
We track the presence or absence of a taxonomy of behaviors that we take to represent AI fluency across a large sample of anonymized conversations.

In line with our recent [Economic Index](https://www.anthropic.com/research/economic-index-primitives), we find that the most common expression of AI fluency is augmentative: treating AI as a thought partner rather than delegating work entirely. In fact, these conversations exhibit more than double the number of AI fluency behaviors of quick, back-and-forth chats.

But we also find that when AI produces artifacts, including apps, code, documents, or interactive tools, users are less likely to question its reasoning (-3.1 percentage points) or identify missing context (-5.2pp). This aligns with related patterns we observed in our [recent study on coding skills](https://www.anthropic.com/research/AI-assistance-coding-skills).

These initial findings give us a baseline for studying the development of AI fluency over time.

## Measuring AI fluency

To quantify AI fluency, we use the [4D AI Fluency Framework](https://anthropic.skilljar.com/ai-fluency-framework-foundations), developed by Professors Rick Dakan and Joseph Feller in collaboration with Anthropic.
This framework helps us define 24 specific behaviors that we take to exemplify safe and effective human-AI collaboration.

Of these 24 behaviors, 11 (listed in the graph below) are directly observable when humans interact with Claude on Claude.ai or Claude Code. The other 13 (including things like being honest about AI's role in work, or considering the consequences of sharing AI-generated output) happen outside Claude.ai's chat interface, so they're much harder for us to track. These unobservable behaviors are arguably some of the most consequential dimensions of AI fluency, so in future work we plan to use qualitative methods to assess them.

For this study, we focused on the 11 directly observable behaviors. We used our [privacy-preserving analysis tool](https://www.anthropic.com/research/clio) to study 9,830 conversations that included several back-and-forths with Claude on Claude.ai during a 7-day window in January 2026.¹ We then measured the presence or absence of the 11 behaviors; each conversation could display evidence of multiple behaviors.
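The report doesn't publish its classification pipeline, but the measurement step it describes reduces to aggregating binary per-conversation labels into per-behavior prevalence figures. A minimal sketch of that aggregation, with hypothetical behavior names (these are illustrative, not the framework's exact indicator labels):

```python
from collections import Counter

# Hypothetical binary labels: each conversation maps each observable
# behavior to True (present) or False (absent). A conversation can
# display evidence of multiple behaviors at once.
conversations = [
    {"iterates_and_refines": True, "clarifies_goal": True, "checks_facts": False},
    {"iterates_and_refines": True, "clarifies_goal": False, "checks_facts": False},
    {"iterates_and_refines": False, "clarifies_goal": False, "checks_facts": True},
]

def prevalence(convos):
    """Share of conversations in which each behavior is present."""
    counts = Counter()
    for labels in convos:
        for behavior, present in labels.items():
            counts[behavior] += int(present)
    return {behavior: counts[behavior] / len(convos) for behavior in counts}

print(prevalence(conversations))
```

Ranking the resulting shares from most to least common gives a chart like the one below.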
We assessed the reliability of our sample by checking whether our results were consistent across each day of the week, and across the different languages in our sample (we found that they were).² This gave us the AI Fluency Index: a baseline measurement of how people collaborate with AI today, and a foundation for tracking how those behaviors evolve as models change.

![Bar chart of behavioral indicator prevalence](https://www.newsbeep.com/ca/wp-content/uploads/2026/02/1771873754_578_image.webp)
*Prevalence of each AI fluency behavioral indicator across 9,830 Claude.ai conversations, ranked from most to least common and color-coded by competency.*

## Results

With this first study, we found two main patterns in Claude use: a strong association between AI fluency behaviors and iterative, multi-turn conversations, and a shift in users' fluency behaviors when they ask Claude to code or build other outputs.

### Fluency is strongly associated with conversations that exhibit iteration and refinement

One of the strongest patterns in the data is the relationship between iteration and refinement and every other AI fluency behavior. 85.7% of the conversations in our sample exhibited iteration and refinement: building on previous exchanges to refine the user's work, rather than accepting the first response and moving on to a new task.
Such conversations showed substantially higher rates of other fluency behaviors, as the chart below illustrates:

![Behavior rates with and without iteration and refinement](https://www.newsbeep.com/ca/wp-content/uploads/2026/02/1771873754_850_image.webp)
*Behavioral indicator prevalence in conversations where the user iterates and refines (n=8,424) versus conversations without iteration and refinement (n=1,406). All behaviors are substantially more prevalent in conversations with iteration and refinement.*

On average, conversations with iteration and refinement exhibit 2.67 fluency behaviors beyond iteration itself, roughly double the 1.33 average of non-iterative conversations. The difference is especially pronounced for behaviors related to evaluating Claude's outputs: conversations with iteration and refinement are 5.6x more likely to involve users questioning Claude's reasoning, and 4x more likely to see them identify missing context.

### When creating outputs, users become more directive but less evaluative

12.3% of conversations in our sample involved [artifacts](https://claude.ai/redirect/website.v1.827d8765-f00a-4cae-ae90-6524f5c00529/catalog/artifacts), including code, documents, interactive tools, and other outputs.
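The multipliers quoted above are rate ratios: a behavior's prevalence among iterative conversations divided by its prevalence among the rest. A minimal sketch with illustrative inputs (the report doesn't reproduce its per-behavior rates, so the numbers below are made up to match the quoted ratios):

```python
def rate_ratio(rate_with: float, rate_without: float) -> float:
    """How many times more prevalent a behavior is in one group than another."""
    return rate_with / rate_without

# Illustrative (not actual) prevalence figures: if 28% of iterative
# conversations question Claude's reasoning versus 5% of non-iterative
# ones, that is a 5.6x rate ratio.
print(round(rate_ratio(0.28, 0.05), 1))  # 5.6

# The same arithmetic applies to average behavior counts:
# 2.67 behaviors per iterative conversation vs. 1.33 otherwise,
# i.e. roughly double.
print(round(rate_ratio(2.67, 1.33), 2))  # 2.01
```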
In these conversations, people collaborated with AI quite differently.

Specifically, we found substantially higher rates of behaviors that fall within the broader themes of "description" and "delegation." For instance, artifact conversations are more likely to see users clarify their goal (+14.7pp), specify a format (+14.5pp), provide examples (+13.4pp), and iterate (+9.7pp) compared to non-artifact conversations. In other words, users are doing more to direct the AI at the outset of their work.

But this directiveness doesn't correspond with greater levels of evaluation or discernment. In fact, it's the opposite: in conversations where artifacts are created, users are less likely to identify missing context (-5.2pp), check facts (-3.7pp), or question the model's reasoning by asking it to explain its rationale (-3.1pp). Our [Economic Index](https://www.anthropic.com/research/anthropic-economic-index-january-2026-report) finds, unsurprisingly, that the most complex tasks are where Claude struggles the most, which makes this pattern particularly noteworthy.

![Behavior rates in artifact versus non-artifact conversations](https://www.newsbeep.com/ca/wp-content/uploads/2026/02/1771873755_751_image.webp)
*Behavioral indicator prevalence in conversations with artifacts (n=1,209) versus without artifacts (n=8,621).*
*Description and delegation behaviors increase in artifact conversations, while all three discernment behaviors decrease.*

There are several possible explanations for this pattern. Claude may be producing polished, functional-looking outputs that don't seem to invite further questioning: if the work looks finished, users might treat it as such. It's also possible that artifact conversations involve tasks where factual precision matters less than aesthetics or functionality (designing a UI, for instance, versus writing a legal analysis). Or users might be evaluating artifacts through channels we can't observe (running code, testing an app elsewhere, sharing a draft with a colleague) rather than expressing their evaluation within the conversation itself.

Whatever the explanation, the pattern is worth paying attention to. As AI models become increasingly capable of producing polished-looking outputs, the ability to critically evaluate those outputs, whether in direct conversation or through other means, will become more valuable rather than less.

## Developing your own AI fluency

As with all skills, AI fluency is a matter of degree; for most of us, it's possible to develop our techniques much further. Based on the patterns in our data, there are three areas where many users could improve:

- **Staying in the conversation.** Iteration and refinement is the single strongest correlate of all other fluency behaviors in our data. When you get an initial response, treat it as only a starting point: ask follow-up questions, push back on parts that don't feel right, and refine what you're looking for.
- **Questioning polished outputs.**
When AI models produce something that looks good, that's the moment to pause and ask: is this accurate? Is anything missing? Does the reasoning hold up? As discussed above, our data show that polished outputs coincide with lower rates of critical evaluation, even though users go to greater lengths to direct Claude's work at the outset.
- **Setting the terms of the collaboration.** In only 30% of conversations do users tell Claude how they'd like it to interact with them. Try being explicit with instructions like "Push back if my assumptions are wrong," "Walk me through your reasoning before giving me the answer," or "Tell me what you're uncertain about." Establishing these expectations up front can change the dynamic of the rest of the conversation.

## Limitations

This research comes with important caveats:

- **Sample limitations:** Our sample reflects Claude.ai users who engaged in multi-turn conversations during a single week in January 2026. Because we are still relatively early in the diffusion of AI tools, these users likely skew towards early adopters who are already comfortable with AI, and who may not represent the broader population. Our sample should be understood as a baseline for this population, not a universal benchmark. Because the data come from a single week, they also can't capture seasonal or longitudinal effects. And because the study focuses on [Claude.ai](http://claude.ai/redirect/website.v1.827d8765-f00a-4cae-ae90-6524f5c00529), we don't capture how users interact with other AI platforms.
- **Partial framework coverage:** In this study, we assessed only the 11 of the 24 behavioral indicators that are directly observable in conversations on Claude.ai.
All behaviors related to the responsible and ethical use of AI outputs occur outside of these conversations and are not captured.
- **Binary classification:** For each conversation in our sample, we classify each behavior as either present or absent. This likely misses significant nuance, like partial or ambiguous demonstrations of behaviors, or overlapping signals between them.
- **Implicit behaviors:** Users might demonstrate fluency behaviors mentally (such as fact-checking Claude's claims against their own knowledge) without expressing them in conversation. This seems especially relevant for our data on artifacts: users might be evaluating Claude's outputs through testing and practical use, rather than through conversation-visible behaviors.
- **Correlational findings:** The relationships we identify are correlational. We don't know whether one behavior causes another, or whether both reflect some common underlying factor, like task complexity or user preferences.

## Looking ahead

This study offers a baseline that we can use to assess how AI fluency changes over time. As AI capabilities evolve and adoption increases, we aim to learn whether users are developing more sophisticated behaviors, which skills emerge naturally with experience, and which require more intentional development.

In future work, we plan to extend our analysis in several directions. First, we plan to conduct "cohort analyses," comparing new users to experienced ones to understand how familiarity with AI correlates with fluency development. Second, we plan to use qualitative research methods to assess the behaviors that aren't directly observable in Claude.ai conversations.
And third, we aim to explore the causal questions this work raises, such as whether encouraging iterative conversations leads to greater critical evaluation, or whether other interventions could encourage it more effectively.

We'd also like to explore AI fluency behaviors in Claude Code, a platform mostly used by software developers. In preparation for that study, we conducted some initial analysis and found consistency between Claude Code conversations and those on [Claude.ai](http://claude.ai/redirect/website.v1.827d8765-f00a-4cae-ae90-6524f5c00529). But this is still preliminary, and Claude Code's very different user base and functionality mean that more substantial research is needed.

We expect that the nature of AI fluency will develop and evolve substantially over time. With this and future research, we aim to make that development visible, measurable, and actionable.

## BibTeX

If you'd like to cite this post, you can use the following BibTeX entry:

```bibtex
@online{swanson2026aifluency,
  author = {Kristen Swanson and Drew Bent and Saffron Huang and Zoe Ludwig and Rick Dakan and Joe Feller},
  title = {Anthropic Education Report: The AI Fluency Index},
  date = {2026-02-16},
  year = {2026},
  url = {https://www.anthropic.com/news/anthropic-education-report-the-ai-fluency-index},
}
```

## Acknowledgements

Kristen Swanson designed the research, led the analysis, and wrote this report.
Zoe Ludwig, Saffron Huang, and Drew Bent contributed to framework alignment, messaging, and review. The 4D Framework for AI Fluency was developed by Rick Dakan and Joe Feller. Zack Lee provided technical support. Hanah Ho helped visualize the data. Keir Bradwell, Rebecca Hiscott, Ryan Donegan, and Sarah Pollack provided communications review and guidance.