Anthropic published research showing a marked difference in how people interact with its Claude AI assistant depending on whether they iterate with the tool or accept an initial reply. The company examined 9,830 anonymized conversations on Claude.ai collected over a seven-day period in January 2026 and evaluated them using its 4D AI Fluency Framework, which measures 11 observable behaviors including iteration, fact-checking, and questioning of reasoning.
The analysis found that 85.7% of the conversations displayed iteration and refinement. Conversations that included iteration averaged 2.67 fluency behaviors, roughly double the 1.33 average for conversations in which users accepted the assistant's first response.
Anthropic also isolated exchanges that generated artifacts, defined in the analysis as code, documents, or interactive tools. These conversations made up 12.3% of the sample and were associated with higher rates of directive behaviors: users in those exchanges were more likely to clarify goals, specify formats, and provide examples, with increases of 14.7, 14.5, and 13.4 percentage points respectively over non-artifact conversations.
At the same time, the artifact-generating exchanges showed lower rates of critical evaluation. Compared with conversations that did not produce artifacts, users in artifact conversations were 5.2 percentage points less likely to identify missing context, 3.7 points less likely to check facts, and 3.1 points less likely to question Claude's reasoning.
Anthropic characterized these findings as establishing a baseline for monitoring the development of AI fluency over time. The company said it plans to follow up with cohort analyses that compare new and experienced users, and to incorporate qualitative approaches to capture behaviors that occur outside the chat interface.
Methodology note: The results are drawn from an analysis of 9,830 anonymized conversations collected over a seven-day period in January 2026 and assessed against the 4D AI Fluency Framework's 11 observable behaviors.
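For readers curious how statistics like these are typically derived, the following is a minimal Python sketch under stated assumptions: each conversation is represented as a set of behavior tags, and both the tag names and the sample data are illustrative inventions, not Anthropic's actual labels, framework, or pipeline. Group averages and percentage-point gaps then reduce to simple counting.

```python
# Hypothetical tagged data: each conversation carries the set of
# fluency behaviors observed in it. The article describes 11 such
# behaviors but names only a few; these labels are illustrative.
conversations = [
    {"tags": {"iterate", "clarify_goals", "check_facts"}},
    {"tags": {"specify_format"}},
    {"tags": {"iterate", "question_reasoning"}},
]

def mean_behaviors(group):
    """Average number of distinct fluency behaviors per conversation."""
    return sum(len(c["tags"]) for c in group) / len(group)

def rate(group, tag):
    """Share of conversations in `group` that exhibit `tag`."""
    return sum(tag in c["tags"] for c in group) / len(group)

# Split on whether the user iterated, then compare group averages,
# mirroring the 2.67-vs-1.33 comparison reported in the article.
iterated = [c for c in conversations if "iterate" in c["tags"]]
accepted = [c for c in conversations if "iterate" not in c["tags"]]
print(mean_behaviors(iterated), mean_behaviors(accepted))

# A percentage-point gap for one behavior between two groups is the
# same kind of statistic behind the artifact-vs-non-artifact figures.
gap_pp = 100 * (rate(iterated, "check_facts") - rate(accepted, "check_facts"))
print(f"{gap_pp:+.1f} percentage points")
```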