Anthropic has announced that, in the near future, transcripts of conversations with its AI assistant, Claude, will begin to be incorporated into the company's model training. The policy shift was disclosed on Thursday through a revision to the company's Consumer Terms and Privacy Policy, the document that governs how user information is handled. In practical terms, selected user interactions may be processed to refine Claude's conversational abilities, though individuals will have a measure of control over whether their data is included.
For users signing up for Claude for the first time, the enrollment flow will include a clear toggle labeled "Help improve Claude," which can be switched on or off at the user's discretion. Existing account holders will instead see a notification describing the change. Unless they actively opt out, the new data-sharing setting will be enabled automatically, and September 28 is the deadline by which existing customers must make their choice. Importantly, even after that date, Claude's privacy settings will continue to offer a straightforward way to turn the option off at any time.
Industry observers have raised questions about the rationale behind the change, but a company spokesperson declined to comment beyond the published materials. Still, the publicly available details give users a reasonably complete picture of how the update will work in practice.
The most immediate impact falls on individuals using the Free, Pro, or Max plans, or Claude Code, the consumer-facing tiers of Claude's functionality. Beginning in late September, once the default setting takes effect, only new conversations or sessions that are reopened will be eligible for training; legacy chats left untouched will not be retroactively pulled into model updates, at least under the current policy. Several categories remain shielded from the change: commercial plans such as Claude for Work (including Team and Enterprise subscriptions), Claude Gov for government users, and Claude Education for academic contexts are exempt. Likewise, developers accessing Claude through the API, including via third-party platforms such as Amazon Bedrock or Google Cloud's Vertex AI, will not have their conversations used for training.
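For readers unfamiliar with what "API access" means here, it refers to request-and-response calls made from a developer's own code rather than chats typed into the Claude app. The short sketch below uses Anthropic's official Python SDK to show what such usage looks like; the model ID and prompt are illustrative placeholders, and whether traffic falls under the consumer policy or the commercial terms depends on the account and platform it runs through, not on the code itself.

    # Minimal sketch of a Claude API call using Anthropic's Python SDK.
    # The model ID and prompt are placeholders; an ANTHROPIC_API_KEY
    # environment variable is assumed to be set.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=512,
        messages=[{"role": "user", "content": "Draft a status update for the team."}],
    )

    # The reply comes back as structured content blocks, not a saved chat transcript.
    print(response.content[0].text)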
During the interim period before September 28, users who encounter the new notification can defer the choice by dismissing the message or selecting "not now." Once that date passes, however, continued use of Claude will require an explicit decision, and deferral will no longer be possible.
In tandem with the training change, Anthropic has made a second major shift in how long user data is stored. Until now, chats were retained for a 30-day window unless otherwise specified. Under the new approach, if a user opts into training, eligible transcripts from new or resumed sessions may be retained for up to five years, a substantial extension that the company says gives it a greater ability to monitor and mitigate misuse, detect emergent harmful behaviors, and strengthen safeguards for responsible AI operation.
For those who regret their initial choice or enabled the feature inadvertently, the company outlines straightforward ways to opt out. On the web, select the user icon in the bottom-left corner, open Settings, go to the Privacy section, and toggle "Help improve Claude" off. On mobile, open the menu via the stacked-lines icon, select the Settings cog, choose Privacy, and switch off the same option. After opting out, future sessions will no longer be used for training; however, conversations already ingested into active or completed training runs cannot be retroactively removed.
In sum, Anthropic’s update represents a significant recalibration of how Claude user data may be handled. By implementing default inclusion while simultaneously offering a clear opt-out path, the company has struck a balance between advancing AI capabilities and preserving individual choice. The extended retention period, though potentially controversial, has been justified on the grounds of enhanced security and oversight. Ultimately, users now face an important decision regarding whether to contribute their conversations to the larger project of refining Claude, or to prioritize personal privacy by disabling the setting.
Source: https://www.cnet.com/tech/services-and-software/anthropic-will-soon-use-your-chats-with-claude-for-ai-training/#ftag=CAD590a51e