Anthropic’s Claude to Begin AI Training on User Chats by Default—Opt-Out Deadline September 28, 2025

Starting September 28, 2025, consumer users of Anthropic’s Claude AI (Free, Pro, Max, and Claude Code plans) will by default have their future chat transcripts and coding sessions used to train and improve the AI model, with data retention extended to five years, unless they opt out. The policy excludes enterprise and API users. New users can set their preference during signup, while existing users will see a pop-up and must make a decision by the deadline to keep using Claude. Even if a user later changes their preference, data already used for training cannot be retracted. Anthropic emphasizes data filtering, confidentiality, and that it does not sell user data to third parties.

A. The Policy Shift: From Privacy Default to Training Default

In a notable policy pivot, Anthropic announced on August 28, 2025, that it will begin using user-generated content, namely new or resumed chat transcripts and coding sessions, as training data for Claude AI unless users actively opt out. For consenting users, retention of this data will extend to five years, a considerable increase from the previous 30-day policy. Users who do not consent remain under the existing 30-day retention and deletion policy.

B. Who Is Affected—And Who Is Not

This policy update applies to all consumer subscription tiers (Claude Free, Pro, Max, and Claude Code) but excludes commercial, government, and education offerings, including Claude for Work, Claude Gov, and Claude for Education, as well as API access via platforms such as Amazon Bedrock and Google Cloud’s Vertex AI. Those accounts remain exempt from the updated training policy.

C. User Interaction: Opt-In at Signup, Mandatory Decision for Existing Users

  1. New users: During the signup process, users must actively choose whether to allow Claude to use their chats for training.
  2. Existing users: will encounter an in-app pop-up titled “Updates to Consumer Terms and Policies” after August 28, 2025. The dialog features a prominent “Accept” button and a smaller “Help improve Claude” toggle set to “On” by default, so clicking “Accept” without changing the toggle opts you in. Clicking “Not now” defers the decision, but users must decide by September 28, 2025, to continue using Claude.

D. Data Usage and Retention Details

  1. Only new or resumed interactions will be used in training; dormant or old chats, unless reopened, will remain excluded.
  2. Deleted conversations will not be used.
  3. Users who opt in allow retention of their data for up to five years; opting out preserves the existing 30-day retention.
  4. User preferences can be changed at any time in Privacy Settings, but changes apply only to future chats; data already used for training cannot be withdrawn.

E. Anthropic’s Privacy Assurances

Anthropic emphasizes that:

  1. It employs automated filters and obfuscation tools to remove or mask sensitive information.
  2. It does not sell user data to third parties.
  3. These safeguards are meant to balance improved AI capability with user privacy protection.

F. Why It Matters: Capabilities, Control, and Controversy

  1. Allowing Claude to train on user data could significantly enhance AI reasoning, coding, and safety capabilities, enabling models to adapt to real user inputs.
  2. However, the shift from a privacy default to a training default places the burden on users to actively opt out, a move that has raised concerns among privacy advocates and users alike.
  3. The extended retention period—from 30 days to five years—further amplifies the stakes, especially for users sharing sensitive or personal information.

G. How to Opt Out: Step-by-Step

New users:

  1. During sign-up, toggle off “Help improve Claude” to keep your chats out of training.

Existing users:

  1. When the pop‑up appears (“Updates to Consumer Terms and Policies”), immediately toggle off “Help improve Claude” before clicking “Accept.”
  2. To change your choice later: navigate to Settings → Privacy, and toggle off the training consent.
  3. Remember that toggling off affects only future chats; data already used for training stays in the system.

Conclusion

Anthropic’s policy update marks a significant evolution in Claude’s data practices: consumer chats will be used for AI training unless users opt out, data retention is extended to five years, and the tension between innovation and privacy is sharpened. While the shift may unlock advanced capabilities, privacy-conscious users should act before September 28, 2025, to maintain control over their conversational data.

Source: indianexpressGPT