The post Anthropic Announces Updates to Consumer Terms and Privacy Policy appeared on BitcoinEthereumNews.com.

Anthropic Announces Updates to Consumer Terms and Privacy Policy



Felix Pinkston
Oct 27, 2025 21:03

Anthropic has updated its Consumer Terms and Privacy Policy, introducing data usage choices for users and extending data retention to enhance AI model development.

Anthropic, an AI safety and research company, has announced significant updates to its Consumer Terms and Privacy Policy. These changes are designed to enhance the capability and safety of its AI models, such as Claude, by offering users more control over their data usage, according to Anthropic.

Data Usage and User Control

With the new updates, users of Anthropic’s Claude Free, Pro, and Max plans can choose whether their data is used to improve AI models and strengthen safeguards against harmful activities. This option, however, does not extend to services under the company’s Commercial Terms, such as Claude for Work or API usage through platforms like Amazon Bedrock and Google Cloud’s Vertex AI.

Users are encouraged to participate in this initiative to help refine model safety and accuracy, particularly in detecting harmful content and improving coding, analysis, and reasoning skills. New users will make this choice during the signup process, while existing users will be prompted to select their preferences via an in-app notification.

Data Retention Policy

Anthropic is also extending its data retention period to five years for users who opt to allow their data to be utilized in model training. This extended retention is applicable only to new or resumed chats and coding sessions, allowing for better support in model development and safety improvements. Users who choose not to participate will continue under the existing 30-day data retention policy.

The company says it protects user privacy through a combination of tools and automated processes that filter or obfuscate sensitive data. Importantly, Anthropic does not sell user data to third parties.

Existing users have until October 8, 2025, to accept the updated terms and make their preferences known. The new policies will become effective immediately upon acceptance and apply only to new or resumed interactions with Claude. Users can modify their privacy settings at any time through Anthropic’s platform.

For further information on the updates, users are encouraged to visit the FAQ section provided by Anthropic.

Image source: Shutterstock

Source: https://blockchain.news/news/anthropic-updates-consumer-terms-privacy-policy
