Mandatory Claude KYC verification goes live; accounts in unsupported regions face blocks

MarketWhisper


Anthropic has fully rolled out mandatory identity verification (KYC) on its Claude platform, carried out by the third-party provider Persona Identities. Anthropic describes the process as routine “integrity checks” intended to prevent misuse, enforce platform policies, and ensure regulatory compliance. Since the measure's broad deployment in April 2026, its impact has been felt most directly by users in unsupported regions such as mainland China.

Trigger conditions for KYC verification and Anthropic's stated objectives

KYC (Know Your Customer) verification is a compliance mechanism used by financial and technology platforms to confirm a user's real identity. With this rollout, Anthropic has engaged Persona Identities as its third-party verification provider, requiring users to submit valid identification documents and complete real-time liveness checks.

Anthropic lists three core goals for the measure:

Prevent misuse: Identify abnormal behavior such as batch-operated accounts and automated malicious use

Enforce usage policies: Ensure that platform usage complies with Anthropic’s Terms of Service

Regulatory compliance: Meet increasingly stringent U.S. and international regulatory requirements for AI service providers

Notably, even a user who successfully completes document and liveness verification may still have the account disabled afterward if the system determines that its actual usage location is in a region Anthropic does not support.

Official recommended routes for affected users

For users whose KYC verification is triggered, Anthropic recommends preparing a valid government-issued ID and a device with a high-resolution camera as soon as the verification prompt appears, and completing the review within the required timeframe to avoid service interruption.

If an account has already been disabled, Anthropic provides two formal channels:

Appeals channel: Fill out the official appeals form or email support@anthropic.com, describing the account usage and the reasons it was wrongly flagged

Refund request (paid users): If a subscription was purchased shortly before the account was disabled, contact billing@anthropic.com to request a refund

Note that, based on current community feedback, the appeal success rate is limited, and refunds typically require multiple rounds of communication.

Broader context: AI platform compliance trends accelerating

Anthropic’s rollout of mandatory KYC is not an isolated case, but a reflection of how AI service platforms are responding to rising global regulatory pressure. In recent years, many major markets have continuously strengthened export controls on AI services and user verification requirements, prompting platforms to gradually tighten their identity verification mechanisms. For users in restricted regions, finding lawful alternative AI service platforms—such as other AI assistants that support the user’s location, or obtaining API access rights through official channels—is currently the most compliant and practical option.

Frequently asked questions

What is KYC verification, and why did Anthropic deploy it now?

KYC is short for “Know Your Customer,” the standard compliance process financial and technology platforms use to verify a user's real identity. Anthropic's large-scale deployment responds both to U.S. and international regulators' compliance requirements for AI service providers and to internal needs to prevent platform misuse (such as batch-registered accounts and automated operations).

For users in unsupported regions, will accounts necessarily be banned?

Not necessarily. Whether KYC verification is triggered depends on the platform's risk-control mechanism, and not all accounts in unsupported regions face review immediately. However, once verification is triggered, and the system determines that the account's actual usage location is in a restricted region, the account may still be disabled even after verification is completed. At present, Anthropic has not published an exemption list.

What lawful alternative options are available for affected users?

For users who cannot pass Claude KYC verification, lawful alternatives include: other AI platforms available in their market (local or international services that support mainland China, for example), usage authorization obtained through an enterprise API partnership channel, or other open-source or commercial large language model services with comparable capabilities.

