On April 14, 2026, Anthropic rolled out a large-scale identity verification (KYC) mechanism on the Claude platform without prior notice. According to the official instructions, some users may be prompted to verify their identity when accessing certain features: they must submit a government-issued photo ID (passport, driver's license, national ID card, etc.) and complete a real-time selfie liveness check via camera. The following day, an "Identity verification" policy page went live in the help center on the official Claude website.
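For readers unfamiliar with how such flows are wired up, the server side of a document-plus-liveness check usually amounts to creating a verification session with a vendor and then receiving its outcome. The sketch below is a hypothetical illustration in Python: the vendor URL, endpoints, and field names are invented for the example and are not Anthropic's, or any specific vendor's, actual API.

```python
# Hypothetical sketch of a document + selfie-liveness KYC flow.
# VENDOR_API, endpoints, and field names are illustrative inventions,
# not any real vendor's documented API.
import requests

VENDOR_API = "https://kyc-vendor.example.com/v1"
API_KEY = "sk-example"  # placeholder credential

def start_verification(user_id: str) -> str:
    """Create a verification session; the user then completes the hosted flow
    (ID document upload + real-time selfie liveness check) in their browser."""
    resp = requests.post(
        f"{VENDOR_API}/sessions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"reference_id": user_id,
              "checks": ["government_id", "selfie_liveness"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_url"]  # redirect the user here

def check_result(session_id: str) -> str:
    """Poll the session status: 'approved', 'declined', or 'pending'.
    (Production systems would use a signed webhook instead of polling.)"""
    resp = requests.get(
        f"{VENDOR_API}/sessions/{session_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["status"]
```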
Judging from public feedback so far, most users who have hit the verification flow triggered it while subscribing to Max, Claude's highest-priced plan; the rest are high-frequency users and accounts flagged by risk control or suspected of gray-market activity. The abrupt rollout has drawn loud complaints from developers: workflows built on Claude Code have broken, product pipelines are in jeopardy, and many users have been banned outright due to AI misjudgments.
The incident has blown up precisely because it touches sensitive nerves: privacy and security, regional discrimination, and more. Overseas users worry about privacy and the risk of false bans. The verification mechanism itself is already sensitive, requiring financial-grade checks built on passport uploads and facial recognition. Meanwhile, the third-party provider Persona has been found to have security vulnerabilities, deepening concerns about privacy breaches and data security.
Persona Identity, the third-party provider behind the verification mechanism, has had its security record repeatedly questioned even though it supplies core services to tech giants such as OpenAI and LinkedIn. Less than a month ago, Discord terminated its cooperation with Persona after the personal information of roughly 70,000 Discord users was leaked. And earlier this year, a US federal government identity platform that Persona operated sparked another security controversy when a configuration error exposed 53 MB of complete source code to the public internet.
Chinese communities such as V2EX and Zhihu reacted quickly and intensely. One V2EX user summed up the dilemma: refuse verification and your features are limited; verify and your identity is exposed. As another Chinese user joked, "Verification itself is a trap; voluntarily submitting a passport is equivalent to revealing your nationality," and after verifying you are even more likely to be banned.
More ironically still, some netizens fed the new policy to Claude Opus 4.6, and Claude itself disavowed its own company's KYC rules and encouraged users to write about the issue and publicly criticize the move.
Beyond the heat generated by these sensitive topics, what core users are most worried about is the shift in Anthropic's long-held stance that the incident reveals, along with its symbolic significance for, and potentially profound impact on, the entire industry.
Users' intense reactions, including strong resentment of mandatory KYC, stem from the yawning gap between Anthropic's "high moral standing" and its "high-pressure coercive measures." Looking back at the company's history as a self-proclaimed "pioneer of AI safety" helps explain the discomfort and helplessness users feel in the face of this contrast.
From OpenAI Rebel to Hundred-Billion-Dollar Unicorn
At the end of 2020, Dario Amodei, then OpenAI's Vice President of Research, and his sister Daniela Amodei submitted their resignations and, together with several other OpenAI veterans, pitched a tent in a San Francisco backyard and founded Anthropic. On April 7, 2026, Anthropic announced that its annualized revenue had exceeded $30 billion, surpassing OpenAI's $25 billion over the same period, the first time since ChatGPT's release in 2022 that a competitor has overtaken OpenAI in revenue scale.
At OpenAI, Dario led the development of GPT-2 and GPT-3, paying close attention throughout to potential model risks and pushing for strict safety evaluations before each model iteration. Contrary to rumor, he did not leave over opposition to working with Microsoft. In a later interview he explained his thinking: rather than staying at OpenAI to pursue his vision, it was better to "take a few people you trust and go realize that vision yourself." This "safety first" philosophy became a core principle engraved in Anthropic's DNA.
Anthropic's founding philosophy was to constrain AI's direction with a transparent, explicit "constitution." In December 2022 it published the paper "Constitutional AI: Harmlessness from AI Feedback," formally establishing its core Constitutional AI framework. This training framework sets principles of honesty and harmlessness for the model, making its decision-making process traceable and auditable.
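Mechanically, the core of the Constitutional AI paper is a critique-and-revision loop: the model drafts an answer, critiques its own draft against a written principle, then rewrites it; the revised pairs feed further training. Below is a minimal sketch of that loop, with `generate` standing in for any LLM completion call and the principle text paraphrased rather than quoted from Anthropic's actual constitution.

```python
# Minimal sketch of the Constitutional AI critique-and-revision loop.
# `generate` is a placeholder for any LLM completion call; the principle
# below is a paraphrase, not Anthropic's actual constitution text.

def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM completion API."""
    raise NotImplementedError

PRINCIPLE = ("Choose the response that is most honest and harmless, "
             "and that avoids assisting with dangerous or unethical requests.")

def constitutional_revision(user_prompt: str, n_rounds: int = 2) -> str:
    draft = generate(user_prompt)
    for _ in range(n_rounds):
        # Ask the model to judge its own draft against the principle.
        critique = generate(
            f"Critique this response against the principle.\n"
            f"Principle: {PRINCIPLE}\nPrompt: {user_prompt}\nResponse: {draft}"
        )
        # Ask it to rewrite the draft to address the critique.
        draft = generate(
            f"Rewrite the response to address the critique, staying helpful.\n"
            f"Critique: {critique}\nOriginal response: {draft}"
        )
    # The (prompt, final draft) pairs become supervised fine-tuning data;
    # a later RLAIF stage uses AI preference labels instead of human ones.
    return draft
```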
Claude, built on this framework, naturally excels on safety. In 2025, on the "hallucination rate" test that Silicon Valley treats as a core measure of a model's tendency to fabricate facts, Claude 3.5 Sonnet scored just 3.9%, significantly better than the industry benchmark GPT-4's 5.8%.
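For clarity, a hallucination rate of this kind is simply the share of graded answers that assert unsupported facts. The snippet below shows that arithmetic under this assumption; the cited benchmark's exact grading protocol is not described in the article.

```python
# Schematic of a hallucination-rate metric: the share of model answers
# judged to contain a fabricated claim. Illustrative only; the benchmark
# cited above does not publish its exact protocol here.

def hallucination_rate(judgments: list[bool]) -> float:
    """judgments[i] is True if answer i contained a fabricated claim."""
    return sum(judgments) / len(judgments)

# Example: 2 fabricated answers out of 51 graded responses -> ~3.9%
print(f"{hallucination_rate([True, True] + [False] * 49):.1%}")  # 3.9%
```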
More importantly, the "AI constitution" directly addresses enterprises' fear of uncontrollable models, especially in compliance-heavy industries such as finance, law, and government, and has become Anthropic's core competitive edge in the enterprise market. According to Menlo Ventures' 2025 AI report, Anthropic's share of the enterprise LLM API market reached 40% by the end of 2025, while OpenAI's share fell from 50% in 2023 to 27%, with no sign of the trend reversing.
The brand reputation built on "safety first" has also given it an edge in key agent scenarios, most prominently in coding assistance. After Anthropic opened its coding tool Claude Code to all users in May 2025, usage grew more than tenfold within three months; by early 2026, Claude Code's annualized revenue exceeded $2.5 billion.
Meanwhile, Anthropic kept reinforcing its "safety-first, trustworthy" image by iterating on the model constitution, publishing safety and threat intelligence reports, and limiting how widely new models were opened up. Then, in February of this year, came its breakout moment: faced with a Pentagon ultimatum to lift Claude's military-use restrictions, CEO Dario Amodei issued a statement saying he "could not in good conscience accept it" and held to two red lines: no mass surveillance of American citizens, and no fully autonomous lethal weapons.
Although the US government ultimately classified Anthropic as a "supply chain risk" and the company lost $200 million worth of orders, its "responsible" public image and "safety first" reputation bucked the trend and reached unprecedented heights. OpenAI's hasty move to fill the vacated position instead ignited a wave of "flee GPT" resistance, which became a key catalyst for Anthropic to convert its reputational advantage into market traffic. According to statistics from Diandian Data, daily downloads of its mobile app "Claude by Anthropic" overtook "ChatGPT" to top the US charts in early March, peaking at over 200,000 downloads a day.
Across Anthropic's five-year history, from OpenAI "traitor" to a giant valued in the hundreds of billions of dollars, the "safety first" positioning has played an indispensable role. Against this backdrop, the underlying reasons for its mandatory KYC begin to surface.
The deeper logic of KYC: more than just "checking ID cards"
According to official public statements, Anthropic's stated rationale for KYC is that the stronger a model's capabilities, the higher the risk of abuse, so identity verification is needed as a "backstop." Though nominally still "safety first," it is in fact a carefully calculated move in the face of enormous compliance and geopolitical pressure along the path to commercialization, and the imminent IPO may be the core driving force behind it.
First, KYC is a key move to sell the "safety" narrative to investors and cement a high valuation. By proactively building a financial-grade identity verification system, the company not only demonstrates a "responsible" posture to regulators, heading off future regulatory risk, but also cements its image among large B-end customers as the "safest, most compliant" AI provider, lending credible "security premium" support to its future fundraising goals.
Second, KYC is an efficient means of screening for high-quality customers, optimizing the revenue mix, and keeping the business model healthy. Ahead of an IPO, Anthropic must polish its financial statements and improve revenue quality. Mandatory KYC precisely targets gray-market practices such as running API reverse proxies and sharing accounts on personal subscriptions, forcing these "pseudo consumer" users onto the higher-margin official API channel and plugging the revenue leak. It also lets the company concentrate its extremely expensive compute on serving high-value B-end enterprise clients.
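To make concrete what "API reverse proxy" means in this context: a gray-market operator puts a thin HTTP relay in front of a pool of flat-rate personal accounts and resells access as if it were a metered API. A schematic sketch of the relay pattern follows; the upstream URL, endpoint path, and tokens are invented for illustration and do not reflect Anthropic's actual interfaces.

```python
# Schematic sketch of the gray-market "reverse proxy" resale pattern.
# Upstream URL, endpoint, and tokens are hypothetical illustrations.
import itertools
import requests
from fastapi import FastAPI, Request

UPSTREAM = "https://upstream.example.com/v1/messages"  # hypothetical upstream
# A pool of session tokens from flat-rate personal subscriptions, rotated
# per request so no single account's usage looks abnormal.
TOKENS = itertools.cycle(["token-a", "token-b", "token-c"])

app = FastAPI()

@app.post("/v1/messages")
async def relay(request: Request):
    body = await request.json()
    # Forward the caller's request upstream under a rotated account token.
    resp = requests.post(
        UPSTREAM,
        json=body,
        headers={"Authorization": f"Bearer {next(TOKENS)}"},
        timeout=60,
    )
    return resp.json()
```

Binding every subscription to a verified natural person makes this pooling pattern far riskier to operate, which is exactly the loophole the article says KYC plugs.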
Finally, KYC is an active defense against geopolitical gamesmanship and political risk. By explicitly refusing identity documents from specific regions (such as mainland China), Anthropic voluntarily walks away from a market with high compliance costs and low strategic value, avoiding a regulatory quagmire during the critical run-up to listing. It also aligns the company with broader US technology restriction policy, ensuring core technology does not flow to "sensitive regions" and minimizing the political scrutiny it might face during the IPO process.
In short, mandatory KYC is far more than "checking ID cards." It is a strategic card Anthropic played on the eve of its IPO to demonstrate compliance to the capital markets, optimize its business model, and hedge its geopolitical risk.
Anthropic's "safety first" label was once its core weapon against OpenAI's commercialization. Its 2023 Responsible Scaling Policy (RSP) explicitly promised to unconditionally suspend training if a model's capabilities crossed safety thresholds and the risks could not be mitigated. That unilateral commitment was once regarded as the industry's ethical benchmark.
Early this year, Anthropic's official website published the third edition of the policy (RSP 3.0: Responsible Scaling Policy, Version 3.0), and the hard red line of "suspend training when a model reaches the danger threshold" had quietly vanished from the document, replaced by a flexible framework of "transparent disclosure."
Taken together with its current eagerness to impose KYC, this company that once stubbornly held the line on AI safety principles has clearly shifted its attitude and stance.
Impact and outlook: The industry landscape is undergoing restructuring
In January 2025, Anthropic's annualized revenue was just $1 billion; a mere 15 months later it had rocketed 30-fold to $30 billion. That trajectory validates the commercial value of the safety-first AI route and shows how much user trust matters to an AI platform.
In September 2025, a former xAI employee shocked Silicon Valley by stealing core Grok code repositories; in October of the same year, two "AI girlfriend" apps, "Chattee ChatAI" and "GiMe Chat," were revealed to have exposed the data of some 400,000 users; and at the beginning of this month, the National Cybersecurity Center reported multiple AI supply chain poisoning incidents. With leaks occurring this frequently, Anthropic's trajectory shows that information security remains a breakout strategy for enterprises in the AI era, and its importance will only rise as such incidents keep piling up.
As the protagonist of this incident and the standard-bearer of "safety first," Anthropic, with its change of stance, also forces us to confront the likelihood that the industry's development path will inevitably bend toward "compliance and governance." It suggests that future AI services will be fully folded into strong regulatory regimes like those of the financial and telecom industries.
But even as KYC cuts off channels of abuse and gray-market activity and filters for a high-value commercial closed loop, independent developers, small teams, and users in restricted regions are being marginalized or cut off entirely, and a loss of innovation diversity in fields such as AI programming and design is inevitable.
KYC becoming a standard feature of large models has long been foreseeable. As early as April 2025, OpenAI was the first to implement KYC, and its vendor was the other protagonist at the center of this incident: Persona. Anthropic's wholesale follow-up may now extinguish whatever hope people had left, and it is foreseeable that giants such as Google will quickly fall in line with this standard.
Anthropic had also long foreshadowed restrictions on Chinese users and businesses. On September 5 last year, it publicly announced it would stop selling Claude services to entities majority-owned by Chinese capital, covering mainland China as well as companies that access the services indirectly through overseas-registered subsidiaries or cloud services. OpenAI, for its part, had cut off API access from mainland China and Hong Kong as early as July 2024.
For Chinese models, this trend may open a valuable window to accelerate domestic substitution. After Anthropic announced its "service suspension" toward China last year, Chinese AI company Zhipu seized the moment and launched a "special migration plan for Claude API users" to absorb the displaced traffic.
Overall, the knowledge transfer that Chinese models have relied on through distillation will be cut off in the short term, but in the long run the pressure will force them to iterate faster on chip adaptation and algorithm frameworks. With the advantages of native compliance, no identity restrictions, and convenient payment, Chinese models will more quickly absorb the developer and enterprise demand spilling over from overseas platforms and push technology choices toward localization. Global AI may well split into two parallel worlds, driven by different routes, models, and chips.
