Anthropic tightens AI access rules, targeting Chinese-controlled companies globally
Policy targets entities over 50% owned by Chinese companies, creating uncertainty for international coding tools and developer platforms.
Anthropic has implemented sweeping new restrictions that block Chinese-controlled entities from accessing its Claude AI services worldwide, regardless of where these companies operate or are incorporated. The policy change, announced September 5, aims to prevent authoritarian nations from leveraging advanced AI capabilities for military and intelligence purposes.
The updated Terms of Service now prohibit access by companies or organizations whose ownership structures subject them to control from unsupported jurisdictions such as China. This includes entities that are more than 50% owned, directly or indirectly, by companies headquartered in restricted regions, a significant expansion beyond the previous geography-based restrictions.
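To make the "directly or indirectly" clause concrete, here is a minimal sketch of how an indirect ownership share can be computed through a chain of holding companies. The 50% threshold comes from the policy; everything else (the entity names, and the method of multiplying stakes along each ownership path and summing across paths) is an illustrative assumption, not Anthropic's stated compliance methodology.

```python
def effective_ownership(entity: str, owner: str, stakes: dict) -> float:
    """Sum the indirect stake `owner` holds in `entity` across all ownership paths.

    `stakes` maps (parent, child) pairs to fractional stakes, e.g. 0.6 for 60%.
    Assumes the ownership graph is acyclic.
    """
    total = 0.0
    for (parent, child), stake in stakes.items():
        if child != entity:
            continue
        if parent == owner:
            total += stake
        else:
            # Follow the chain upward: owner's stake in the parent,
            # scaled by the parent's stake in this entity.
            total += stake * effective_ownership(parent, owner, stakes)
    return total

# Hypothetical structure: a restricted parent holds 80% of a holding
# company, which in turn holds 70% of an international subsidiary.
stakes = {
    ("RestrictedCo", "HoldingCo"): 0.8,
    ("HoldingCo", "IntlSub"): 0.7,
}
share = effective_ownership("IntlSub", "RestrictedCo", stakes)
print(share)        # 0.56 effective stake
print(share > 0.5)  # exceeds the 50% threshold
```

Under this reading, a subsidiary can fall inside the restriction even when no single direct shareholder is a restricted entity, since the 0.8 × 0.7 chain still yields a 56% effective stake.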
The policy shift has created uncertainty for several overseas AI tools backed by Chinese tech giants. Singapore-based Trae, ByteDance's AI-powered code editor for international users, relies heavily on both OpenAI's GPT and Anthropic's Claude models.
Users have begun requesting refunds over concerns about continued Claude access, though Trae management has assured users that Claude remains available "for the time being."
Similar concerns surround other Chinese-backed coding platforms targeting international markets, including Alibaba's Qoder and Tencent's CodeBuddy, which is currently in beta testing.
Strategic rationale behind the move
Anthropic justified the restrictions by pointing to legal requirements in authoritarian regions that can compel companies to share data with intelligence services or cooperate in ways that create national security risks.
The company argued that subsidiaries incorporated in other countries cannot fully escape these pressures, regardless of individual preferences within those organizations.
"When these entities access our services through subsidiaries, they could use our capabilities to develop applications and services that ultimately serve adversarial military and intelligence services," Anthropic stated in its announcement.
The company also expressed concern that restricted entities could use Claude models to advance their own AI development through techniques like distillation, potentially competing with trusted technology companies in the United States and allied nations.
Market response and competition
Chinese AI companies have moved quickly to capitalize on the uncertainty. Z.ai, formerly Zhipu AI, immediately announced special offers to attract Claude API users seeking alternatives to Anthropic's models.
The restrictions highlight the increasingly fragmented nature of the global AI landscape. In China, AI applications for the domestic market rely almost exclusively on local models, as the government has not approved any foreign large language models for Chinese users.
Broader context and timing
The policy change comes as Anthropic has achieved significant commercial success. The company recently completed a US$13 billion funding round, roughly tripling its valuation to US$183 billion. Its software development tool Claude Code, launched in May, is generating over US$500 million in run-rate revenue, with usage increasing more than tenfold in three months.
Anthropic's latest Claude Opus 4.1 coding model has achieved an industry-leading 74.5% score on SWE-bench Verified, a benchmark for evaluating AI models' programming capabilities.
The move aligns with broader US policy efforts to limit China's access to advanced AI technology. Anthropic CEO Dario Amodei has consistently advocated for stronger export controls on advanced US semiconductor technology to China and has called for accelerated energy infrastructure development to support AI scaling on US soil.
The restrictions represent a proactive step by a private AI company to address national security concerns, potentially setting a precedent for how other US AI firms approach similar challenges. As AI capabilities continue advancing, the balance between open access and security considerations will likely remain a contentious issue across the industry.
The full implementation timeline and specific enforcement mechanisms for the new restrictions remain unclear, leaving affected users and companies in a period of uncertainty about their continued access to Claude's AI services.