Anthropic has barred companies and groups controlled by Chinese entities from using its AI services. The US-based company said the move is part of a broader effort to restrict access to its AI tools in regions it considers authoritarian.
The Amazon-backed company is known for its Claude chatbot and positions itself as focused on developing AI safely and responsibly.
Companies based in China, along with those in countries like Russia, North Korea, and Iran, are already unable to access Anthropic’s products because of legal and security concerns.
Similarly, products like ChatGPT from US-based OpenAI are not available in China, which has led to the growth of local AI models from Chinese companies such as Alibaba and Baidu.
On Friday, Anthropic announced changes to its terms of service. The company said that although some groups are already blocked, a number of them still access its services through subsidiaries based in other countries.
Under the new rule, companies and organizations controlled by entities in restricted regions such as China are barred from using Anthropic's services, regardless of where they themselves are based.
Anthropic, which is valued at $183 billion, said this change will affect any group that is more than 50% owned, directly or indirectly, by companies in unsupported regions.
Nicholas Cook, a China-based lawyer with 15 years of experience in the AI industry, said this is the first time a major US AI company has publicly imposed a ban of this kind.
He told AFP that the direct effect on business might be small, as US AI firms have already faced challenges entering the Chinese market, and many groups have chosen to use their own local AI systems.
But he added that taking such a stance will likely raise questions about whether other companies will follow suit.
