Lack of International Cooperation Threatens Chaos in AI Management

The hard truth is that there are currently no serious projects to establish internationally binding rules – Francis Gurry
Divergent national approaches to regulating artificial intelligence (AI) have created a complex environment for global intellectual property (IP) rights management, one that cries out for a degree of international cooperation that seems out of reach today.
“The subject has become much more confused and difficult over time,” says Francis Gurry, strategic advisor to IPH Limited and the former director-general of the World Intellectual Property Organisation. “But universal multilateralism is dead, and nobody wants to cooperate in areas where competitive positions are at stake.”
Opposing Perspectives
The issues boil down to two polarised philosophies. The first, exemplified by the European Union’s (EU) Artificial Intelligence Act, features a strict prohibitive framework prioritising risk mitigation and safeguarding privacy.
“The best way to describe the EU approach is as one that puts negative limits on AI activity,” Gurry explains.
Surprisingly, perhaps, the United States (U.S.) and China share a different philosophy, rejecting general-purpose regulation of the kind that exists in the EU in favour of more flexible, supportive policies.
“Both countries take a strategic approach that emphasises the practical advantages of AI’s beneficial effects as opposed to focusing on negative consequences,” Gurry says. “And they share the goal of becoming the world leader in AI.”
As recently as the latest Two Sessions (the collective term for the annual plenary sessions of the National People’s Congress and the National Committee of the Chinese People’s Political Consultative Conference) in March, the Chinese Premier delivered a report promoting AI integration across all production in the country.
“That’s similar to what the Chinese did in the nineties when they reengineered their manufacturing to make it information technology-based,” Gurry says. “It’s also similar to the administrative guidance approach of the Japanese in the eighties and nineties, where they only regulated hived-off specifics more closely.”
U.S. and China: Similar aims, different approaches
To be sure, there are differences between the U.S. and Chinese approaches. Technical regulation of AI aimed at specific negative effects (as opposed to general-purpose regulation) is not uncommon in China’s centralised, government-controlled environment. The Chinese have, for example, mandated transparency when AI generates synthetic data.
By contrast, the Trump administration has revoked orders from former President Joe Biden that it perceived as hampering the private sector by imposing regulatory oversight. Federal legislation that might provide governance has been slow to emerge, although state governments are circulating various proposals.
“Both China and the U.S. concentrate on enablement,” Gurry says. “The difference is that the Chinese have been very detailed in their approach to AI, whereas few particulars have emerged in the U.S., where the regulatory situation is vague.”
Understandably, the private sector is more influential in America than in China.
“China has been listening to the private sector too, but not like the Trump administration,” Gurry says. “In my view, however, China represents the middle ground because it embraces a strategy for the whole economy that gets everyone on board. The U.S. is much more free-flowing and unstructured.”
Elsewhere, Japan recently announced its policy approach. Intended to make Japan “the most AI-friendly country in the world”, its strategic perspective mimics the American and Chinese frameworks.
The IP issues
From an IP perspective, the big questions revolve around AI input and output.
“On the input side, questions like whether it’s legal to consume copyrighted data for AI training purposes will play out globally,” Gurry explains. “But the current situation is so unclear that there are 37 related lawsuits in the U.S. alone, and more emerging worldwide.”
Not surprisingly, the artistic and literary communities are up in arms about governments’ failure to clarify the situation.
“The U.S. administration hasn’t even regulated any licensing schemes like the ones existing for cable networks,” Gurry says. “And that’s because they don’t want to prejudice the competitive position of their domestic enterprises.”
On the output side, looming issues remain about what kind of output infringes copyright and whether AI-generated art, inventions, and other works attract IP protection. Although the unanimous judicial reaction to date, at least in the copyright context, is that they don’t, chiefly because current statutes are premised on human authorship, the issue isn’t likely to go away.
“Consider that an AI work sold for £500,000 in London last year,” Gurry notes.
There is insufficient discussion about these matters, Gurry believes, partly because it’s difficult to detect what is AI-generated.
“Still, shouldn’t we be considering alternative solutions like shorter protection duration for AI-generated works, say five years instead of twenty?”
Harmonisation initiatives
It’s not that harmonisation initiatives are lacking: the United Nations has emphasised the need for global AI regulation; the Organisation for Economic Co-operation and Development has established principles that promote responsible AI development; the United Nations Educational, Scientific and Cultural Organisation is working on a framework for ethical AI governance; the AI Safety Summit hosted by the United Kingdom at Bletchley Park in 2023 gathered government and industry to address AI risks; and there has been talk of creating a multilateral AI research institute to enhance international cooperation.
All this indicates that the need for collaboration exists. Yet, progress is slow.
“The hard truth is that there are currently no serious projects to establish internationally binding rules,” Gurry says. “And that’s partly because of the lack of common interests in a dog-eat-dog world.”