UK 'going it alone' on artificial intelligence regulation – what could that mean?


Jaque Silva | Nurphoto | Getty Images

LONDON — Britain says it wants to do “its own thing” when it comes to regulating artificial intelligence, suggesting possible divergence from approaches taken by its main Western partners.

“It's really important that we as the UK do our own thing when it comes to regulation,” Feryal Clark, the UK's minister for artificial intelligence and digital government, told CNBC in an interview broadcast on Tuesday.

She added that the government already has “good relationships” with artificial intelligence companies such as OpenAI and Google DeepMind, which have voluntarily made their models available to the government for security testing purposes.

“It's really important that we bake in that safety right at the beginning when models are being developed … and that's why we'll be working with the sector on any safety measures that come forward,” Clark added.


Her comments echoed remarks from Prime Minister Keir Starmer on Monday that, after Brexit, the UK “now has freedom in relation to regulation, to do it in the way that we think is best for the UK.”

“There are different models around the world, there's the EU approach and the US approach — but we have the opportunity to choose the one that we think is in our best interests, and we intend to do so,” Starmer said in response to a reporter's question after announcing a 50-point plan to make the UK a world leader in artificial intelligence.

Divergence from the US and EU

The UK has so far refrained from introducing formal legislation to regulate AI, instead tasking individual regulators with enforcing existing laws on the development and use of AI.

This differs from the EU, which has introduced comprehensive, pan-European legislation aimed at harmonizing technology rules across the bloc, adopting a risk-based approach to regulation.

The United States, meanwhile, lacks any AI regulation at the federal level, relying instead on a patchwork of regulatory frameworks at the state and local levels.

During Starmer's election campaign last year, the Labour Party committed in its manifesto to introducing regulation focused on so-called “frontier” AI models — referring to large language models such as OpenAI's GPT.

However, the UK has yet to confirm details of its proposed AI safety legislation, saying instead that it will consult with the industry before proposing formal rules.

“We will work with the sector to develop this and implement it in line with what we said in our manifesto,” Clark told CNBC.

Chris Mooney, partner and head of commercial at London-based law firm Marriott Harrison, told CNBC that the UK is taking a “wait and see” approach to AI regulation, even as the EU presses ahead with its AI Act.

“While the UK government claims to have taken a 'pro-innovation' approach to AI regulation, our experience working with clients is that they find the current position uncertain and therefore unsatisfactory,” Mooney told CNBC by email.

One area where the Starmer government has spoken out about reforming AI laws has been copyright.

At the end of last year, the UK opened a consultation on reviewing the country's copyright framework, assessing possible exceptions to existing rules for AI developers who use the works of artists and media publishers to train their models.

Businesses remain uncertain

Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that while the government's AI roadmap “shows ambition,” proceeding without clear rules is “bordering on reckless.”

“We have already missed key regulatory windows twice – first in cloud computing and then in social media,” Duggal said. “We cannot afford to make the same mistake with artificial intelligence, where the stakes are exponentially higher.”

“UK data is our crown jewel; it should be used to build sovereign AI capabilities and create British success stories, not just to power foreign algorithms that we cannot effectively regulate or control,” he added.

Details of Labour's plans for AI legislation were initially expected to appear in King Charles III's speech opening the UK Parliament last year.

However, the government committed only to establishing “appropriate legislation” for the most powerful artificial intelligence models.

“The UK government needs to provide clarity here,” John Buyers, international head of artificial intelligence at law firm Osborne Clarke, told CNBC, adding he had learned from sources that a consultation on formal AI safety legislation was “waiting to be published.”

“By releasing consultations and plans in a piecemeal fashion, the UK has missed an opportunity to provide a holistic view of where its AI economy is heading,” he said, adding that withholding details of new AI safety rules would fuel investor uncertainty.

Still, some figures in the UK tech scene believe a more relaxed and flexible approach to regulating AI may be appropriate.

“It's clear from recent conversations with the government that significant efforts are being made on AI safeguards,” Russ Shaw, founder of the advocacy group Tech London Advocates, told CNBC.

He added that the UK is well-placed to adopt a “third way” on AI safety and regulation — “sector-specific” rules tailored to individual industries such as financial services and healthcare.
