Musk, Trump and the EU's law on artificial intelligence


US President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas, November 19, 2024.

Brandon Bell | Via Reuters

The U.S. political landscape will change somewhat in 2025, and these changes will have major implications for the regulation of artificial intelligence.

President-elect Donald Trump will be inaugurated on January 20. He will be joined at the White House by a group of top business advisers – including Elon Musk and Vivek Ramaswamy – who are expected to influence policy thinking about emerging technologies such as artificial intelligence and cryptocurrencies.

On the other side of the Atlantic, a tale of two jurisdictions has emerged, with the UK and the European Union diverging in their regulatory thinking. While the EU has cracked down more rigorously on the Silicon Valley giants behind the most powerful artificial intelligence systems, the UK has adopted a lighter-touch approach.

In 2025, the state of artificial intelligence regulation around the world could change dramatically. CNBC takes a look at some of the key developments worth watching, from the evolution of the EU's landmark AI Act to what a Trump administration could mean for the U.S.

Musk's influence on US politics

Elon Musk walks around the Capitol on the day of his meeting with Senate Republican Leader John Thune (R-SD) in Washington, December 5, 2024.

Benoit Tessier | Reuters

Although AI was not an issue discussed much during Trump's election campaign, it is expected to be one of the key sectors set to benefit under the incoming U.S. administration.

First, Trump tapped Musk – CEO of the electric car maker Tesla – to co-lead his “Department of Government Efficiency” alongside Ramaswamy, an American biotech entrepreneur who dropped out of the 2024 presidential race to back Trump.

Matt Calkins, CEO of Appian, told CNBC that Trump's close relationship with Musk could put the U.S. in a good position on artificial intelligence, citing the billionaire's experience as a co-founder of OpenAI and as CEO of xAI, his own AI lab, as positive indicators.

“We finally have one person in the U.S. administration who actually knows about artificial intelligence and has an opinion on it,” Calkins said in an interview last month. Musk has been one of Trump's most prominent supporters in the business community and has even appeared at some of his campaign rallies.

There is currently no confirmation of Trump's plans for possible presidential directives or executive orders. Calkins, however, believes it is likely that Musk will propose guardrails to ensure that the development of artificial intelligence does not threaten civilization – a risk he has warned about many times in the past.

“There's no doubt he's reluctant to allow AI to cause catastrophic consequences for humans – he's definitely concerned about it, and he was talking about it long before he took on a political role,” Calkins told CNBC.

There is currently no comprehensive federal AI legislation in the US. Rather, there is a patchwork of regulatory frameworks at the state and local levels, with numerous AI-related bills introduced in 45 states as well as Washington, D.C., Puerto Rico, and the U.S. Virgin Islands.

EU Act on Artificial Intelligence

The European Union is so far the only jurisdiction in the world to have developed comprehensive AI legislation in its Artificial Intelligence Act.

Jaque Silva | Nurphoto | Getty Images

The European Union is so far the only jurisdiction in the world to have pushed through comprehensive legislation for the AI industry. Earlier this year, the bloc's AI Act – a first-of-its-kind regulatory framework for artificial intelligence – officially entered into force.

The law has not yet taken full effect, but it is already causing tension among large U.S. technology companies, which fear that some aspects of the regulation are too stringent and could stifle innovation.

In December, the EU's AI Office – the newly established body that oversees models under the AI Act – published the second draft of its code of practice for general-purpose AI (GPAI) models, which applies to systems such as OpenAI's GPT family of large language models, or LLMs.

The second draft included exemptions for providers of certain open-source AI models, which are usually made publicly available so that developers can build their own custom versions. It also requires developers of “systemic” GPAI models to undergo rigorous risk assessments.

The Computer & Communications Industry Association – whose members include Amazon, Google and Meta – warned that the draft “includes measures well beyond the agreed scope of the Act, such as far-reaching copyright measures.”

The AI Office was not immediately available for comment when contacted by CNBC.

It is worth noting that the EU Artificial Intelligence Act is still far from being fully implemented.

As Shelley McKinley, chief legal officer of the popular code repository platform GitHub, told CNBC in November, “the next phase of work has begun, which may mean that at this point there is more ahead of us than behind us.”

For example, the first provisions of the act come into force in February. These cover “high-risk” AI applications such as remote biometric identification, loan decisioning and educational scoring. A third draft of the code for GPAI models is scheduled for release the same month.

European tech leaders are concerned about the risk that EU punitive measures against US tech companies could trigger a backlash from Trump, which could in turn cause the bloc to soften its approach.

Take antitrust regulation, for example. The EU has been active in taking steps to curb the dominance of U.S. tech giants, but that could provoke a negative reaction from Trump, according to Andy Yen, CEO of Swiss company Proton.

“(Trump) probably thinks he wants to regulate his tech companies himself,” Yen told CNBC in a November interview at the Web Summit technology conference in Lisbon, Portugal. “He doesn't want Europe to get involved.”

UK copyright review

British Prime Minister Keir Starmer gives an interview to the media during the 79th United Nations General Assembly at the United Nations Headquarters in New York, September 25, 2024.

Leon Neal | Via Reuters

One country worth watching is the UK. Britain has previously declined to introduce statutory obligations for makers of artificial intelligence models, for fear that new legislation could prove too restrictive.

However, Keir Starmer's government has said it plans to draw up AI legislation, although details remain scarce. The general expectation is that the UK will adopt a more principles-based approach to regulating AI, as opposed to the EU's risk-based framework.

Last month, the government gave its first major indication of where regulation is heading, announcing a consultation on measures to regulate the use of copyrighted content for training artificial intelligence models. Copyright is a particularly thorny issue for generative AI and LLMs.

Most LLMs are trained on public data from the open web. But that often includes works of art and other copyrighted material. Artists and publishers, such as The New York Times, allege that these systems are unfairly scraping their valuable content without consent to generate original output.

To address this issue, the UK government is considering introducing an exception to copyright law for AI model training, while allowing rights holders to opt out of having their works used for training purposes.

Appian's Calkins said the UK could become a “global leader” on copyright infringement by AI models, adding that the country was not “subject to the same overwhelming lobbying attack from national AI leaders as the US.”

US-China relations could become a point of tension

U.S. President Donald Trump (right) and Xi Jinping, president of China, pass members of the People's Liberation Army (PLA) during a welcome ceremony outside the Great Hall of the People in Beijing, China, Thursday, November 9, 2017.

Qilai Shen | Bloomberg | Getty Images

Finally, as world governments seek to regulate rapidly developing artificial intelligence systems, there is a risk of escalating geopolitical tensions between the United States and China under the Trump administration.

During his first term as president, Trump imposed a series of hawkish policy measures on China, including a decision to add Huawei to a trade blacklist restricting it from doing business with American technology suppliers. He also launched an effort to ban TikTok, owned by Chinese company ByteDance, in the U.S. – although he has since softened his stance on TikTok.

China is racing to beat the U.S. for supremacy in AI. At the same time, the U.S. has taken action to restrict China's access to key technologies – mainly chips, such as those designed by Nvidia – that are required to train more advanced AI models. China has responded by attempting to build its own homegrown chip industry.

Technologists fear that a geopolitical fracture between the U.S. and China over artificial intelligence could give rise to other risks, such as the potential for one of the two to develop a form of AI smarter than humans.

Max Tegmark, founder of the nonprofit Future of Life Institute, believes the U.S. and China could in the future create a form of AI that can improve itself and design new systems without human supervision, potentially forcing both governments to individually draw up their own rules on AI safety.

“My optimistic view is that the U.S. and China will unilaterally impose national safety standards preventing their own companies from doing harm and building unchecked AGI – not to appease rival superpowers, but simply to protect themselves,” Tegmark told CNBC in a November interview.

Governments are already trying to work together on regulations and frameworks for AI. In 2023, the UK hosted a global AI safety summit, which both the U.S. and Chinese administrations attended, to discuss potential guardrails around the technology.

– CNBC's Arjun Kharpal contributed to this report


