Unintended Consequences: US Election Results Herald Reckless AI Development


While the 2024 US election focused on traditional issues like the economy and immigration, it quietly influenced AI policy in ways that could prove even more transformative. Without a single debate question or major campaign promise about AI, voters tipped the scales in favor of the accelerationists: those advocating rapid AI development with few regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and marks a decisive shift in the debate over AI’s risks and potential benefits.

President-elect Donald Trump’s pro-business stance leads many to expect that his administration will favor those developing and marketing AI and other advanced technologies. His party platform has little to say about AI. However, it emphasizes a policy approach aimed at rolling back AI regulations, specifically targeting what it described as “radical left-wing ideas” in the outgoing administration’s executive orders. In contrast, the platform supported AI development aimed at fostering free speech and “human flourishing,” calling for policies that enable innovation in AI while opposing measures seen as hindering technological progress.

Early indications based on appointments to key government positions confirm this direction. However, a bigger story is emerging: the resolution of the fierce debate over the future of AI.

An intense debate

Since ChatGPT appeared in November 2022, there has been a heated debate between those in the AI field who want to accelerate AI development and those who want to slow it down.

Notably, in March 2023, the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present “profound risks to society and humanity.” This letter, spearheaded by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM), several months after the launch of ChatGPT.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. Eventually, the number of signatories grew to more than 33,000. Collectively, they became known as “doomers,” a term capturing their concerns about potential catastrophic risks from AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign, nor did Bill Gates and many others. Their reasons varied, although many voiced concerns about potential harm from AI. The episode sparked wide discussion of AI’s potential to go badly wrong, and it became fashionable for many in the AI field to share their personal assessment of the probability of doom, often expressed in shorthand as p(doom). Nevertheless, work on AI development did not stop.

For the record, my p(doom) in June 2023 was 5%. That may seem low, but it was not zero. I felt that the major AI labs were sincere in their efforts to rigorously test new models before release and to provide important guardrails for their use.

Many observers concerned about AI risks have rated existential risk higher than 5%, and some have rated it far higher. AI safety researcher Roman Yampolskiy has put the probability of AI ending humanity at over 99%. That said, a study published early this year, well before the election and representing the views of more than 2,700 AI researchers, found that “the median prediction for extreme outcomes, such as extinction, is 5%.” Would you board a plane if there were a 5% chance it might crash? This is the dilemma facing AI researchers and policymakers.

Must go faster

Others have broadly dismissed concerns about AI, pointing instead to what they see as the technology’s enormous upside. Among them are Andrew Ng (founder and leader of the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argued, instead, that AI is part of the solution. As Ng has put it, there are indeed big risks, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated.

Ng argued that AI development should not be paused but accelerated. This utopian view of technology has been echoed by others known as “effective accelerationists,” or “e/acc” for short. They argue that technology, and especially AI, is not the problem but the solution to most, if not all, of the world’s issues. Startup accelerator Y Combinator CEO Garry Tan, along with other prominent Silicon Valley leaders, put the term “e/acc” in their usernames on X to show alignment with the vision. New York Times reporter Kevin Roose captured the ethos of these accelerationists, saying they have a “gas, no brakes” approach.

A Substack newsletter from a few years ago described the principles underlying effective accelerationism. Here is the summation it offers at the end of the article, along with a comment from OpenAI CEO Sam Altman.

AI acceleration ahead

The 2024 election outcome may be seen as a turning point, putting the accelerationist vision in a position to shape US AI policy for the next several years. For example, the President-elect recently named technology entrepreneur and venture capitalist David Sacks as “AI czar.”

Sacks, a vocal critic of AI regulation and an advocate of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist themes expressed by the incoming party platform.

In response to the Biden administration’s 2023 AI executive order, Sacks posted: “The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: Groundbreaking innovation in AI driven by a completely free and unregulated market for software development. That just ended.” Although the extent of Sacks’ impact on AI policy remains to be seen, his appointment signals a move toward policies that favor business autonomy and rapid innovation.

Elections have consequences

I doubt that the majority of the voting public gave much thought to AI policy implications when casting their votes. Nevertheless, in a very tangible way, the accelerationists have won as a consequence of the election, potentially sidelining those advocating a more cautious federal approach to mitigating AI’s long-term risks.

As accelerationists chart the way forward, the stakes could not be higher. Whether this era will usher in unprecedented progress or unforeseen disaster remains to be seen. As AI development accelerates, the need for informed public engagement and oversight becomes even more important. How we navigate this period will define not only technological progress but also our collective future.

As a counterweight to inaction at the federal level, it is possible that one or more states will adopt rules of their own, as has already happened to some extent in California and Colorado. For example, California’s AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level regulation. Meanwhile, all eyes will be on the voluntary testing and self-imposed guardrails of Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationist victory means fewer constraints on AI innovation. That increased speed may indeed bring faster innovation, but it also raises the risk of unintended consequences. I am now revising my p(doom) to 10%. What is yours?

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
