Hiya, folks, and welcome to TechCrunch's regular AI newsletter. If you want it in your inbox every Wednesday, sign up here.
This week has been something of a swan song for the Biden administration.
On Monday, the White House announced new restrictions on the export of AI chips, which were loudly criticized by tech giants including Nvidia. (Nvidia's business would be seriously affected should the restrictions take effect as proposed.) Then on Tuesday, the administration issued an executive order opening up federal land for AI data centers.
The obvious question is whether these moves will have a lasting impact. Will Trump, who takes office on January 20, simply roll back Biden's provisions? Trump hasn't revealed his intentions so far, but he certainly has the power to undo Biden's latest AI actions.
Biden's export rules are scheduled to take effect after a 120-day comment period, which gives the Trump administration wide leeway over how to implement the measures, and whether to change them at all.
As for the order on federal land use, Trump could simply rescind it. David Sacks, the former PayPal COO serving as Trump's AI and crypto "czar," recently vowed that Biden's executive order on AI, which sets standards for AI safety and security, would be repealed.
However, there is reason to believe that the incoming administration will not rock the boat too much.
Trump recently committed to something along the lines of Biden's push to free up federal resources for data centers, pledging fast-tracked approvals for companies investing at least $1 billion in the U.S. He has also picked Lee Zeldin, who has pledged to cut regulations he sees as a burden on businesses, to lead the EPA.
Aspects of Biden's export regulations could well stand, too. Some of the rules target China, and Trump has made no secret that he sees China as America's main competitor in AI.
One open question is Israel's inclusion on the list of countries facing trade caps on AI hardware. As recently as October, Trump described himself as a "defender" of Israel, and he has signaled that he is likely to be permissive of Israel's military activities in the region.
In any event, we should have a clearer picture within the week.
News

ChatGPT, remind me…: Paying users of OpenAI's ChatGPT can now ask the AI assistant to schedule reminders and recurring requests. The new beta feature, called Tasks, is rolling out to ChatGPT Plus, Team, and Pro users worldwide this week.
Meta vs. OpenAI: Executives and researchers leading Meta's AI efforts obsessed over beating OpenAI's GPT-4 while developing Meta's own Llama 3 family of models, according to messages unsealed in court on Tuesday.
OpenAI's board grows: OpenAI has appointed Adebayo "Bayo" Ogunlesi, an executive at investment firm BlackRock, to its board of directors. The company's current board bears little resemblance to OpenAI's board of late 2023, whose members fired CEO Sam Altman only to rehire him days later.
Blaize goes public: Blaize plans to become the first AI chip startup to go public in 2025. Founded in 2011 by former Intel engineers, the company makes chips for cameras, drones, and other peripherals, and has raised $335 million from investors including Samsung.
A "reasoning" model that thinks in Chinese: OpenAI's o1 reasoning model sometimes "thinks" in languages like Chinese, French, Hindi, and Thai, even when asked a question in English, and no one really knows why.
Research paper of the week
A recent study co-authored by Dan Hendrycks, an advisor to billionaire Elon Musk's AI company xAI, suggests that many AI safety benchmarks are tied to the capabilities of AI systems. That is, as a system's overall performance improves, it scores "better" on the benchmarks, making the model appear "safer."
"Our analysis revealed that many AI safety benchmarks — around half — tend to inadvertently capture latent factors closely tied to general capabilities and raw training compute," wrote the researchers behind the study. "Overall, it is difficult to avoid measuring upstream model capabilities in AI safety benchmarks."
In the study, the researchers propose what they describe as an empirical foundation for making safety measurements in AI safety assessments "more meaningful."
Model of the week

In a technical paper released on Tuesday, Japanese AI company Sakana AI detailed Transformer² ("Transformer-squared"), an AI system that dynamically adjusts to new tasks.
Transformer² first analyzes a task, for example coding, to understand its requirements. It then applies "task-specific adaptations" and optimizations to tailor itself to that task.
The methods behind Transformer² can be applied to open-source models like Meta's Llama, Sakana says, providing a “glimpse into the future where AI models are no longer static.”
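The "analyze first, adapt second" flow described above can be sketched as a two-pass dispatch. This is a loose, hypothetical illustration of the control flow only; the function names and the keyword-based classifier are placeholders, not Sakana's actual method or API:

```python
# Loose sketch of a two-pass "identify the task, then adapt" flow,
# inspired by the description of Transformer². All names here are
# hypothetical placeholders, not Sakana's actual API.

TASK_EXPERTS = {
    "coding": "apply coding-specific weight adjustments",
    "math": "apply math-specific weight adjustments",
    "general": "leave base weights unchanged",
}

def identify_task(prompt: str) -> str:
    """First pass: crudely classify what the prompt asks for."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in ("def ", "function", "bug", "code")):
        return "coding"
    if any(kw in lowered for kw in ("solve", "integral", "equation")):
        return "math"
    return "general"

def answer(prompt: str) -> str:
    """Second pass: respond using the task-adapted 'model'."""
    task = identify_task(prompt)
    adaptation = TASK_EXPERTS[task]
    # A real system would adjust model weights here; we just report.
    return f"[task={task}] {adaptation}"

print(answer("Write a function to reverse a string"))
```

The point of the sketch is the structure: the second pass runs with settings chosen by the first, which is what makes the system "no longer static."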
Grab bag

A small group of developers has released open alternatives to AI-powered search engines like Perplexity and OpenAI's SearchGPT.
One project, called PrAIvateSearch, is available on GitHub under an MIT license, meaning it can be used largely without restrictions. It's powered by open AI models and services, including Alibaba's Qwen models and the search engine DuckDuckGo.
The PrAIvateSearch team says its goal is to "implement similar features to SearchGPT," but in an "open-source, local, and private way." For tips on getting it up and running, check out the team's recent blog post.
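Tools in this category pair a web search with a local model. As a rough illustration of that pattern (not PrAIvateSearch's actual code; `snippets` stands in for real DuckDuckGo results, and the assembled prompt would be fed to a local model such as Qwen), the glue logic can be as simple as:

```python
# Rough sketch of the search-then-generate pattern used by tools
# like PrAIvateSearch. Not the project's actual code: `snippets`
# stands in for real search results, and the assembled prompt
# would be handed to a locally running model.

def build_prompt(question: str, snippets: list[str]) -> str:
    """Pack retrieved snippets and the user question into one prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the search results below.\n"
        f"Search results:\n{context}\n"
        f"Question: {question}\n"
    )

snippets = [
    "PrAIvateSearch is an open-source AI search project.",
    "It runs local models instead of calling a hosted API.",
]
prompt = build_prompt("What is PrAIvateSearch?", snippets)
print(prompt)
```

Because the search step and the generation step both run on the user's machine, no query or result ever has to leave it, which is the "local and private" part of the pitch.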