Under Trump, AI scientists are told to remove 'ideological bias' from powerful models


The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI), eliminating mention of "AI safety," "responsible AI," and "AI fairness."

The instructions came as part of an updated cooperative research and development agreement sent to members of the AI Safety Institute consortium in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases matter enormously because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools for authenticating content and tracking its provenance, as well as for labeling synthetic content, signaling less interest in tracking misinformation and deepfakes. It also adds emphasis on putting America first, asking one working group to develop testing tools to expand America's global AI position.

"The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself," says a researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

The researcher believes that ignoring these issues could harm everyday users by allowing algorithms that discriminate based on income or other demographics to go unchecked. "Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly," the researcher says.

"What does it even mean for humans to flourish?" asks another researcher who has worked with the AI Safety Institute in the past.

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled "racist" and "woke." He often cites an incident in which one of Google's models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse, an extremely unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for altering the political leanings of large language models, as reported by WIRED.

A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter's recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk's so-called Department of Government Efficiency (DOGE) has swept through the US government. Some agencies, such as the Department of Education, have archived and deleted documents mentioning DEI. DOGE has also targeted NIST, AISI's parent organization, in recent weeks, and dozens of employees have been fired.
