Google now says it's OK to use AI for weapons and surveillance


Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI for use in weapons or surveillance technology. Previously, those guidelines included a section titled "Applications we will not pursue," which is absent from the current version of the document.

In its place, there is now a section titled "Responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."

That's a far broader commitment than the specific ones the company made as recently as the end of last month, when the previous version of its AI principles was still live on its website. For instance, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop tech that violates "internationally accepted norms."

A screenshot showing the previous version of Google's AI principles.

Google

When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, write that AI's emergence as a "general-purpose technology" necessitated a policy change.

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two wrote. "… Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."

When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven. This was a controversial government contract that, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees left the company in protest of the contract, and thousands more signed a petition in opposition. When Google eventually published its new guiding principles, CEO Sundar Pichai reportedly told staff his hope was that they would stand "the test of time."

By 2021, however, Google had begun pursuing military contracts again, reportedly making an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.


