Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang are co-authors of a new paper, "Superintelligence Strategy," that warns the U.S. government against creating a Manhattan Project-style push for superhuman AI, arguing it could quickly spiral out of control around the world. The crux of the argument is that the creation of such a program would invite retaliation or sabotage from adversaries as countries race to field the most powerful AI on the battlefield. Instead, the United States should focus on developing methods, such as cyberattacks, that could disable threatening AI projects.
Schmidt and Wang are big boosters of AI's potential to advance society through applications like drug development and workplace efficiency. Governments, meanwhile, see it as a new frontier in defense, and the two industry leaders worry that countries will end up locked in a race to build ever more dangerous weapons. Much as international agreements reined in the development of nuclear weapons, they believe nation states should slow down AI development and avoid falling prey to an arms race in AI-driven killing machines.
And yet, at the same time, both Schmidt and Wang are building AI products for the defense sector. The former Google CEO's White Stork is building autonomous drone technology, while Wang's Scale AI this week signed a contract with the Department of Defense to create AI "agents" that can assist with military planning and operations. After years of shying away from selling technology that could be used in war, Silicon Valley is now patriotically lining up to collect lucrative defense contracts.
Every defense contractor has a conflict of interest in promoting kinetic warfare, whether or not it is morally justified. Other countries have their own military-industrial complexes, the thinking goes, so we have to maintain ours too. But in the end it is innocent people who suffer while powerful people play chess.
Palmer Luckey, founder of defense tech darling Anduril, has argued that AI-powered targeted strikes are safer than launching a nuke with its enormous impact zone or planting land mines that have no targeting at all. And if other countries are going to keep building AI weapons, we should have the same capabilities as a deterrent. Anduril has been supplying Ukraine with drones that can target and attack Russian military equipment behind enemy lines.
Anduril recently ran an advertising campaign displaying the plain text "Work at Anduril.com" covered by the word "Don't" in giant, graffiti-style spray paint, playing on the idea that working for the military-industrial complex is now countercultural.
https://www.youtube.com/watch?v=gxQRCI3WF8
Schmidt and Wang argue that humans should always remain in the loop on any AI-assisted decision. But recent reporting suggests the Israeli military has been relying on error-prone AI programs to make lethal decisions. Drones have long been a divisive topic, with critics saying soldiers grow complacent when they are not directly in the line of fire and do not see the consequences of their actions firsthand. AI image recognition is notorious for making mistakes, and we are heading toward a point where killer drones fly back and forth striking imprecise targets.
Schmidt and Wang's paper assumes that AI will soon be "superintelligent," capable of performing most tasks as well as or better than humans. That is a big assumption, as the most advanced "thinking" models still produce major gaffes, and companies are flooded with poorly written job applications drafted with AI assistance. These models are crude imitations of humans that often behave in unexpected and strange ways.
Schmidt and Wang are selling a vision of the world and its problems that only they can solve. If AI is going to be so powerful and so dangerous, the government should come to them and buy their products, because they are the responsible actors. In the same vein, OpenAI's Sam Altman has been criticized for making lofty claims about the risks of AI, which some say is an attempt to influence policy in Washington and capture power. It is like saying, "AI is so powerful it could destroy the world, but we have a safe model we are happy to sell you."
Schmidt's warnings are unlikely to carry much weight, as President Trump has rolled back Biden-era guidelines on AI safety and pushed for the United States to become the dominant power in AI. In November, a congressional commission proposed a Manhattan Project for AI, exactly the kind of effort Schmidt warns about, and with people like Sam Altman and Elon Musk holding sway in Washington, it is easy to see the idea gaining momentum. If the paper is right, rivals like China could respond in kind, whether by intentionally degrading models or attacking physical infrastructure. That threat is not unheard of: China has burrowed into major U.S. tech companies such as Microsoft, and others, like Russia, have reportedly used ships to attack undersea fiber-optic cables. Of course, we would do the same to them. It is all mutually assured destruction.
It is unclear how the world could ever reach an agreement to stop playing with these weapons. In that sense, the idea of sabotaging one another's threatening AI projects as a form of defense might be a good thing.