OpenAI shuts down developer who created AI-powered turrets


OpenAI has cut off a developer who created a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which the rifle beside him quickly began aiming and firing at the nearby walls.

“ChatGPT, we're being attacked from the front left and front right,” the developer told the system in the video. “Respond accordingly.” The speed and accuracy of the rifle's responses was impressive. The device relies on OpenAI's Realtime API to interpret the input and return directions it can understand; all it takes is some simple training for ChatGPT to take a command like “turn left” and translate it into machine-readable instructions.

In a statement to Futurism, OpenAI said it had viewed the video and shut down the developer behind it. “We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry,” the company told the outlet.

The potential to automate lethal weapons is one concern critics have raised about AI technology like that developed by OpenAI. Many of the company's models can interpret audio and visual input to identify a person's surroundings and answer questions about what they are seeing. Autonomous drones are already being developed that could be used on the battlefield to identify and strike targets without human input, which would, of course, be a war crime. There is also the risk that humans become complacent, letting AI make the decisions and making it difficult to hold anyone accountable.

Nor does this concern appear to be merely theoretical. A recent report from The Washington Post found that Israel has used AI to select bombing targets, sometimes indiscriminately. “Soldiers who were poorly trained in using the technology attacked human targets without corroborating Lavender's predictions at all,” the story reads, referring to a piece of AI software. “At certain times the only corroboration required was that the target was a male.”

Proponents of battlefield AI say it will make soldiers safer, allowing them to stay away from the front lines while neutralizing targets such as missile stockpiles or conducting reconnaissance from a distance, and AI-powered drones could strike with precision. But that depends on how the technology is used. Critics argue the United States should instead get better at jamming adversaries' communications systems, so that enemies like Russia have a harder time launching their own drones or nukes.

OpenAI prohibits the use of its products to develop or operate weapons, or to “automate certain systems that may compromise personal safety.” But last year the company announced a partnership with defense technology company Anduril, a maker of AI-powered drones and missiles, to create systems that can defend against drone attacks. The company said the technology would “rapidly synthesize time-sensitive data, reduce the burden on human operators and improve situational awareness.”

It is not hard to understand why tech companies are interested in going to war. The United States spends nearly a trillion dollars a year on defense, and reducing that spending remains an unpopular idea. With President Donald Trump filling his Cabinet with conservative-leaning tech figures like Elon Musk and David Sacks, many defense technology players are expected to benefit significantly, potentially displacing established defense companies like Lockheed Martin.

Although OpenAI blocks its customers from using its AI to build weapons, there are plenty of open-source models that could be put to the same purpose.


