AI agents are getting better at writing code – and hacking it


The latest artificial intelligence models are not only remarkably good at writing software. New research shows they are getting better at finding bugs in it, too.

AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 large open source codebases. Using a new benchmark called CyberGym, the models identified 17 new bugs, including 15 previously unknown, or "zero-day," vulnerabilities. Many of these flaws are serious.

Many experts expect AI models to become formidable cybersecurity weapons. An AI tool from the startup Xbow has climbed the ranks of HackerOne's bug-hunting leaderboard and currently sits in the top spot. The company recently announced $75 million in new funding.

Dawn Song, a professor at UC Berkeley who led the work, says the coding skills of the latest AI models, combined with their improving reasoning abilities, are starting to change the cybersecurity landscape. "This is a pivotal moment," she said. "It actually exceeded our general expectations."

As the models continue to improve, they will automate the process of both discovering and exploiting security flaws. This could help companies keep their software safe, but it could equally aid hackers breaking into systems. "We didn't even try that hard," Song said. "If we ramped up the budget and allowed the agents to run longer, they could do even better."

The UC Berkeley team tested conventional frontier AI models from OpenAI, Google, and Anthropic, as well as open source offerings from Meta, DeepSeek, and Alibaba, combined with several agent frameworks for finding bugs, including OpenHands, Cybench, and EnIGMA.

The researchers used descriptions of known software vulnerabilities from the 188 projects. They then fed these descriptions to cybersecurity agents powered by frontier AI models to see whether the agents could identify the same flaws on their own by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases themselves.
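The evaluation loop described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the `Task` and agent interfaces are stand-ins invented for this sketch, since CyberGym's real harness runs agents against actual instrumented codebases rather than toy predicates.

```python
# Hypothetical sketch of a CyberGym-style evaluation loop.
# All names here (Task, evaluate, crashes) are illustrative, not the
# benchmark's actual API.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    """One benchmark task: a project plus a known-vulnerability description."""
    project: str
    description: str
    # Stand-in for a sanitizer-instrumented target: returns True if the
    # candidate input triggers a crash (i.e., the PoC "reproduces").
    crashes: Callable[[bytes], bool]

def evaluate(tasks, agent) -> list:
    """Score an agent by which tasks it produces a reproducing PoC for."""
    reproduced = []
    for task in tasks:
        poc: Optional[bytes] = agent(task)  # agent returns a candidate PoC
        if poc is not None and task.crashes(poc):
            reproduced.append(task.project)
    return reproduced

# Toy target: overlong input "overflows" a fixed-size 64-byte buffer.
toy = Task(
    project="demo-lib",
    description="Buffer overflow when parsing inputs longer than 64 bytes",
    crashes=lambda data: len(data) > 64,
)

def naive_agent(task: Task) -> bytes:
    # A trivial "agent" that just tries an oversized input based on the
    # vulnerability description; real agents analyze code and iterate.
    return b"A" * 128

print(evaluate([toy], naive_agent))  # ['demo-lib']
```

The key design point is that success is judged by reproduction, not by a model's claim: a vulnerability only counts if the generated proof-of-concept actually triggers a crash in the target.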

Through this process, the AI tools generated hundreds of proof-of-concept exploits, and from these the researchers identified 15 previously unseen vulnerabilities, plus two that had already been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are dangerous (and valuable) because they can offer a way to hack live systems.

AI nonetheless seems destined to become an important part of the cybersecurity industry. Security researcher Sean Heelan recently discovered a zero-day vulnerability in the widely used Linux kernel with help from OpenAI's reasoning model o3. Last November, Google announced that it had discovered a previously unknown software vulnerability using AI through a program called Project Zero.

Like other parts of the software industry, many cybersecurity firms are enamored with AI's potential. The new work does show that AI can routinely find new flaws, but it also highlights the technology's lingering limitations: the AI systems failed to find most of the vulnerabilities and were stumped by especially complex ones.


