How to steal AI models without hacking anything


Artificial intelligence models can be surprisingly easy to steal, provided you can sniff out a model's electromagnetic signature. Researchers from North Carolina State University describe such a technique in a new paper, while repeatedly emphasizing that they don't actually want to help people attack neural networks. All they need is an electromagnetic probe, several pre-trained open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method involves analyzing electromagnetic radiation while a TPU chip is actively running.

“It's very expensive to build and train a neural network,” said the study's lead author, NC State Ph.D. student Ashley Kurian, in a call with Gizmodo. “It's intellectual property that a company owns, and it takes a significant amount of computing time and resources. For example, ChatGPT is made of billions of parameters, which is kind of the secret. When someone steals it, ChatGPT is theirs. You know, they don't have to pay for it, and they could also sell it.”

Theft is already a major concern in the AI world, though usually in the reverse direction: AI developers train their models on copyrighted works without permission from their human creators. This overwhelming pattern is sparking lawsuits and even tools to help artists fight back by “poisoning” art generators.

“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI processing behavior,” Kurian explained in a statement, calling it “the easy part.” But in order to decipher the model's hyperparameters, meaning its architecture and defining details, they had to compare the electromagnetic field data against data captured while other AI models ran on the same kind of chip.
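To make that comparison concrete, here is a minimal sketch of the signature-matching idea, assuming the attacker has already recorded reference electromagnetic traces for candidate layer configurations on the same kind of chip. The function, configurations, and data below are hypothetical illustrations, not the researchers' actual method or code.

```python
# Hypothetical sketch: match a measured electromagnetic trace segment against
# reference traces recorded while known candidate layer configurations ran on
# the same kind of chip. All names and data here are invented for illustration.
import numpy as np

def best_matching_config(measured, references):
    """Return the candidate layer configuration whose reference trace
    correlates most strongly with the measured trace segment."""
    best_config, best_score = None, float("-inf")
    for config, ref in references.items():
        n = min(len(measured), len(ref))
        score = np.corrcoef(measured[:n], ref[:n])[0, 1]  # Pearson correlation
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Hypothetical usage: three candidate convolution layers (type, filters, kernel)
rng = np.random.default_rng(0)
references = {
    ("conv", 32, 3): rng.normal(size=1_000),
    ("conv", 64, 3): rng.normal(size=1_000),
    ("conv", 64, 5): rng.normal(size=1_000),
}
measured = references[("conv", 64, 3)] + 0.1 * rng.normal(size=1_000)
print(best_matching_config(measured, references))
```

In this toy version the "signatures" are just noisy copies of random reference signals; the point is only that a per-layer trace can be attributed to whichever known configuration it most resembles.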

In doing so, they “were able to determine the architecture and specific characteristics, known as layer details, we would need to make a copy of the AI model,” explained Kurian, who added that they could do so with “99.91% accuracy.” To pull this off, the researchers had physical access to the chip, both for probing and for running the other models. They also worked directly with Google to help the company determine how vulnerable its chips are to such attacks.
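As an illustration of that last step, a hedged sketch: assuming the recovered layer details arrive as a simple list of hyperparameters, they could be assembled into an untrained surrogate model skeleton. The framework (TensorFlow/Keras) and every layer spec below are assumptions for illustration; the paper's pipeline is not shown here, and the weights would still have to be obtained separately.

```python
# Hypothetical sketch: assemble recovered layer hyperparameters into a
# surrogate model skeleton. The layer specs are invented examples; real
# extracted details would come from the electromagnetic analysis.
import tensorflow as tf

recovered_layers = [
    {"type": "conv", "filters": 32, "kernel": 3},
    {"type": "conv", "filters": 64, "kernel": 3},
    {"type": "dense", "units": 10},
]

def build_surrogate(layer_specs, input_shape=(224, 224, 3)):
    """Rebuild an untrained model skeleton from recovered hyperparameters."""
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    for spec in layer_specs:
        if spec["type"] == "conv":
            model.add(tf.keras.layers.Conv2D(
                spec["filters"], spec["kernel"], activation="relu"))
        elif spec["type"] == "dense":
            model.add(tf.keras.layers.Flatten())
            model.add(tf.keras.layers.Dense(spec["units"]))
    return model

build_surrogate(recovered_layers).summary()  # prints the reconstructed skeleton
```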

Kurian speculated that it might also be possible to capture models running on smartphones, but their ultra-compact design inherently makes it trickier to monitor the electromagnetic signals.

“Side-channel attacks on edge devices are nothing new,” Mehmet Sencan, a security researcher at the AI standards nonprofit Atlas Computing, told Gizmodo. But this particular technique “of extracting entire model architecture hyperparameters is significant.” Because AI hardware “performs inference in plaintext,” Sencan explained, “anyone deploying their models on edge or in any server that is not physically secured would have to assume their architectures can be extracted through extensive probing.”


