How to steal an AI model without actually hacking anything

Artificial intelligence models are surprisingly stealable, provided you can detect the model's electromagnetic signature. Researchers at North Carolina State University describe such an attack in a recent paper, while repeatedly stressing that they don't actually want to help people steal AI models. All they needed was an electromagnetic probe, a few pre-trained, open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method entails analyzing the electromagnetic emissions given off while a TPU chip is actively running a model.

“It’s quite expensive to build and train a neural network,” study lead author and NC State Ph.D. student Ashley Kurian told Gizmodo in a call. “It’s intellectual property that a company owns, and it requires a significant amount of time and computing resources. For example, ChatGPT is made of billions of parameters, which is kind of the secret. When someone steals it, ChatGPT is theirs. They don’t have to pay for it, and they could also sell it.”

Theft is already a major concern in the AI industry, though it usually takes the form of AI companies training their models on artists’ and writers’ work without permission. That pattern is sparking lawsuits and even tools to help artists fight back by “poisoning” art generators.

“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI processing behavior,” explained Kurian in a statement, calling it “the easy part.” But to decipher the model’s hyperparameters (its architecture and defining details), they had to compare the electromagnetic field data to data captured while other AI models ran on the same kind of chip.
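To make that comparison concrete, here is a minimal sketch in Python of how signature matching might work, assuming captured traces are available as NumPy arrays. The model names and trace data are hypothetical placeholders; this illustrates the general idea, not the researchers’ actual pipeline.

    # Hypothetical sketch of electromagnetic signature matching: compare a
    # captured trace against reference traces recorded while known models
    # ran on the same kind of chip. Not the researchers' actual method.
    import numpy as np

    def normalized(trace):
        """Zero-mean, unit-variance scaling so traces are comparable."""
        return (trace - trace.mean()) / (trace.std() + 1e-12)

    def best_match(captured, references):
        """Return the name of the reference model whose trace correlates
        most strongly with the captured trace."""
        target = normalized(captured)
        scores = {name: float(np.max(np.correlate(target, normalized(ref), mode="valid")))
                  for name, ref in references.items()}
        return max(scores, key=scores.get)

    # Invented example data: two "reference" traces and a noisy capture.
    rng = np.random.default_rng(0)
    refs = {"model_a": rng.standard_normal(4096),
            "model_b": rng.standard_normal(4096)}
    captured = refs["model_b"] + 0.1 * rng.standard_normal(4096)
    print(best_match(captured, refs))  # prints "model_b"

In this toy version the closest whole-trace match simply wins; a real analysis would be far more involved, comparing fine-grained features of the signals rather than entire traces at once.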

In doing so, they “were able to determine the architecture and specific characteristics, known as layer details, we would need to make a copy of the AI model,” explained Kurian, who added that they could do so with “99.91% accuracy.” To pull this off, the researchers had physical access to the chip, both for probing it and for running other models on it. They also worked directly with Google to help the company determine how vulnerable its chips are to this kind of attack.
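As a rough illustration of what making “a copy” could involve once layer details are recovered, here is a short hypothetical Python sketch that assembles a runnable network from a list of extracted layer specifications. The extracted_layers list and its fields are invented for illustration; the paper’s actual output format is not described here.

    # Hypothetical sketch: rebuild a network from recovered layer details.
    # The spec format below is invented, not the paper's actual output.
    import torch.nn as nn

    extracted_layers = [
        {"type": "conv", "in": 3, "out": 16, "kernel": 3},
        {"type": "relu"},
        {"type": "conv", "in": 16, "out": 32, "kernel": 3},
        {"type": "relu"},
    ]

    def rebuild(specs):
        """Assemble a runnable copy of the network from recovered specs."""
        modules = []
        for spec in specs:
            if spec["type"] == "conv":
                modules.append(nn.Conv2d(spec["in"], spec["out"], spec["kernel"]))
            elif spec["type"] == "relu":
                modules.append(nn.ReLU())
        return nn.Sequential(*modules)

    print(rebuild(extracted_layers))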

Kurian speculated that capturing models running on smartphones, for example, would also be possible, but their super-compact design would inherently make it trickier to monitor the electromagnetic signals.

“Side channel attacks on edge devices are nothing new,” Mehmet Sencan, a security researcher at AI standards nonprofit Atlas Computing, told Gizmodo.
