AI blueprints stolen through walls: attack demonstrated and countermeasures proposed

<Professor Jun Han>

From facial recognition on smartphones to self-driving cars, artificial intelligence (AI) has long been treated as a protected “black box.” Now, a joint research team from KAIST and international partner institutions has discovered a new security threat that can “peek” at AI blueprints through walls, and has introduced corresponding defense techniques. The discovery is expected to help strengthen AI security in fields such as autonomous driving, healthcare, and finance.

On the 31st, a research team led by Professor Jun Han of the KAIST Graduate School of Computing, in collaboration with the National University of Singapore (NUS) and Zhejiang University, announced that it had developed an attack system called “ModelSpy” that can remotely extract the structure of an AI model using only a small antenna.

This technology works like a wiretap: it captures and analyzes the minute signals the AI emits while operating, then reconstructs the model’s internal structure. The research team focused on the electromagnetic (EM) emissions of the graphics processing units (GPUs) that perform AI computations.

When the AI performs complex calculations, the GPU emits weak electromagnetic signals. By analyzing the patterns of these signals, the team succeeded in recovering the layer structure and detailed parameter settings of the AI model.
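
The article does not describe ModelSpy’s actual signal-processing pipeline, but the general idea can be sketched. In the minimal Python example below, everything is a hypothetical stand-in: the trace is synthesized rather than captured with an antenna, the layer frequencies are invented, and the spectral-band classifier is a crude proxy for whatever signature matching the real attack uses. It only illustrates the shape of such a pipeline: record an EM trace, split it into per-layer segments, and label each segment by its spectral signature.

```python
# Illustrative sketch of an EM side-channel architecture-inference pipeline.
# NOT ModelSpy's published method: the capture, frequencies, and classifier
# below are hypothetical stand-ins used only to show the overall structure.
import numpy as np
from scipy.signal import spectrogram

RNG = np.random.default_rng(0)

def capture_trace(duration_s=1.0, fs=1_000_000):
    """Stand-in for an antenna/SDR capture: synthesizes a noisy EM trace in
    which each layer type leaves a distinct dominant frequency."""
    layer_freqs = {"conv": 90_000, "pool": 30_000, "dense": 150_000}
    layers = ["conv", "pool", "conv", "pool", "dense"]
    seg = int(duration_s * fs / len(layers))
    t = np.arange(seg) / fs
    trace = np.concatenate(
        [np.sin(2 * np.pi * layer_freqs[l] * t) for l in layers]
    )
    return trace + 0.3 * RNG.standard_normal(trace.size), fs, layers

def infer_layers(trace, fs, n_segments=5):
    """Segments the trace and labels each segment by its peak spectral band,
    a crude proxy for the signature-matching step described in the article."""
    bands = {"pool": (0, 60_000), "conv": (60_000, 120_000),
             "dense": (120_000, 500_000)}
    labels = []
    for seg in np.array_split(trace, n_segments):
        f, _, sxx = spectrogram(seg, fs=fs)
        peak = f[np.argmax(sxx.mean(axis=1))]
        labels.append(next(name for name, (lo, hi) in bands.items()
                           if lo <= peak < hi))
    return labels

trace, fs, truth = capture_trace()
print("true layers:    ", truth)
print("inferred layers:", infer_layers(trace, fs))
```

Even this toy version shows why the leak is plausible: different layer types impose different computational loads on the GPU, so they leave statistically distinguishable fingerprints in the emitted signal.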

In experiments, the attack identified the structure of AI models running on five types of the latest GPUs with high accuracy, even from up to 6 meters away or through a wall. Notably, the team estimated the core structure (the layer sequence of the deep learning model) with up to 97.6% accuracy.

<Using an antenna hidden in a bag, the structure of an AI model can be stolen through a wall>

This technology is considered a significant security threat because, unlike traditional hacking, it does not require direct intrusion into servers or the installation of malware. The attack can be carried out using only a portable antenna small enough to fit in a bag.

Recognizing that this technique could lead to the leakage of a company’s core AI assets, the researchers also proposed defensive measures such as electromagnetic interference and computational obfuscation. The work stands as responsible security research in that it not only demonstrates an attack but also suggests realistic protections.
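
The article only names these defenses, but “computational obfuscation” can be illustrated. The sketch below assumes a PyTorch nn.Sequential model; the ObfuscatedModel wrapper, the decoy_prob parameter, and the throwaway decoy matmul are all hypothetical constructions, not the paper’s actual countermeasure. The idea is to interleave dummy GPU work between real layers so the EM trace no longer maps one-to-one onto the true layer sequence.

```python
# Minimal sketch of "computational obfuscation" as a defense, assuming a
# PyTorch nn.Sequential model. This is illustrative only; the paper's exact
# countermeasures are not described in the article.
import random
import torch
import torch.nn as nn

class ObfuscatedModel(nn.Module):
    def __init__(self, model, decoy_prob=0.5, decoy_size=256):
        super().__init__()
        self.model = model
        self.decoy_prob = decoy_prob
        # Decoy layer whose work is discarded; it exists only to inject
        # layer-like EM activity between real layers.
        self.decoy = nn.Linear(decoy_size, decoy_size)

    def _decoy_work(self, device):
        # Throwaway matmul on the same device as the protected model, so the
        # dummy computation actually runs on the GPU being observed.
        x = torch.randn(32, self.decoy.in_features, device=device)
        _ = self.decoy.to(device)(x)

    def forward(self, x):
        for layer in self.model:  # assumes an nn.Sequential model
            if random.random() < self.decoy_prob:
                self._decoy_work(x.device)
            x = layer(x)
        return x

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
protected = ObfuscatedModel(net)
print(protected(torch.randn(1, 64)).shape)  # torch.Size([1, 10])
```

Because the decoy operations fire at random, an eavesdropper segmenting the EM trace would see a different apparent layer sequence on every inference, at the cost of some extra latency and power.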

“This study shows that AI systems can be exposed to new types of attacks even in physical environments,” said Professor Jun Han. “Establishing a ‘cyber-physical security’ system that includes both hardware and software is essential to protect critical AI infrastructure such as autonomous driving and national facilities.”

Professor Jun Han of the KAIST Graduate School of Computing participated as co-corresponding author. The research was presented at NDSS (Network and Distributed System Security Symposium) 2026, a top academic conference in computer security, where it won the Best Paper Award for its innovation.

Paper Title: Peering in the Black Box: Long-Range and Scalable Model Architecture Snooping via GPU Electromagnetic Sidechannels
