


< Professor Jun Han >
From smartphone facial recognition to autonomous vehicles, Artificial Intelligence (AI) has long been kept as a protected "black box." However, a joint research team from KAIST and international institutions has uncovered a new security threat capable of "peeking" at AI blueprints from behind a wall, and has also presented corresponding defense technologies. The discovery is expected to help strengthen AI security across various sectors, including autonomous driving, healthcare, and finance.
On the 31st, Professor Jun Han’s research team from the KAIST School of Computing announced that, in collaboration with the National University of Singapore (NUS) and Zhejiang University, it had developed "ModelSpy," an attack system capable of extracting AI model structures from a distance using only a small antenna.
This technology works much like a bugging device, capturing and analyzing minute signals emitted while an AI is operational to reconstruct its internal structure. The research team focused on the electromagnetic (EM) waves generated by Graphics Processing Units (GPUs), which handle AI computations.
When an AI performs complex calculations, the GPU emits subtle electromagnetic signals. By analyzing the patterns of these signals, the team successfully restored the layer configurations and detailed parameter settings of the AI model.
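The paper's actual signal-processing pipeline is not detailed in this article, but the general idea of such side-channel analysis can be sketched as template matching: each layer type is assumed to leave a characteristic "signature" in the captured EM trace, and unknown trace segments are matched against known templates. The templates and traces below are entirely synthetic, made up for illustration only.

```python
def correlation(a, b):
    """Pearson correlation between two equal-length traces."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (std_a * std_b)

# Hypothetical per-layer EM "templates" (amplitude samples over time).
TEMPLATES = {
    "conv":  [0.9, 0.7, 0.9, 0.7, 0.9],   # alternating bursts
    "relu":  [0.1, 0.1, 0.3, 0.1, 0.1],   # brief spike
    "dense": [0.2, 0.4, 0.6, 0.8, 1.0],   # rising ramp
}

def classify_segment(segment):
    """Label a trace segment with the best-matching layer template."""
    return max(TEMPLATES, key=lambda name: correlation(segment, TEMPLATES[name]))

# A captured trace split into per-layer segments (noisy synthetic copies).
captured = [
    [0.88, 0.72, 0.91, 0.69, 0.87],
    [0.12, 0.09, 0.31, 0.11, 0.10],
    [0.22, 0.38, 0.61, 0.79, 0.98],
]
recovered = [classify_segment(s) for s in captured]
print(recovered)  # ['conv', 'relu', 'dense']
```

The key point the sketch captures is that the attacker never touches the GPU itself: only the emitted signal pattern is needed to label each computation step.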
Experimental results showed that the structure of AI models could be identified with high accuracy from up to 6 meters away, or through walls, across five types of the latest GPUs. Notably, the team estimated the layer sequence of the deep learning model, its core structure, with an accuracy of up to 97.6%.

< AI model structures can be stolen through walls using an antenna hidden in a bag >
This technology is considered a significant security threat because, unlike traditional hacking, it does not require direct server infiltration or malware installation. An attack can be carried out using only a portable antenna small enough to fit in a bag.
Recognizing that this technology could lead to the leakage of a company's core AI assets, the research team also proposed defensive measures, such as electromagnetic interference and computational obfuscation. This is being hailed as a responsible security study that goes beyond demonstrating an attack to suggesting realistic protection methods.
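The article describes the proposed defenses only at a high level. As a rough, hypothetical illustration of what "computational obfuscation" means, one could break the link between a model's true layer sequence and its observable execution pattern by interleaving dummy operations at random points; the scheduler below is an assumption for illustration, not the team's actual defense.

```python
import random

REAL_LAYERS = ["conv", "relu", "dense"]

def obfuscated_schedule(layers, dummy_prob=0.5, rng=None):
    """Return an execution order with decoy ops interleaved at random.

    Real layers still run in their original order; only the observable
    emission pattern changes from run to run, frustrating template matching.
    """
    rng = rng or random.Random()
    schedule = []
    for layer in layers:
        while rng.random() < dummy_prob:
            schedule.append("dummy")  # decoy computation, result discarded
        schedule.append(layer)
    return schedule

sched = obfuscated_schedule(REAL_LAYERS, rng=random.Random(0))
# Stripping the decoys recovers the real layer order unchanged.
print([op for op in sched if op != "dummy"])  # ['conv', 'relu', 'dense']
```

The trade-off such a defense must manage is overhead: every dummy operation costs GPU time, so the dummy probability is a tuning knob between security and performance.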
"This research demonstrates that AI systems can be exposed to new types of attacks even in physical environments," said Professor Jun Han. "To protect critical AI infrastructure, such as autonomous driving and national facilities, it is essential to establish 'cyber-physical security' systems that encompass both hardware and software."

< Research Image (AI-generated) >
Professor Jun Han of the KAIST School of Computing participated as a co-corresponding author. The study was presented at the NDSS (Network and Distributed System Security Symposium) 2026, a top-tier academic conference in computer security, where it received the Distinguished Paper Award in recognition of its innovation.
Paper Title: Peering Inside the Black-Box: Long-Range and Scalable Model Architecture Snooping via GPU Electromagnetic Side-Channel