KAIST Develops Wearable Ultrasound Sensor Enabling Noninvasive Treatment Without Surgery
<(From left) Professor Hyunjoo Jenny Lee, Dr. Sang-Mok Lee, and Ph.D. candidate Xiaojia Liang>
Conventional wearable ultrasound sensors have been limited by low power output and poor structural stability, making them unsuitable for high-resolution imaging or therapeutic applications. A KAIST research team has now overcome these challenges by developing a flexible ultrasound sensor with statically adjustable curvature. This breakthrough opens new possibilities for wearable medical devices that can capture precise, body-conforming images and perform noninvasive treatments using ultrasound energy.
KAIST (President Kwang Hyung Lee) announced on November 12 that a research team led by Professor Hyunjoo Jenny Lee from the School of Electrical Engineering developed a “flex-to-rigid (FTR)” capacitive micromachined ultrasonic transducer (CMUT), fabricated with a semiconductor wafer (MEMS) process, that can transition freely between flexibility and rigidity.
The team incorporated a low-melting-point alloy (LMPA) inside the device. When an electric current is applied, the metal melts, allowing the structure to deform freely; upon cooling, it solidifies again, fixing the sensor into the desired curved shape.
Conventional polymer-membrane-based CMUTs have suffered from a low elastic modulus, resulting in insufficient acoustic power and blurred focal points during vibration. They have also lacked curvature control, limiting precise focusing on target regions.
Professor Lee’s team designed an FTR structure that combines a rigid silicon substrate with a flexible elastomer bridge, achieving both high output performance and mechanical flexibility. The embedded LMPA enables dynamic adjustment and fixation of the transducer’s shape by toggling between solid and liquid states through electrical control.
As a result, the new sensor can automatically focus ultrasound on a specific region according to its curvature—without requiring separate beamforming electronics—and maintains stable electrical and acoustic performance even after repeated bending.
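For a curved transducer, the geometric focus sits at the center of curvature, so the bending radius directly sets the focal depth. A minimal sketch of that relationship; the numbers are illustrative and not taken from the paper:

```python
def focal_depth_mm(radius_of_curvature_mm: float) -> float:
    """For a spherically curved aperture, the geometric focus lies at the
    center of curvature, so focal depth ~= radius of curvature."""
    return radius_of_curvature_mm

def f_number(radius_of_curvature_mm: float, aperture_mm: float) -> float:
    """f-number = focal depth / aperture width; lower values mean tighter focus."""
    return radius_of_curvature_mm / aperture_mm

# Illustrative only: a 20 mm bending radius with a 10 mm aperture gives
# an f/2 focus roughly 20 mm below the skin surface.
print(focal_depth_mm(20.0), f_number(20.0, 10.0))   # 20.0 2.0
```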
The device’s acoustic output reaches the level of low-intensity focused ultrasound (LIFU), which can gently stimulate tissues to induce therapeutic effects without causing damage. Experiments on animal models demonstrated that noninvasive spleen stimulation reduced inflammation and improved mobility in arthritis models.
In the future, the team plans to extend this technology to a two-dimensional (2D) array structure—arranging multiple sensors in a grid—to enable simultaneous high-resolution ultrasound imaging and therapeutic applications, paving the way for a new generation of smart medical systems.
Because the technology is compatible with semiconductor fabrication processes, it can be mass-produced and adapted for wearable and home-use ultrasound systems.
This study was conducted by Sang-Mok Lee, Xiaojia Liang (co–first authors), and their collaborators under the supervision of Professor Hyunjoo Jenny Lee. The results were published online on October 23 in npj Flexible Electronics (Impact Factor: 15.5).
Paper title: “Flexible ultrasound transducer array with statically adjustable curvature for anti-inflammatory treatment”
DOI: 10.1038/s41528-025-00484-7
The research was supported by the Bio & Medical Technology Development Program (Brain Science Convergence Research Program) of the Ministry of Science and ICT (MSIT) and the Korea Medical Device Development Fund, a multi-ministerial R&D initiative.
KAIST Researchers Uncover Critical Security Flaws in Global Mobile Networks
Breakthrough Discovery Reveals How Attackers Can Remotely Manipulate User Data Without Physical Proximity
DAEJEON, South Korea — In an era when recent cyberattacks on major telecommunications providers have highlighted the fragility of mobile security, researchers at the Korea Advanced Institute of Science and Technology have identified a class of previously unknown vulnerabilities that could allow remote attackers to compromise cellular networks serving billions of users worldwide.
The research team, led by Professor Yongdae Kim of KAIST's School of Electrical Engineering, discovered that unauthorized attackers could remotely manipulate internal user information in LTE core networks — the central infrastructure that manages authentication, internet connectivity, and data transmission for mobile devices and IoT equipment.
The findings, presented at the 32nd ACM Conference on Computer and Communications Security in Taipei, Taiwan, earned the team a Distinguished Paper Award, one of only 30 such honors selected from approximately 2,400 submissions to one of the field's most prestigious venues.
A New Class of Vulnerability
The vulnerability class, which the researchers termed "Context Integrity Violation" (CIV), represents a fundamental breach of a basic security principle: unauthenticated messages should not alter internal system states. While previous security research has primarily focused on "downlink" attacks — where networks compromise devices — this study examined the less-scrutinized "uplink" security, where devices can attack core networks.
"The problem stems from gaps in the 3GPP standards," Professor Kim explained, referring to the international body that establishes operational rules for mobile networks. "While the standards prohibit processing messages that fail authentication, they lack clear guidance on handling messages that bypass authentication procedures entirely."
The team developed CITesting, the world's first systematic tool for detecting these vulnerabilities, capable of examining between 2,802 and 4,626 test cases — a vast expansion from the 31 cases covered by the only previous comparable research tool, LTEFuzz.
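The core check is conceptually simple even though covering thousands of cases is not: deliver uplink messages that never passed authentication and verify that the core network's stored state for the device did not change. The sketch below illustrates that idea only; it is not the CITesting implementation, and the context fields and interfaces are simplified placeholders.

```python
from dataclasses import dataclass

@dataclass
class UeContext:
    # Simplified stand-in for the per-device state an MME maintains.
    security_activated: bool
    ip_allocated: bool
    tracking_area: str

def snapshot(ctx: UeContext) -> tuple:
    return (ctx.security_activated, ctx.ip_allocated, ctx.tracking_area)

def find_civ(core, unauthenticated_msgs):
    """Flag a Context Integrity Violation (CIV) whenever an unauthenticated
    uplink message changes the stored UE context. `core` is a hypothetical
    test harness wrapping the core network under test."""
    findings = []
    for msg in unauthenticated_msgs:
        before = snapshot(core.get_context())
        core.deliver_uplink(msg)              # message bypasses authentication
        after = snapshot(core.get_context())
        if before != after:                   # state changed -> integrity violated
            findings.append((msg, before, after))
    return findings
```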
Widespread Impact Confirmed
Testing four major LTE core network implementations — both open-source and commercial systems — revealed that all contained CIV vulnerabilities. The results showed:
Open5GS: 2,354 detections, 29 unique vulnerabilities
srsRAN: 2,604 detections, 22 unique vulnerabilities
Amarisoft: 672 detections, 16 unique vulnerabilities
Nokia: 2,523 detections, 59 unique vulnerabilities
The research team demonstrated three critical attack scenarios: denial of service by corrupting network information to block reconnection; IMSI exposure by forcing devices to retransmit user identification numbers in plaintext; and location tracking by capturing signals during reconnection attempts.
Unlike traditional attacks requiring fake base stations or signal interference near victims, these attacks work remotely through legitimate base stations, affecting anyone within the same MME (Mobility Management Entity) coverage area as the attacker — potentially spanning entire metropolitan regions.
Industry Response and Future Implications
Following responsible disclosure protocols, the research team notified affected vendors. Amarisoft deployed patches, and Open5GS integrated the team's fixes into its official repository. Nokia, however, stated it would not issue patches, asserting compliance with 3GPP standards and declining to comment on whether telecommunications companies currently use the affected equipment.
"Uplink security has been relatively neglected due to testing difficulties, implementation diversity, and regulatory constraints," Professor Kim noted. "Context integrity violations can pose serious security risks."
The research team, which included KAIST doctoral students Mincheol Son and Kwangmin Kim as co-first authors, along with Beomseok Oh and Professor CheolJun Park of Kyung Hee University, plans to extend their validation to 5G and private 5G environments. The tools could prove particularly critical for industrial and infrastructure networks, where breaches could have consequences ranging from communication disruption to exposure of sensitive military or corporate data.
The research was supported by the Ministry of Science and ICT through the Institute for Information & Communications Technology Planning & Evaluation, as part of a project developing security technologies for 5G private networks.
With mobile networks forming the backbone of modern digital infrastructure, the discovery underscores the ongoing challenge of securing systems designed in an era when such sophisticated attacks were barely conceivable — and the urgent need for updated standards to address them.
KAIST Develops Multimodal AI That Understands Text and Images Like Humans
<(From left) M.S. candidate Soyoung Choi, Ph.D. candidate Seong-Hyeon Hwang, and Professor Steven Euijong Whang>
Just as human eyes tend to focus on pictures before reading the accompanying text, multimodal artificial intelligence (AI)—which processes multiple types of sensory data at once—tends to depend more heavily on certain types of data. KAIST researchers have now developed a multimodal AI training technology that teaches models to weigh text and images evenly, yielding far more accurate predictions.
KAIST (President Kwang Hyung Lee) announced on the 14th that a research team led by Professor Steven Euijong Whang from the School of Electrical Engineering has developed a novel data augmentation method that enables multimodal AI systems—those that must process multiple data types simultaneously—to make balanced use of all input data.
Multimodal AI combines various forms of information, such as text and video, to make judgments. However, AI models often show a tendency to rely excessively on one particular type of data, resulting in degraded prediction performance.
To solve this problem, the research team deliberately trained AI models using mismatched or incongruent data pairs. By doing so, the model learned to rely on all modalities—text, images, and even audio—in a balanced way, regardless of context.
The team further improved performance stability by incorporating a training strategy that compensates for low-quality data while emphasizing more challenging examples. The method is not tied to any specific model architecture and can be easily applied to various data types, making it highly scalable and practical.
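A minimal sketch of the misalignment idea and a stand-in for the hardness weighting described above; this is an illustration under simplified assumptions, not the authors' MIDAS code, and the ratio and weights are arbitrary placeholders:

```python
import random

def make_misaligned_batch(images, texts, labels, misalign_ratio=0.3):
    """Deliberately pair some images with text taken from a different sample.
    Misaligned pairs keep the model from leaning on a single dominant modality.
    (Illustrative only; label handling in the actual method may differ.)"""
    batch, n = [], len(images)
    for i in range(n):
        if random.random() < misalign_ratio:
            j = random.randrange(n)
            if j == i:                      # make sure the text really is mismatched
                j = (j + 1) % n
            batch.append((images[i], texts[j], labels[i], True))
        else:
            batch.append((images[i], texts[i], labels[i], False))
    return batch

def weighted_loss(per_example_loss, is_misaligned, extra_weight=2.0):
    """Stand-in for emphasizing more challenging examples: up-weight the
    deliberately misaligned (harder) pairs during training."""
    return per_example_loss * (extra_weight if is_misaligned else 1.0)
```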
<Model Prediction Changes with a Data-Centric Multimodal AI Training Framework>
Professor Steven Euijong Whang explained, “Improving AI performance is not just about changing model architectures or algorithms—it’s much more important how we design and use the data for training.” He continued, “This research demonstrates that designing and refining the data itself can be an effective approach to help multimodal AI utilize information more evenly, without becoming biased toward a specific modality such as images or text.”
The study was co-led by doctoral student Seong-Hyeon Hwang and master’s student Soyoung Choi, with Professor Steven Euijong Whang serving as the corresponding author. The results will be presented at NeurIPS 2025 (Conference on Neural Information Processing Systems), the world’s premier conference in the field of AI, which will be held this December in San Diego, USA, and Mexico City, Mexico.
※ Paper title: “MIDAS: Misalignment-based Data Augmentation Strategy for Imbalanced Multimodal Learning,” Original paper: https://arxiv.org/pdf/2509.25831
The research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the projects “Robust, Fair, and Scalable Data-Centric Continual Learning” (RS-2022-II220157) and “AI Technology for Non-Invasive Near-Infrared-Based Diagnosis and Treatment of Brain Disorders” (RS-2024-00444862).
Semiconductor Leadership Spotlighted in Nature Sister Journal
<(From left) Prof. Shinhyun Choi, Prof. Young Gyu Yoon, and Prof. Seunghyub Yoo from the School of Electrical Engineering, and Prof. Kyung Min Kim from the Department of Materials Science and Engineering>
KAIST (President Kwang Hyung Lee) announced on the 5th of September that its semiconductor research and education achievements were highlighted on August 18 in Nature Reviews Electrical Engineering, a sister journal of the world-renowned scientific journal Nature.
Title: Semiconductor-related research and education at KAIST
DOI: 10.1038/s44287-025-00204-3
This special "Focus" article provides a detailed look at KAIST's leadership in next-generation semiconductor research, talent development, and global industry-academia collaboration, presenting a future blueprint for Korea's semiconductor industry. Editor Silvia Conti personally conducted the interviews with the participating KAIST professors, including Kyung Min Kim from the Department of Materials Science and Engineering and Young Gyu Yoon, Shinhyun Choi, Sung-Yool Choi, and Seunghyub Yoo from the School of Electrical Engineering.
KAIST operates educational programs such as the School of Electrical Engineering, the Department of Semiconductor Systems Engineering, and the Graduate School of Semiconductor Engineering. It is leading next-generation semiconductor research in areas like neuromorphic computing, in-memory computing, and 2D new material-based devices. Building on this foundation, researchers are developing new architectures and devices that transcend the limitations of existing silicon, driving innovation in various application fields such as artificial intelligence, robotics, and medicine.
Notably, research on implementing biological functions like synapses and neurons into hardware platforms using new types of memory such as RRAM and PRAM is gaining international attention. This work opens up possibilities for applications in robots, edge computing, and on-sensor AI systems.
Furthermore, KAIST has operated EPSS (Samsung Advanced Human Resources Training Program) and KEPSI (SK Hynix Semiconductor Advanced Human Resources Training Program) based on long-standing partnerships with Samsung Electronics and SK Hynix. Graduate students in these programs receive full scholarships and are guaranteed employment after graduation. The Department of Semiconductor Systems Engineering, newly established in 2022, selects 100 undergraduate students each year to provide systematic education. Additionally, the KAIST–Samsung Electronics Industry-Academia Cooperation Center, which involves more than 70 labs annually, serves as a long-term hub for joint industry-academia research, contributing to solving critical issues within the industry.
The article emphasizes KAIST's growth beyond a simple research institution into an international research hub. KAIST is enhancing diversity and inclusivity by expanding the hiring of female faculty and establishing a Global Talent Visa Center to support foreign professors and students, attracting outstanding talent from around the world. As a core university within the Daedeok Research Complex (Daedeok Innopolis), it serves as the heart of "Korea's Silicon Valley."
KAIST researchers predict that the future of semiconductor technology is not in simple device miniaturization but in a convergent approach involving neuromorphic technology, 3D packaging technology, and AI applications. This article shows that KAIST's strategic research direction and leadership are gaining attention from both the global academic and industrial communities.
Professor Kyung Min Kim stated, "I am very pleased that KAIST's next-generation semiconductor research and talent development strategy has been widely publicized to domestic and international academia and industry through this article, and we will continue to contribute to the development of future semiconductor technology with innovative convergence research."
KAIST President Kwang Hyung Lee remarked, "Being highlighted for our semiconductor research and education achievements in a world-renowned science journal is a testament to the dedication and pioneering spirit of our university members. I am delighted that KAIST's growth as a global research hub is gaining recognition, and we will continue to expand industry-academia collaboration to lead next-generation semiconductor innovation and play a key role in helping Korea become a future semiconductor powerhouse."
KAIST develops world’s most sensitive light-powered photodetector—20 times more sensitive, operating without electricity
<(From left) Ph.D. candidate Jaeha Hwang, Ph.D. candidate Jungi Song, and Professor Kayoung Lee from the School of Electrical Engineering>
Silicon semiconductors used in existing photodetectors have low light responsivity, and the two-dimensional semiconductor MoS₂ (molybdenum disulfide) is so thin that doping processes to control its electrical properties are difficult, limiting the realization of high-performance photodetectors. The KAIST research team has overcome this technical limitation and developed the world’s highest-performing self-powered photodetector, which operates without electricity in environments with a light source. This paves the way for an era where precise sensing is possible without batteries in wearable devices, biosignal monitoring, IoT devices, autonomous vehicles, and robots, as long as a light source is present.
KAIST (President Kwang Hyung Lee) announced on the 14th of August that Professor Kayoung Lee’s research team from the School of Electrical Engineering has developed a self-powered photodetector that operates without external power supply. This sensor demonstrated a sensitivity up to 20 times higher than existing products, marking the highest performance level among comparable technologies reported to date.
Professor Kayoung Lee’s team fabricated a “PN junction structure” photodetector capable of generating electrical signals on its own in environments with light, even without an electrical energy supply, by introducing a “van der Waals bottom electrode” that makes semiconductors extremely sensitive to electrical signals without doping.
First, a “PN junction” is a structure formed by joining p-type (hole-rich) and n-type (electron-rich) materials in a semiconductor. This structure causes current to flow in one direction when exposed to light, making it a key component in photodetectors and solar cells.
Normally, to create a proper PN junction, a process called “doping” is required, which involves deliberately introducing impurities into the semiconductor to alter its electrical properties. However, two-dimensional semiconductors such as MoS₂ are only a few atoms thick, so doping in the conventional way can damage the structure or reduce performance, making it difficult to create an ideal PN junction.
To overcome these limitations and maximize device performance, the research team designed a new device structure incorporating two key technologies: the “van der Waals electrode” and the “partial gate.”
The “partial gate” structure applies an electrical signal only to part of the two-dimensional semiconductor, controlling one side to behave like p-type and the other like n-type. This allows the device to function electrically like a PN junction without doping.
Furthermore, considering that conventional metal electrodes can chemically bond strongly to the semiconductor and damage its lattice structure, the “van der Waals bottom electrode” was attached gently using van der Waals forces. This preserved the original structure of the two-dimensional semiconductor while ensuring effective electrical signal transfer.
This innovative approach secured both structural stability and electrical performance, enabling the realization of a PN junction in thin two-dimensional semiconductors without damaging their structure.
Thanks to this innovation, the team succeeded in implementing a high-performance PN junction without doping. The device can generate electrical signals with extreme sensitivity as long as there is light, even without an external power source. Its light detection sensitivity (responsivity) exceeds 21 A/W, more than 20 times higher than powered conventional sensors, 10 times higher than silicon-based self-powered sensors, and over twice as high as existing MoS₂ sensors. This level of sensitivity means it can be applied immediately to high-precision sensors capable of detecting biosignals or operating in dark environments.
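Responsivity relates photocurrent to incident optical power (R = I_photo / P_optical), so the reported figure translates directly into usable signal current. A quick illustrative calculation; the incident power and the ~1 A/W comparison value are assumptions for illustration, not measurements from the paper:

```python
def photocurrent_uA(responsivity_A_per_W: float, optical_power_uW: float) -> float:
    """I_photo = R * P_optical; responsivity in A/W, power in uW, result in uA."""
    return responsivity_A_per_W * optical_power_uW

# At the reported responsivity of 21 A/W, 1 uW of incident light yields ~21 uA
# of photocurrent, versus ~1 uA for a sensor near 1 A/W.
print(photocurrent_uA(21.0, 1.0))   # 21.0
```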
Professor Kayoung Lee stated that they “have achieved a level of sensitivity unimaginable in silicon sensors, and although two-dimensional semiconductors are too thin for conventional doping processes, [they] succeeded in implementing a PN junction that controls electrical flow without doping.” She added, “This technology can be used not only in sensors but also in key components that control electricity inside smartphones and electronic devices, providing a foundation for miniaturization and self-powered operation of next-generation electronics.”
<Jaeha Hwang and Jungi Song, experiment in progress>
This research, with doctoral students Jaeha Hwang and Jungi Song as co-first authors, was published online on July 26 in Advanced Functional Materials (IF 19), a leading journal in materials science.
※ Paper title: Gated PN Junction in Ambipolar MoS₂ for Superior Self-Powered Photodetection
※ DOI: https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.202510113
Meanwhile, this work was supported by the National Research Foundation of Korea, the Korea Basic Science Institute, Samsung Electronics, and the Korea Institute for Advancement of Technology.
KAIST Develops World’s First Wireless OLED Contact Lens for Retinal Diagnostics
<ID-style photograph against a laboratory background featuring an OLED contact lens sample (center), flanked by the principal authors (left: Professor Seunghyup Yoo; right: Dr. Jee Hoon Sim). Above them (from top to bottom) are: Professor Se Joon Woo, Professor Sei Kwang Hahn, Dr. Su-Bon Kim, and Dr. Hyeonwook Chae>
Electroretinography (ERG) is an ophthalmic diagnostic method used to determine whether the retina is functioning normally. It is widely employed for diagnosing hereditary retinal diseases or assessing retinal function decline.
A team of Korean researchers has developed a next-generation wireless ophthalmic diagnostic technology that replaces the existing stationary, darkroom-based retinal testing method by incorporating an “ultrathin OLED” into a contact lens. This breakthrough is expected to have applications in diverse fields such as myopia treatment, ocular biosignal analysis, augmented-reality (AR) visual information delivery, and light-based neurostimulation.
On the 12th, KAIST (President Kwang Hyung Lee) announced that a research team led by Professor Seunghyup Yoo from the School of Electrical Engineering, in collaboration with Professor Se Joon Woo of Seoul National University Bundang Hospital (Director Jeong-Han Song), Professor Sei Kwang Hahn of POSTECH (President Sung-Keun Kim) and CEO of PHI Biomed Co., and the Electronics and Telecommunications Research Institute (ETRI, President Seungchan Bang) under the National Research Council of Science & Technology (NST, Chairman Youngshik Kim), has developed the world’s first wireless contact lens-based wearable retinal diagnostic platform using organic light-emitting diodes (OLEDs).
<Figure 1. Schematic and photograph of the wireless OLED contact lens>
This technology enables ERG simply by wearing the lens, eliminating the need for large specialized light sources and dramatically simplifying the conventional, complex ophthalmic diagnostic environment.
Traditionally, ERG requires the use of a stationary Ganzfeld device in a dark room, where patients must keep their eyes open and remain still during the test. This setup imposes spatial constraints and can lead to patient fatigue and compliance challenges.
To overcome these limitations, the joint research team integrated an ultrathin flexible OLED —approximately 12.5 μm thick, or 6–8 times thinner than a human hair— into a contact lens electrode for ERG. They also equipped it with a wireless power receiving antenna and a control chip, completing a system capable of independent operation.
For power transmission, the team adopted a wireless power transfer method using a 433 MHz resonant frequency suitable for stable wireless communication. This was also demonstrated in the form of a wireless controller embedded in a sleep mask, which can be linked to a smartphone —further enhancing practical usability.
<Figure 2. Schematic of the electroretinography (ERG) testing system using a wireless OLED contact lens and an example of an actual test in progress>
While most smart contact lens–type light sources developed for ocular illumination have used inorganic LEDs, these rigid devices emit light almost from a single point, which can lead to excessive heat accumulation and thus limit the usable light intensity. In contrast, OLEDs are areal light sources and were shown to induce retinal responses even under low luminance conditions. In this study, under a relatively low luminance* of 126 nits, the OLED contact lens successfully induced stable ERG signals, producing diagnostic results equivalent to those obtained with existing commercial light sources.
*Luminance: A value indicating how brightly a surface or screen emits light; for reference, the luminance of a smartphone screen is about 300–600 nits (can exceed 1000 nits at maximum).
Animal tests confirmed that the surface temperature of a rabbit’s eye wearing the OLED contact lens remained below 27°C, avoiding corneal heat damage, and that the light-emitting performance was maintained even in humid environments—demonstrating its effectiveness and safety as an ERG diagnostic tool in real clinical settings.
Professor Seunghyup Yoo stated that “integrating the flexibility and diffusive light characteristics of ultrathin OLEDs into a contact lens is a world-first attempt,” and that “this research can help expand smart contact lens technology into on-eye optical diagnostic and phototherapeutic platforms, contributing to the advancement of digital healthcare technology.”
< Wireless operation of the OLED contact lens >
Jee Hoon Sim, Hyeonwook Chae, and Su-Bon Kim, PhD researchers at KAIST, played a key role as co-first authors alongside Dr. Sangbaie Shin of PHI Biomed Co. Corresponding authors are Professor Seunghyup Yoo (School of Electrical Engineering, KAIST), Professor Sei Kwang Hahn (Department of Materials Science and Engineering, POSTECH), and Professor Se Joon Woo (Seoul National University Bundang Hospital). The results were published online in the internationally renowned journal ACS Nano on May 1st.
● Paper title: Wireless Organic Light-Emitting Diode Contact Lenses for On-Eye Wearable Light Sources and Their Application to Personalized Health Monitoring
● DOI: https://doi.org/10.1021/acsnano.4c18563
● Related video clip: http://bit.ly/3UGg6R8
< Close-up of the OLED contact lens sample >
Is 24-hour health monitoring possible with ambient light energy?
<(From left) Ph.D. candidate Youngmin Sim, Ph.D. candidate Do Yun Park, Dr. Chanho Park, and Professor Kyeongha Kwon>
Miniaturization and weight reduction of medical wearable devices for continuous health monitoring such as heart rate, blood oxygen saturation, and sweat component analysis remain major challenges. In particular, optical sensors consume a significant amount of power for LED operation and wireless transmission, requiring heavy and bulky batteries. To overcome these limitations, KAIST researchers have developed a next-generation wearable platform that enables 24-hour continuous measurement by using ambient light as an energy source and optimizing power management according to the power environment.
KAIST (President Kwang Hyung Lee) announced on the 30th that Professor Kyeongha Kwon's team from the School of Electrical Engineering, in collaboration with Dr. Chanho Park’s team at Northwestern University in the U.S., has developed an adaptive wireless wearable platform that reduces battery load by utilizing ambient light.
To address the battery issue of medical wearable devices, Professor Kyeongha Kwon’s research team developed an innovative platform that utilizes ambient natural light as an energy source. This platform integrates three complementary light energy technologies.
<Figure 1. The wireless wearable platform minimizes the energy required for light sources through i) Photometric system that directly utilizes ambient light passing through windows for measurements, ii) Photovoltaic system that receives power from high-efficiency photovoltaic cells and wireless power receiver coils, and iii) Photoluminescent system that stores light using photoluminescent materials and emits light in dark conditions to support the two aforementioned systems. In-sensor computing minimizes power consumption by wirelessly transmitting only essential data. The adaptive power management system efficiently manages power by automatically selecting the optimal mode among 11 different power modes through a power selector based on the power supply level from the photovoltaic system and battery charge status.>
The first core technology, the Photometric Method, is a technique that adaptively adjusts LED brightness depending on the intensity of the ambient light source. By combining ambient natural light with LED light to maintain a constant total illumination level, it automatically dims the LED when natural light is strong and brightens it when natural light is weak.
Whereas conventional sensors had to keep the LED on at a fixed brightness regardless of the environment, this technology optimizes LED power in real time according to the surrounding environment. Experimental results showed that it reduced power consumption by as much as 86.22% under sufficient lighting conditions.
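A minimal control-loop sketch of the photometric principle described above; the target level, variable names, and values are illustrative assumptions, not figures from the paper:

```python
def led_drive_level(ambient_lux: float, target_lux: float, max_led_lux: float) -> float:
    """Return the LED drive fraction (0..1) needed to keep the combined
    ambient + LED illumination at a constant target level."""
    shortfall = max(0.0, target_lux - ambient_lux)   # strong ambient light -> small shortfall
    return min(1.0, shortfall / max_led_lux)

# Bright room: ambient light nearly covers the target, so the LED barely runs.
print(led_drive_level(ambient_lux=900.0, target_lux=1000.0, max_led_lux=1000.0))  # 0.1
# Darkness: the LED must supply the full target on its own.
print(led_drive_level(ambient_lux=0.0, target_lux=1000.0, max_led_lux=1000.0))    # 1.0
```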
The second is the Photovoltaic Method using high-efficiency multijunction solar cells. This goes beyond simple solar power generation to convert light in both indoor and outdoor environments into electricity. In particular, the adaptive power management system automatically switches among 11 different power configurations based on ambient conditions and battery status to achieve optimal energy efficiency.
The third innovative technology is the Photoluminescent Method. By mixing strontium aluminate microparticles* into the sensor’s silicone encapsulation structure, light from the surroundings is absorbed and stored during the day and slowly released in the dark. As a result, after being exposed to 500W/m² of sunlight for 10 minutes, continuous measurement is possible for 2.5 minutes even in complete darkness.
*Strontium aluminate microparticles: A photoluminescent material used in glow-in-the-dark paint or safety signs, which absorbs light and emits it in the dark for an extended time.
These three technologies work complementarily—during bright conditions, the first and second methods are active, and in dark conditions, the third method provides additional support—enabling 24-hour continuous operation.
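As a rough sketch of how an adaptive power manager of this kind can choose an operating mode from harvested power and battery state: the article does not enumerate the 11 configurations, so the modes and thresholds below are hypothetical placeholders meant only to illustrate the selection logic.

```python
def select_power_mode(harvested_mW: float, battery_pct: float) -> str:
    """Pick an operating mode from the harvested power level and battery charge.
    Mode names and thresholds are illustrative, not the system's actual table."""
    if harvested_mW > 5.0 and battery_pct > 80.0:
        return "full_sampling_and_charging"    # run all sensors, bank surplus energy
    if harvested_mW > 1.0:
        return "harvest_powered_sampling"      # run directly from ambient-light power
    if battery_pct > 20.0:
        return "battery_backed_low_duty"       # slow the sampling rate, spare the battery
    return "photoluminescent_fallback"         # rely on stored glow, minimal duty cycle
```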
The research team applied this platform to various medical sensors to verify its practicality. The photoplethysmography sensor monitors heart rate and blood oxygen saturation in real time, allowing early detection of cardiovascular diseases. The blue light dosimeter accurately measures blue light, which causes skin aging and damage, and provides personalized skin protection guidance. The sweat analysis sensor uses microfluidic technology to simultaneously analyze salt, glucose, and pH in sweat, enabling real-time detection of dehydration and electrolyte imbalances.
Additionally, introducing in-sensor data computing significantly reduced wireless communication power consumption. Previously, all raw data had to be transmitted externally, but now only the necessary results are calculated and transmitted within the sensor, reducing data transmission requirements from 400B/s to 4B/s—a 100-fold decrease.
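A sketch of the in-sensor computing idea under simple assumptions: rather than streaming the raw photodetector waveform, the sensor computes the summary values locally and radios only a few bytes per second. The packet layout and sample rate below are illustrative, not the device's actual format:

```python
import struct

RAW_BYTES_PER_S = 200 * 2   # e.g. 200 two-byte PPG samples per second ~= 400 B/s raw

def summary_packet(heart_rate_bpm: int, spo2_pct: int) -> bytes:
    """Pack locally computed results into 4 bytes: uint16 heart rate,
    uint8 SpO2, uint8 reserved. Sent once per second -> 4 B/s."""
    return struct.pack("<HBB", heart_rate_bpm, spo2_pct, 0)

pkt = summary_packet(72, 98)
print(len(pkt), RAW_BYTES_PER_S // len(pkt))   # 4 bytes per second, ~100x reduction
```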
To validate performance, the researchers tested the device on healthy adult subjects in four different environments: bright indoor lighting, dim lighting, infrared lighting, and complete darkness. The results showed measurement accuracy equivalent to that of commercial medical devices in all conditions. A mouse model experiment confirmed accurate blood oxygen saturation measurement under hypoxic conditions.
<Figure 2. The multimodal device applying the energy harvesting and power management platform consists of i) a photoplethysmography (PPG) sensor, ii) a blue light dosimeter, iii) a photoluminescent microfluidic channel for sweat analysis with biomarker sensors (chloride ion, glucose, and pH), and iv) a temperature sensor. The device was implemented on a flexible printed circuit board (fPCB) to enable attachment to the skin. A silicone substrate with a window that allows ambient light and measurement light to pass through, together with the photoluminescent encapsulation layer, encapsulates the PPG, blue light dosimeter, and temperature sensors, while the photoluminescent microfluidic channel is attached below the photoluminescent encapsulation layer to collect sweat.>
Professor Kyeongha Kwon of KAIST, who led the research, stated, “This technology will enable 24-hour continuous health monitoring, shifting the medical paradigm from treatment-centered to prevention-centered,” adding that “cost savings through early diagnosis as well as strengthened technological competitiveness in the next-generation wearable healthcare market are anticipated.”
This research was published on July 1 in the international journal Nature Communications, with Do Yun Park, a doctoral student in the AI Semiconductor Graduate Program, as co–first author.
※ Paper title: Adaptive Electronics for Photovoltaic, Photoluminescent and Photometric Methods in Power Harvesting for Wireless and Wearable Sensors
※ DOI: https://doi.org/10.1038/s41467-025-60911-1
※ URL: https://www.nature.com/articles/s41467-025-60911-1
This research was supported by the National Research Foundation of Korea (Outstanding Young Researcher Program and Regional Innovation Leading Research Center Project), the Ministry of Science and ICT and Institute of Information & Communications Technology Planning & Evaluation (IITP) AI Semiconductor Graduate Program, and the BK FOUR Program (Connected AI Education & Research Program for Industry and Society Innovation, KAIST EE).
Vulnerability Found: One Packet Can Paralyze Smartphones
<(From left) Professor Yongdae Kim, PhD candidate Tuan Dinh Hoang, PhD candidate Taekkyung Oh from KAIST, Professor CheolJun Park from Kyung Hee University; and Professor Insu Yun from KAIST>
Smartphones must stay connected to mobile networks at all times to function properly. The core component that enables this constant connectivity is the communication modem (Baseband) inside the device. KAIST researchers, using their self-developed testing framework called 'LLFuzz (Lower Layer Fuzz),' have discovered security vulnerabilities in the lower layers of smartphone communication modems and demonstrated the necessity of standardizing 'mobile communication modem security testing.'
*Standardization: In mobile communication, conformance testing, which verifies normal operation in normal situations, has been standardized. However, standards for handling abnormal packets have not yet been established, hence the need for standardized security testing.
Professor Yongdae Kim's team from the School of Electrical Engineering at KAIST, in a joint research effort with Professor CheolJun Park's team from Kyung Hee University, announced on the 25th of July that they have discovered critical security vulnerabilities in the lower layers of smartphone communication modems. These vulnerabilities can incapacitate smartphone communication with just a single manipulated wireless packet (a data transmission unit in a network). They are particularly severe because they can potentially lead to remote code execution (RCE).
The research team utilized their self-developed 'LLFuzz' analysis framework to analyze the lower layer state transitions and error handling logic of the modem to detect security vulnerabilities. LLFuzz was able to precisely extract vulnerabilities caused by implementation errors by comparing and analyzing 3GPP* standard-based state machines with actual device responses.
*3GPP: An international collaborative organization that creates global mobile communication standards.
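A conceptual sketch of the comparison LLFuzz performs, not the tool itself: derive the reaction the 3GPP state machine prescribes for each (state, input) pair, send the corresponding over-the-air packet, and flag devices whose observed behavior deviates, for example through a modem crash or an unexpected state change. The interfaces named below are hypothetical placeholders.

```python
def differential_test(spec_machine, device, test_inputs):
    """Compare 3GPP-specified behavior with the modem's actual reaction.
    `spec_machine.expected(state, msg)` and `device.send_and_observe(msg)` are
    hypothetical interfaces standing in for the real over-the-air framework."""
    anomalies = []
    for msg in test_inputs:                            # e.g. malformed lower-layer packets
        state = device.current_state()
        expected = spec_machine.expected(state, msg)   # e.g. "discard" or "reject"
        observed = device.send_and_observe(msg)        # injected via SDR over the air
        if observed != expected:                       # crash, hang, or wrong transition
            anomalies.append({"state": state, "input": msg,
                              "expected": expected, "observed": observed})
    return anomalies
```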
The research team conducted experiments on 15 commercial smartphones from global manufacturers, including Apple, Samsung Electronics, Google, and Xiaomi, and discovered a total of 11 vulnerabilities. Among these, seven were assigned official CVE (Common Vulnerabilities and Exposures) numbers, and manufacturers applied security patches for these vulnerabilities. However, the remaining four have not yet been publicly disclosed.
While previous security research primarily focused on higher layers of mobile communication, such as NAS (Non-Access Stratum) and RRC (Radio Resource Control), the research team concentrated on analyzing the error handling logic of mobile communication's lower layers, which manufacturers have often neglected.
These vulnerabilities occurred in the lower layers of the communication modem (RLC, MAC, PDCP, PHY*), and due to their structural characteristics where encryption or authentication is not applied, operational errors could be induced simply by injecting external signals.
*RLC, MAC, PDCP, PHY: Lower layers of LTE/5G communication, responsible for wireless resource allocation, error control, encryption, and physical layer transmission.
The research team released a demo video showing that when a manipulated wireless packet (a malformed MAC packet), generated on an experimental laptop, was injected into commercial smartphones via a Software-Defined Radio (SDR) device, the smartphone's communication modem (Baseband) immediately crashed.
※ Experiment video: https://drive.google.com/file/d/1NOwZdu_Hf4ScG7LkwgEkHLa_nSV4FPb_/view?usp=drive_link
The video shows data being normally transmitted at 23MB per second on the fast.com page, but immediately after the manipulated packet is injected, the transmission stops and the mobile communication signal disappears. This intuitively demonstrates that a single wireless packet can cripple a commercial device's communication modem.
The vulnerabilities were found in the 'modem chip,' a core smartphone component responsible for calls, texts, and data communication. The affected chipsets include:
Qualcomm: Affects over 90 chipsets, including CVE-2025-21477, CVE-2024-23385.
MediaTek: Affects over 80 chipsets, including CVE-2024-20076, CVE-2024-20077, CVE-2025-20659.
Samsung: CVE-2025-26780 (targets the latest chipsets like Exynos 2400, 5400).
Apple: CVE-2024-27870 (shares the same vulnerability as Qualcomm CVE).
The affected modem chips are used not only in premium smartphones but also in low-end smartphones, tablets, smartwatches, and IoT devices, so their broad diffusion makes the potential for user harm correspondingly widespread.
Furthermore, the research team experimentally tested 5G vulnerabilities in the lower layers and found two vulnerabilities in just two weeks. Considering that 5G vulnerability checks have not been generally conducted, it is possible that many more vulnerabilities exist in the mobile communication lower layers of baseband chips.
Professor Yongdae Kim explained, "The lower layers of smartphone communication modems are not subject to encryption or authentication, creating a structural risk where devices can accept arbitrary signals from external sources." He added, "This research demonstrates the necessity of standardizing mobile communication modem security testing for smartphones and other IoT devices."
The research team is continuing additional analysis of the 5G lower layers using LLFuzz and is also developing tools for testing LTE and 5G upper layers. They are also pursuing collaborations for future tool disclosure. The team's stance is that "as technological complexity increases, systemic security inspection systems must evolve in parallel."
First author Tuan Dinh Hoang, a Ph.D. student in the School of Electrical Engineering, will present the research results in August at USENIX Security 2025, one of the world's most prestigious conferences in cybersecurity.
※ Paper Title: LLFuzz: An Over-the-Air Dynamic Testing Framework for Cellular Baseband Lower Layers (Tuan Dinh Hoang and Taekkyung Oh, KAIST; CheolJun Park, Kyung Hee Univ.; Insu Yun and Yongdae Kim, KAIST)
※ Usenix paper site: https://www.usenix.org/conference/usenixsecurity25/presentation/hoang (Not yet public), Lab homepage paper: https://syssec.kaist.ac.kr/pub/2025/LLFuzz_Tuan.pdf
※ Open-source repository: https://github.com/SysSec-KAIST/LLFuzz (To be released)
This research was conducted with support from the Institute of Information & Communications Technology Planning & Evaluation (IITP) funded by the Ministry of Science and ICT.
KAIST Presents Game-Changing Technology for Intractable Brain Disease Treatment Using Micro OLEDs
<(From left) Professor Kyung Cheol Choi, Professor Hyunjoo J. Lee, and Dr. Somin Lee from the School of Electrical Engineering>
Optogenetics is a technique that controls neural activity by stimulating neurons expressing light-sensitive proteins with specific wavelengths of light. It has opened new possibilities for identifying causes of brain disorders and developing treatments for intractable neurological diseases. Because this technology requires precise stimulation inside the human brain with minimal damage to soft brain tissue, it must be integrated into a neural probe—a medical device implanted in the brain. KAIST researchers have now proposed a new paradigm for neural probes by integrating micro OLEDs into thin, flexible, implantable medical devices.
KAIST (President Kwang Hyung Lee) announced on the 6th of July that Professor Kyung Cheol Choi and Professor Hyunjoo J. Lee from the School of Electrical Engineering have jointly succeeded in developing an optogenetic neural probe integrated with flexible micro OLEDs.
Optical fibers have been used for decades in optogenetic research to deliver light to deep brain regions from external light sources. Recently, research has focused on flexible optical fibers and ultra-miniaturized neural probes that integrate light sources for single-neuron stimulation.
The research team focused on micro OLEDs due to their high spatial resolution and flexibility, which allow for precise light delivery to small areas of neurons. This enables detailed brain circuit analysis while minimizing side effects and avoiding restrictions on animal movement. Moreover, micro OLEDs offer precise control of light wavelengths and support multi-site stimulation, making them suitable for studying complex brain functions.
However, micro OLEDs' electrical properties degrade easily in the presence of moisture or water, which has limited their use as implantable bioelectronics. Furthermore, optimizing the high-resolution integration process on thin, flexible probes remained a challenge.
To address this, the team enhanced the operational reliability of OLEDs in moist, oxygen-rich environments and minimized tissue damage during implantation. They patterned an ultrathin, flexible encapsulation layer* composed of aluminum oxide and parylene-C (Al₂O₃/parylene-C) at widths of 260–600 micrometers (μm) to maintain biocompatibility.
*Encapsulation layer: A barrier that completely blocks oxygen and water molecules from the external environment, ensuring the longevity and reliability of the device.
When integrating the high-resolution micro OLEDs, the researchers also used parylene-C, the same biocompatible material as the encapsulation layer, to maintain flexibility and safety. To eliminate electrical interference between adjacent OLED pixels and spatially separate them, they introduced a pixel define layer (PDL), enabling the independent operation of eight micro OLEDs.
Furthermore, they precisely controlled the residual stress and thickness in the multilayer film structure of the device, ensuring its flexibility even in biological environments. This optimization allowed for probe insertion without bending or external shuttles or needles, minimizing mechanical stress during implantation.
KAIST Succeeds in Real-Time Carbon Dioxide Monitoring Without Batteries or External Power
< (From left) Master's Student Gyurim Jang, Professor Kyeongha Kwon >
KAIST (President Kwang Hyung Lee) announced on June 9th that a research team led by Professor Kyeongha Kwon from the School of Electrical Engineering, in a joint study with Professor Hanjun Ryu's team at Chung-Ang University, has developed a self-powered wireless carbon dioxide (CO2) monitoring system. This innovative system harvests fine vibrational energy from its surroundings to periodically measure CO2 concentrations.
This breakthrough addresses a critical need in environmental monitoring: accurately understanding "how much" CO2 is being emitted to combat climate change and global warming. While CO2 monitoring technology is key to this, existing systems largely rely on batteries or wired power systems, imposing limitations on installation and maintenance. The KAIST team tackled this by creating a self-powered wireless system that operates without external power.
The core of this new system is an "Inertia-driven Triboelectric Nanogenerator (TENG)" that converts vibrations (with amplitudes ranging from 20–4000 μm and frequencies from 0–300 Hz) generated by industrial equipment or pipelines into electricity. This enables periodic CO2 concentration measurements and wireless transmission without the need for batteries.
< Figure 1. Concept and configuration of self-powered wireless CO2 monitoring system using fine vibration harvesting (a) System block diagram (b) Photo of fabricated system prototype >
The research team successfully amplified fine vibrations and induced resonance by combining spring-attached 4-stack TENGs. They achieved stable power production of 0.5 mW under conditions of 13 Hz and 0.56 g acceleration. The generated power was then used to operate a CO2 sensor and a Bluetooth Low Energy (BLE) system-on-a-chip (SoC).
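To see why roughly 0.5 mW is enough for periodic sensing, a simple duty-cycle budget helps: harvested energy is banked between measurements and spent in short bursts. The per-measurement energy costs below are assumptions for illustration, not values reported in the paper:

```python
HARVESTED_mW = 0.5                 # reported average TENG output

# Assumed (illustrative) energy costs per measurement cycle, not from the paper:
SENSOR_BURST_mJ = 30.0             # CO2 sensor warm-up and reading
BLE_TX_mJ = 1.0                    # one BLE transmission of the result

def harvest_time_s(harvested_mW: float, energy_per_cycle_mJ: float) -> float:
    """Seconds of harvesting needed to bank enough energy for one cycle (mJ / mW = s)."""
    return energy_per_cycle_mJ / harvested_mW

cycle_mJ = SENSOR_BURST_mJ + BLE_TX_mJ
print(f"one CO2 reading every ~{harvest_time_s(HARVESTED_mW, cycle_mJ):.0f} s")
# With these assumed costs, ~62 s of harvesting funds each measurement burst.
```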
Professor Kyeongha Kwon emphasized, "For efficient environmental monitoring, a system that can operate continuously without power limitations is essential." She explained, "In this research, we implemented a self-powered system that can periodically measure and wirelessly transmit CO2 concentrations based on the energy generated from an inertia-driven TENG." She added, "This technology can serve as a foundational technology for future self-powered environmental monitoring platforms integrating various sensors."
< Figure 2. TENG energy harvesting-based wireless CO2 sensing system operation results (c) Experimental setup (d) Measured CO2 concentration results powered by TENG and conventional DC power source >
This research was published on June 1st in the internationally renowned academic journal Nano Energy (IF 16.8). Gyurim Jang, a master's student at KAIST, and Daniel Manaye Tiruneh, a master's student at Chung-Ang University, are the co-first authors of the paper.
※ Paper Title: Highly compact inertia-driven triboelectric nanogenerator for self-powered wireless CO2 monitoring via fine-vibration harvesting
※ DOI: 10.1016/j.nanoen.2025.110872
This research was supported by the Saudi Aramco-KAIST CO2 Management Center.
KAIST Research Team Develops Electronic Ink for Room-Temperature Printing of High-Resolution, Variable-Stiffness Electronics
A team of researchers from KAIST and Seoul National University has developed a groundbreaking electronic ink that enables room-temperature printing of variable-stiffness circuits capable of switching between rigid and soft modes. This advancement marks a significant leap toward next-generation wearable, implantable, and robotic devices.
< Photo 1. (From left) Professor Jae-Woong Jeong and PhD candidate Simok Lee of the School of Electrical Engineering, (in separate bubbles, from left) Professor Gun-Hee Lee of Pusan National University, Professor Seongjun Park of Seoul National University, Professor Steve Park of the Department of Materials Science and Engineering>
Variable-stiffness electronics are at the forefront of adaptive technology, offering the ability for a single device to transition between rigid and soft modes depending on its use case. Gallium, a metal known for its high rigidity contrast between solid and liquid states, is a promising candidate for such applications. However, its use has been hindered by challenges including high surface tension, low viscosity, and undesirable phase transitions during manufacturing.
On June 4th, a research team led by Professor Jae-Woong Jeong from the School of Electrical Engineering at KAIST, Professor Seongjun Park from the Digital Healthcare Major at Seoul National University, and Professor Steve Park from the Department of Materials Science and Engineering at KAIST introduced a novel liquid metal electronic ink. This ink allows for micro-scale circuit printing – thinner than a human hair – at room temperature, with the ability to reversibly switch between rigid and soft modes depending on temperature.
The new ink combines printable viscosity with excellent electrical conductivity, enabling the creation of complex, high-resolution multilayer circuits comparable to commercial printed circuit boards (PCBs). These circuits can dynamically change stiffness in response to temperature, presenting new opportunities for multifunctional electronics, medical technologies, and robotics.
Conventional electronics typically have fixed form factors – either rigid for durability or soft for wearability. Rigid devices like smartphones and laptops offer robust performance but are uncomfortable when worn, while soft electronics are more comfortable but lack precise handling. As demand grows for devices that can adapt their stiffness to context, variable-stiffness electronics are becoming increasingly important.
< Figure 1. Fabrication process of stable, high-viscosity electronic ink by dispersing micro-sized gallium particles in a polymer matrix (left). High-resolution large-area circuit printing process through pH-controlled chemical sintering (right). >
To address this challenge, the researchers focused on gallium, which melts just below body temperature. Solid gallium is quite stiff, while its liquid form is fluid and soft. Despite its potential, gallium’s use in electronic printing has been limited by its high surface tension and instability when melted.
To overcome these issues, the team developed a pH-controlled liquid metal ink printing process. By dispersing micro-sized gallium particles into a hydrophilic polyurethane matrix using a neutral solvent (dimethyl sulfoxide, or DMSO), they created a stable, high-viscosity ink suitable for precision printing. During post-print heating, the DMSO decomposes to form an acidic environment, which removes the oxide layer on the gallium particles. This triggers the particles to coalesce into electrically conductive networks with tunable mechanical properties.
The resulting printed circuits exhibit fine feature sizes (~50 μm), high conductivity (2.27 × 10⁶ S/m), and a stiffness modulation ratio of up to 1,465 – allowing the material to shift from plastic-like rigidity to rubber-like softness. Furthermore, the ink is compatible with conventional printing techniques such as screen printing and dip coating, supporting large-area and 3D device fabrication.
< Figure 2. Key features of the electronic ink. (i) High-resolution printing and multilayer integration capability. (ii) Batch fabrication capability through large-area screen printing. (iii) Complex three-dimensional structure printing capability through dip coating. (iv) Excellent electrical conductivity and stiffness control capability.>
The team demonstrated this technology by developing a multi-functional device that operates as a rigid portable electronic under normal conditions but transforms into a soft wearable healthcare device when attached to the body. They also created a neural probe that remains stiff during surgical insertion for accurate positioning but softens once inside brain tissue to reduce inflammation – highlighting its potential for biomedical implants.
< Figure 3. Variable stiffness wearable electronics with high-resolution circuits and multilayer structure comparable to commercial printed circuit boards (PCBs). Functions as a rigid portable electronic device at room temperature, then transforms into a wearable healthcare device by softening at body temperature upon skin contact.>
“The core achievement of this research lies in overcoming the longstanding challenges of liquid metal printing through our innovative technology,” said Professor Jeong. “By controlling the ink’s acidity, we were able to electrically and mechanically connect printed gallium particles, enabling the room-temperature fabrication of high-resolution, large-area circuits with tunable stiffness. This opens up new possibilities for future personal electronics, medical devices, and robotics.”
< Figure 4. Body-temperature softening neural probe implemented by coating electronic ink on an optical waveguide structure. (Left) Remains rigid during surgery for precise manipulation and brain insertion, then softens after implantation to minimize mechanical stress on the brain and greatly enhance biocompatibility. (Right) >
This research was published in Science Advances under the title, “Phase-Change Metal Ink with pH-Controlled Chemical Sintering for Versatile and Scalable Fabrication of Variable Stiffness Electronics.” The work was supported by the National Research Foundation of Korea, the Boston-Korea Project, and the BK21 FOUR Program.
Professor Hyun Myung's Team Wins First Place in a Challenge at IEEE ICRA 2025
< Photo 1. (From left) Daebeom Kim (Team Leader, Ph.D. student), Seungjae Lee (Ph.D. student), Seoyeon Jang (Ph.D. student), Jei Kong (Master's student), Professor Hyun Myung >
A team from the Urban Robotics Lab, led by Professor Hyun Myung of the KAIST School of Electrical Engineering, achieved a remarkable first-place overall victory in the Nothing Stands Still Challenge (NSS Challenge) 2025, held at the 2025 IEEE International Conference on Robotics and Automation (ICRA), the world's most prestigious robotics conference, from May 19 to 23 in Atlanta, USA.
The NSS Challenge was co-hosted by HILTI, a global construction company based in Liechtenstein, and Stanford University's Gradient Spaces Group. It is an expanded version of the HILTI SLAM (Simultaneous Localization and Mapping)* Challenge, which has been held since 2021, and is considered one of the most prominent challenges at the 2025 IEEE ICRA.
*SLAM: Refers to Simultaneous Localization and Mapping, a technology where robots, drones, autonomous vehicles, etc., determine their own position and simultaneously create a map of their surroundings.
< Photo 2. A scene from the oral presentation on the winning team's technology (Speakers: Seungjae Lee and Seoyeon Jang, Ph.D. candidates of KAIST School of Electrical Engineering) >
This challenge primarily evaluates how accurately and robustly LiDAR scan data, collected at various times, can be registered in situations with frequent structural changes, such as construction and industrial environments. In particular, it is regarded as a highly technical competition because it deals with multi-session localization and mapping (Multi-session SLAM) technology that responds to structural changes occurring over multiple timeframes, rather than just single-point registration accuracy.
The Urban Robotics Lab team secured first place overall, surpassing National Taiwan University (3rd place) and Northwestern Polytechnical University of China (2nd place) by a significant margin, with their unique localization and mapping technology that solves the problem of registering LiDAR data collected across multiple times and spaces. The winning team will be awarded a prize of $4,000.
< Figure 1. Example of Multiway-Registration for Registering Multiple Scans >
The Urban Robotics Lab team independently developed a multiway-registration framework that can robustly register multiple scans even without prior connection information. This framework consists of an algorithm for summarizing feature points within scans and finding correspondences (CubicFeat), an algorithm for performing global registration based on the found correspondences (Quatro), and an algorithm for refining results based on change detection (Chamelion). This combination of technologies ensures stable registration performance based on fixed structures, even in highly dynamic industrial environments.
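The building block beneath any such registration pipeline is estimating the rigid transform that aligns one scan with another. Below is a minimal sketch of that step under idealized assumptions (known point correspondences, no outliers), which the team's feature-matching and change-detection algorithms are designed to provide robustly in real, changing environments; it is not the team's code:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that R @ src_i + t ~= dst_i,
    for corresponding Nx3 point sets (the Kabsch / SVD solution)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy self-check: recover a known 30-degree rotation about z plus a translation.
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```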
< Figure 2. Example of Change Detection Using the Chamelion Algorithm>
LiDAR scan registration technology is a core component of SLAM (Simultaneous Localization And Mapping) in various autonomous systems such as autonomous vehicles, autonomous robots, autonomous walking systems, and autonomous flying vehicles.
Professor Hyun Myung of the School of Electrical Engineering stated, "This award-winning technology is evaluated as a case that simultaneously proves both academic value and industrial applicability by maximizing the performance of precisely estimating the relative positions between different scans even in complex environments. I am grateful to the students who challenged themselves and never gave up, even when many teams abandoned due to the high difficulty."
< Figure 3. Competition Result Board, Lower RMSE (Root Mean Squared Error) Indicates Higher Score (Unit: meters)>
The Urban Robotics Lab team first participated in the SLAM Challenge in 2022, winning second place among academic teams, and in 2023, they secured first place overall in the LiDAR category and first place among academic teams in the vision category.