KAIST Directly Visualizes the Hidden Spatial Order of Electrons in a Quantum Material
<(Back row, from left) Yeongkwan Kim, SungBin Lee, Heejun Yang, Yongsoo Yang / (Front row, from left) Jemin Park, Seokjo Hong, Jaewhan Oh>
· Cryogenic 4D-STEM reveals how charge density waves form, fragment, and persist across a phase transition
· First direct measurement of electronic amplitude correlations uncovers strain-driven inhomogeneity and localized order above the transition temperature
Electronic order in quantum materials often emerges not uniformly, but through subtle and complex patterns that vary from place to place. One prominent example is the charge density wave (CDW), an ordered state in which electrons arrange themselves into periodic patterns at low temperatures. Although CDWs have been studied for decades, how their strength and spatial coherence evolve across a phase transition has remained largely inaccessible experimentally.
Now, a team led by Professor Yongsoo Yang of the Department of Physics at KAIST (Korea Advanced Institute of Science and Technology), together with Professors SungBin Lee, Heejun Yang, and Yeongkwan Kim, and in collaboration with Stanford University, has for the first time directly visualized the spatial evolution of charge density wave amplitude order inside a quantum material.
A New Way to See Electronic Order at the Nanoscale
Using a liquid-helium-cooled electron microscope setup combined with four-dimensional scanning transmission electron microscopy (4D-STEM), the researchers mapped how CDW order develops, weakens, and fragments as temperature changes. This approach allowed them to reconstruct nanoscale maps of the CDW amplitude, revealing not just whether the order exists, but how strong it is and how it is spatially connected.
This study is similar to filming the growth of ice crystals as water freezes using an ultra-high-magnification camera. In this case, however, the researchers observed electrons arranging themselves at cryogenic temperatures of around –253°C, and used an electron microscope capable of resolving features one hundred-thousandth the width of a human hair instead of a conventional camera. The results showed that the electronic patterns do not appear uniformly across the material. In some regions, clear patterns are visible, while in neighboring areas they are entirely absent, much like a lake that does not freeze all at once, with patches of ice interspersed with liquid water.
How Electronic Order Breaks Apart in Real Space
The team further demonstrated that this spatial inhomogeneity is closely linked to local strain inside the crystal. Even extremely small distortions that are far below optical resolution strongly suppress the CDW amplitude. This clear anticorrelation between strain and electronic order provides direct evidence that local lattice distortions play a decisive role in shaping CDW patterns.
Unexpectedly, the researchers also observed that localized regions of CDW order can persist even above the transition temperature, where long-range order is generally thought to disappear. These isolated pockets of electronic order suggest that the CDW transition is not a simple, uniform melting process, but instead involves gradual loss of spatial coherence.
A key advance of this work is the world’s first direct measurement of CDW amplitude correlations. By quantifying how the strength of electronic order at one location is related to that at another, the study reveals how CDW coherence collapses across the transition, while local amplitude remains finite. Such information could not be obtained with conventional diffraction or scanning probe techniques.
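The amplitude-correlation idea can be illustrated with a small sketch: given a 2D map of local order-parameter amplitude, compute how the amplitude at one point correlates with the amplitude a distance r away. This is a generic spatial-correlation calculation on hypothetical data, not the team's actual reconstruction pipeline; the map size and shift range are assumptions.

```python
import numpy as np

def amplitude_correlation(amp, max_shift=20):
    """Radially sampled correlation of a 2D amplitude map.

    amp: 2D array of local amplitude (hypothetical stand-in for a
    4D-STEM-reconstructed CDW amplitude map).
    Returns g(r) = <dA(x) dA(x+r)> / var(A) for shifts r = 0..max_shift.
    """
    d = amp - amp.mean()
    var = (d * d).mean()
    g = [1.0]                                   # zero shift is fully correlated
    for r in range(1, max_shift + 1):
        # correlate along both axes and average the two directions
        cx = (d[:, :-r] * d[:, r:]).mean()
        cy = (d[:-r, :] * d[r:, :]).mean()
        g.append(0.5 * (cx + cy) / var)
    return np.array(g)

# A map with long-range order decorrelates slowly with r;
# an uncorrelated (noise) map decorrelates immediately.
rng = np.random.default_rng(0)
noise = rng.normal(size=(128, 128))
g_noise = amplitude_correlation(noise, max_shift=5)
```

The decay length of g(r) is one simple way to quantify how spatial coherence collapses while the local amplitude itself stays finite.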
Toward a New Framework for Studying Electronic Order
Charge density waves are a central feature of many quantum materials and often coexist or compete with other electronic states. By directly accessing their spatial structure and correlations, this study provides a new experimental framework for understanding how collective electronic order forms and evolves in real materials.
Professor Yongsoo Yang, who led the research, explained the significance of the results: “Until now, the spatial coherence of charge density waves was largely inferred indirectly. Our approach allows us to directly visualize how electronic order varies across space and temperature, and to identify the factors that locally stabilize or suppress it.”
[Figure 1] Schematic illustration of an experiment employing 4D-STEM to probe the spatial variations of charge density waves in the prototypical quantum material NbSe2 under a liquid-helium cryogenic environment (AI-generated image).
This research, with Seokjo Hong, Jaewhan Oh and Jemin Park of KAIST as co-first authors, was published online in Physical Review Letters on January 6th (Title: Spatial correlations of charge density wave order across the transition in 2H-NbSe2).
The study was mainly supported by the National Research Foundation of Korea (NRF) Grants (Individual Basic Research Program, Basic Research Laboratory Program, Nanomaterial Technology Development Program) funded by the Korean Government (MSIT).
KAIST detects ‘hidden defects’ that degrade semiconductor performance with 1,000× higher sensitivity
<(From Left) Professor Byungha Shin, Ph.D. candidate Chaeyoun Kim, Dr. Oki Gunawan>
Semiconductors are used in devices such as memory chips and solar cells, and within them may exist invisible defects that interfere with electrical flow. A joint research team has developed a new analysis method that can detect these “hidden defects” (electronic traps) with approximately 1,000 times higher sensitivity than existing techniques. The technology is expected to improve semiconductor performance and lifetime, while significantly reducing development time and costs by enabling precise identification of defect sources.
KAIST (President Kwang Hyung Lee) announced on January 8th that a joint research team led by Professor Byungha Shin of the Department of Materials Science and Engineering at KAIST and Dr. Oki Gunawan of the IBM T. J. Watson Research Center has developed a new measurement technique that can simultaneously analyze defects that hinder electrical transport (electronic traps) and charge carrier transport properties inside semiconductors.
Within semiconductors, electronic traps can exist that capture electrons and hinder their movement. When electrons are trapped, electrical current cannot flow smoothly, leading to leakage currents and degraded device performance. Therefore, accurately evaluating semiconductor performance requires determining how many electronic traps are present and how strongly they capture electrons.
The research team focused on Hall measurements, a technique that has long been used in semiconductor analysis. Hall measurements analyze electron motion using electric and magnetic fields. By adding controlled light illumination and temperature variation to this method, the team succeeded in extracting information that was difficult to obtain using conventional approaches.
Under weak illumination, newly generated electrons are first captured by electronic traps. As the light intensity is gradually increased, the traps become filled, and subsequently generated electrons begin to move freely. By analyzing this transition process, the researchers were able to precisely calculate the density and characteristics of electronic traps.
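The trap-filling transition described above can be sketched with a deliberately simplified toy model (our illustration, not the team's actual photo-Hall analysis): traps of density N_T capture photogenerated carriers first, so the free-carrier density stays near zero until the generation level exceeds N_T, and the location of that kink recovers the trap density.

```python
import numpy as np

# Assumed toy numbers: trap density N_T in carriers/cm^3, and a sweep of
# photogenerated carrier densities standing in for light intensity.
N_T = 1e14
generated = np.linspace(0.0, 5e14, 51)

# Traps fill first; only the excess carriers move freely.
n_free = np.clip(generated - N_T, 0.0, None)

# Recover N_T as the highest generation level with (essentially) no free carriers.
est_trap_density = generated[n_free < 1e12].max()
```

In the real measurement the transition is gradual and temperature-dependent, which is what lets the technique also extract how strongly the traps capture electrons.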
The greatest advantage of this method is that multiple types of information can be obtained simultaneously from a single measurement. It allows not only the evaluation of how fast electrons move, how long they survive, and how far they travel, but also the properties of traps that interfere with electron transport.
The team first validated the accuracy of the technique using silicon semiconductors and then applied it to perovskites, which are attracting attention as next-generation solar cell materials. As a result, they successfully detected extremely small quantities of electronic traps that were difficult to identify using existing methods—demonstrating a sensitivity approximately 1,000 times higher than that of conventional techniques.
< Conceptual Diagram of the Evolution of Hall Characterization (Analysis) Techniques >
Professor Byungha Shin stated, “This study presents a new method that enables simultaneous analysis of electrical transport and the factors that hinder it within semiconductors using a single measurement,” adding that “it will serve as an important tool for improving the performance and reliability of various semiconductor devices, including memory semiconductors and solar cells.”
The results of this research were published on January 1 in Science Advances, an international academic journal, with Chaeyoun Kim, a doctoral student in the Department of Materials Science and Engineering, as the first author.
※ Paper title: “Electronic trap detection with carrier-resolved photo-Hall effect,” DOI: https://doi.org/10.1126/sciadv.adz0460
This research was supported by the Ministry of Science and ICT and the National Research Foundation of Korea.
< Conceptual Diagram of Charge Transport and Trap Characterization Using Photo-Hall Measurements (AI-generated image) >
AI-Engineered "Nasal Spray Antiviral Platform" Developed to Block Flu and COVID-19
<(From Left) Professor Hyun Jung Chung, Professor Ho Min Kim, Professor Ji Eun Oh>
<(From Left) Dr. Seungju Yang, Dr. Jeongwon Yun, Ph.D. candidate Jae Hyuk Kwon>
Respiratory viruses that have diverse strains and mutate rapidly, such as influenza and COVID-19, are difficult to block perfectly with vaccines alone. To solve this problem, KAIST's research team has successfully developed a nasal (intranasal) antiviral platform using AI technology to overcome the existing limitations of interferon-lambda treatments—namely, being "weak against heat and disappearing quickly from the nasal mucosa."
KAIST announced on December 15th that a joint research team, consisting of Professor Ho Min Kim and Professor Hyun Jung Chung from the Department of Biological Sciences and Professor Ji Eun Oh from the Graduate School of Medical Science and Engineering, used AI to stably redesign the interferon-lambda protein and combined it with a delivery technology that ensures effective diffusion and long-term retention in the nasal mucosa, thereby implementing a universal prevention technology for various respiratory viruses.
Interferon-lambda is an innate immune protein produced by the body to block viral infections, playing a crucial role in stopping respiratory viruses like the common cold, flu, and COVID-19. However, when formulated as a treatment for nasal administration, its actual efficacy was limited by its vulnerability to heat, degrading enzymes, mucus, and ciliary motion.
The research team used AI protein design technology to precisely reinforce the structural weaknesses of interferon-lambda.
First, they significantly increased stability by changing the loose "loop" structures of the protein—which were prone to instability—into rigid "helix" structures that lock in place like a firm spring.
Additionally, to prevent "aggregation" (proteins sticking together to form lumps), they applied "surface engineering" to make the surface more water-compatible. They also introduced "glycoengineering," adding sugar chain (glycan) structures to the protein surface to make it even more robust and stable.
As a result, the newly produced interferon-lambda showed a massive improvement in stability, surviving for two weeks at 50°C, and demonstrated the ability to diffuse rapidly even through thick nasal mucus.
The research team further protected the protein by encapsulating it in microscopic "nanoliposomes" and coated the surface with "low-molecular-weight chitosan." This significantly enhanced "mucoadhesion," allowing the treatment to stick to the nasal lining for an extended period.
When this delivery platform was applied to animal models infected with influenza, a powerful inhibitory effect was confirmed, with the virus level in the nasal cavity decreasing by more than 85%.
This technology is a mucosal immune platform that can block viral infections in their early stages simply by spraying it into the nose. It is expected to be a new therapeutic strategy that can respond quickly not only to seasonal flu but also to unexpected new or mutant viruses.
Professor Ho Min Kim stated, "Through AI-based protein design and mucosal delivery technology, we have simultaneously overcome the stability and retention time limitations of existing interferon-lambda treatments. This platform, which is stable at high temperatures and stays in the mucosa for a long time, is an innovative technology that can be used even in developing countries lacking strict cold-chain infrastructure. It also has great scalability for developing various treatments and vaccines." He added, "This is a meaningful achievement resulting from multidisciplinary convergence research, covering everything from AI protein design to drug delivery optimization and immune evaluation through infection models."
This research involved Dr. Jeongwon Yun from the KAIST InnoCORE (AI-Co-Research & Education for Innovative Drug Institute, AI-CRED Institute), Dr. Seungju Yang from the Department of Biological Sciences, and Ph.D. student Jae Hyuk Kwon from the Graduate School of Medical Science and Engineering as co-first authors. The results were published consecutively in the renowned international journals Advanced Science (Nov 20) and Biomaterials Research (Nov 21).
Paper 1: Computational Design and Glycoengineering of Interferon-Lambda for Nasal Prophylaxis against Respiratory Viruses, Advanced Science, DOI: 10.1002/advs.202506764
Paper 2: Intranasal Nanoliposomes Delivering Interferon Lambda with Enhanced Mucosal Retention as an Antiviral, Biomaterials Research, DOI: 10.34133/bmr.0287
This research was conducted with support from the KAIST InnoCORE Program, Mid-Career Researcher Support Program and the Bio-Medical Technology Development Program through the National Research Foundation of Korea (NRF), Healthcare Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), the KAIST Convergence Research Institute Operation Program, and the Institute for Basic Science (IBS).
Jaewook Myung Becomes First Korean Selected for '40 Under 40 Recognition Program' for Next-Generation Environmental Engineering Leaders
< Professor Jaewook Myung of KAIST Department of Civil and Environmental Engineering >
KAIST announced on December 12th that Professor Jaewook Myung of the Department of Civil and Environmental Engineering was selected as the first Korean recipient of the '40 Under 40 Recognition Program' for Next Generation Environmental Engineering Leaders, organized by the American Academy of Environmental Engineers and Scientists (AAEES).
< The '40 Under 40 Recognition Program' is an international award program selecting next-generation leaders in the field of Environmental Engineering and Science >
This award is presented annually by AAEES to select next-generation environmental engineering researchers who demonstrate innovative research achievements, social contribution, and educational leadership. Professor Myung's selection is particularly significant as he is the first Korean to be chosen since the program's inception. The award ceremony is scheduled to be held in Washington D.C. in April 2026.
AAEES is a leading professional organization in the global environmental engineering sector, operating the Professional Environmental Engineer (PEE) certification system and conducting policy consultation and international academic exchange. The award is regarded as greatly enhancing the international standing of Korean environmental engineering and sustainability research.
Amid the deepening problems of plastic waste increase and greenhouse gas emissions, where existing technologies are showing limitations in providing solutions, Professor Jaewook Myung has garnered significant attention from academia and industry by developing technology to convert greenhouse gases such as methane (CH₄) and carbon dioxide (CO₂) into biodegradable plastics. His research is highly praised for presenting a new industrial paradigm that fuses environmental microbiology and materials science to convert greenhouse gases into high-value bio-materials.
Professor Myung's research team secured three key capabilities: microbial metabolic control technology that transforms greenhouse gases into materials, an accelerated process that simultaneously enhances the synthesis and decomposition efficiency of the plastics, and pilot-scale process design and engineering technology applicable in industrial settings. Together, these established a sustainable circular technology model capable of simultaneously addressing greenhouse gas reduction and plastic pollution.
Furthermore, the research team expanded these foundational technologies to develop various application products, such as biodegradable coating materials that naturally decompose in the ocean, biocompatible bio-based electronic materials, and industrial 3D printing filaments, realizing full-cycle innovation from basic research to application and industrialization. These achievements are recognized as world-class sustainable technology alternatives that can simultaneously overcome the problems of plastic downcycling and the economic limitations of greenhouse gas utilization technology.
Professor Myung also shows excellent performance in nurturing talent. His advised students are growing into next-generation environmental and sustainability researchers, having won major awards both domestically and internationally, including the American Chemical Society (ACS) Environmental Chemistry Graduate Student Award, the Presidential Science Scholarship, the Merck Innovation Cup Prize, and the Republic of Korea Talent Award. He is also establishing himself as a leading researcher in the commercialization of sustainable technology by expanding his research achievements into the social and industrial ecosystem through technology collaboration with industries, patents, and consultation with public institutions.
The AAEES Selection Committee evaluated Professor Jaewook Myung as "a researcher possessing technical excellence, social responsibility, and educational leadership, and an innovator who has pioneered new areas of environmental engineering." Professor Myung expressed his thoughts, saying, "This award is a result made possible by the students who researched and challenged alongside me and the collaborative research culture of KAIST," and added, "I will contribute to brightening the future of humanity and the planet through sustainable resource circulation technology."
AI Finds Urban Commercial Districts Resilient to Climate Risk
< (From left) Integrated M.S.-Ph.D. candidate Keonhee Jang, Postdoctoral Researcher Namwoo Kim, Professor Yoonjin Yoon, Researcher Seok-woo Yoon, Postdoctoral Researcher Young-jun Park, (Top) M.S. candidate Juneyoung Ro >
KAIST announced on October 29th that its Urban AI Research Institute (Director, Distinguished Professor Yoonjin Yoon of Civil and Environmental Engineering) conducted joint research in the field of 'Urban AI' with MIT's Senseable City Lab (Director, Professor Carlo Ratti) and presented the results at the 'Smart Life Week 2025' exhibition held at COEX, Seoul, in late September.
KAIST and MIT have been pursuing the 'Urban AI Joint Research Program' to interpret major urban problems using artificial intelligence. At this exhibition, the research results were presented in a form that citizens could directly experience, focusing on three themes: ▲Urban Climate Change, ▲Green Environment, and ▲Data Inclusivity.
Through this collaboration, the two institutions demonstrated that AI technology can expand beyond a tool for calculating urban problems to a new intelligence that promotes social understanding and empathy. They carried out three projects: ▲Urban Heat and Sales, ▲Nature That Heals, Seoul, and ▲Data Sonification.
The first project, 'Urban Heat and Sales,' is a study that analyzes the impact of climate change on urban commercial areas and the small business ecosystem using AI. An AI model was trained on over 300 million data points, including sales and weather for 96 business categories across 426 administrative dong (neighborhoods) in Seoul, to quantify the effect of climatic factors, such as temperature and humidity, on sales by industry type.
The results were visualized into 40,896 'Urban Heat Resilience' indicators, which score how well each region and business category can adapt to and recover from climate change. This allows the level of commercial area resilience to climate risk to be grasped at a glance, showing which areas are strong against temperature risks.
According to the study, for the convenience store sector, 64.7% of the total 426 dong were analyzed as 'climate-neutral areas,' which are relatively stable against climate change, while the remaining 35.3% belong to 'climate-sensitive areas,' which are significantly affected by climate change. This suggests that the operating environment for convenience stores varies significantly by region in terms of climate impact, and the data can be utilized for future location strategy planning from an urban resilience perspective.
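The neutral-versus-sensitive split can be illustrated with a minimal sketch: regress each district's sales on temperature and label districts whose sales respond strongly as 'climate-sensitive'. The synthetic data, threshold, and linear model here are our assumptions, not the study's 300-million-point AI model.

```python
import numpy as np

rng = np.random.default_rng(1)
temps = rng.uniform(20, 35, size=200)            # daily temperature, °C

def classify(sales, temps, threshold=0.5):
    """Label a district by the sensitivity of its sales to temperature."""
    slope = np.polyfit(temps, sales, 1)[0]       # sales change per °C
    return "climate-sensitive" if abs(slope) > threshold else "climate-neutral"

# Two hypothetical districts: one with flat sales, one where sales
# drop noticeably as temperature rises.
stable_sales = 100 + rng.normal(0, 1, size=200)
sensitive_sales = 100 - 2.0 * temps + rng.normal(0, 1, size=200)

label_a = classify(stable_sales, temps)
label_b = classify(sensitive_sales, temps)
```

A real resilience indicator would also fold in humidity, business category, and recovery dynamics, but the core idea is scoring how strongly sales co-vary with climate variables.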
< '3D Mesh Structure' that visually represents sales data for 426 regions in Seoul. The height and color of each region indicate the scale of sales. The left shows the distribution of sales in Seoul under actual temperature conditions, and the right shows the sales change predicted by AI when the temperature rises by 5 degrees. >
Visitors to the exhibition could select a region and business type on a real Seoul map and experience a system where the AI predicted sales changes in real-time based on future temperature rise scenarios.
This prediction model is a proprietary technology developed by KAIST, and plans are underway to expand cooperation with other major global cities, such as Boston and London. This research is expected to propose a new direction for establishing opening strategies for small business owners and developing urban climate risk response policies.
< Numerous visitors listening to explanations and experiencing the KAIST-MIT exhibition space >
The second project, 'Nature That Heals, Seoul,' is an extension of MIT's global project 'Feeling Nature' to Seoul. It combines urban environment data (Street View, maps, satellite images, etc.) with citizen survey data to train an AI to estimate the 'psychological green'—the actual psychological experience of green spaces felt by Seoul citizens.
This approach goes beyond simply calculating the area of trees or parks, offering new urban design directions that reflect citizens' emotional resilience and well-being. This research is expected to provide scientific evidence for future Seoul green space policies and locally tailored urban design.
The final project, 'Data Sonification,' is the world's first AI technology that translates over 300 million data points into sounds, like music, to be 'heard.' The AI uses data such as temperature, humidity, and sales to represent information through sound: for example, the pitch rises when the temperature goes up, and the sound lowers when sales decrease. This provides a new sensory experience of 'listening' to urban data through sound instead of sight.
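The pitch mapping described above can be sketched as a simple linear scale from temperature to frequency; the temperature range and frequency band here are our assumptions, not the exhibit's actual parameters.

```python
def temp_to_pitch(temp_c, t_min=-10.0, t_max=40.0, f_min=220.0, f_max=880.0):
    """Linearly map a temperature (°C) to a pitch in Hz (here A3..A5).

    Assumed ranges for illustration: -10..40 °C onto 220..880 Hz.
    """
    frac = (temp_c - t_min) / (t_max - t_min)
    frac = min(max(frac, 0.0), 1.0)       # clamp out-of-range readings
    return f_min + frac * (f_max - f_min)

# Rising temperature -> rising pitch, matching the exhibit's description.
cool_pitch = temp_to_pitch(0.0)
hot_pitch = temp_to_pitch(30.0)
```

Other variables (sales, humidity) can be mapped the same way onto loudness or timbre to layer several data streams into one audible scene.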
This technology is a prime example of 'Barrier-Free AI' (AI for All), an inclusive AI technology that helps people with visual impairments or children—who may have difficulty accessing visual information—to intuitively understand data.
< A visitor experiencing Data Sonification, the world's first AI technology that converts data into sound >
Man-ki Kim, Chairman of the Seoul AI Hub (Seoul AI Foundation), which sponsored this research, stated, "We have achieved meaningful results by analyzing the urban environment and citizens' lives with artificial intelligence through collaboration with world-class research institutions like KAIST and MIT," adding, "This research has laid the groundwork for understanding urban change from the perspective of citizens and connecting it to policy and daily life."
Director Yoonjin Yoon remarked, "This exhibition demonstrated that artificial intelligence can evolve beyond a technology that merely calculates the city to an intelligence that understands and empathizes with people and the city," and concluded, "We will create data and experiences together with citizens, and collaborate with various cities worldwide to open a more inclusive and sustainable urban future."
This achievement is a global collaborative research project in the AI sector involving the KAIST Urban AI Research Institute and the MIT Senseable City Lab, and was conducted with sponsorship from the Seoul AI Hub.
※Research Results Images/Videos: https://05970c0c.slw-6vy.pages.dev/
Mobility 2025 Technology Demonstration Day Held... Commercialization Achievements Unveiled
< Kitae Jang, Director of the Mobility Research Institute; Hyeong-sik Jeon, Vice Governor for Political Affairs of South Chungcheong Province; and demonstration officials >
KAIST's Mobility Research Institute announced on September 23rd that it held the "2025 Technology Demonstration Day" at the Naepo Knowledge Industry Center in Chungcheongnam-do to showcase successful cases of its research findings being adopted by industry. The event was organized to present the process of commercializing KAIST's accumulated mobility research achievements through collaboration with companies.
The KAIST Mobility Research Institute aims to solve our society's mobility problems by conducting industry-academia research in various technology fields, including autonomous driving, Urban Air Mobility (UAM/UAV), eco-friendly mobility technologies, as well as artificial intelligence (AI) and energy. This demonstration was the result of a project linked to a consignment from Chungcheongnam-do, and it showed a practical example of research achievements connecting with the local industry.
At the demonstration, achievements that have entered the commercialization stage were presented with collaborating companies, including faculty startups FutureEV Co., Ltd. (CEO: Kim Kyung-soo), Dochak Co., Ltd. (CEO: Kim In-hee), and alumnus startup NOTA Co., Ltd. (CEO: Chae Myung-soo). The six core technologies unveiled were: △Mobile Energy Storage System (ESS) Power Platform △Naepo Digital Twin △Autonomous Driving Robots Specialized for SMEs △Remote-Driving Valet Parking △Autonomous Driving Testbed △AI Computing Center.
< Image of the remote-controlled autonomous vehicle developed by Professor In-hee Kim >
The "Mobile Energy Storage System (ESS) Power Platform" is a technology led by Professor Lee Yoon-gu and co-developed with FutureEV Co., Ltd., ECOCAB Co., Ltd., Hanyang Electric Co., Ltd., and Uptech Co., Ltd. It's a solution that can establish a stable power grid in areas with difficult power supply, such as disaster sites or islands, proving its commercial potential in the eco-friendly power sector.
The "Naepo Digital Twin" was commercialized by a research team led by Senior Researcher Kim Tae-kyun in collaboration with Dochak Co., Ltd. It can simulate real-world city and traffic conditions in a 3D virtual environment for traffic monitoring, situation prediction, disaster response, and policy verification. It's gaining attention as a core technology for building smart cities.
The "Autonomous Driving Robots Specialized for SMEs" was developed by research teams led by Professors Kim Kyung-soo and Choi Geun-ha in collaboration with L-Line Co., Ltd. and Torrent Systems Co., Ltd. This autonomous logistics robot, optimized for the logistics environment of small and medium-sized enterprises, demonstrated precise movement and stacking of logistics racks inside a factory at the event, confirming the potential for innovation in the SME manufacturing sector.
The "Remote-Driving Valet Parking Technology" is being commercialized by Professor Kim In-hee in collaboration with Dochak Co., Ltd., Torrent Systems Co., Ltd., E-motion Co., Ltd., and the National Science and Technology Research Network KREONET (operated by the Korea Institute of Science and Technology Information). During the demonstration, a vehicle remotely controlled from Daejeon traveled to the Naepo Research Institute and completed parking at its destination, proving the stability and practicality of remote autonomous driving.
< Image of the KAIST Mobility Research Institute Technology Demonstration Day poster >
The "Autonomous Driving Testbed" is a platform built by Professors Ahn Hee-jin and Noh Min-kyun. It's an example of expanding research achievements in reduced-scale vehicle-based autonomous driving into a platform for education and industrial verification. The KAIST Mobility Research Institute plans to use this as a foundation for the "2025 KAIST Mobility Challenge Competition" next year to simultaneously foster next-generation talent and promote technology commercialization.
The "AI Computing Center" was unveiled by NOTA Co., Ltd., which is soon to be listed on KOSDAQ. The company introduced its RE100-based power system and AI optimization technology and presented its vision for collaboration with tenant companies, stating its goal to contribute to the expansion of the AI ecosystem.
Kitae Jang, Director of the KAIST Mobility Research Institute, stated, "This demonstration was an opportunity to show the concrete process of KAIST's research achievements being adopted by the industry." He added, "We will continue to lead the commercialization of future mobility and AI technologies and the development of local industries through close collaboration with local governments and companies."
KAIST President Kwang Hyung Lee emphasized, "KAIST's mission is to contribute to the nation and local communities through technological innovation. We find it meaningful to see our research achievements creating real change in the industry and will continue to lead global mobility innovation and the creation of new value through collaboration with companies and local governments."
KAIST Researchers Unveil an AI that Generates "Unexpectedly Original" Designs
< Photo 1. Professor Jaesik Choi, KAIST Kim Jaechul Graduate School of AI >
Text-based image generation models can now automatically create high-resolution, high-quality images from natural language descriptions alone. However, even when a representative model such as Stable Diffusion is given the prompt "creative," its ability to generate truly creative images remains limited. KAIST researchers have developed a technology that enhances the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary.
Professor Jaesik Choi's research team at KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training.
< Photo 2. Gayoung Lee, Researcher at NAVER AI Lab; Dahee Kwon, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Jiyeon Han, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Junho Kim, Researcher at NAVER AI Lab >
Professor Choi's research team developed a technology to enhance creative generation by amplifying the internal feature maps of text-based image generation models. They also discovered that shallow blocks within the model play a crucial role in creative generation. They confirmed that amplifying values in the high-frequency region after converting feature maps to the frequency domain can lead to noise or fragmented color patterns. Accordingly, the research team demonstrated that amplifying the low-frequency region of shallow blocks can effectively enhance creative generation.
Considering originality and usefulness as two key elements defining creativity, the research team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model.
With the developed algorithm, appropriate amplification of the internal feature maps of a pre-trained Stable Diffusion model enhanced creative generation without any additional classification data or training.
< Figure 1. Overview of the methodology researched by the development team. After converting the internal feature map of a pre-trained generative model into the frequency domain through Fast Fourier Transform, the low-frequency region of the feature map is amplified, then re-transformed into the feature space via Inverse Fast Fourier Transform to generate an image. >
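The core operation described in Figure 1 can be sketched in a few lines. The snippet below is an illustrative stand-alone sketch, not the team's actual implementation: the function name and the `alpha` and `radius` parameters are hypothetical choices. It moves a 2D feature map into the frequency domain with a Fast Fourier Transform, scales only the low-frequency region, and transforms back.

```python
import numpy as np

def amplify_low_freq(feature_map: np.ndarray, alpha: float = 1.5,
                     radius: float = 0.25) -> np.ndarray:
    """Amplify the low-frequency components of a 2D feature map.

    feature_map: (H, W) array, e.g. one channel of a diffusion U-Net block.
    alpha: amplification factor for low frequencies (alpha > 1 boosts them).
    radius: fraction of the spectrum (from the center) treated as low-frequency.
    """
    h, w = feature_map.shape
    # Forward FFT, with the zero-frequency component shifted to the center.
    spectrum = np.fft.fftshift(np.fft.fft2(feature_map))

    # Circular low-frequency mask around the spectrum center.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_freq = dist <= radius * min(h, w) / 2

    # Scale only the low-frequency region, leaving the rest untouched.
    amplified = spectrum * np.where(low_freq, alpha, 1.0)

    # Back to the feature space; the imaginary residue is numerical noise.
    return np.real(np.fft.ifft2(np.fft.ifftshift(amplified)))
```

In the paper, an operation of this kind is applied to the shallow blocks of the model, with the amplification value selected automatically per block.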
The research team quantitatively demonstrated, using various metrics, that their algorithm generates images more novel than those from existing models without significantly compromising usefulness.
In particular, they confirmed an increase in image diversity by mitigating the mode collapse problem that occurs in the SDXL-Turbo model, which was developed to significantly improve the image generation speed of the Stable Diffusion XL (SDXL) model. Furthermore, user studies confirmed a significant improvement in novelty relative to usefulness compared with existing methods.
Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, "This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature map manipulation."
They added, "This research makes it easy to generate creative images using only text from existing trained models. It is expected to provide new inspiration in various fields, such as creative product design, and contribute to the practical and useful application of AI models in the creative ecosystem."
< Figure 2. Application examples of the methodology researched by the development team. Various Stable Diffusion models generate novel images compared to existing generations while maintaining the meaning of the generated object. >
This research, co-authored by Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), a premier international conference in computer vision.
* Paper Title: Enhancing Creative Generation on Stable Diffusion-based Models
* DOI: https://doi.org/10.48550/arXiv.2503.23538
This research was supported by the KAIST-NAVER Ultra-creative AI Research Center, the Innovation Growth Engine Project Explainable AI, the AI Research Hub Project, and research on flexible evolving AI technology development in line with increasingly strengthened ethical policies, all funded by the Ministry of Science and ICT through the Institute for Information & Communications Technology Promotion. It also received support from the KAIST AI Graduate School Program and was carried out at the KAIST Future Defense AI Specialized Research Center with support from the Defense Acquisition Program Administration and the Agency for Defense Development.
Professor Hyun Myung's Team Wins First Place in a Challenge at ICRA by IEEE
< Photo 1. (From left) Daebeom Kim (Team Leader, Ph.D. student), Seungjae Lee (Ph.D. student), Seoyeon Jang (Ph.D. student), Jei Kong (Master's student), Professor Hyun Myung >
A team from the Urban Robotics Lab, led by Professor Hyun Myung of the KAIST School of Electrical Engineering, achieved a remarkable first-place overall victory in the Nothing Stands Still Challenge (NSS Challenge) 2025, held at the 2025 IEEE International Conference on Robotics and Automation (ICRA), the world's most prestigious robotics conference, from May 19 to 23 in Atlanta, USA.
The NSS Challenge was co-hosted by HILTI, a global construction company based in Liechtenstein, and Stanford University's Gradient Spaces Group. It is an expanded version of the HILTI SLAM* Challenge, held annually since 2021, and is considered one of the most prominent challenges at the 2025 IEEE ICRA.
*SLAM (Simultaneous Localization and Mapping): a technology by which robots, drones, autonomous vehicles, and other systems determine their own position while simultaneously building a map of their surroundings.
< Photo 2. A scene from the oral presentation on the winning team's technology (Speakers: Seungjae Lee and Seoyeon Jang, Ph.D. candidates of KAIST School of Electrical Engineering) >
This challenge primarily evaluates how accurately and robustly LiDAR scan data, collected at various times, can be registered in situations with frequent structural changes, such as construction and industrial environments. In particular, it is regarded as a highly technical competition because it deals with multi-session localization and mapping (Multi-session SLAM) technology that responds to structural changes occurring over multiple timeframes, rather than just single-point registration accuracy.
The Urban Robotics Lab team secured first place overall, surpassing National Taiwan University (3rd place) and Northwestern Polytechnical University of China (2nd place) by a significant margin, with their unique localization and mapping technology that solves the problem of registering LiDAR data collected across multiple times and spaces. The winning team will be awarded a prize of $4,000.
< Figure 1. Example of Multiway-Registration for Registering Multiple Scans >
The Urban Robotics Lab team independently developed a multiway-registration framework that can robustly register multiple scans even without prior connection information. This framework consists of an algorithm for summarizing feature points within scans and finding correspondences (CubicFeat), an algorithm for performing global registration based on the found correspondences (Quatro), and an algorithm for refining results based on change detection (Chamelion). This combination of technologies ensures stable registration performance based on fixed structures, even in highly dynamic industrial environments.
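For illustration, the pairwise building block underlying any such registration pipeline, estimating a rigid transform from matched point correspondences, can be sketched with the classic SVD-based (Kabsch/Umeyama) solution. This is a generic textbook sketch under assumed known correspondences, not the team's CubicFeat/Quatro/Chamelion implementation:

```python
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Estimate the rotation R and translation t aligning src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points (e.g. matched
    LiDAR feature points). Minimizes ||R @ src_i + t - dst_i|| via SVD.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Multiway registration chains and jointly refines many such pairwise estimates, which is where robustness to outlier correspondences and structural change becomes the hard part.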
< Figure 2. Example of Change Detection Using the Chamelion Algorithm>
LiDAR scan registration is a core component of SLAM in a wide range of autonomous systems, including self-driving vehicles, mobile robots, walking robots, and flying vehicles.
Professor Hyun Myung of the School of Electrical Engineering stated, "This award-winning technology demonstrates both academic value and industrial applicability by maximizing the accuracy of estimating the relative positions between different scans, even in complex environments. I am grateful to the students who kept challenging themselves and never gave up, even as many teams withdrew because of the high difficulty."
< Figure 3. Competition Result Board, Lower RMSE (Root Mean Squared Error) Indicates Higher Score (Unit: meters)>
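For reference, the RMSE metric used in the result board can be computed as follows. This is a minimal sketch; the function name and the assumption of 3D position errors in meters are ours, not the competition's exact evaluation code:

```python
import numpy as np

def rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root mean squared error between estimated and ground-truth
    positions, both (N, 3) arrays in meters. Lower is better."""
    err = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=-1))))
```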
The Urban Robotics Lab team first participated in the SLAM Challenge in 2022, winning second place among academic teams, and in 2023, they secured first place overall in the LiDAR category and first place among academic teams in the vision category.
KAIST's Pioneering VR Precision Technology & Choreography Tool Receive Spotlights at CHI 2025
Accurate pointing in virtual spaces is essential for seamless interaction. If pointing is not precise, selecting the desired object becomes challenging, breaking user immersion and reducing overall experience quality. KAIST researchers have developed a technology that offers a vivid, lifelike experience in virtual space, alongside a new tool that assists choreographers throughout the creative process.
KAIST (President Kwang-Hyung Lee) announced on May 13th that a research team led by Professor Sang Ho Yoon of the Graduate School of Culture Technology, in collaboration with Professor Yang Zhang of the University of California, Los Angeles (UCLA), has developed ‘T2IRay’, a precise pointing technology, and ‘ChoreoCraft’, a platform that enables choreographers to work more freely and creatively in virtual reality. These technologies received two Honorable Mention awards, recognizing the top 5% of papers, at CHI 2025*, the premier international conference in the field of human-computer interaction, hosted by the Association for Computing Machinery (ACM) from April 25 to May 1.
< (From left) PhD candidates Jina Kim and Kyungeun Jung along with Master's candidate, Hyunyoung Han and Professor Sang Ho Yoon of KAIST Graduate School of Culture Technology and Professor Yang Zhang (top) of UCLA >
T2IRay: Enabling Virtual Input with Precision
T2IRay introduces a novel input method that allows for precise object pointing in virtual environments by expanding traditional thumb-to-index gestures. This approach overcomes previous limitations, such as interruptions or reduced accuracy due to changes in hand position or orientation.
The technology uses a local coordinate system based on finger relationships, ensuring continuous input even as hand positions shift. It accurately captures subtle thumb movements within this coordinate system, integrating natural head movements to allow fluid, intuitive control across a wide range.
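The idea of a finger-relative local coordinate system can be illustrated with a small sketch. Everything below (building a frame from an index-finger direction and palm normal, the function names) is a hypothetical simplification of the paper's method, but it shows the key property: the thumb-tip reading stays the same when the whole hand translates or rotates.

```python
import numpy as np

def hand_frame(index_dir: np.ndarray, palm_normal: np.ndarray) -> np.ndarray:
    """Orthonormal hand-local frame (columns = axes) from two hand vectors."""
    x = index_dir / np.linalg.norm(index_dir)
    z = palm_normal - (palm_normal @ x) * x   # Gram-Schmidt: drop x component
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                        # completes a right-handed frame
    return np.stack([x, y, z], axis=1)

def thumb_in_local(thumb_tip, knuckle, index_dir, palm_normal):
    """Thumb-tip position expressed in the hand-local frame, so the
    reading is invariant to where the hand is or how it is oriented."""
    R = hand_frame(index_dir, palm_normal)
    return R.T @ (thumb_tip - knuckle)
```

Because the cursor input is read off in this local frame, moving or rotating the hand does not disturb the pointing signal, which is the continuity property the article describes.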
< Figure 1. T2IRay framework utilizing the delicate movements of the thumb and index fingers for AR/VR pointing >
Professor Sang Ho Yoon explained, “T2IRay can significantly enhance the user experience in AR/VR by enabling smooth, stable control even when the user’s hands are in motion.”
This study, led by first author Jina Kim, was supported by the Excellent New Researcher Support Project of the National Research Foundation of Korea under the Ministry of Science and ICT, as well as the University ICT Research Center (ITRC) Support Project of the Institute of Information and Communications Technology Planning and Evaluation (IITP).
▴ Paper title: T2IRay: Design of Thumb-to-Index Based Indirect Pointing for Continuous and Robust AR/VR Input
▴ Paper link: https://doi.org/10.1145/3706598.3713442
▴ T2IRay demo video: https://youtu.be/ElJlcJbkJPY
ChoreoCraft: Creativity Support through VR for Choreographers
In addition, Professor Yoon’s team developed ‘ChoreoCraft,’ a virtual reality tool designed to support choreographers by addressing the unique challenges they face, such as memorizing complex movements, overcoming creative blocks, and managing subjective feedback.
ChoreoCraft reduces reliance on memory by allowing choreographers to save and refine movements directly within a VR space, using a motion-capture avatar for real-time interaction. It also enhances creativity by suggesting movements that naturally fit with prior choreography and musical elements. Furthermore, the system provides quantitative feedback by analyzing kinematic factors like motion stability and engagement, helping choreographers make data-driven creative decisions.
< Figure 2. ChoreoCraft's approaches to encourage creative process >
Professor Yoon noted, “ChoreoCraft is a tool designed to address the core challenges faced by choreographers, enhancing both creativity and efficiency. In user tests with professional choreographers, it received high marks for its ability to spark creative ideas and provide valuable quantitative feedback.”
This research was conducted in collaboration with doctoral candidate Kyungeun Jung and master’s candidate Hyunyoung Han, alongside the Electronics and Telecommunications Research Institute (ETRI) and One Million Co., Ltd. (CEO Hye-rang Kim), with support from the Cultural and Arts Immersive Service Development Project by the Ministry of Culture, Sports and Tourism.
▴ Paper title: ChoreoCraft: In-situ Crafting of Choreography in Virtual Reality through Creativity Support Tools
▴ Paper link: https://doi.org/10.1145/3706598.3714220
▴ ChoreoCraft demo video: https://youtu.be/Ms1fwiSBjjw
*CHI (Conference on Human Factors in Computing Systems): The premier international conference on human-computer interaction, organized by the ACM, was held this year from April 25 to May 1, 2025.
KAIST & CMU Unveil Amuse, a Songwriting AI-Collaborator to Help Create Music
Wouldn't it be great if music creators had someone to brainstorm with, help them when they're stuck, and explore different musical directions together? Researchers of KAIST and Carnegie Mellon University (CMU) have developed AI technology similar to a fellow songwriter who helps create music.
KAIST (President Kwang-Hyung Lee) announced that a research team led by Professor Sung-Ju Lee of the School of Electrical Engineering, in collaboration with CMU, has developed Amuse, an AI-based music creation support system. The research was presented at the ACM Conference on Human Factors in Computing Systems (CHI), one of the world's top conferences in human-computer interaction, held in Yokohama, Japan from April 26 to May 1, where it received the Best Paper Award, given to only the top 1% of all submissions.
< (From left) Professor Chris Donahue of Carnegie Mellon University, Ph.D. Student Yewon Kim and Professor Sung-Ju Lee of the School of Electrical Engineering >
The system developed by Professor Sung-Ju Lee’s research team, Amuse, is an AI-based system that converts various forms of inspiration such as text, images, and audio into harmonic structures (chord progressions) to support composition.
For example, if a user inputs a phrase, image, or sound clip such as “memories of a warm summer beach”, Amuse automatically generates and suggests chord progressions that match the inspiration.
Unlike existing generative AI, Amuse respects the user's creative flow and naturally encourages creative exploration through an interactive approach that allows AI suggestions to be flexibly integrated and modified.
The core technology of the Amuse system is a generation method that blends two approaches: a large language model generates chord progressions from the user's prompt and inspirations, while a second AI model, trained on real music data, filters out awkward or unnatural results through rejection sampling.
< Figure 1. Amuse system configuration. Music keywords are extracted from user input, then a large language model generates chord progressions that are refined through rejection sampling (left). Chord extraction from audio input is also possible (right). The bottom shows an example visualizing the structure of the generated chords. >
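The generate-then-filter loop described above is ordinary rejection sampling and can be sketched as follows. The proposer and critic here are toy stand-ins (a hard-coded candidate pool and a diatonic-chord heuristic), not Amuse's actual LLM or music-trained model:

```python
import random

# Toy candidate pool standing in for LLM outputs.
CANDIDATES = [
    ["C", "G", "Am", "F"], ["C", "F#", "B", "Eb"],
    ["Am", "F", "C", "G"], ["C", "C", "C", "C"],
]

def propose_progression(prompt: str) -> list:
    """Stand-in for the LLM: sample a chord progression for the prompt."""
    return random.choice(CANDIDATES)

def critic_score(progression: list) -> float:
    """Stand-in for the music-trained critic: naturalness score in [0, 1].
    Toy heuristic: diatonic chords in C major score higher."""
    diatonic = {"C", "Dm", "Em", "F", "G", "Am"}
    return sum(ch in diatonic for ch in progression) / len(progression)

def generate_with_rejection(prompt: str, threshold: float = 0.75,
                            max_tries: int = 50):
    """Keep sampling until the critic accepts: rejection sampling."""
    for _ in range(max_tries):
        candidate = propose_progression(prompt)
        if critic_score(candidate) >= threshold:
            return candidate
    return None  # give up if nothing passes within the budget
```

The effect is that fluent but musically awkward LLM outputs are discarded before the user ever sees them, while accepted progressions still originate from the user's own prompt.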
The research team conducted a user study with practicing musicians and found that Amuse has strong potential as a creative companion, a form of co-creative AI in which people and AI collaborate, rather than a generative AI that simply assembles a song on its own.
The paper, co-authored by Ph.D. student Yewon Kim and Professor Sung-Ju Lee of the KAIST School of Electrical Engineering together with Professor Chris Donahue of Carnegie Mellon University, demonstrated the potential of creative AI system design in both academia and industry.
※ Paper title: Amuse: Human-AI Collaborative Songwriting with Multimodal Inspirations
※ DOI: https://doi.org/10.1145/3706598.3713818
※ Research demo video: https://youtu.be/udilkRSnftI?si=FNXccC9EjxHOCrm1
※ Research homepage: https://nmsl.kaist.ac.kr/projects/amuse/
Professor Sung-Ju Lee said, “Recent generative AI technology has raised concerns about directly imitating copyrighted content, thereby violating creators' rights, and about producing results unilaterally regardless of the creator's intention. Aware of this trend, our team paid attention to what creators actually need and focused on designing an AI system centered on the creator.”
He continued, “Amuse is an attempt to explore collaboration with AI while the creator retains the initiative, and we expect it to serve as a starting point for a more creator-friendly direction in the future development of music creation tools and generative AI systems.”
This research was conducted with the support of the National Research Foundation of Korea with funding from the government (Ministry of Science and ICT). (RS-2024-00337007)
KAIST Develops Insect-Eye-Inspired Camera Capturing 9,120 Frames Per Second
< (From left) Bio and Brain Engineering PhD Student Jae-Myeong Kwon, Professor Ki-Hun Jeong, PhD Student Hyun-Kyung Kim, PhD Student Young-Gil Cha, and Professor Min H. Kim of the School of Computing >
The compound eyes of insects can detect fast-moving objects in parallel and, in low-light conditions, enhance sensitivity by integrating signals over time to determine motion. Inspired by these biological mechanisms, KAIST researchers have successfully developed a low-cost, high-speed camera that overcomes the limitations of frame rate and sensitivity faced by conventional high-speed cameras.
KAIST (represented by President Kwang Hyung Lee) announced on the 16th of January that a research team led by Professors Ki-Hun Jeong (Department of Bio and Brain Engineering) and Min H. Kim (School of Computing) has developed a novel bio-inspired camera capable of ultra-high-speed imaging with high sensitivity by mimicking the visual structure of insect eyes.
High-quality imaging under high-speed and low-light conditions is a critical challenge in many applications. While conventional high-speed cameras excel in capturing fast motion, their sensitivity decreases as frame rates increase because the time available to collect light is reduced.
To address this issue, the research team adopted an approach similar to insect vision, utilizing multiple optical channels and temporal summation. Unlike traditional monocular camera systems, the bio-inspired camera employs a compound-eye-like structure that allows for the parallel acquisition of frames from different time intervals.
< Figure 1. (A) Vision in a fast-eyed insect. Reflected light from swiftly moving objects sequentially stimulates the photoreceptors along the individual optical channels called ommatidia, of which the visual signals are separately and parallelly processed via the lamina and medulla. Each neural response is temporally summed to enhance the visual signals. The parallel processing and temporal summation allow fast and low-light imaging in dim light. (B) High-speed and high-sensitivity microlens array camera (HS-MAC). A rolling shutter image sensor is utilized to simultaneously acquire multiple frames by channel division, and temporal summation is performed in parallel to realize high speed and sensitivity even in a low-light environment. In addition, the frame components of a single fragmented array image are stitched into a single blurred frame, which is subsequently deblurred by compressive image reconstruction. >
During this process, light is accumulated over overlapping time periods for each frame, increasing the signal-to-noise ratio. The researchers demonstrated that their bio-inspired camera could capture objects up to 40 times dimmer than those detectable by conventional high-speed cameras.
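The sensitivity gain from temporal summation follows from basic noise statistics: summing N frames multiplies the signal by N but uncorrelated noise only by √N, so the signal-to-noise ratio improves by √N. A minimal simulation (all numbers are illustrative, not the camera's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a dim, static scene: a tiny signal buried in sensor noise.
signal = 0.05        # per-frame signal level (arbitrary units)
noise_sigma = 0.5    # per-frame noise standard deviation
n_frames = 64        # number of frames summed per output frame

frames = signal + noise_sigma * rng.standard_normal((n_frames, 32, 32))

single = frames[0]            # one short exposure: signal invisible
summed = frames.sum(axis=0)   # temporal summation across frames

# Signal grows by n_frames, noise std only by sqrt(n_frames),
# so SNR improves by sqrt(n_frames).
snr_single = signal / noise_sigma
snr_summed = n_frames * signal / (np.sqrt(n_frames) * noise_sigma)
print(snr_summed / snr_single)   # ~8, i.e. sqrt(64)
```

In the camera this summation happens across overlapping time windows in parallel optical channels, which is why sensitivity improves without sacrificing frame rate.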
The team also introduced a "channel-splitting" technique that boosts the camera's speed, achieving frame rates thousands of times faster than the native frame rate of the packaged image sensor. Additionally, a compressive image reconstruction algorithm was employed to eliminate the blur caused by frame integration and recover sharp images.
The resulting bio-inspired camera is less than one millimeter thick and extremely compact, capable of capturing 9,120 frames per second while providing clear images in low-light conditions.
< Figure 2. A high-speed, high-sensitivity biomimetic camera packaged in an image sensor. It is made small enough to fit on a finger, with a thickness of less than 1 mm. >
The research team plans to extend this technology to develop advanced image processing algorithms for 3D imaging and super-resolution imaging, aiming for applications in biomedical imaging, mobile devices, and various other camera technologies.
Hyun-Kyung Kim, a doctoral student in the Department of Bio and Brain Engineering at KAIST and the study's first author, stated, “We have experimentally validated that the insect-eye-inspired camera delivers outstanding performance in high-speed and low-light imaging despite its small size. This camera opens up possibilities for diverse applications in portable camera systems, security surveillance, and medical imaging.”
< Figure 3. Rotating plate and flame captured using the high-speed, high-sensitivity biomimetic camera. The rotating plate at 1,950 rpm was accurately captured at 9,120 fps. In addition, the pinch-off of the flame with a faint intensity of 880 µlux was accurately captured at 1,020 fps. >
This research was published in the international journal Science Advances in January 2025 (Paper Title: “Biologically-inspired microlens array camera for high-speed and high-sensitivity imaging”).
DOI: https://doi.org/10.1126/sciadv.ads3389
This study was supported by the Korea Research Institute for Defense Technology Planning and Advancement (KRIT) of the Defense Acquisition Program Administration (DAPA), the Ministry of Science and ICT, and the Ministry of Trade, Industry and Energy (MOTIE).
Professor Jimin Park and Dr. Inho Kim join the ranks of the 2024 "35 Innovators Under 35" by the MIT Technology Review
< (From left) Professor Jimin Park of the Department of Chemical and Biomolecular Engineering and Dr. Inho Kim, a graduate of the Department of Materials Science and Engineering >
KAIST (represented by President Kwang-Hyung Lee) announced on the 13th of September that Professor Jimin Park from KAIST’s Department of Chemical and Biomolecular Engineering and Dr. Inho Kim, a graduate from the Department of Materials Science and Engineering (currently a postdoctoral researcher at Caltech), were selected by the MIT Technology Review as the 2024 "35 Innovators Under 35”.
The MIT Technology Review, first published in 1899 by the Massachusetts Institute of Technology, is the world’s oldest and most influential magazine on science and technology, offering in-depth analysis across various technology fields, expanding knowledge and providing insights into cutting-edge technology trends.
Since 1999, the magazine has annually named 35 innovators under the age of 35, recognizing young talents making groundbreaking contributions in modern technology fields. The recognition is globally considered a prestigious honor and a dream for young researchers in the science and technology community.
< Image 1. Introduction for Professor Jimin Park at the Meet 35 Innovators Under 35 Summit 2024 >
Professor Jimin Park is developing next-generation bio-interfaces that link artificial materials with living organisms, and is engaged in advanced research in areas such as digital healthcare and carbon-neutral compound manufacturing technologies. In 2014, Professor Park was also recognized as one of the ‘Asia Pacific Innovators Under 35’ by the MIT Technology Review, which highlights young scientists in the Asia-Pacific region.
Professor Park responded, “It’s a great honor to be named as one of the young innovators by the MIT Technology Review, a symbol of innovation with a long history. I will continue to pursue challenging, interdisciplinary research to develop next-generation interfaces that seamlessly connect artificial materials and living organisms, from atomic to system levels.”
< Image 2. Introduction for Dr. Inho Kim as the 2024 Innovator of Materials Science for 35 Innovators Under 35 >
Dr. Inho Kim, who earned his PhD from KAIST in 2020 under the supervision of Professor Sang Ouk Kim from the Department of Materials Science and Engineering, recently succeeded in developing a new artificial muscle using composite fibers. This new material is considered the most human-like muscle ever reported in scientific literature, while also being 17 times stronger than natural human muscle.
Dr. Kim is researching the application of artificial muscle fibers in next-generation wearable assistive devices that move more naturally, like humans or animals, noting that the fibers are lightweight and flexible and exhibit conductivity during contraction, enabling real-time feedback. Recognized for this potential, Dr. Inho Kim was named one of the '35 Innovators Under 35' this year, making him the first researcher to receive the honor for research conducted at KAIST with a PhD earned in Korea.
Dr. Kim stated, “I aim to develop robots using these new materials that can replace today’s expensive and heavy exoskeleton suits by eliminating motors and rigid frames. This will significantly reduce costs and allow for better customization, making cutting-edge technology more accessible to those who need it most, like children with cerebral palsy.”