KAIST Develops Self-Regenerating Catalyst That Restores Its Own Performance, Opening a Breakthrough for CO₂ Conversion Technology
<(From left) Professor Dong Young Chung, Ph.D. candidates Hongmin An and Hanjoo Kim>
Technologies that convert carbon dioxide (CO₂) emitted from factories and power plants into useful chemical feedstocks are considered key to achieving carbon neutrality. However, rapid degradation of catalyst performance has long hindered commercialization. KAIST researchers have now developed a “self-regenerating” catalyst that restores its activity during operation, offering a potential solution to this challenge.
KAIST (President Kwang Hyung Lee) announced on the 11th of March that a research team led by Professor Dong Young Chung from the Department of Chemical and Biomolecular Engineering has identified the fundamental cause of catalyst degradation in electrochemical reactions that convert CO₂ into useful materials and has developed a new design strategy that allows catalysts to maintain their active state during the reaction.
<Schematic Illustration of Copper Catalyst Reconstruction>
The research team focused particularly on copper (Cu) catalysts, which are widely used in CO₂ conversion reactions. Copper catalysts are known not to simply degrade during reactions but instead undergo a process called surface reconstruction, in which their surface structure continuously changes. The study revealed that the performance and lifetime of the catalyst vary significantly depending on how this reconstruction occurs.
The researchers discovered that copper catalyst reconstruction occurs mainly through two different mechanisms. The first involves formation and reduction of oxide layers on the catalyst surface. While this temporarily increases catalytic activity, it ultimately leads to long-term degradation of catalyst performance.
The second mechanism involves partial dissolution of the catalyst metal into the electrolyte followed by redeposition onto the catalyst surface. During this process, new reactive sites—known as active sites—are continuously created on the catalyst surface.
Based on this mechanism, the team proposed a method that allows the catalyst to maintain its active state during the reaction. By introducing a trace amount of copper ions into the electrolyte, dissolution and redeposition of copper occur in a balanced cycle on the catalyst surface. This continuous cycle generates new active sites, enabling the catalyst to maintain stable performance over extended periods.
Importantly, this technology can be implemented without complex additional processes or high-voltage conditions, significantly reducing energy consumption while enabling stable production of high-value C₂ compounds such as ethylene and ethanol. C₂ compounds are molecules containing two carbon atoms and are industrially important chemicals used as feedstocks for plastics, fuels, and other materials.
This research is significant because it proposes a new design concept in which catalysts are not merely optimized at the initial stage but are engineered to maintain their optimal state throughout the reaction process. The concept is expected to be applicable not only to CO₂ conversion technologies but also to a wide range of electrochemical energy conversion systems.
Professor Dong Young Chung stated, “This research approached catalyst degradation not as an inevitable phenomenon but as a controllable process,” adding, “We proposed a new strategy that allows catalysts to continuously maintain optimal activity during the reaction.”
The study was led by Hanjoo Kim, a doctoral student at KAIST, and Hongmin An, a combined master’s-doctoral student, as co-first authors. The research was published online on February 5 in the Journal of the American Chemical Society (JACS), one of the world’s most prestigious journals in chemistry.
※ Paper title: “Dynamic Interface Engineering via Mechanistic Understanding of Copper Reconstruction in Electrochemical CO₂ Reduction Reaction” DOI: 10.1021/jacs.5c16244
This research was supported by the Global Young Connect Program for Materials and the National Strategic Materials Technology Development Program funded through the National Research Foundation of Korea.
KAIST Team Led by Dongwon Lee Wins Grand Prize at the 2nd Global Quantum AI Competition
< (From left) M.S. candidate Dongwon Lee from the School of Electrical Engineering, Ph.D. candidate Jaehun Han from the Graduate School of Quantum Science and Technology >
"Team Yangja-jorim," consisting of Dongwon Lee, Gyungjun Kim, and Jaehun Han, has been honored with the Grand Prize at the '2026 2nd Global Quantum AI Competition.' The event was hosted and organized by NORMA, a specialized quantum computing company.
This global competition was designed to expand hands-on experience with quantum cloud services and to discover next-generation talent in the field of quantum artificial intelligence. The event spanned approximately 70 days, beginning with the preliminary opening ceremony held at Korea University’s Hana Square on December 17 last year. The final winners were announced during an awards ceremony held at NORMA's headquarters on the 27th of last month.
The competition attracted significant interest from quantum technology talent worldwide, including university students, developers, and researchers. A total of 137 teams participated in the preliminaries, with the top 10 teams advancing to the finals—a competitive ratio of approximately 13.7 to 1.
< A representative accepted the prize on behalf of the team at the awards ceremony of the 2nd Global Quantum AI Competition. >
In the final round, participants were presented with four generative problems utilizing the Quantum Circuit Born Machine (QCBM) model. To overcome the current limitations of quantum machine learning, the contestants were tasked with designing and validating Quantum-Classical Hybrid Generative AI models that integrate classical techniques. Notably, the final problem provided an opportunity to verify the proposed methods using a real Quantum Processing Unit (QPU) from Rigetti Computing, a leading global quantum computing firm.
The judging process employed a double-blind system, where the identities of both evaluators and participants remained undisclosed to ensure maximum fairness and credibility.
"Through this competition, we were able to explore the research potential of the quantum AI field more deeply," said KAIST's Team Yangja-jorim in their acceptance speech. "We hope to continue contributing to the advancement of quantum technology through consistent research and new challenges."
KAIST Explores Solutions for African Youth Employment with World Bank and African Union
< Group photo of meeting participants >
KAIST announced on the 6th that the 'Jobs for Youth in Africa Knowledge Exchange' platform was held in Nairobi, Kenya, from March 3 to 5 (local time). The event was hosted by the Kenyan government and co-organized by the World Bank Group, the African Union, and the KAIST Global Center for Development and Strategy (G-CODEs).
As a high-level policy implementation platform dedicated to addressing youth employment challenges in Africa, the event drew approximately 200 participants, including government officials from over 20 African nations, international organizations, the private sector, academia, and development cooperation partners. KAIST participated as a key global partner linking technology and policy, presenting innovation models for employment systems based on digital and Artificial Intelligence (AI) technologies.
< Scene from the meeting hosted by the Kenyan government >
With Africa’s youth population projected to double by 2050, the continent faces significant hurdles such as high unemployment rates and informal employment. This event marked the second face-to-face meeting of the 'Jobs for Youth in Africa Community of Practice (CoP),' which was launched in Kigali, Rwanda, in 2025. The meeting aimed to share policy experiences among member states and materialize scalable implementation models. Salim Mvurya, Kenya's Cabinet Secretary for Youth Affairs, Creative Economy, and Sports, attended the opening ceremony and emphasized that youth job creation is a critical priority at both national and continental levels.
The program focused on several key themes:
Evidence-based youth employment strategies
Innovation in employment systems through digital and AI technologies
Improving labor market outcomes through Recognition of Prior Learning (RPL)
Business environment reforms and strengthening value chain linkages
Notably, in the session titled "Digital and AI-based Employment System Innovation," Professor Kyung Ryul Park of KAIST shared Korea’s digital transformation experiences and AI application cases, proposing directions for data-driven policy design and the development of technology-based employment platforms. Additionally, KAIST Professor Ga-young Park facilitated mutual learning and connected cases of scalable youth employment projects across countries during the "Global Cafe Session."
< Professor Kyung Ryul Park of KAIST delivering a presentation >
Participants visited the project site of "National Youth Opportunities Towards Advancement (NYOTA)," an initiative pursued by the Kenyan government and the World Bank. There, they observed a comprehensive youth employment model that integrates vocational training, job matching, and entrepreneurship support. The site visit served as a practical learning opportunity to share the processes of policy design and execution.
Since last year, KAIST has been involved in digital innovation projects for youth employment in East Africa through the Korea-World Bank Partnership Facility (KWPF). Through this event, the university reaffirmed its status as a global cooperation hub leading technology-based policy innovation.
"The issue of youth employment is a structural challenge that combines digital transformation, industrial strategy, and educational reform," stated Professor Kyung Ryul Park. "KAIST will continue to present actionable policy models based on data and technology while strengthening international cooperation."
This Knowledge Exchange platform is evaluated as a significant milestone that reaffirmed the African youth employment agenda as a core priority of international cooperation and solidified the foundation for enhancing policy implementation capabilities. A follow-up workshop is scheduled to be held early next year at the Kenya Advanced Institute of Science and Technology (Kenya-AIST) campus in Konza, Nairobi, which is modeled after KAIST.
KAIST Develops mRNA Platform That Remains Effective Even in Aging and Obesity
<(From left) Dr. Subin Yoon, Ph.D. candidate Hyeonggon Cho, Prof. Jae-Hwan Nam, Prof. Young-suk Lee>
Since the COVID-19 pandemic, mRNA vaccines have gained attention as a next-generation pharmaceutical technology. mRNA therapeutics work by delivering genetic instructions that enable cells to produce specific proteins for therapeutic effects. However, their efficacy has been reported to decline in elderly individuals or patients with obesity. To address this limitation, Korean researchers have newly designed a key regulatory region of mRNA that improves therapeutic protein production efficiency, developing a next-generation mRNA platform that maintains effectiveness even in aging and obesity conditions.
KAIST (President Kwang Hyung Lee) announced on the 10th of March that a joint research team led by Professor Young-suk Lee of the Department of Bio and Brain Engineering and Professor Jae-Hwan Nam of The Catholic University of Korea (President Jun-Gyu Choi) has developed a new mRNA platform by precisely designing the sequence of the 5′ untranslated region (5′UTR)*, a key regulatory region of mRNA.
*5′ untranslated region (5′UTR): A region of mRNA that initiates and regulates protein production. The design of this region influences both the amount and speed of protein synthesis.
The research team analyzed large-scale bioinformatics datasets to identify 5′UTR sequences that enable proteins to be produced more efficiently across diverse cellular environments. When applied, the designed sequences significantly enhanced protein production and immune responses even in preclinical models of aging and obesity.
mRNA is a long single-stranded RNA molecule that serves as the blueprint for producing proteins required by the body. It consists of several components: the 5′UTR, which initiates and regulates the rate of protein production; the coding sequence (CDS), which contains the genetic information for a specific protein; the 3′ untranslated region (3′UTR), which helps maintain mRNA stability within cells; and the poly(A) tail, which further enhances stability and supports protein synthesis.
Among these components, the 5′UTR and 3′UTR do not determine the type of protein produced, but they play a critical role in regulating how efficiently the protein is synthesized. For this reason, these regions are receiving increasing attention as key bioengineering platforms for improving the performance of various mRNA therapeutics, including vaccines and treatments.
<Schematic Diagram of mRNA Therapeutic Design and Validation Using Bioinformatics>
To identify highly efficient 5′UTR sequences capable of promoting protein production across multiple tissues and cellular environments, the team conducted an integrated analysis of large-scale biological datasets. This included multiple analytical approaches such as RNA sequencing (RNA-seq) for analyzing gene activity across tissues, single-cell RNA sequencing (scRNA-seq) for examining gene expression at the individual cell level, and ribosome profiling (Ribo-seq) for measuring actual protein translation efficiency.
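The core quantity such an integrated analysis yields is translation efficiency, commonly estimated as the ratio of ribosome footprint density (from Ribo-seq) to mRNA abundance (from RNA-seq). The following is an illustrative sketch of how candidate 5′UTRs could be ranked on that basis; the data values and names are hypothetical, and this is not the team's actual pipeline:

```python
# Illustrative sketch (hypothetical data, not the study's pipeline):
# rank candidate 5'UTRs by translation efficiency (TE), estimated as
# ribosome footprint density (Ribo-seq) / mRNA abundance (RNA-seq).

def translation_efficiency(ribo_density: float, rna_abundance: float) -> float:
    """TE = ribosome footprint density divided by mRNA abundance."""
    if rna_abundance <= 0:
        raise ValueError("mRNA abundance must be positive")
    return ribo_density / rna_abundance

# Hypothetical normalized read densities for three candidate 5'UTRs.
candidates = {
    "UTR-A": {"ribo": 120.0, "rna": 80.0},   # TE = 1.5
    "UTR-B": {"ribo": 95.0,  "rna": 100.0},  # TE = 0.95
    "UTR-C": {"ribo": 60.0,  "rna": 30.0},   # TE = 2.0
}

# Sort candidates so the most efficiently translated 5'UTR comes first.
ranked = sorted(
    candidates,
    key=lambda name: translation_efficiency(
        candidates[name]["ribo"], candidates[name]["rna"]
    ),
    reverse=True,
)
print(ranked)  # highest-TE candidate first
```

In practice such a ranking would be computed across many tissues and cell types at once, which is why the team combined bulk RNA-seq, scRNA-seq, and Ribo-seq rather than relying on any single dataset.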
The researchers also focused on the fact that in aging or obesity conditions, cells often experience high levels of stress—particularly oxidative stress—which can reduce their ability to synthesize proteins. When the newly designed mRNA therapeutics were applied to preclinical models of aging and obesity, the results showed significantly improved protein production and immune responses compared with existing approaches. This research is expected to be applicable not only to mRNA vaccines but also to a wide range of biopharmaceutical technologies, including gene therapies and immunotherapies.
<Multimodal Bio–Big Data Analysis–Based mRNA Therapeutic Design (AI-Generated Image)>
Professor Young-suk Lee of KAIST Department of Bio and Brain Engineering stated, “This study identified a design strategy that enables mRNA to produce proteins more efficiently by analyzing large-scale biological data,” adding, “This technology will provide an important foundation for ensuring that mRNA vaccines and therapeutics remain effective even in environments where drug efficacy may decline, such as in elderly or obese patients.”
In this study, Dr. Subin Yoon from The Catholic University of Korea and doctoral candidate Hyeonggon Cho from KAIST participated as co-first authors. The research findings were published online on January 2 in the internationally renowned journal Molecular Therapy (IF = 12.0), a leading journal in gene and cell therapy.
※ Paper title: "Designing 5′UTR sequences improves the capacity of mRNA therapeutics in preclinical models of aging and obesity" DOI: https://doi.org/10.1016/j.ymthe.2025.12.060
This research was supported by the Excellent Young Researcher Program and the Bio-Medical Technology Development Program of the National Research Foundation of Korea funded by the Ministry of Science and ICT, the Infectious Disease Response Innovative Technology Support Program of the Ministry of Food and Drug Safety, and the Infectious Disease Prevention and Therapeutics Technology Development Program of the Korea Health Industry Development Institute.
Professor Kuk-Jin Yoon’s Research Team at the Department of Mechanical Engineering Achieves Landmark Success with 10 Papers Accepted at CVPR 2026
<Professor Kuk-Jin Yoon from the Department of Mechanical Engineering>
Professor Kuk-Jin Yoon’s research team from our university’s Department of Mechanical Engineering has once again demonstrated its overwhelming academic prowess with a total of 10 papers, on which team members serve as lead authors, accepted at the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2026 (CVPR 2026).
CVPR is the most influential international conference in the fields of artificial intelligence and visual intelligence. Since its inception in 1983, it has selected outstanding research through a rigorous peer-review process every year. For CVPR 2026, a total of 16,092 papers were submitted worldwide, with 4,090 accepted, resulting in a competitive acceptance rate of approximately 25.42%. Achieving 10 accepted papers as lead or corresponding authors from a single laboratory is regarded as an exceptionally rare and world-class feat.
Professor Kuk-Jin Yoon’s team conducts extensive research with the ultimate goal of achieving human-level visual intelligence. The papers accepted this year cover cutting-edge topics in computer vision, including:
Event camera-based technologies
Perception technologies for autonomous driving
AI optimization and adaptation techniques
This achievement follows the team's remarkable success at ICCV 2025 last year, where they published 12 papers as lead/corresponding authors. The results at CVPR 2026 further solidify the laboratory's position as a global hub for pioneering computer vision research. The research team plans to continue contributing to the advancement of future AI technologies by tackling challenging research that transcends the limitations of existing methods.
Meanwhile, CVPR 2026 is scheduled to be held in Denver, Colorado, USA, from June 3 to June 7.
<CVPR 2026 (Denver, USA)>
KAIST Develops Brain-Like AI… Thinks One More Time Even When Predictions Are Wrong
<(From left) Professor Sang Wan Lee, Myoung Hoon Ha, and Dr. Yoondo Sung>
Artificial intelligence now plays Go, paints pictures, and even converses like a human. However, there remains a decisive difference: AI requires far more electricity than the human brain to operate. Scientists have long asked the question, “How can the brain learn so intelligently using so little energy?” KAIST researchers have moved one step closer to the answer.
KAIST (President Kwang Hyung Lee) announced on the 29th that a research team led by Distinguished Professor Sang Wan Lee of the Department of Brain and Cognitive Sciences has developed a new technology that applies the learning principles of the human brain to deep learning, enabling stable training even in deep artificial intelligence models.
Our brain does not passively receive the world. Instead of merely perceiving what is happening in the present, it first predicts what will happen next and, when reality differs from that prediction, adjusts itself to reduce the difference (i.e., prediction error). This is similar to anticipating an opponent’s next move in Go and changing strategy if the prediction turns out to be wrong. This mode of information processing is known as “Predictive Coding.”
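The predict-compare-adjust loop described above can be illustrated with a toy sketch (a minimal illustration of the general predictive-coding idea, not the team's model): the system predicts the next input, measures the prediction error, and nudges its internal estimate to shrink that error.

```python
# Toy illustration of predictive coding (not the study's actual model):
# predict the next observation, measure the prediction error, and update
# the internal estimate to reduce that error.

def predictive_coding_step(estimate: float, observation: float,
                           learning_rate: float = 0.5) -> tuple[float, float]:
    error = observation - estimate      # prediction error
    estimate += learning_rate * error   # adjust estimate to reduce the error
    return estimate, error

estimate = 0.0
for observation in [1.0, 1.0, 1.0, 1.0, 1.0]:
    estimate, error = predictive_coding_step(estimate, observation)

# After a few repetitions of the same input, the estimate converges toward
# the observation and the prediction error shrinks geometrically.
print(f"estimate={estimate}, last_error={error}")
```

Each pass halves the remaining error, which is the "adjust itself to reduce the difference" behavior the article describes; the paper's contribution concerns keeping this kind of error signal stable as networks grow deep.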
< Predictive Coding (PC) Module >
Scientists have attempted to apply this principle to AI, but encountered difficulties. As neural networks become deeper, errors tend to concentrate in specific layers or vanish altogether, repeatedly leading to performance degradation.
The research team mathematically identified the cause of this problem and proposed a new solution. The key idea is simple: instead of predicting only the final outcome, the AI is designed to also predict how its prediction errors will change in the future. The team refers to this as “Meta Prediction.” In simple terms, it is an AI that “thinks once more about its mistakes.” When this method was applied, learning proceeded stably in deep neural networks without halting.
<Analysis of Instability in Predictive Coding Model Errors>
The experimental results were also impressive. In 29 out of 30 experiments, the proposed method achieved higher accuracy than the current standard AI training method, backpropagation. Backpropagation is the representative learning method in which AI “goes backward by the amount of error and corrects it.”
Conventional AI training methods (backpropagation) require tightly interconnected layers, meaning the entire network must be computed and updated simultaneously. In contrast, this new approach demonstrates that, like the brain, large AI models can be effectively trained even when learning occurs in a distributed and partially independent manner.
<Performance Comparison of Predictive Coding Models>
This technology is expected to expand into various fields where power efficiency is critical, including neuromorphic computing, robot AI that must adapt to changing environments, and edge AI operating within devices.
Distinguished Professor Sang Wan Lee stated, “The key to this research is not simply imitating the structure of the brain, but enabling AI to follow the brain’s learning principles themselves,” adding, “We have opened the possibility of artificial intelligence that learns efficiently like the brain.”
This study was conducted with Dr. Myoung Hoon Ha as the first author and Professor Sang Wan Lee as the corresponding author. The paper was accepted to the International Conference on Learning Representations (ICLR 2026) and was published online on January 26.
※ Paper title: “Stable and Scalable Deep Predictive Coding Networks with Meta Prediction Errors”
Original paper: https://openreview.net/forum?id=kE5jJUHl9i&noteId=e6T5T9cYqO
This research was supported by the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning & Evaluation (IITP) through the Digital Global Research Support Program (joint research with Microsoft Research), the Samsung Electronics SAIT NPRC Program, and the SW Star Lab Program.
Designing the Heart of Hydrogen Cars with AI... Development of Next-Generation Super Catalyst
<(From left) KAIST Ph.D. Candidate HyunWoo Chang, Professor EunAe Cho. (Top, from left) Seoul National University Professor Won Bo Lee, Dr. Jae Hyun Ryu.>
In the era of climate crisis, hydrogen vehicles are emerging as an alternative for eco-friendly mobility. However, the fuel cell, known as the ‘heart of the hydrogen car,’ still faces limitations of high cost and short lifespan. The core cause is the platinum catalyst. While it is a decisive material for generating electricity, the reaction is slow, performance degrades over time, and manufacturing costs are high. Korean researchers have presented a clue to solving this difficult problem.
KAIST announced on February 26th that the research team led by Professor EunAe Cho of the Department of Materials Science and Engineering, together with the team of Professor Won Bo Lee of the School of Chemical and Biological Engineering at Seoul National University, has developed a technology that predicts the ‘atomic arrangement’ tendency of catalysts using artificial intelligence (AI).
This technology is akin to calculating beforehand which combination is advantageous before assembling a puzzle. By having AI first calculate how quickly metal atoms arrange themselves, catalysts with better performance can be designed efficiently. The core finding of this research is that AI revealed the decisive role zinc plays in platinum-cobalt atomic ordering.
<Schematic diagram of AI-based atomic alignment prediction>
Despite the high performance of existing platinum-cobalt (Pt-Co) alloy catalysts, very high-temperature heat treatment was required to create the ‘intermetallic (L1₀)’ structure, where atoms are regularly arranged. In this process, particles would clump together, or the structure would become unstable, posing limitations for actual fuel cell application.
To solve this problem, the research team introduced machine learning-based quantum chemistry simulations. Through AI, they precisely predicted how atoms move and arrange themselves inside the catalyst.
As a result, they discovered that zinc (Zn) acts as a mediating element that promotes atomic arrangement. The principle is that when zinc is introduced, atoms find their places more easily, forming a more sophisticated and stable structure. In other words, AI has found the ‘optimal path for atomic arrangement creation’ in advance.
< Synthesis process of Zinc-introduced Platinum-Cobalt catalyst>
The zinc-platinum-cobalt catalyst, synthesized based on AI predictions, secured both higher activity and superior long-term durability compared to commercial platinum catalysts. This is a case proving that the ‘virtual blueprint’ calculated by artificial intelligence can be implemented as a high-performance catalyst in an actual laboratory.
In particular, this technology is expected to contribute to extending catalyst lifespan and reducing manufacturing costs across core carbon-neutral industries, such as hydrogen passenger cars, hydrogen trucks requiring long-distance operation, hydrogen ships, and energy storage systems (ESS).
< Conceptual diagram of AI-based catalyst development (AI-generated image) >
Professor EunAe Cho stated, “This research is a case of utilizing machine learning to predict the atomic arrangement tendency of catalysts in advance and implementing this through actual synthesis,” and added, “AI-based material design will become a new paradigm for the development of next-generation fuel cell catalysts.”
Ph.D. Candidate HyunWoo Chang from KAIST’s Department of Materials Science and Engineering and Dr. Jae Hyun Ryu from Seoul National University’s School of Chemical and Biological Engineering participated as co-first authors in this research. The research results were published on January 15, 2026, in ‘Advanced Energy Materials,’ a world-renowned academic journal in the energy materials field.
※ Paper Title: Machine Learning-Guided Design of L1₀-PtCo Intermetallic Catalysts: Zn-Mediated Atomic Ordering, DOI: https://doi.org/10.1002/aenm.202505211
This research was conducted with the support of the National Research Foundation of Korea’s Nano & Material Technology Development Program and the Korea Institute of Energy Technology Evaluation and Planning’s Energy Innovation Research Center for Fuel Cell Technology.
KAIST Uses Sandpaper to Polish Semiconductors… Opening a New Path for AI Semiconductor Processing
<(From Left) Dr. Sukkyung Kang, Professor Sanha Kim from Department of Mechanical Engineering>
The performance and stability of smartphones and artificial intelligence (AI) services depend on how uniformly and precisely semiconductor surfaces are processed. KAIST researchers have expanded the concept of everyday “sandpaper” into the realm of nanotechnology, developing a new technique capable of processing semiconductor surfaces uniformly down to the atomic level. This technology demonstrates the potential to significantly improve surface quality and processing precision in advanced semiconductor processes such as high-bandwidth memory (HBM).
KAIST (President Kwang Hyung Lee) announced on the 11th of February that a research team led by Professor Sanha Kim of the Department of Mechanical Engineering has developed a “nano sandpaper” that utilizes carbon nanotubes—tens of thousands of times thinner than a human hair—as abrasive materials. This technology enables more precise surface processing than existing semiconductor manufacturing processes, while also reducing environmental burdens generated during fabrication, presenting a new planarization technique.
< Nano Sandpaper AI-Generated Image >
Although sandpaper is a familiar tool used to smooth surfaces by rubbing, it has been difficult to apply it to fields such as semiconductors, where extremely precise surface processing is required. This limitation arises because conventional sandpaper is manufactured by attaching abrasive particles with adhesives, making it difficult to uniformly secure extremely fine particles.
To overcome such limitations, the semiconductor industry has adopted a planarization process known as chemical mechanical polishing (CMP), which uses a chemical slurry in which abrasive particles are dispersed in liquid. However, this method requires additional cleaning steps and generates large amounts of waste, making the process complex and environmentally burdensome.
To address these issues, the research team extended the concept of sandpaper to the nanoscale. By vertically aligning carbon nanotubes, fixing them inside polyurethane, and partially exposing them on the surface, they implemented a “nano sandpaper.” This structure structurally suppresses abrasive detachment, eliminating concerns about surface damage and maintaining stable performance even after repeated use.
The nano sandpaper developed in this study achieves an abrasive density approximately 500,000 times higher than that of the finest commercially available sandpaper. The precision of sandpaper is expressed in terms of “abrasive density (grit number),” which indicates how densely abrasive particles are arranged on the surface. While everyday sandpaper typically ranges from 40 to 3000 grit, the nano sandpaper exceeds 1,000,000,000 grit. Through this extremely dense structure, surfaces could be processed with precision down to several nanometers—equivalent to the thickness of only a few atoms.
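As a rough consistency check on the figures quoted above (assuming the 500,000-fold density factor is measured against the finest ~3000-grit everyday paper):

```python
# Rough consistency check of the quoted figures (assumption: the 500,000x
# density factor is relative to ~3000-grit everyday sandpaper).
finest_everyday_grit = 3000
density_factor = 500_000

nano_grit = finest_everyday_grit * density_factor
print(f"{nano_grit:,}")  # 1,500,000,000 -- consistent with "exceeds 1,000,000,000 grit"
```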
The effectiveness of the nano sandpaper was confirmed through experiments. Rough copper surfaces were polished to a smoothness at the nanometer level, and in semiconductor pattern planarization experiments, the technique reduced dishing defects by up to 67% compared with conventional CMP processes. Dishing defects refer to the phenomenon in which the center of interconnect lines becomes recessed, a major defect affecting the performance and reliability of advanced semiconductors such as HBM.
In particular, because the abrasive materials are fixed on the sandpaper surface, the technology does not require continuous supply of slurry solutions as in conventional processes. This reduces cleaning steps and eliminates waste slurry, presenting the possibility of transitioning semiconductor manufacturing toward more environmentally friendly processes.
< Nano Sandpaper Schematic Diagram >
< Detailed Image of Nano Sandpaper >
The research team expects that this technology can be applied to advanced semiconductor planarization processes such as HBM used in AI servers, as well as to hybrid bonding processes, which are gaining attention as next-generation semiconductor interconnection technologies. The study is also significant in that it expands the everyday concept of sandpaper into nano-precision processing technology, suggesting the possibility of securing core technologies required for semiconductor manufacturing.
Professor Sanha Kim stated, “This is an original study demonstrating that the everyday concept of sandpaper can be extended to the nanoscale and applied to ultra-fine semiconductor manufacturing,” adding, “We hope this technology will lead not only to improved semiconductor performance but also to environmentally friendly manufacturing processes.”
In this study, Dr. Sukkyung Kang of the Department of Mechanical Engineering participated as the first author. The research was recognized for its excellence by receiving the Gold Prize (1st place) in the Mechanical Engineering Division at the 31st Samsung Human Tech Paper Award, hosted by Samsung Electronics. The findings were published online on January 8, 2026, in the international journal Advanced Composites and Hybrid Materials (IF 21.8).
※ Paper title: “Carbon nanotube sandpaper for atomic-precision surface finishing”
DOI: https://doi.org/10.1007/s42114-025-01608-3
This research was supported by the National Research Foundation of Korea (Mid-Career Researcher Program; Ministry of Science and ICT, NRF, RS-2025-00560856), the Glocal Lab Program (Ministry of Education, NRF, RS-2025-25406725), the InnoCORE Program (Ministry of Science and ICT, NRF, N10250154), and the KAIST Up Program.
KAIST Proposes a Multinational AI Cooperation Strategy Beyond U.S.–China Dominance
KAIST detects ‘hidden defects’ that degrade semiconductor performance with 1,000× higher sensitivity
<(From Left) Professor Byungha Shin, Ph.D candidate Chaeyoun Kim, Dr. Oki Gunawan>
Semiconductors are used in devices such as memory chips and solar cells, and within them may exist invisible defects that interfere with electrical flow. A joint research team has developed a new analysis method that can detect these “hidden defects” (electronic traps) with approximately 1,000 times higher sensitivity than existing techniques. The technology is expected to improve semiconductor performance and lifetime, while significantly reducing development time and costs by enabling precise identification of defect sources.
KAIST (President Kwang Hyung Lee) announced on January 8th that a joint research team led by Professor Byungha Shin of the Department of Materials Science and Engineering at KAIST and Dr. Oki Gunawan of the IBM T. J. Watson Research Center has developed a new measurement technique that can simultaneously analyze defects that hinder electrical transport (electronic traps) and charge carrier transport properties inside semiconductors.
Within semiconductors, electronic traps can exist that capture electrons and hinder their movement. When electrons are trapped, electrical current cannot flow smoothly, leading to leakage currents and degraded device performance. Therefore, accurately evaluating semiconductor performance requires determining how many electronic traps are present and how strongly they capture electrons.
The research team focused on Hall measurements, a technique that has long been used in semiconductor analysis. Hall measurements analyze electron motion using electric and magnetic fields. By adding controlled light illumination and temperature variation to this method, the team succeeded in extracting information that was difficult to obtain using conventional approaches.
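For readers unfamiliar with the technique, the standard Hall relations (textbook physics, not specific to this study) show how carrier density and mobility are extracted from a measured Hall voltage. All numerical values below are hypothetical illustrations, not data from the paper.

```python
# Sketch of the conventional (dark) Hall analysis.
Q = 1.602176634e-19  # elementary charge (C)

def hall_carrier_density(V_H, I, B, t):
    """Carrier density n = I*B / (q * t * V_H), in m^-3.

    V_H: Hall voltage (V), I: current (A), B: magnetic field (T),
    t: sample thickness (m).
    """
    return I * B / (Q * t * V_H)

def hall_mobility(sigma, n):
    """Drift mobility mu = sigma / (q * n), in m^2/Vs."""
    return sigma / (Q * n)

# Hypothetical sample: 1 mA current, 0.5 T field, 500 um thickness,
# 2.6 uV Hall voltage, conductivity 1.0 S/m.
n = hall_carrier_density(V_H=2.6e-6, I=1e-3, B=0.5, t=500e-6)
mu = hall_mobility(sigma=1.0, n=n)
print(f"n  = {n:.2e} m^-3")
print(f"mu = {mu * 1e4:.3f} cm^2/Vs")
```

The study's photo-Hall extension adds controlled illumination and temperature variation on top of exactly this kind of measurement.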
Under weak illumination, newly generated electrons are first captured by electronic traps. As the light intensity is gradually increased, the traps become filled, and subsequently generated electrons begin to move freely. By analyzing this transition process, the researchers were able to precisely calculate the density and characteristics of electronic traps.
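The trap-filling transition described above can be captured in a toy model: photogenerated carriers fill the traps first, and only the excess above the trap density contributes to free transport. This is a deliberately simplified sketch for illustration; the trap density used is hypothetical, and the carrier-resolved analysis in the actual study is far more detailed.

```python
N_T = 1e14  # hypothetical trap density (cm^-3), for illustration only

def free_carriers(n_photo):
    """Toy model: traps capture carriers up to N_T; the excess is free."""
    return max(n_photo - N_T, 0.0)

# Sweep photogenerated carrier density across the trap-filling transition.
for n_photo in [1e12, 1e13, 1e14, 1e15, 1e16]:
    print(f"generated {n_photo:.1e} cm^-3 -> free {free_carriers(n_photo):.1e} cm^-3")
```

The kink where the free-carrier density departs from zero marks the point at which the traps are filled; analyzing that transition is what lets the trap density be read off.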
The greatest advantage of this method is that multiple types of information can be obtained simultaneously from a single measurement. It allows not only the evaluation of how fast electrons move, how long they survive, and how far they travel, but also the properties of traps that interfere with electron transport.
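As an illustration of how these transport quantities relate, the textbook Einstein relation connects mobility, lifetime, and diffusion length (how far a carrier travels before recombining). The numbers below are hypothetical, perovskite-like values, not measurements from the study.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)
Q = 1.602176634e-19  # elementary charge (C)

def diffusion_length(mu_cm2, tau_s, T=300.0):
    """Diffusion length L = sqrt(D * tau), in cm.

    Uses the Einstein relation D = mu * kT/q, with mu in cm^2/Vs.
    """
    D = mu_cm2 * K_B * T / Q   # diffusivity (cm^2/s)
    return math.sqrt(D * tau_s)

# Hypothetical values: mobility 30 cm^2/Vs, lifetime 1 us, room temperature.
L = diffusion_length(30.0, 1e-6)
print(f"diffusion length = {L * 1e4:.1f} um")
```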
The team first validated the accuracy of the technique using silicon semiconductors and then applied it to perovskites, which are attracting attention as next-generation solar cell materials. As a result, they successfully detected extremely small quantities of electronic traps that were difficult to identify using existing methods—demonstrating a sensitivity approximately 1,000 times higher than that of conventional techniques.
< Conceptual Diagram of the Evolution of Hall Characterization (Analysis) Techniques >
Professor Byungha Shin stated, “This study presents a new method that enables simultaneous analysis of electrical transport and the factors that hinder it within semiconductors using a single measurement,” adding that “it will serve as an important tool for improving the performance and reliability of various semiconductor devices, including memory semiconductors and solar cells.”
The results of this research were published on January 1 in Science Advances, an international academic journal, with Chaeyoun Kim, a doctoral student in the Department of Materials Science and Engineering, as the first author.
※ Paper title: “Electronic trap detection with carrier-resolved photo-Hall effect,” DOI: https://doi.org/10.1126/sciadv.adz0460
This research was supported by the Ministry of Science and ICT and the National Research Foundation of Korea.
< Conceptual Diagram of Charge Transport and Trap Characterization Using Photo-Hall Measurements (AI-generated image) >
Breaking Performance Barriers of All-Solid-State Batteries
< (Bottom, from left) Professor Dong-Hwa Seo, Researcher Jae-Seung Kim, (Top, from left) Professor Kyung-Wan Nam, Professor Sung-Kyun Jung, Professor Youn-Seok Jung >
Batteries are an essential technology in modern society, powering smartphones and electric vehicles, yet they face limitations such as fire and explosion risks and high costs. While all-solid-state batteries have garnered attention as a viable alternative, it has been difficult to simultaneously satisfy safety, performance, and cost requirements. Recently, a Korean research team successfully improved the performance of all-solid-state batteries through structural design alone, without adding expensive metals.
KAIST announced on January 7th that a research team led by Professor Dong-Hwa Seo from the Department of Materials Science and Engineering, in collaboration with teams led by Professor Sung-Kyun Jung (Seoul National University), Professor Youn-Seok Jung (Yonsei University), and Professor Kyung-Wan Nam (Dongguk University), has developed a design method for core materials for all-solid-state batteries that uses low-cost raw materials while ensuring high performance and low risk of fire or explosion.
Conventional batteries rely on lithium ions moving through a liquid electrolyte. In contrast, all-solid-state batteries use a solid electrolyte. While this makes them safer, achieving rapid lithium-ion movement within a solid has typically required expensive metals or complex manufacturing processes.
To create efficient pathways for lithium-ion transport within the solid electrolyte, the research team focused on "divalent anions" such as oxygen and sulfur. By integrating into the basic framework of the electrolyte, divalent anions play a crucial role in altering its crystal structure.
The team developed a technology to precisely control the internal structure of low-cost zirconium (Zr)-based halide solid electrolytes by introducing these divalent anions. This design principle, termed the "Framework Regulation Mechanism," widens the pathways for lithium ions and lowers the energy barriers they encounter during transport. By adjusting the bonding environment and crystal structure around the lithium ions, the team enabled faster and easier movement.
To verify these structural changes, the researchers utilized various high-precision analysis techniques, including:
High-energy Synchrotron X-ray Diffraction (Synchrotron XRD)
Pair Distribution Function (PDF) analysis
X-ray Absorption Spectroscopy (XAS)
Density Functional Theory (DFT) modeling for electronic structure and diffusion.
The results showed that electrolytes incorporating oxygen or sulfur improved lithium-ion mobility by 2 to 4 times compared to conventional zirconium-based electrolytes. This signifies that performance levels suitable for practical all-solid-state battery applications can be achieved using inexpensive materials.
Specifically, the ionic conductivity at room temperature was measured at approximately 1.78 mS/cm for the oxygen-doped electrolyte and 1.01 mS/cm for the sulfur-doped electrolyte. Ionic conductivity indicates how quickly and smoothly lithium ions move; a value above 1 mS/cm is generally considered sufficient for practical battery applications at room temperature.
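To put these conductivity values in context, one can estimate the resistance an electrolyte layer adds to a cell; a lower area-specific resistance means less voltage loss. The two conductivities below are the reported values, but the 50 µm layer thickness is a hypothetical illustration, not a parameter from the paper.

```python
def area_specific_resistance(sigma_mS_cm, thickness_um):
    """Area-specific resistance R*A = t / sigma, in ohm*cm^2."""
    sigma = sigma_mS_cm * 1e-3  # convert mS/cm -> S/cm
    t = thickness_um * 1e-4     # convert um -> cm
    return t / sigma

# Reported room-temperature conductivities; 50 um thickness is assumed.
for label, sigma in [("O-doped", 1.78), ("S-doped", 1.01)]:
    asr = area_specific_resistance(sigma, 50)
    print(f"{label}: {asr:.2f} ohm*cm^2")
```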
< Structural Regulation Mechanism of Zr-based Halide Electrolytes via Divalent Anion Introduction >
< Atomic Rearrangement of Solid Electrolyte for All-Solid-State Batteries (AI-generated image) >
Professor Dong-Hwa Seo stated, "Through this research, we have presented a design principle that can simultaneously improve the cost and performance of all-solid-state batteries using cheap raw materials. Its potential for industrial application is very high." Lead author Jae-Seung Kim added that the study shifts the focus from "what materials to use" to "how to design them" in the development of battery materials.
This study, with Jae-Seung Kim (KAIST) and Da-Seul Han (Dongguk University) as co-first authors, was published in the international journal Nature Communications on November 27, 2025.
Paper Title: Divalent anion-driven framework regulation in Zr-based halide solid electrolytes for all-solid-state batteries
DOI: https://doi.org/10.1038/s41467-025-65702-2
This research was supported by the Samsung Electronics Future Technology Promotion Center, the National Research Foundation of Korea, and the National Supercomputing Center.
KAIST Awakens dormant immune cells inside tumors to attack cancer
<(From Left) Professor Ji-Ho Park, Dr. Jun-Hee Han from the Department of Bio and Brain Engineering>
Within tumors in the human body, there are immune cells (macrophages) capable of fighting cancer, but they have been unable to perform their roles properly due to suppression by the tumor. KAIST researchers have overcome this limitation by developing a new therapeutic approach that directly converts immune cells inside tumors into anticancer cell therapies.
KAIST (President Kwang Hyung Lee) announced on the 30th that a research team led by Professor Ji-Ho Park of the Department of Bio and Brain Engineering has developed a therapy in which, when a drug is injected directly into a tumor, macrophages already present in the body absorb it, produce CAR (chimeric antigen receptor) proteins that recognize cancer cells on their own, and are converted into anticancer immune cells known as "CAR-macrophages."
Solid tumors—such as gastric, lung, and liver cancers—grow as dense masses, making it difficult for immune cells to infiltrate tumors or maintain their function. As a result, the effectiveness of existing immune cell therapies has been limited.
CAR-macrophages, which have recently attracted attention as a next-generation immunotherapy, have the advantage of directly engulfing cancer cells while simultaneously activating surrounding immune cells to amplify anticancer responses.
However, conventional CAR-macrophage therapies require immune cells to be extracted from a patient’s blood, followed by cell culture and genetic modification. This process is time-consuming, costly, and has limited feasibility for real-world patient applications.
To address this challenge, the research team focused on “tumor-associated macrophages” that are already accumulated around tumors.
They developed a strategy to directly reprogram immune cells in the body by loading lipid nanoparticles—designed to be readily absorbed by macrophages—with both mRNA encoding cancer-recognition information and an immunostimulant that activates immune responses.
In other words, in this study, CAR-macrophages were created by “directly converting the body’s own macrophages into anticancer cell therapies inside the body.”
<Figure. Schematic illustration of the strategy for in vivo CAR-macrophage generation and cancer cell eradication via co-delivery of CAR mRNA and immunostimulants using lipid nanoparticles (LNPs)>
When this therapeutic agent was injected into tumors, macrophages rapidly absorbed it and began producing proteins that recognize cancer cells, while immune signaling was simultaneously activated. As a result, the generated “enhanced CAR-macrophages” showed markedly improved cancer cell–killing ability and activated surrounding immune cells, producing a powerful anticancer effect.
In animal models of melanoma (the most dangerous form of skin cancer), tumor growth was significantly suppressed, and the therapeutic effect was shown to have the potential to extend beyond the local tumor site to induce systemic immune responses.
Professor Ji-Ho Park stated, “This study presents a new concept of immune cell therapy that generates anticancer immune cells directly inside the patient’s body,” adding that “it is particularly meaningful in that it simultaneously overcomes the key limitations of existing CAR-macrophage therapies—delivery efficiency and the immunosuppressive tumor environment.”
This research was led by Jun-Hee Han, Ph.D., of the Department of Bio and Brain Engineering at KAIST as the first author, and the results were published on November 18 in ACS Nano, an international journal in the field of nanotechnology.
※ Paper title: “In Situ Chimeric Antigen Receptor Macrophage Therapy via Co-Delivery of mRNA and Immunostimulant,” Authors: Jun-Hee Han (first author), Erinn Fagan, Kyunghwan Yeom, Ji-Ho Park (corresponding author), DOI: 10.1021/acsnano.5c09138
This research was supported by the Mid-Career Researcher Program of the National Research Foundation of Korea.