KAIST Wins Bid for ‘Physical AI Core Technology Demonstration’ Pilot Project
KAIST (President Kwang Hyung Lee) announced on the 28th of August that, together with Jeonbuk State, Jeonbuk National University, and Sungkyunkwan University, it has jointly won the Ministry of Science and ICT’s pilot project for the “Physical AI Core Technology Proof of Concept (PoC)”, with KAIST serving as the overall research lead. The consortium also plans to participate in a full-scale demonstration project that is expected to reach a total scale of 1 trillion KRW in the future.
In this project, KAIST led the research planning under the theme of “Collaborative Intelligence Physical AI.” Based on this, Jeonbuk National University and Jeonbuk State will carry out joint research and establish a collaborative intelligence physical AI industrial ecosystem within the province. The pilot project will begin on September 1 this year and run until the end of the year, with the full-scale project expected to continue over the following five years. Through this effort, the partners aim to build Jeonbuk State into a global hub for physical AI.
KAIST will take charge of developing original research technologies, creating a research environment through the establishment of a testbed, and promoting industrial diffusion. Professor Young Jae Jang of the Department of Industrial and Systems Engineering at KAIST, who is the overall project director, has been leading research on collaborative intelligence physical AI since 2016. His “Collaborative Intelligence-Based Smart Manufacturing Innovation Technology” was selected as one of KAIST’s “Top 10 Research Achievements” in 2019.
“Physical AI” refers to cutting-edge artificial intelligence technology that enables physical devices such as robots, autonomous vehicles, and factory automation equipment to perform tasks without human instruction by understanding spatiotemporal concepts.
In particular, collaborative intelligence physical AI is a technology in which numerous robots and automated devices in a factory environment work together to achieve goals. It is attracting attention as a key foundation for realizing “dark factories” in industries such as semiconductors, secondary batteries, and automobile manufacturing.
Unlike existing manufacturing AI, this technology does not necessarily require massive amounts of historical data. Through real-time, simulation-based learning, it can quickly adapt even to manufacturing environments with frequent changes and has been deemed a next-generation technology that overcomes the limitations of data dependency.
Currently, the global AI industry is led by LLMs that simulate linguistic intelligence. However, physical AI must go beyond linguistic intelligence to include spatial intelligence and virtual environment learning, requiring the organic integration of hardware such as robots, sensors, and motors with software. As a manufacturing powerhouse, Korea is well-positioned to build such an ecosystem and seize the opportunity to lead global competition.
In fact, in April 2025, KAIST won first place at INFORMS (Institute for Operations Research and the Management Sciences), the world’s largest industrial engineering society, with its case study on collaborative intelligence physical AI, beating MIT and Amazon. This achievement is recognized as proof of Korea’s global competitiveness in the physical AI technology realm.
Professor Young Jae Jang, KAIST’s overall project director, said, “Winning this large-scale national project is the result of KAIST’s collaborative intelligence physical AI research capabilities accumulated over the past decade being recognized both domestically and internationally. This will be a turning point for establishing Korea’s manufacturing industry as a global leading ‘Physical AI Manufacturing Innovation Model.’”
KAIST President Kwang Hyung Lee emphasized that “KAIST is taking on the role of leading not only academic research but also the practical industrialization of national strategic technologies. Building on this achievement, we will collaborate with Jeonbuk National University and Jeonbuk State to develop Korea into a world-class hub for physical AI innovation.”
Through this project, KAIST, Jeonbuk National University, and Jeonbuk State plan to develop Korea into a global industrial hub for physical AI.
KAIST succeeds in controlling complex altered gene networks to restore them to normal
Previously, research on controlling gene networks was carried out based on cells’ responses to a single stimulus. More recently, approaches have been proposed to precisely analyze complex gene networks and identify control targets. A KAIST research team has succeeded in developing a universal technology that identifies gene control targets in altered cellular gene networks and restores them. This achievement is expected to be widely applied to new anticancer therapies such as cancer reversion, drug development, precision medicine, and reprogramming for cell therapy.
KAIST (President Kwang Hyung Lee) announced on the 28th of August that Professor Kwang-Hyun Cho’s research team from the Department of Bio and Brain Engineering has developed a technology to systematically identify gene control targets that can restore the altered stimulus-response patterns of cells to normal by using an algebraic approach. The algebraic approach expresses gene networks as mathematical equations and identifies control targets through algebraic computations.
The research team represented the complex interactions among genes within a cell as a "logic circuit diagram" (Boolean network). Based on this, they visualized how a cell responds to external stimuli as a "landscape map" (phenotype landscape).
By applying a mathematical method called the "semi-tensor product*," they developed a way to quickly and accurately calculate how the overall cellular response would change if a specific gene were controlled.
*Semi-tensor product: a method that calculates all possible gene combinations and control effects in a single algebraic formula
However, because the key genes that determine actual cellular responses number in the thousands, the calculations are extremely complex. To address this, the research team applied a numerical approximation method (Taylor approximation) to simplify the calculations. In simple terms, they transformed a complex problem into a simpler formula while still yielding nearly identical results.
Through this, the team was able to calculate which stable state (attractor) a cell would reach and predict how the cell’s state would change when a particular gene was controlled. As a result, they were able to identify core gene control targets that could restore abnormal cellular responses to states most similar to normal.
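The notions of attractors and control targets can be illustrated with a toy example. The sketch below is a hypothetical two-gene "toggle switch" Boolean network under synchronous updates, not the team's semi-tensor-product algorithm; the update rules and the choice of pinned gene are invented purely for illustration.

```python
# Toy Boolean network sketch (not the paper's method): enumerate the attractors
# of a two-gene mutual-inhibition circuit, then pin one gene and observe how
# the phenotype landscape collapses.
from itertools import product

def step(state, pinned=None):
    """One synchronous update: mutual inhibition A' = NOT B, B' = NOT A."""
    a, b = state
    nxt = (int(not b), int(not a))
    if pinned is not None:
        gene, value = pinned
        nxt = tuple(value if i == gene else v for i, v in enumerate(nxt))
    return nxt

def attractor_from(state, pinned=None):
    """Iterate until a state repeats, then return the cycle in canonical order."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state, pinned)
    cyc = seen[seen.index(state):]
    m = cyc.index(min(cyc))
    return tuple(cyc[m:] + cyc[:m])

states = list(product((0, 1), repeat=2))
free = {attractor_from(s) for s in states}                    # uncontrolled landscape
controlled = {attractor_from(s, pinned=(0, 1)) for s in states}  # gene A held ON
print(free)        # three attractors: two fixed points and one oscillation
print(controlled)  # pinning gene A collapses the landscape to one attractor
```

The exhaustive enumeration above scales exponentially with the number of genes, which is exactly why the team's algebraic formulation and Taylor approximation matter for networks with thousands of genes.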
Professor Cho’s team applied the developed control technology to various gene networks and verified that it can accurately predict gene control targets that restore altered stimulus-response patterns of cells back to normal.
In particular, by applying it to bladder cancer cell networks, they identified gene control targets capable of restoring altered responses to normal. They also discovered gene control targets in large-scale gene networks distorted during immune cell differentiation that are capable of restoring normal stimulus-response patterns. This enabled them to solve, quickly and systematically, problems that previously could only be explored approximately through lengthy computer simulations.
Professor Cho said, “This study is evaluated as a core original technology for the development of the Digital Cell Twin model*, which analyzes and controls the phenotype landscape of gene networks that determine cell fate. In the future, it is expected to be widely applicable across the life sciences and medicine, including new anticancer therapies through cancer reversibility, drug development, precision medicine, and reprogramming for cell therapy.”
*Digital Cell Twin model: a technology that digitally models the complex reactions occurring within cells, enabling virtual simulations of cellular responses instead of actual experiments
KAIST master’s student Insoo Jung, PhD student Corbin Hopper, PhD student Seong-Hoon Jang, and PhD student Hyunsoo Yeo participated in this study. The results were published online on August 22 in Science Advances, an international journal published by the American Association for the Advancement of Science (AAAS).
※ Paper title: “Reverse Control of Biological Networks to Restore Phenotype Landscapes”
※ DOI: https://www.science.org/doi/10.1126/sciadv.adw3995
This research was supported by the Mid-Career Researcher Program and the Basic Research Laboratory Program of the National Research Foundation of Korea, funded by the Ministry of Science and ICT.
KAIST–Princeton University Officially Launch “Net-Zero Korea” to Address Climate Crisis
KAIST (President Kwang Hyung Lee) announced on the 27th of August that a research team led by Professor Haewon McJeon of the Graduate School of Green Growth and Sustainable Development has signed a memorandum of understanding (MOU) with the Andlinger Center for Energy and the Environment at Princeton University in the United States to promote joint research on carbon neutrality, officially launching the Net-Zero Korea (NZK) project. This project was unveiled at the World Climate Industry EXPO (WCE) held at BEXCO in Busan, and will begin with seed funding from Google.
The NZK project aims, in the short term, to accelerate the transition of Korea’s energy and industrial sectors toward carbon neutrality, and in the mid- to long term, to strengthen Korea’s energy system modeling capabilities for policy formulation and implementation. Energy system modeling plays a critical role in studying the transition to clean energy and carbon neutrality.
In particular, this research plans to apply Princeton’s leading modeling methodologies from the Net-Zero America project—published in 2021 and widely recognized—to the Korean context by integrating them with KAIST’s integrated assessment modeling research.
The Net-Zero Korea project will be supported by funding from Google, KAIST, and Princeton University. This research is characterized by its detailed analysis of a wide range of factors, from regional land-use changes to job creation, and by concretely visualizing the resulting transformations in energy and industrial systems. It will also be conducted through an international collaborative network while reflecting Korea’s specific conditions. In particular, KAIST will develop an optimization-based open-source energy and industrial system model that integrates the effects of international trade, thereby contributing to global academia and policy research.
The core of this modeling research is thus to apply to Korea the precise analysis and realistic approach that drew attention in Net-Zero America. Through this, it will be possible to visualize changes in the energy and industrial systems at high spatial, temporal, sectoral, and technological resolution, and to comprehensively analyze factors such as regional land-use changes, capital investment requirements, job creation, and health impacts from air pollution. This will provide stakeholders with practical and reliable information.
In addition, the KAIST research team will collaborate with Princeton researchers, who have conducted national-scale decarbonization modeling studies with major research institutions in Australia, Brazil, China, India, Poland, and others, leveraging a global research network for joint studies.
Building on its experience in developing globally recognized integrated assessment models (IAM) tailored to Korea, KAIST will lead a new initiative to integrate international trade impacts into optimization-based open-source energy and industrial system models. This effort seeks to overcome the limitations of existing national energy modeling by reflecting the particularity of Korea, where trade plays a vital role across the economy.
Professor Wei Peng, Princeton’s principal investigator, said: “Through collaboration with KAIST’s world-class experts in integrated assessment modeling, we will be able to build new research that combines the strengths of macro-energy models and integrated assessment models, thereby developing capabilities applicable to many countries where trade plays a crucial role in the economy, such as Korea.”
Antonia Gawel, Director of Partnerships at Google, stated: “We are very pleased to support this meaningful research being conducted by KAIST and Princeton University in Korea. It will greatly help Google achieve our goal of net-zero emissions across our supply chain by 2030.”
Professor Haewon McJeon of KAIST commented: “Through joint research with Princeton University, which has been leading net-zero studies, we expect to provide science-based evidence to support Korea’s achievement of carbon neutrality and sustainable energy.”
President Kwang Hyung Lee of KAIST remarked: “It is deeply meaningful that KAIST, as Korea’s representative research institution, joins hands with Princeton University, a leading institution in the United States, to jointly build a science-based policy support system for responding to the climate crisis. This collaboration will contribute not only to achieving carbon neutrality in Korean society but also to the global response to the climate crisis.”
KAIST Develops AI that Automatically Detects Defects in Smart Factory Manufacturing Processes Even When Conditions Change
Recently, defect detection systems using artificial intelligence (AI) sensor data have been installed in smart factory manufacturing sites. However, when the manufacturing process changes due to machine replacement or variations in temperature, pressure, or speed, existing AI models fail to properly understand the new situation and their performance drops sharply. KAIST researchers have developed AI technology that can accurately detect defects even in such situations without retraining, achieving accuracy improvements of up to 9.42%. This achievement is expected to contribute to reducing AI operating costs and expanding applicability in various fields such as smart factories, healthcare devices, and smart cities.
KAIST (President Kwang Hyung Lee) announced on the 26th of August that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new “time-series domain adaptation” technology that allows existing AI models to be utilized without additional defect labeling, even when manufacturing processes or equipment change.
Time-series domain adaptation technology enables AI models that handle time-varying data (e.g., temperature changes, machine vibrations, power usage, sensor signals) to maintain stable performance without additional training, even when the training environment (domain) and the actual application environment differ.
Professor Lee’s team noted that the core reason AI models are confused by environmental (domain) changes lies not only in differences in data distribution but also in changes in the defect occurrence patterns (label distribution) themselves. For example, in semiconductor wafer processes, the ratio of ring-shaped defects to scratch defects may change after equipment modifications.
The research team developed a method for decomposing new process sensor data into three components—trends, non-trends, and frequencies—to analyze their characteristics individually. Just as humans detect anomalies by combining pitch, vibration patterns, and periodic changes in machine sounds, AI was enabled to analyze data from multiple perspectives.
In other words, the team developed TA4LS (Time-series domain Adaptation for mitigating Label Shifts) technology, which applies a method of automatically correcting predictions by comparing the results predicted by the existing model with the clustering information of the new process data. Through this, predictions biased toward the defect occurrence patterns of the existing process can be precisely adjusted to match the new process.
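The label-shift correction idea can be sketched in a few lines. This is not the TA4LS implementation: the source and target class priors below are made-up numbers, and in TA4LS the target-side estimate comes from clustering the new process data rather than being given directly.

```python
# Illustrative label-shift correction sketch (not TA4LS itself): reweight a
# source-trained classifier's class probabilities by the ratio of an estimated
# target label prior to the source label prior, then renormalize.
def correct_for_label_shift(probs, source_prior, target_prior):
    """probs: list of per-class probability lists from the source model."""
    corrected = []
    for p in probs:
        w = [pt / ps * pi for pt, ps, pi in zip(target_prior, source_prior, p)]
        z = sum(w)
        corrected.append([v / z for v in w])
    return corrected

# Source process: 90% normal, 10% defect. New process: defects rose to 40%.
source_prior = [0.9, 0.1]
target_prior = [0.6, 0.4]   # assumed; TA4LS would estimate this from clusters
pred = [[0.7, 0.3]]          # source model's raw prediction on a target sample
print(correct_for_label_shift(pred, source_prior, target_prior))
```

A prediction the source model leaned toward "normal" (0.7 vs. 0.3) flips toward "defect" once the higher defect rate of the new process is accounted for, which is the bias adjustment the article describes.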
In particular, this technology is highly practical because it can be easily combined like an additional plug-in module inserted into existing AI systems without requiring separate complex development. That is, regardless of the AI technology currently being used, it can be applied immediately with only simple additional procedures.
In experiments using four benchmark datasets of time-series domain adaptation (i.e., four types of sensor data in which changes had occurred), the research team achieved up to 9.42% improvement in accuracy compared to existing methods.
Especially when process changes caused large differences in label distribution (e.g., defect occurrence patterns), the AI achieved remarkable performance improvements by autonomously correcting for such differences. These results showed that the technology can be used effectively, without additional defect labeling, in environments that produce small batches of various products, one of the main advantages of smart factories.
Professor Jae-Gil Lee, who supervised the research, said, “This technology solves the retraining problem, which has been the biggest obstacle to the introduction of artificial intelligence in manufacturing. Once commercialized, it will greatly contribute to the spread of smart factories by reducing maintenance costs and improving defect detection rates.”
This research was carried out with Jihye Na, a Ph.D. student at KAIST, as the first author, with Youngeun Nam, a Ph.D. student, and Junhyeok Kang, a researcher at LG AI Research, as co-authors. The research results were presented in August 2025 at KDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining), the world’s top academic conference in artificial intelligence and data mining.
※Paper Title: “Mitigating Source Label Dependency in Time-Series Domain Adaptation under Label Shifts”
※DOI: https://doi.org/10.1145/3711896.3737050
This technology was developed as part of the research outcome of the SW Computing Industry Original Technology Development Program’s SW StarLab project (RS-2020-II200862, DB4DL: Development of Highly Available and High-Performance Distributed In-Memory DBMS for Deep Learning), supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP).
KAIST achieves over 95% high-purity CO₂ capture using only smartphone charging power
Direct Air Capture (DAC) is a technology that filters out carbon dioxide present in the atmosphere at extremely low concentrations (below 400 ppm). A KAIST research team has now succeeded in capturing carbon dioxide at over 95% purity using only low power at the level of smartphone charging voltage (3 V), without hot steam or complex facilities. While high energy cost has been the biggest obstacle for conventional DAC technologies, this study is regarded as a breakthrough demonstrating real commercialization potential. Overseas patent applications have already been filed, and because it can be easily linked with renewable energy such as solar and wind power, the technology is being highlighted as a “game changer” for accelerating the transition to carbon-neutral processes.
KAIST (President Kwang Hyung Lee) announced on the 25th of August that Professor Dong-Yeun Koh’s research team from the Department of Chemical and Biomolecular Engineering, in collaboration with Professor T. Alan Hatton’s group at MIT’s Department of Chemical Engineering, has developed the world’s first ultra-efficient e-DAC (Electrified Direct Air Capture) technology based on conductive silver nanofibers.
Conventional DAC processes required high-temperature steam (over 100℃) in the regeneration stage, where absorbed or adsorbed carbon dioxide is separated again. This process consumes about 70% of the total energy, making energy efficiency crucial, and requires complex heat-exchange systems, which makes cost reduction difficult. The joint research team, led by KAIST, solved this problem with “fibers that heat themselves electrically,” adopting Joule heating, a method that generates heat by directly passing electricity through fibers, similar to an electric blanket. By heating only where needed without an external heat source, energy loss was drastically reduced.
This technology can rapidly heat the fibers to 110℃ within 80 seconds at only 3 V, the voltage level of smartphone charging. This dramatically shortens adsorption–desorption cycles even in low-power environments, while reducing unnecessary heat loss by about 20% compared to existing technologies.
The core of this research was not just making conductive fibers, but realizing a “breathable conductive coating” that achieves both “electrical conductivity” and “gas diffusion.”
The team uniformly coated porous fiber surfaces with a composite of silver nanowires and nanoparticles, forming a layer about 3 micrometers (µm) thick—much thinner than a human hair. This “3D continuous porous structure” allowed excellent electrical conductivity while securing pathways for CO₂ molecules to move smoothly into the fibers, enabling uniform, rapid heating and efficient CO₂ capture simultaneously.
Furthermore, when multiple fibers were modularized and connected in parallel, the total resistance dropped below 1 ohm (Ω), proving scalability to large-scale systems. The team succeeded in recovering over 95% high-purity CO₂ under real atmospheric conditions.
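For a rough sense of the figures, Ohm's-law arithmetic connects the parallel wiring to the 3 V supply. The per-fiber resistance and fiber count below are assumed values for illustration, not numbers from the study; only the sub-1-ohm module resistance is reported in the article.

```python
# Back-of-the-envelope sketch with assumed values: module resistance of fibers
# wired in parallel (1/R = sum of 1/R_i) and the Joule heating power P = V^2/R.
def parallel_resistance(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

def joule_power(voltage, resistance):
    return voltage ** 2 / resistance  # watts dissipated as heat

fibers = [8.0] * 10                       # ten fibers at an assumed 8 ohms each
r_module = parallel_resistance(fibers)    # 0.8 ohm, below 1 ohm as reported
p_module = joule_power(3.0, r_module)     # heating power at the 3 V supply
print(r_module, p_module)
```

The point of the parallel wiring is visible in the arithmetic: adding fibers lowers the module resistance, so the same low voltage drives more heating power as the system scales.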
This achievement was the result of five years of in-depth research since 2020. Remarkably, in late 2022, long before the paper’s publication, the core technology had already been filed for PCT and domestic/international patents (WO2023068651A1, countries entered: US, EP, JP, AU, CN), securing foundational intellectual property rights. This indicates that the technology is not only highly advanced but also developed with practical commercialization in mind beyond the laboratory level.
The biggest innovation of this technology is that it runs solely on electricity, making it very easy to integrate with renewable energy sources such as solar and wind. It perfectly matches the needs of global companies that have declared RE100 and seek carbon-neutral process transitions.
Professor Dong-Yeun Koh of KAIST said, “Direct Air Capture (DAC) is not just a technology for reducing carbon dioxide emissions, but a key means of achieving ‘negative emissions’ by purifying the air itself. The conductive fiber-based DAC technology we developed can be applied not only to industrial sites but also to urban systems, significantly contributing to Korea’s leap as a leading nation in future DAC technologies.”
This study was led by Young Hun Lee (PhD, 2023 graduate of KAIST; currently at MIT Department of Chemical Engineering) and co-first-authored by Jung Hun Lee and Hwajoo Joo (MIT, Department of Chemical Engineering). The results were published online on August 1, 2025, in Advanced Materials, one of the world’s leading journals in materials science, and in recognition of its excellence, the work was also selected for the Front Inside Cover.
※ Paper title: “Design of Electrified Fiber Sorbents for Direct Air Capture with Electrically-Driven Temperature Vacuum Swing Adsorption”
※ DOI: https://doi.org/10.1002/adma.202504542
This study was supported by the Aramco–KAIST CO₂ Research Center and the National Research Foundation of Korea with funding from the Ministry of Science and ICT (No. RS-2023-00259416, DACU Source Technology Development Project).
KAIST to Host the ‘6th Emerging Materials Symposium’
KAIST (President Kwang Hyung Lee) announced on the 22nd of August that it will host the 6th KAIST Emerging Materials Symposium on the 26th in the Meta Convergence Hall (W13) on its main Daejeon campus, to explore the latest research trends in next-generation promising nanomaterials and discuss future visions.
Launched in 2020, this symposium marks its sixth year and has established itself as KAIST’s flagship academic event by inviting world-renowned scholars on next-generation materials to share groundbreaking achievements.
The event will feature six speakers from four prestigious overseas universities—the Massachusetts Institute of Technology (MIT), Yale University, UCLA, and Drexel University—providing an overview of cutting-edge global research trends in emerging materials, while also showcasing KAIST’s representative achievements.
Notably, Professor Yury Gogotsi of Drexel University, who gained global recognition for the pioneering development of MXene—an emerging material attracting attention for its high electrical conductivity and electromagnetic shielding capability—will deliver a lecture titled “The Future of MXene.”
In the session “Global Frontier in MIT,” three MIT professors will present the institute’s leading research: ▴Professor Ju Li, an authority on AI-robotics-based materials synthesis, ▴Professor Martin Z. Bazant, an expert in the fields of electrochemistry and electronic transport dynamics, and ▴Professor Jeehwan Kim, a leading researcher tackling the limitations of silicon wafer-based semiconductor manufacturing.
In the session “Emerging Materials and New Possibilities,” ▴Professor Yury Gogotsi of Drexel University, ▴Professor Liangbing Hu of Yale University, a pioneer in nanoparticle synthesis through rapid high-temperature thermal processing, and ▴Professor Jun Chen of UCLA, a key researcher in bioelectronic materials using multifunctional flexible materials, will present the development of core emerging materials and future directions.
Additionally, six professors from KAIST’s Department of Materials Science and Engineering will lead the session “KAIST’s MSE Entrepreneurial Spirit” where they will share the process of founding startups based on KAIST’s advanced materials technologies and how nanomaterials have taken root as foundational industries.
The session will include: ▴Professor Il-Doo Kim, founder of the nanofiber and colorimetric gas sensor company IDKLAB; ▴Professor Kibeom Kang, CEO of TDS Innovation, a company specializing in precursors and equipment for 2D material synthesis; ▴Professor Yeonsik Jeong, co-founder of Pico Foundry, a company producing SERS chips; ▴Professor Sang Wook Kim, founder of Materials Creation, which develops products based on high-quality graphene oxide; ▴Professor Jaebeom Jang, founder of Flashomic Inc., a leader in the commercialization of high-speed multiplexed protein imaging technology; and ▴Professor Steve Park, co-CEO of Aldaver, a company developing artificial cadavers (practice organs) that fully replicate the human body. They will each share their entrepreneurial cases, offering vivid lectures on the journey of scientific technologies into the marketplace.
The symposium will also feature a tour of the automated research lab at the Top-Tier KAIST-MIT Future Energy Initiative Research Center, jointly established by KAIST and MIT. The center, designed to build an AI-robotics-based autonomous research laboratory for the rapid development and application of advanced energy materials to help solve the global climate crisis, will operate for ten years. Overseas scholars will also be given an inside look at research and development using automated infrastructure, with discussions to follow on upcoming international collaborations.
Professor Il-Doo Kim of KAIST’s Department of Materials Science and Engineering, who organized the event, emphasized, “This symposium, featuring six global scholars and six KAIST entrepreneurial professors, will be a valuable opportunity to instill an international perspective and entrepreneurial mindset in students. It will also mark a turning point in KAIST’s innovative materials research and international collaborative research network.”
As part of the program, on Wednesday the 27th, KAIST will hold academic exchange sessions with overseas scholars. These will include discussions on international joint research, as well as sessions where KAIST students and early-career researchers can present their work and interact, opening opportunities for future collaborations.
The 6th KAIST Emerging Materials Symposium is open free of charge to all researchers interested in the latest research trends in chemistry, physics, biology, and materials science-related engineering fields.
Participation on the 26th will be available through on-site registration without prior application. Further details are available on the KAIST Department of Materials Science and Engineering EMS website (https://mse.kaist.ac.kr/index.php?mid=MSE_EMS).
At KAIST, Robots Now Untie Rubber Bands and Insert Wires Like Humans
The technology that allows robots to handle deformable objects such as wires, clothing, and rubber bands has long been regarded as a key task in the automation of manufacturing and service industries. However, since such deformable objects do not have a fixed shape and their movements are difficult to predict, robots have faced great difficulties in accurately recognizing and manipulating them. KAIST researchers have developed a robot technology that can precisely grasp the state of deformable objects and handle them skillfully, even with incomplete visual information. This achievement is expected to contribute to intelligent automation in various industrial and service fields, including cable and wire assembly, manufacturing that handles soft components, and clothing organization and packaging.
KAIST (President Kwang Hyung Lee) announced on the 21st of August that the research team led by Professor Daehyung Park of the School of Computing developed an artificial intelligence technology called “INR-DOM (Implicit Neural-Representation for Deformable Object Manipulation),” which enables robots to skillfully handle objects whose shape continuously changes like elastic bands and which are visually difficult to distinguish.
Professor Park’s research team developed a technology that allows robots to completely reconstruct the overall shape of a deformable object from partially observed three-dimensional information and to learn manipulation strategies based on it. Additionally, the team introduced a new two-stage learning framework that combines reinforcement learning and contrastive learning so that robots can efficiently learn specific tasks. The trained controller achieved significantly higher task success rates compared to existing technologies in a simulation environment, and in real robot experiments, it demonstrated a high level of manipulation capability, such as untying complicatedly entangled rubber bands, thereby greatly expanding the applicability of robots in handling deformable objects.
Deformable Object Manipulation (DOM) is one of the long-standing challenges in robotics. This is because deformable objects have infinite degrees of freedom, making their movements difficult to predict, and the phenomenon of self-occlusion, in which the object hides parts of itself, makes it difficult for robots to grasp their overall state.
To solve these problems, representation methods of deformable object states and control technologies based on reinforcement learning have been widely studied. However, existing representation methods could not accurately represent continuously deforming surfaces or complex three-dimensional structures of deformable objects, and since state representation and reinforcement learning were separated, there was a limitation in constructing a suitable state representation space needed for object manipulation.
To overcome these limitations, the research team utilized “Implicit Neural Representation.” This technology receives partial three-dimensional information (point cloud*) observed by the robot and reconstructs the overall shape of the object, including unseen parts, as a continuous surface (signed distance function, SDF). This enables robots to imagine and understand the overall shape of the object just like humans.
*Point cloud 3D information: a method of representing the three-dimensional shape of an object as a “set of points” on its surface.
Furthermore, the research team introduced a two-stage learning framework. In the first stage of pre-training, a model is trained to reconstruct the complete shape from incomplete point cloud data, securing a state representation module that is robust to occlusion and capable of well representing the surfaces of stretching objects. In the second stage of fine-tuning, reinforcement learning and contrastive learning are used together to optimize the control policy and state representation module so that the robot can clearly distinguish subtle differences between the current state and the goal state and efficiently find the optimal action required for task execution.
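The contrastive component of the fine-tuning stage can be sketched generically. The snippet below implements an InfoNCE-style contrastive loss in NumPy; the embedding size, temperature, and the use of perturbed views as positives are illustrative assumptions, not the paper’s actual training setup.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor embedding should be
    closer to its own positive than to the positives of other samples."""
    # Normalize embeddings so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # The matching (anchor, positive) pair sits on the diagonal.
    return -np.log(np.diag(probs)).mean()

rng = np.random.default_rng(0)
states = rng.normal(size=(8, 16))
# Positives: slightly perturbed views of the same states.
views = states + 0.01 * rng.normal(size=states.shape)
print(info_nce_loss(states, views))  # small loss: matched pairs dominate
```

Minimizing such a loss pulls embeddings of the same underlying state together and pushes different states apart, which is what lets the policy distinguish subtle differences between the current state and the goal state.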
When the INR-DOM technology developed by the research team was mounted on a robot and tested, it showed overwhelmingly higher success rates than the best existing technologies in three complex tasks in a simulation environment: inserting a rubber ring into a groove (sealing), installing an O-ring onto a part (installation), and untying tangled rubber bands (disentanglement). In particular, in the most challenging task, disentanglement, the success rate reached 75%, about 49 percentage points higher than the best existing technology (ACID, 26%).
The research team also verified that INR-DOM is applicable in real environments by combining it with sample-efficient robotic reinforcement learning and training directly in a real-world setting.
As a result, in actual environments, the robot performed insertion, installation, and disentanglement tasks with a success rate of over 90%, and in particular, in the visually difficult bidirectional disentanglement task, it achieved a 25% higher success rate compared to existing image-based reinforcement learning methods, proving that robust manipulation is possible despite visual ambiguity.
Minseok Song, a master’s student and first author of this research, stated that “this research has shown the possibility that robots can understand the overall shape of deformable objects even with incomplete information and perform complex manipulation based on that understanding.” He added, “It will greatly contribute to the advancement of robot technology that performs sophisticated tasks in cooperation with humans or in place of humans in various fields such as manufacturing, logistics, and medicine.”
This study, with KAIST School of Computing master’s student Minseok Song as first author, was presented at the top international robotics conference, Robotics: Science and Systems (RSS) 2025, held June 21–25 at USC in Los Angeles.
※ Paper title: “Implicit Neural-Representation Learning for Elastic Deformable-Object Manipulations”
※ DOI: https://www.roboticsproceedings.org/ (to be released), currently https://arxiv.org/abs/2505.00500
This research was supported by the Ministry of Science and ICT through the Institute of Information & Communications Technology Planning & Evaluation (IITP)’s projects “Core Software Technology Development for Complex-Intelligence Autonomous Agents” (RS-2024-00336738; Development of Mission Execution Procedure Generation Technology for Autonomous Agents’ Complex Task Autonomy), “Core Technology Development for Human-Centered Artificial Intelligence” (RS-2022-II220311; Goal-Oriented Reinforcement Learning Technology for Multi-Contact Robot Manipulation of Everyday Objects), “Core Computing Technology” (RS-2024-00509279; Global AI Frontier Lab), as well as support from Samsung Electronics. More details can be found at https://inr-dom.github.io.
KAIST Leading the International Standardization of Next-Generation Random Number Generators
In computer security, random numbers are crucial values that must be unpredictable—such as secret keys or initialization vectors (IVs)—forming the foundation of security systems. To achieve this, deterministic random bit generators (DRBGs) are used, which produce numbers that appear random. However, existing DRBGs had limitations in both security (unpredictability against hacking) and output speed. KAIST researchers have developed a DRBG that theoretically achieves the highest possible level of security through a new proof technique, while maximizing speed by parallelizing its structure. This enables safe and ultra-fast random number generation applicable from IoT devices to large-scale servers.
KAIST (President Kwang Hyung Lee) announced on the 20th of August that a research team led by Professor Jooyoung Lee from the School of Computing has established a new theoretical framework for analyzing the security of permutation*-based deterministic random bit generators (DRBG, Deterministic Random Bit Generator) and has designed a DRBG that achieves optimal efficiency.
*Permutation: The process of shuffling bits or bytes by changing their order, allowing bidirectional conversion (the shuffled data can be restored to its original state).
Deterministic random bit generators create unpredictable random numbers from entropy sources (random data obtained from the environment) using basic cryptographic operations such as block ciphers*, hash functions**, and permutations.
*Block cipher: A method of transforming plaintext into ciphertext of the same length.
**Hash function: A function that converts input into a fixed-length digest by mixing input data to produce an unpredictable value.
The random numbers generated are used in most cryptographic algorithms and determine the fundamental security of the entire system that relies on them. Therefore, DRBGs form the basis of cryptography, and improving their efficiency and security is a highly important research task.
Permutation functions, as fundamental components of cryptographic algorithms that allow bidirectional computation, have attracted significant attention for their excellent security and efficiency, especially since being adopted in the U.S. standard SHA-3 hash function.
However, the sponge construction* adopted in SHA-3 has been criticized for its limited output efficiency relative to permutation size. Since all existing permutation-based DRBGs used sponge constructions in their output functions, they too suffered from output efficiency limitations.
*Sponge construction: A structure resembling a sponge’s process of absorbing and squeezing out water. It sequentially absorbs input data and then squeezes out as much output as desired. Since the output length is not fixed, it can generate very long random numbers or hashes when needed.
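The output-efficiency limit of the sponge construction can be made concrete with a toy squeeze loop. Everything below is an illustrative sketch: `toy_permutation` is a stand-in (hashing the state with SHA-256 is not a true permutation, but it shuffles all n bits deterministically), and the rate/capacity split is arbitrary. The point is only that each call to P releases r of the n state bits, capping output efficiency at r/n.

```python
import hashlib

N_BYTES, R_BYTES = 32, 16   # toy state: n = 256 bits, rate r = 128 bits

def toy_permutation(state: bytes) -> bytes:
    """Stand-in for a cryptographic permutation P (illustration only)."""
    return hashlib.sha256(state).digest()

def sponge_squeeze(state: bytes, out_len: int) -> bytes:
    """Sponge-style squeezing: per call to P, only the upper r bits of
    the n-bit state are released as output."""
    out = b""
    while len(out) < out_len:
        state = toy_permutation(state)
        out += state[:R_BYTES]      # only r of the n bits become output
    return out[:out_len]

calls = -(-64 // R_BYTES)  # permutation calls needed for 64 output bytes
print(calls)               # 4
```

A design that releases all n bits per call (as POSDRBG’s parallel output function does, discussed below) would need only half as many permutation calls for the same output length, i.e., output efficiency 1 instead of r/n.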
In addition, existing permutation-based DRBGs used a technique called game hopping to prove security. However, this method had the limitation of yielding lower security guarantees than theoretically possible.
For example, when a permutation’s capacity (c) is 256 bits, the theoretical expectation is min{c/2, λ}, i.e., 128-bit security. But under the conventional proof method, the guarantee was only min{c/3, λ}, about 85 bits. (λ refers to the entropy threshold, and min indicates taking the smaller of the two values.)
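The two bounds can be compared with a line of arithmetic. Assuming, for illustration, an entropy threshold λ of 128 bits alongside the 256-bit capacity from the example above:

```python
# Security bound for a permutation-based DRBG with capacity c and
# entropy threshold lam: min(c/2, lam) bits under the new proof,
# versus min(c/3, lam) bits under conventional game hopping.
def security_bits(c, lam, divisor):
    return min(c / divisor, lam)

c, lam = 256, 128              # illustrative parameters
old = security_bits(c, lam, 3)  # conventional proof
new = security_bits(c, lam, 2)  # the team's two-stage proof
print(old, new)                 # ~85.3 vs 128 bits
```

The jump from about 85 to 128 bits is the roughly 50% improvement described below.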
Game hopping defines the situation between the random number generator and the adversary as a “game,” splits it into many small steps (mini-games), and calculates the adversary’s success probability at each stage to combine them. However, because the process excessively subdivides the stages, the resulting security level turned out lower than the actual one.
Professor Jooyoung Lee’s research team at KAIST noted that the conventional game-hopping technique divided the overall game into too many steps and proposed a new proof method simplifying it into just two stages. As a result, they demonstrated that the security level of permutation-based DRBGs actually corresponds to min{c/2, λ} bits, an improvement of approximately 50% compared to existing proofs. They also proved that this value is the theoretical maximum achievable.
The research team also designed POSDRBG (Parallel Output Sponge-based DRBG) to address the output efficiency limitation of the existing sponge structure caused by its serial (single-line) processing. The newly proposed parallel structure processes multiple streams simultaneously, thereby achieving the maximum efficiency possible for permutation-based DRBGs.
Professor Jooyoung Lee stated, “POSDRBG is a new deterministic random bit generator that improves both random number generation speed and security, making it applicable from small IoT devices to large-scale servers. This research is expected to positively influence the ongoing revision of the international DRBG standard SP800-90A*, leading to the formal inclusion of permutation-based DRBGs.”
*SP800-90A: An international standard document established by the U.S. NIST (National Institute of Standards and Technology), defining the design and operational criteria for DRBGs used in cryptographic systems. Until now, permutation-based DRBGs have not been included in the standard.
This research, with Woohyuk Chung (KAIST, first author), Seongha Hwang (KAIST), Hwigyeom Kim (Samsung Electronics), and Jooyoung Lee (KAIST, corresponding author), will be presented in August at CRYPTO (the Annual International Cryptology Conference), the world’s top academic conference in cryptology.
Article title: “Enhancing Provable Security and Efficiency of Permutation-Based DRBGs”
DOI: https://doi.org/10.1007/978-3-032-01901-1_15
This research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP).
The random number output function of the existing Sponge-DRBG uses a sponge structure that directly connects the permutation P. For reference, all existing permutation-function-based DRBGs have this sponge structure. In the sponge structure, among the n-bit inputs of P, only the upper r bits are used as the output Z. Therefore, the output efficiency is always limited to r/n.
In this study, the random number output function of POSDRBG was designed to allow parallel computation, and all n-bit outputs of the permutation function P become random numbers Z. Therefore, it has an output efficiency of 1.
KAIST Develops AI to Easily Find Promising Materials That Capture Only CO₂
< Photo 1. (From left) Professor Jihan Kim, Ph.D. candidate Yunsung Lim and Dr. Hyunsoo Park of the Department of Chemical and Biomolecular Engineering >
In order to help prevent the climate crisis, actively reducing already-emitted CO₂ is essential. Accordingly, direct air capture (DAC) — a technology that directly extracts only CO₂ from the air — is gaining attention. However, effectively capturing pure CO₂ is not easy due to water vapor (H₂O) present in the air. KAIST researchers have successfully used AI-driven machine learning techniques to identify the most promising CO₂-capturing materials among metal-organic frameworks (MOFs), a key class of materials studied for this technology.
KAIST (President Kwang Hyung Lee) announced on the 29th of June that a research team led by Professor Jihan Kim from the Department of Chemical and Biomolecular Engineering, in collaboration with a team at Imperial College London, has developed a machine-learning-based simulation method that can quickly and accurately screen MOFs best suited for atmospheric CO₂ capture.
< Figure 1. Concept diagram of Direct Air Capture (DAC) technology and carbon capture using Metal-Organic Frameworks (MOFs). MOFs are promising porous materials capable of capturing carbon dioxide from the atmosphere, drawing attention as a core material for DAC technology. >
To overcome the difficulty of discovering high-performance materials due to the complexity of structures and the limitations of predicting intermolecular interactions, the research team developed a machine learning force field (MLFF) capable of precisely predicting the interactions between CO₂, water (H₂O), and MOFs. This new method enables calculations of MOF adsorption properties with quantum-mechanics-level accuracy at vastly faster speeds than before.
Using this system, the team screened over 8,000 experimentally synthesized MOF structures, identifying more than 100 promising candidates for CO₂ capture. Notably, this included new candidates that had not been uncovered by traditional force-field-based simulations. The team also analyzed the relationships between MOF chemical structure and adsorption performance, proposing seven key chemical features that will help in designing new materials for DAC.
< Figure 2. Concept diagram of adsorption simulation using Machine Learning Force Field (MLFF). The developed MLFF is applicable to various MOF structures and allows for precise calculation of adsorption properties by predicting interaction energies during repetitive Widom insertion simulations. It is characterized by simultaneously achieving high accuracy and low computational cost compared to conventional classical force fields. >
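Figure 2 mentions repeated Widom insertion simulations; the underlying idea can be sketched with a toy energy model. Everything below, from the box size to the single attractive site, is an illustrative stand-in: a real screening run would evaluate the trained machine learning force field at each insertion instead of `toy_energy`.

```python
import numpy as np

def widom_henry(energy_fn, box=10.0, n_insert=20000, T=300.0, seed=0):
    """Widom test-particle insertion: place a probe molecule at random
    positions and average the Boltzmann factor exp(-U/kT).  The average
    is proportional to the adsorbate's Henry coefficient, a standard
    low-pressure measure of adsorption affinity."""
    kB = 8.617e-5  # Boltzmann constant in eV/K
    rng = np.random.default_rng(seed)
    positions = rng.uniform(0.0, box, size=(n_insert, 3))
    energies = np.array([energy_fn(p) for p in positions])  # eV
    return np.mean(np.exp(-energies / (kB * T)))

# Toy "framework": a single attractive site at the box center
# (a trained MLFF would supply the real CO2- or H2O-framework energy).
def toy_energy(pos, site=np.array([5.0, 5.0, 5.0])):
    r = np.linalg.norm(pos - site)
    return -0.2 * np.exp(-r**2)   # shallow attractive well, in eV

print(widom_henry(toy_energy))  # > 1: net attraction, favorable adsorption
```

Because each insertion needs only an energy evaluation, replacing an expensive quantum-mechanical call with a fast MLFF prediction is what makes screening thousands of MOF structures tractable.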
This research is recognized as a significant advance in the DAC field, greatly enhancing materials design and simulation by precisely predicting MOF-CO₂ and MOF-H₂O interactions.
The results of this research, with Ph.D. candidate Yunsung Lim and Dr. Hyunsoo Park of KAIST as co-first authors, were published in the international academic journal Matter on June 12.
※Paper Title: Accelerating CO₂ direct air capture screening for metal–organic frameworks with a transferable machine learning force field
※DOI: 10.1016/j.matt.2025.102203
This research was supported by the Saudi Aramco-KAIST CO₂ Management Center and the Ministry of Science and ICT's Global C.L.E.A.N. Project.
KAIST Invites World-Renowned Scholars, Elevating Global Competitiveness
< Photo 1. (From left) Professor John Rogers, Professor Gregg Rothermel, Dr. Sang H. Choi >
KAIST announced on June 27th that it has appointed three world-renowned scholars, including Professor John A. Rogers of Northwestern University, USA, as Invited Distinguished Professors in key departments such as Materials Science and Engineering.
Professor John A. Rogers (Northwestern University, USA) will work with the Department of Materials Science and Engineering from July 2025 to June 2028; Professor Gregg Rothermel (North Carolina State University, USA) with the School of Computing from August 2025 to July 2026; and Dr. Sang H. Choi (NASA Langley Research Center, USA) with the Department of Aerospace Engineering from May 2025 to April 2028.
Professor John A. Rogers, a global authority in the field of bio-integrated electronics, has been leading advanced convergence technologies such as flexible electronics, smart skin, and implantable sensors. His significant impact on academia and industry is evident through over 900 papers published in top-tier academic journals like Science, Nature, and Cell, and through an H-index of 240*. His research group, the Rogers Research Group at Northwestern University, focuses on "Science that brings Solutions to Society," encompassing areas such as bio-integrated microsystems and unconventional nanofabrication techniques. He is the founding Director of the Querrey-Simpson Institute of Bioelectronics at Northwestern University.
* H-index 240: The H-index is a measurement used to assess the research productivity and impact of an individual author. An H-index of 240 means that 240 of the author's papers have each been cited at least 240 times, indicating significant impact and standing as a world-class scholar.
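For reference, the metric itself is straightforward to compute from a citation list:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers with
    at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i       # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))   # 3
```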
The Department of Materials Science and Engineering plans to further enhance its research capabilities in next-generation bio-implantable materials and wearable devices and boost its global competitiveness through the invitation of Professor Rogers. In particular, it aims to create strong research synergies by linking with the development of bio-convergence interface materials, a core task of the Leading Research Center (ERC, total research budget of 13.5 billion KRW over 7 years) led by Professor Kun-Jae Lee.
Professor Gregg Rothermel, a world-renowned scholar in software engineering, was ranked second among the top 50 global researchers by Communications of the ACM. For over 30 years, he has conducted practical research to improve software reliability and quality. He has achieved influential research outcomes through collaborations with global companies such as Boeing, Microsoft, and Lockheed Martin. Professor Rothermel's research at North Carolina State University focuses on software engineering and program analysis, with significant contributions through initiatives like the ESQuaReD Laboratory and the Software-Artifact Infrastructure Repository (SIR).
The School of Computing plans to strengthen its research capabilities in software engineering and conduct collaborative research on software design and testing to enhance the reliability and safety of AI-based software systems through the invitation of Professor Gregg Rothermel. In particular, he is expected to participate in the Big Data Edge-Cloud Service Research Center (ITRC, total research budget of 6.7 billion KRW over 8 years) led by Professor In-Young Ko of the School of Computing, and the Research on Improving Complex Mobility Safety (SafetyOps, Digital Columbus Project, total research budget of 3.5 billion KRW over 8 years), contributing to resolving uncertainties in machine learning-based AI software and advancing technology.
Dr. Sang H. Choi, a global expert in space exploration and energy harvesting, has worked at NASA Langley Research Center for over 40 years, authoring over 200 papers and reports, holding 45 patents, and receiving 71 awards from NASA. In 2022, he was inducted into the 'Inventors Hall of Fame' as part of NASA's Technology Transfer Program. This is a rare honor, recognizing researchers who have contributed to the private sector dissemination of space exploration technology, with only 35 individuals worldwide selected to date. Dr. Choi's extensive work at NASA includes research on advanced electronic and energetic materials, satellite sensors, and various nano-technologies.
Dr. Choi plans to collaborate with Associate Professor Hyun-Jung Kim (former NASA Research Scientist, 2009-2024), who joined the Department of Aerospace Engineering in September of 2024, to lead the development of core technologies for lunar exploration: energy sources, sensing, and in-situ resource utilization (ISRU).
KAIST President Kwang Hyung Lee stated, "It is very meaningful to be able to invite these world-class scholars. Through these appointments, KAIST will further strengthen its global competitiveness in research in the fields of advanced convergence technology such as bio-convergence electronics, AI software engineering, and space exploration, securing our position as the leader of global innovations."
Military Combatants Usher in an Era of Personalized Training with New Materials
< Photo 1. (From left) Professor Steve Park of Materials Science and Engineering, Kyusoon Pak, Ph.D. Candidate (Army Major) >
Traditional military training often relies on standardized methods, which has limited the provision of optimized training tailored to individual combatants' characteristics or specific combat situations. To address this, our research team developed an e-textile platform, securing core technology that can reflect the unique traits of individual combatants and various combat scenarios. This technology has proven robust enough for battlefield use and is economical enough for widespread distribution to a large number of troops.
On June 25th, Professor Steve Park's research team at KAIST's Department of Materials Science and Engineering announced the development of a flexible, wearable electronic textile (E-textile) platform using an innovative technology that 'draws' electronic circuits directly onto fabric.
The wearable e-textile platform developed by the research team combines 3D printing technology with new materials engineering design to directly print flexible and highly durable sensors and electrodes onto textile substrates. This enables the collection of precise movement and human body data from individual combatants, which can then be used to propose customized training models.
Existing e-textile fabrication methods were often complex or limited in their ability to provide personalized customization. To overcome these challenges, the research team adopted an additive manufacturing technology called 'Direct Ink Writing (DIW)' 3D printing.
< Figure 1. Schematic diagram of e-textile manufactured with Direct Ink Writing (DIW) printing technology on various textiles, including combat uniforms >
This technology involves directly dispensing and printing special ink, which functions as sensors and electrodes, onto textile substrates in desired patterns. This allows for flexible implementation of various designs without the complex process of mask fabrication. This is expected to be an effective technology that can be easily supplied to hundreds of thousands of military personnel.
The core of this technology lies in the development of high-performance functional inks based on advanced materials engineering design. The research team combined styrene-butadiene-styrene (SBS) polymer, which provides flexibility, with multi-walled carbon nanotubes (MWCNT) for electrical conductivity. They developed a tensile/bending sensor ink that can stretch up to 102% and maintain stable performance even after 10,000 repetitive tests. This means that accurate data can be consistently obtained even during the strenuous movements of combatants.
< Figure 2. Measurement of human movement and breathing patterns using e-textile >
Furthermore, new material technology was applied to implement 'interconnect electrodes' that electrically connect the upper and lower layers of the fabric. The team developed an electrode ink combining silver (Ag) flakes with rigid polystyrene (PS) polymer, precisely controlling the impregnation level (how much the ink penetrates the fabric) to effectively connect both sides or multiple layers of the fabric. This secures the technology for producing multi-layered wearable electronic systems integrating sensors and electrodes.
< Figure 3. Experimental results of recognizing unknown objects after machine learning six objects using a smart glove >
The research team proved the platform's performance through actual human movement monitoring experiments. They printed the developed e-textile on major joint areas of clothing (shoulders, elbows, knees) and measured movements and posture changes during various exercises such as running, jumping jacks, and push-ups in real-time.
Additionally, they demonstrated the potential for applications such as monitoring breathing patterns using a smart mask and recognizing objects through machine learning and perceiving complex tactile information by printing multiple sensors and electrodes on gloves. These results show that the developed e-textile platform is effective in precisely understanding the movement dynamics of combatants.
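The object-recognition demonstration in Figure 3 can be pictured with a minimal classifier sketch. The synthetic sensor patterns, the nearest-centroid method, and all dimensions below are illustrative assumptions, not the team's actual machine learning pipeline: the point is only that each grasped object produces a characteristic pattern across the glove's sensors, which a simple model can learn to separate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for glove data: 6 objects, each producing a
# characteristic pattern across 10 strain/pressure sensors.
prototypes = rng.normal(size=(6, 10))

def make_grasp(obj_id, noise=0.1):
    """One noisy multi-sensor reading for a grasp of a given object."""
    return prototypes[obj_id] + noise * rng.normal(size=10)

# "Train": average a few grasps per object into class centroids.
centroids = np.stack([
    np.mean([make_grasp(k) for _ in range(20)], axis=0) for k in range(6)
])

def classify(reading):
    """Nearest-centroid classification of a fresh sensor reading."""
    return int(np.argmin(np.linalg.norm(centroids - reading, axis=1)))

# Classify fresh, unseen grasps of each object.
correct = sum(classify(make_grasp(k)) == k for k in range(6))
print(f"{correct}/6 objects recognized")
```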
This research is an important example demonstrating how cutting-edge new material technology can contribute to the advancement of the defense sector. Major Kyusoon Pak of the Army, who participated in this research, factored in requirements such as military applicability and the economic feasibility of large-scale distribution from the research design stage.
< Figure 4. Experimental results showing that a multi-layered e-textile glove connected with interconnect electrodes can measure tensile/bending signals and pressure signals at a single point >
Major Pak stated, "Our military is currently facing both a crisis and an opportunity due to the decrease in military personnel resources caused by the demographic cliff and the advancement of science and technology. Also, respect for life in the battlefield is emerging as a significant issue. This research aims to secure original technology that can provide customized training according to military branch/duty and type of combat, thereby enhancing the combat power and ensuring the survivability of our soldiers."
He added, "I hope this research will be evaluated as a case that achieved both scientific contribution and military applicability."
This research, where Kyusoon Pak, Ph.D. Candidate (Army Major) from KAIST's Department of Materials Science and Engineering, participated as the first author and Professor Steve Park supervised, was published on May 27, 2025, in 'npj Flexible Electronics' (top 1.8% in JCR field), an international academic journal in the electrical, electronic, and materials engineering fields.
* Paper Title: Fabrication of Multifunctional Wearable Interconnect E-textile Platform Using Direct Ink Writing (DIW) 3D Printing
* DOI: https://doi.org/10.1038/s41528-025-00414-7
This research was supported by the Ministry of Trade, Industry and Energy and the National Research Foundation of Korea.
KAIST's Li-Fi Achieves 100 Times the Speed of Wi-Fi with Enhanced Security
- KAIST-KRISS Develop 'On-Device Encryption Optical Transmitter' Based on Eco-Friendly Quantum Dots
- New Li-Fi Platform Technology Achieves High Performance with 17.4% Device Efficiency and 29,000 nit Brightness, Simultaneously Improving Transmission Speed and Security
- Presents New Methodology for High-Speed and Encrypted Communication Through Single-Device-Based Dual-Channel Optical Modulation
< Photo 1. (Front row from left) Seungmin Shin, First Author; Professor Himchan Cho; (Back row from left) Hyungdoh Lee, Seungwoo Lee, Wonbeom Lee; (Top left) Dr. Kyung-geun Lim >
Li-Fi (Light Fidelity) is a wireless communication technology that utilizes the visible light spectrum (400-800 THz), similar to LED light, offering speeds up to 100 times faster than existing Wi-Fi (up to 224 Gbps). While it has fewer limitations in available frequency allocation and less radio interference, it is relatively vulnerable to security breaches as anyone can access it. Korean researchers have now proposed a new Li-Fi platform that overcomes the limitations of conventional optical communication devices and can simultaneously enhance both transmission speed and security.
KAIST (President Kwang Hyung Lee) announced on the 24th that Professor Himchan Cho's research team from the Department of Materials Science and Engineering, in collaboration with Dr. Kyung-geun Lim of the Korea Research Institute of Standards and Science (KRISS, President Ho-Seong Lee) under the National Research Council of Science & Technology (NST, Chairman Young-Sik Kim), has developed 'on-device encryption optical communication device' technology for the utilization of 'Li-Fi,' which is attracting attention as a next-generation ultra-high-speed data communication.
Professor Cho's team created high-efficiency light-emitting triode devices using eco-friendly quantum dots (low-toxicity, sustainable materials). The device generates light using an electric field: the field is concentrated at tiny holes (pinholes) in the permeable electrode and transmitted beyond it, allowing the device to process two input data streams simultaneously.
Using this principle, the research team developed a technology called 'on-device encryption optical transmitter.' The core of this technology is that the device itself converts information into light and simultaneously encrypts it. This means that enhanced security data transmission is possible without the need for complex, separate equipment.
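As a conceptual picture only: combining a data stream with a key stream before transmission can be modeled in software with a simple XOR, as sketched below. This is purely an illustration of the general idea; the actual device performs the encoding optically through dual-channel modulation within the light-emitting triode, not with this scheme.

```python
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with a key byte -- the simplest model of
    combining a data stream with a key stream before transmission."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"LiFi payload"
key = secrets.token_bytes(len(message))   # shared key stream

ciphertext = xor_stream(message, key)     # what would modulate the light
recovered = xor_stream(ciphertext, key)   # receiver undoes the XOR
print(recovered == message)               # True
```

The benefit the researchers describe is that this kind of combination happens inside the transmitter device itself, so no separate encryption hardware sits between the data source and the light output.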
External Quantum Efficiency (EQE) is an indicator of how efficiently electricity is converted into light, with a general commercialization standard of about 20%. The newly developed device recorded an EQE of 17.4%, and its luminance of 29,000 nit is more than 10 times the roughly 2,000-nit maximum brightness of a smartphone OLED screen.
< Figure 1. Schematic diagram of the device structure developed by the research team and encrypted communication >
Furthermore, to understand more precisely how the device converts information into light, the research team used a method called 'transient electroluminescence analysis': they examined the light emitted when voltage was applied for very short durations (hundreds of nanoseconds; one nanosecond is a billionth of a second). By tracking the movement of charges within the device on this timescale, they elucidated the operating mechanism of the dual-channel optical modulation implemented within a single device.
Professor Himchan Cho of KAIST stated, "This research overcomes the limitations of existing optical communication devices and proposes a new communication platform that can both increase transmission speed and enhance security."
< Photo 2. Professor Himchan Cho, Department of Materials Science and Engineering >
He added, "This technology, which strengthens security without additional equipment and simultaneously enables encryption and transmission, can be widely applied in various fields where security is crucial in the future."
This research, with Seungmin Shin, a Ph.D. candidate at KAIST's Department of Materials Science and Engineering, participating as the first author, and Professor Himchan Cho and Dr. Kyung-geun Lim of KRISS as co-corresponding authors, was published in the international journal 'Advanced Materials' on May 30th and was selected as an inside front cover paper.
※ Paper Title: High-Efficiency Quantum Dot Permeable electrode Light-Emitting Triodes for Visible-Light Communications and On-Device Data Encryption
※ DOI: https://doi.org/10.1002/adma.202503189
This research was supported by the National Research Foundation of Korea, the National Research Council of Science & Technology (NST), and the Korea Institute for Advancement of Technology.