President Kwang Hyung Lee, 2026 New Year Message
In his 2026 New Year’s Address, KAIST President Kwang Hyung Lee stated, “Based on the ‘QAIST-New Culture Strategy’ that encourages asking questions and taking on challenges, KAIST will accelerate AI-centered innovation in education and research to leap forward as a world-class university.”
President Lee highlighted the major achievements of the past year, including:
▲Educational innovation to nurture inquisitive minds ▲Advancement of education and research systems through the establishment of the College of AI ▲Establishment of a postdoc-centered research ecosystem through the InnoCORE Program, a national initiative to cultivate world-class AI talent ▲An approximately 20 percent increase in research funding ▲Expansion of research collaboration with global companies and universities
Particularly in the education sector, he explained that KAIST has fostered a culture where failure is viewed as a starting point for new challenges. This has been achieved through initiatives such as student-generated exam questions, the ‘Problem Definition to Solution Program (PDSP)’—where students define and solve problems on their own—and the operation of ‘KAIST Failure Week.’
He emphasized that these changes led to tangible results, with undergraduate early-admission applicants increasing by 1.9 times and graduate school applicants by 1.3 times over the past three years.
At the same time, efforts have been made to improve students' learning and living environments. KAIST has pursued the renovation and remodeling of all dormitories and enhanced student dining facilities and menu options to ensure students enjoy a more comfortable campus life. President Lee stressed, “The most precious members of KAIST are our students, and the university's role is to create an environment where they can freely ask questions and take on challenges.”
In the research field, KAIST secured billions of Korean won in annual research funding through joint initiatives with Germany’s Merck and Taiwan’s Formosa Group. It also established a strategic hub connecting to the global startup ecosystem through the KAIST–IDIS Silicon Valley Campus. Furthermore, 26 buildings have been newly constructed or expanded over the past five years, and 24 buildings are currently under construction or scheduled to break ground, ensuring the continuous expansion of education and research infrastructure.
In terms of technology commercialization, as of 2025, KAIST launched 59 deep-tech startups and completed technology transfers totaling KRW 8.2 billion. Notably, Sovargen, a faculty startup, successfully concluded a KRW 750 billion technology export deal.
President Lee presented the following key priorities for 2026: ▲Elevating the College of AI to a world-leading level ▲Advancing the Pyeongtaek, Osong, and Sejong campus projects ▲Expanding global partnerships ▲Ensuring the safe execution of 24 major construction projects
President Lee concluded by saying, “KAIST has now firmly established itself as a leading university representing the Republic of Korea, and its global recognition is rising rapidly through internationalization efforts. If we continue to work together in 2026, KAIST will stand proudly as a truly world-class university.”
KAIST Awakens Dormant Immune Cells Inside Tumors to Attack Cancer
<(From Left) Professor Ji-Ho Park, Dr. Jun-Hee Han from the Department of Bio and Brain Engineering>
Within tumors in the human body, there are immune cells (macrophages) capable of fighting cancer, but they have been unable to perform their roles properly due to suppression by the tumor. KAIST researchers have overcome this limitation by developing a new therapeutic approach that directly converts immune cells inside tumors into anticancer cell therapies.
KAIST (President Kwang Hyung Lee) announced on the 30th that a research team led by Professor Ji-Ho Park of the Department of Bio and Brain Engineering has developed a therapy in which, when a drug is injected directly into a tumor, macrophages already present in the body absorb it, produce CAR (a cancer-recognizing device) proteins on their own, and are converted into anticancer immune cells known as “CAR-macrophages.”
Solid tumors—such as gastric, lung, and liver cancers—grow as dense masses, making it difficult for immune cells to infiltrate tumors or maintain their function. As a result, the effectiveness of existing immune cell therapies has been limited.
CAR-macrophages, which have recently attracted attention as a next-generation immunotherapy, have the advantage of directly engulfing cancer cells while simultaneously activating surrounding immune cells to amplify anticancer responses.
However, conventional CAR-macrophage therapies require immune cells to be extracted from a patient’s blood, followed by cell culture and genetic modification. This process is time-consuming, costly, and has limited feasibility for real-world patient applications.
To address this challenge, the research team focused on “tumor-associated macrophages” that are already accumulated around tumors.
They developed a strategy to directly reprogram immune cells in the body by loading lipid nanoparticles—designed to be readily absorbed by macrophages—with both mRNA encoding cancer-recognition information and an immunostimulant that activates immune responses.
In other words, in this study, CAR-macrophages were created by “directly converting the body’s own macrophages into anticancer cell therapies inside the body.”
<Figure . Schematic illustration of the strategy for in vivo CAR-macrophage generation and cancer cell eradication via co-delivery of CAR mRNA and immunostimulants using lipid nanoparticles (LNPs)>
When this therapeutic agent was injected into tumors, macrophages rapidly absorbed it and began producing proteins that recognize cancer cells, while immune signaling was simultaneously activated. As a result, the generated “enhanced CAR-macrophages” showed markedly improved cancer cell–killing ability and activated surrounding immune cells, producing a powerful anticancer effect.
In animal models of melanoma (the most dangerous form of skin cancer), tumor growth was significantly suppressed, and the therapeutic effect was shown to have the potential to extend beyond the local tumor site to induce systemic immune responses.
Professor Ji-Ho Park stated, “This study presents a new concept of immune cell therapy that generates anticancer immune cells directly inside the patient’s body,” adding that “it is particularly meaningful in that it simultaneously overcomes the key limitations of existing CAR-macrophage therapies—delivery efficiency and the immunosuppressive tumor environment.”
This research was led by Jun-Hee Han, Ph.D., of the Department of Bio and Brain Engineering at KAIST as the first author, and the results were published on November 18 in ACS Nano, an international journal in the field of nanotechnology.
※ Paper title: “In Situ Chimeric Antigen Receptor Macrophage Therapy via Co-Delivery of mRNA and Immunostimulant,” Authors: Jun-Hee Han (first author), Erinn Fagan, Kyunghwan Yeom, Ji-Ho Park (corresponding author), DOI: 10.1021/acsnano.5c09138
This research was supported by the Mid-Career Researcher Program of the National Research Foundation of Korea.
Presenting a Brain-Like Next-Generation AI Semiconductor that Sees and Judges Instantly
< (From left) Professor Sanghun Jeon, Ph.D. candidate Seungyeob Kim, postdoctoral researcher Hongrae Cho, Ph.D. candidates Sang-ho Lee and Taeseung Jung, and M.S. candidate Seonjae Park >
With the advancement of Artificial Intelligence (AI), the importance of ultra-low-power semiconductor technology that integrates sensing, computation, and memory into a single unit is growing. However, conventional structures face challenges such as power loss due to data movement, latency, and limitations in memory reliability. A Korean research team has drawn international academic attention by presenting core technologies for an integrated ‘Sensor–Compute–Store’ AI semiconductor to solve these issues.
KAIST announced on December 31st that Professor Sanghun Jeon’s research team from the School of Electrical Engineering presented a total of six papers at the International Electron Devices Meeting (IEEE IEDM 2025), the world’s most prestigious semiconductor conference, held in San Francisco from December 8 to 10. Among these, one paper was selected as a Highlight Paper and another as a Top Ranked Student Paper.
Highlight Paper: Monolithically Integrated Photodiode–Spiking Circuit for Neuromorphic Vision with In-Sensor Feature Extraction [Link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=255]
Top Ranked Student Paper: A Highly Reliable Ferroelectric NAND Cell with Ultra-thin IGZO Charge Trap Layer; Trap Profile Engineering for Endurance and Retention Improvement [Link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=124]
The M3D-integrated neuromorphic vision sensor, selected as a Highlight Paper, stacks the equivalent of the human eye and brain within a single chip. Simply put, the sensors that detect light and the circuits that process signals like a brain are fabricated as very thin layers and stacked vertically in one chip, implementing a structure in which the processes of 'seeing' and 'judging' occur simultaneously.
Through this, the research team completed the world's first "In-Sensor Spiking Convolution" platform, where AI computation technology that "sees and judges at the same time" takes place directly within the camera sensor.
< Figure 1. Summary of research on vertically stacked optical signal-to-spike frequency converter for AI >
< Figure 2. Representative diagram of the development of a 2T-2C near-pixel analog computing cell based on oxide thin-film transistors >
Previously, this technology required several stages: capturing an image (sensor), converting it to digital (ADC), storing it in memory (DRAM), and then calculating (CNN). However, this new technology eliminates unnecessary data movement as the calculation happens immediately within the sensor. As a result, it has become possible to implement real-time, ultra-low-power Edge AI with significantly reduced power consumption and dramatically improved response speeds.
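The "see and judge in one place" idea can be illustrated with a toy rate-coding model: a photodiode current is integrated on-chip and converted directly into spike frequency, so no ADC or external memory sits between sensing and computation. This is only an illustrative leaky integrate-and-fire (LIF) sketch under made-up parameters, not the circuit the KAIST team fabricated.

```python
# Toy leaky integrate-and-fire (LIF) model illustrating how a photodiode
# current could be converted into spike frequency in-sensor (rate coding).
# Parameters and dynamics are illustrative, not the reported circuit.

def lif_spike_count(photo_current, steps=1000, dt=1e-3,
                    leak=0.05, threshold=1.0):
    """Integrate a constant photo-current; emit a spike and reset each
    time the membrane potential crosses the threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (photo_current - leak * v)  # leaky integration
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after spiking
    return spikes

# Brighter input -> higher spike frequency, with no ADC/DRAM stage.
dim, bright = lif_spike_count(2.0), lif_spike_count(8.0)
```

In such a scheme, downstream spiking circuits operate on the spike train itself, which is why the intermediate digitization and memory-transfer stages of the conventional pipeline can be dropped.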
Based on this approach, the research team presented six core technologies at the conference covering all layers of AI semiconductors, from input to storage. They simultaneously created neuromorphic semiconductors that operate like the brain using much less electricity while utilizing existing semiconductor processes, along with next-generation memory optimized for AI.
First, on the sensor side, they designed the system so that judgment occurs at the sensor stage rather than having separate components for capturing images and calculating. Consequently, power consumption decreased and response speeds increased compared to the conventional method of taking a photo and sending it to another chip for calculation.
< Figure 3. Schematic diagram of a next-generation biomimetic tactile system using neuromorphic devices >
< Figure 4. Representative diagram of NC-NAND development research based on Ultra-thin-Mo and Sub-3.5 nm HZO >
Furthermore, in the field of memory, they implemented a next-generation NAND flash that uses the same materials but operates at lower voltages, lasts longer, and can store data stably even when the power is turned off. Through this, they presented a foundational technology that satisfies the requirements for high-capacity, high-reliability, and low-power memory necessary for AI.
< Figure 5. Representative diagram of next-generation 3D FeNAND memory development research >
< Figure 6. Representative diagram of research on charge behavior characterization and quantitative analysis methodology for next-generation FeNAND memory >
Professor Sanghun Jeon, who led the research, stated, "This research is significant in that it demonstrates that the entire hierarchy can be integrated into a single material and process system, moving away from the existing AI semiconductor structure where sensing, computation, and storage were designed separately." He added, "Moving forward, we plan to expand this into a next-generation AI semiconductor platform that encompasses everything from ultra-low-power Edge AI to large-scale AI memory."
Meanwhile, this research was conducted with support from basic research projects of the Ministry of Science and ICT and the National Research Foundation of Korea, as well as the Center for Heterogeneous Integration of Extreme-scale & Property Semiconductors (CH³IPS). It was carried out in collaboration with Samsung Electronics, Kyungpook National University, and Hanyang University.
Turning PC and Mobile Devices into AI Infrastructure, Reducing ChatGPT Costs
< (From left) KAIST School of Electrical Engineering: Dr. Jinwoo Park, M.S. candidate Seunggeun Cho, and Professor Dongsu Han >
Until now, AI services based on Large Language Models (LLMs) have mostly relied on expensive data center GPUs. This has resulted in high operational costs and created a significant barrier to entry for utilizing AI technology. A research team at KAIST has developed a technology that reduces reliance on expensive data center GPUs by utilizing affordable, everyday GPUs to provide AI services at a much lower cost.
On December 28th, KAIST announced that a research team led by Professor Dongsu Han from the School of Electrical Engineering developed 'SpecEdge,' a new technology that significantly lowers LLM infrastructure costs by utilizing affordable, consumer-grade GPUs widely available outside of data centers.
SpecEdge is a system where data center GPUs and "edge GPUs"—found in personal PCs or small servers—collaborate to form an LLM inference infrastructure. By applying this technology, the team successfully reduced the cost per token (the smallest unit of text generated by AI) by approximately 67.6% compared to methods using only data center GPUs.
To achieve this, the research team utilized a method called 'Speculative Decoding.' In this process, a small language model placed on the edge GPU quickly generates a high-probability token sequence (a series of words or word fragments). Then, the large-scale language model in the data center verifies this sequence in batches. During this process, the edge GPU continues to generate words without waiting for the server's response, simultaneously increasing LLM inference speed and infrastructure efficiency.
< Figure 1. Language data flow diagram of the developed SpecEdge >
< Figure 2. Detailed computation time reduction method of SpecEdge >
< Figure 3. Illustration of efficient batching of verification requests from multiple edge GPUs on the server GPU within SpecEdge >
Compared to performing speculative decoding solely on data center GPUs, SpecEdge improved cost efficiency by 1.91 times and server throughput by 2.22 times. Notably, the technology was confirmed to work seamlessly even under standard internet speeds, meaning it can be immediately applied to real-world services without requiring a specialized network environment.
Furthermore, the server is designed to efficiently process verification requests from multiple edge GPUs, allowing it to handle more simultaneous requests without GPU idle time. This has realized an LLM serving infrastructure structure that utilizes data center resources more effectively.
This research presents a new possibility for distributing LLM computations—which were previously concentrated in data centers—to the edge, thereby reducing infrastructure costs and increasing accessibility. In the future, as this expands to various edge devices such as smartphones, personal computers, and Neural Processing Units (NPUs), high-quality AI services are expected to become available to a broader range of users.
< Figure 4. Conceptual comparison of the developed SpecEdge vs. conventional methods >
Professor Dongsu Han, who led the research, stated, "Our goal is to utilize edge resources around the user, beyond the data center, as part of the LLM infrastructure. Through this, we aim to lower AI service costs and create an environment where anyone can utilize high-quality AI."
Dr. Jinwoo Park and M.S. candidate Seunggeun Cho from KAIST participated in this study. The research results were presented as a 'Spotlight' paper (top 3.2% of submissions, against a 24.52% overall acceptance rate) at NeurIPS (Neural Information Processing Systems), the world's most prestigious academic conference in the field of AI, held in San Diego from December 2nd to 7th.
Paper Title: SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs
Paper Links: NeurIPS Link, arXiv Link
This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the project 'Development of 6G System Technology to Support AI-Native Application Services.'
Vieworks CEO Hu-sik Kim Appointed as 28th KAIST Alumni Association President
< Hu-sik Kim, 28th President of KAIST Alumni Association (CEO of Vieworks) >
KAIST announced on December 23rd that Hu-sik Kim, CEO of Vieworks—a company specializing in medical and industrial imaging solutions—has been appointed as the 28th President of the KAIST Alumni Association.
President-elect Hu-sik Kim, an alumnus with a Master’s degree in Physics (Class of ’95) from KAIST, is a technology-driven leader who has dedicated 26 years to the field of imaging solutions. He is recognized as a "field-oriented innovator" who has pioneered global niche markets with world-first technologies and driven long-term growth by prioritizing people and organizational culture as core competencies.
While working professionally, he enrolled in the KAIST Master’s program to strengthen his theoretical and practical expertise in optics. Later, he played a leading role in co-founding a venture company with fellow alumni, successfully growing Vieworks into a prominent global mid-sized enterprise.
In his inauguration remarks, President Kim stated, “I feel a profound sense of responsibility to give back to the nation and the community for the benefits I have received. I will do my best to ensure that the values of innovation and entrepreneurship are realized through our alumni network, and that the alumni association and our alma mater can prosper together.”
President Kim’s term will span two years starting from January 2026. The inauguration ceremony will be held during the "2026 New Year’s Greeting Ceremony" on January 16, 2026, at the El Tower in Seoul.
KAIST, AI judges manufacturing beyond craftsmanship and language barriers
<(From Left) M.S. candidate Inhyo Lee, Ph.D. candidate Heekyu Kim, Ph.D. candidate Joonyoung Kim, Professor Seunghwa Ryu>
Most of the plastic products we use are made through injection molding, a process in which molten plastic is injected into a mold to mass-produce identical items. However, even slight changes in conditions can lead to defects, so the process has long relied on the intuition of highly skilled workers. Now, KAIST researchers have proposed an AI-based solution that autonomously optimizes processes and transfers manufacturing knowledge, addressing concerns that expertise could be lost due to the retirement of skilled workers and the increase in foreign labor.
KAIST (President Kwang Hyung Lee) announced on the 22nd of December that a research team led by Professor Seunghwa Ryu from the Department of Mechanical Engineering and the InnoCORE PRISM-AI Center has, for the first time in the world, developed generative AI technology that autonomously optimizes injection molding processes, along with an LLM-based knowledge transfer system that makes on-site expertise accessible to anyone. These achievements were published consecutively in an internationally renowned journal.
The first achievement is a generative AI–based process inference technology that automatically infers optimal process conditions based on environmental changes or quality requirements. Previously, whenever temperature, humidity, or desired quality levels changed, skilled workers had to rely on trial and error to readjust conditions.
The research team implemented a diffusion model–based approach that reverse-engineers process conditions satisfying target quality requirements, using environmental data and process parameters collected over several months from an actual injection molding factory.
In addition, the team built a surrogate model that substitutes for actual production, enabling quality prediction without running the real process. As a result, they achieved an error rate of just 1.63%, significantly lower than the 23-44% error rates of representative existing technologies such as GAN* and VAE** models traditionally used for process prediction. Experiments applying the AI-generated conditions to real processes confirmed successful production of acceptable products, demonstrating practical applicability.
*GAN (Generative Adversarial Network): a method in which two AI models compete with each other to generate data
**VAE (Variational Autoencoder): a method that compresses and learns common patterns in data and then reconstructs them
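The role of the surrogate model can be illustrated with a small sketch: a cheap learned predictor stands in for the physical press, so candidate process conditions can be screened exhaustively without producing a single part. The quality function, parameter ranges, and numbers below are invented for illustration; the actual system uses a diffusion model trained on months of factory data rather than a grid search.

```python
# Illustrative sketch of surrogate-assisted condition inference. A cheap
# surrogate stands in for the real injection-molding process, so
# candidate conditions are screened without running the machine.
# The quality function and ranges are made up for illustration.

def surrogate_quality(pressure, melt_temp, humidity):
    """Toy stand-in for a learned quality predictor (lower is better):
    predicted deviation of part quality from spec."""
    return abs(pressure - (80 + 0.5 * humidity)) + 0.1 * abs(melt_temp - 230)

def infer_conditions(humidity, target_error=1.0):
    """Search the surrogate for conditions meeting the target quality,
    replacing trial-and-error on the physical machine."""
    best = None
    for pressure in range(60, 121, 2):        # hypothetical pressure grid
        for melt_temp in range(210, 251, 5):  # hypothetical temperature grid
            err = surrogate_quality(pressure, melt_temp, humidity)
            if best is None or err < best[0]:
                best = (err, pressure, melt_temp)
    return best  # (predicted_error, pressure, melt_temp)

err, p, t = infer_conditions(humidity=43.5)
```

A diffusion model improves on this brute-force picture by learning the distribution of valid conditions and sampling them directly conditioned on the target quality, but the division of labor is the same: predictions come from a model, and only the final candidate is validated on the real process.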
<Figure 1. Generative AI–Based Process Reasoning Technology>
The second achievement is IM-Chat, an LLM-based knowledge transfer system designed to address skilled worker retirement and multilingual work environments. IM-Chat is a multi-agent AI system that combines large language models (LLMs) with retrieval-augmented generation (RAG), serving as an AI assistant for manufacturing sites by providing appropriate solutions to problems encountered by novice or foreign workers.
When a worker asks a question in natural language, the AI understands it and, if necessary, automatically calls the generative process inference AI, simultaneously providing optimal process condition calculations along with relevant standards and background explanations.
For example, when asked, “What is the appropriate injection pressure when the factory humidity is 43.5%?”, the AI calculates the optimal condition and presents the supporting manual references as well. With support for multilingual interfaces, foreign workers can receive the same level of decision-making support.
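The tool-calling pattern behind this interaction can be sketched as follows. In a real agent the LLM itself detects intent, extracts parameters, and decides which tool to invoke; here a keyword check and a toy pressure formula stand in for those steps, and the "bar" unit and manual reference are placeholders, not values from the team's system.

```python
# Minimal sketch of the tool-calling pattern behind an assistant like
# IM-Chat: decide whether a question needs the process-inference tool,
# call it, and wrap the result with supporting context.
# The intent check and tool below are simplified placeholders.

def process_inference_tool(humidity):
    """Placeholder for the generative process-inference model:
    returns a toy injection pressure for the given humidity."""
    return round(80 + 0.5 * humidity, 1)

def answer(question, humidity=None):
    # A real system lets the LLM detect intent and extract parameters;
    # a keyword check stands in for that step here.
    if "pressure" in question.lower() and humidity is not None:
        p = process_inference_tool(humidity)
        return (f"Recommended injection pressure: {p} bar "
                f"(see molding manual, humidity compensation section).")
    return "Please rephrase the question or provide the humidity reading."

reply = answer("What is the appropriate injection pressure?", humidity=43.5)
```

The same dispatch structure extends naturally to multiple tools (retrieval over manuals, the diffusion-based inference model, translation), with the LLM choosing which to call for each query.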
This research is regarded as a core manufacturing AI transformation (AX) technology that can be extended beyond injection molding to molds, presses, extrusion, 3D printing, batteries, bio-manufacturing, and other industries.
In particular, the work is significant in that it presents a paradigm for autonomous manufacturing AI, integrating generative AI and LLM agents through a Tool-Calling approach*, enabling AI to make its own judgments and invoke necessary functions.
*Tool-Calling approach: a method in which AI autonomously calls and uses the functions or programs required for a given situation
<Figure 2. Large Language Model–Based Multilingual Knowledge Transfer Multi-Agent IM-Chat>
<Figure 3. Example of Operation of the Large Language Model (LLM)–Based Multilingual Knowledge Transfer Multi-Agent IM-Chat>
<Figure 4. Illustration of the Application of an LLM-Based Multilingual Knowledge Transfer Multi-Agent IM-Chat (AI-Generated)>
Professor Seunghwa Ryu explained, “This is a case where we addressed fundamental problems in manufacturing in a data-driven way by combining AI that autonomously optimizes processes with LLMs that make on-site knowledge accessible to anyone,” adding, “We will continue expanding this approach to various manufacturing processes to accelerate intelligence and autonomy across the industry.”
This research involved doctoral candidates Junhyeong Lee, Joon-Young Kim, and Heekyu Kim from the Department of Mechanical Engineering as co–first authors, with Professor Seunghwa Ryu as the corresponding author. The results were published consecutively in the April and December issues of Journal of Manufacturing Systems (JCR 1/69, IF 14.2), the world’s top-ranked international journal in engineering and industrial fields.
※ Paper 1: “Development of an Injection Molding Production Condition Inference System Based on Diffusion Model,” DOI: https://doi.org/10.1016/j.jmsy.2025.01.008
※ Paper 2: “IM-Chat: A multi-agent LLM framework integrating tool-calling and diffusion modeling for knowledge transfer in injection molding industry,” DOI: https://doi.org/10.1016/j.jmsy.2025.11.007
This research was supported by the Ministry of Science and ICT, the Ministry of SMEs and Startups, and the Ministry of Trade, Industry and Energy.
KAIST-UEL Team Develops Origami Airless Wheel to Explore Lunar Caves
<(From Upper Left) Ph.D. candidate Seong-Bin Lee, CEO Namsuk Cho, Researcher Geonho Lee, Researcher Seungju Lee, M.S. candidate Junseo Kim,
Principal Researcher Jong Tai Jang, Professor Se Kwon Kim, Professor Taewon Seo, Center Director Chae Kyung Sim, Professor Dae-Young Lee>
<(From Left) Principal Researcher Jong Tai Jang, CEO Namsuk Cho, Ph.D. candidate Seong-Bin Lee, Professor Dae-Young Lee, Center Director Chae Kyung Sim>
New variable-diameter wheel overcomes steep terrain and harsh lunar conditions, paving the way for subsurface lunar exploration.
A joint research team from the Korea Advanced Institute of Science and Technology (KAIST) and the Unmanned Exploration Laboratory (UEL) has developed a transformative wheel capable of navigating the Moon’s most extreme terrains, including steep lunar pits and lava tubes.
The study presents a novel "origami-inspired" deployable airless wheel that can significantly expand its diameter to traverse obstacles that would trap traditional rovers. The research was published in the December issue of Science Robotics.
The Challenge: Small Rovers vs. Big Obstacles
Lunar lava tubes and pits are prime candidates for future human habitats due to their natural shielding from cosmic radiation and extreme temperature fluctuations, but accessing them is perilous. Deploying a swarm of small, independent rovers can be an effective strategy to mitigate the risks associated with a single large rover. This strategy ensures mission continuity through redundancy; even if some units fail, the remaining rovers can complete the exploration.
However, small rovers face an inherent physical limitation: their compact wheel size severely restricts their ability to traverse steep, rugged terrains like lunar pit entrances. While variable-diameter wheels could theoretically solve this by offering high traversability on demand, creating such a system for the Moon has been a formidable challenge. Designing a lightweight transformable wheel that can withstand the harsh lunar environment—specifically the abrasive dust and the vacuum that causes metal parts to fuse ("cold welding")—has remained a significant engineering hurdle.
A Transformable Wheel for Extreme Environments
To conquer these obstacles, a research team, led by Professor Dae-Young Lee from KAIST’s Department of Aerospace Engineering, developed a new type of compliant wheel that eliminates complex mechanical joints. By applying the structural principles of the “Da Vinci bridge” combined with origami design, the team created a wheel that uses the flexibility of its materials to transform.
Capable of expanding from a compact 230 mm to 500 mm in diameter, the wheel allows compact rovers to maintain a low profile during transport, yet scale significant obstacles once deployed. Crucially, by utilizing a specialized elastic metal frame and fabric tensioners instead of traditional hinges, the design ensures reliable operation in the harsh lunar environment, effectively resisting the risks of cold welding and mechanical failure caused by fine dust.
The team rigorously tested the wheel’s capabilities using artificial lunar soil (simulants). The wheel demonstrated superior traction on loose slopes and proved its structural integrity by withstanding a drop impact equivalent to a 100-meter fall in lunar gravity.
< Driving performance field tests conducted in various environments such as artificial lunar soil, extreme temperatures, mud, and rocky terrain >
Scientific and Engineering Significance
The project brought together experts from major Korean space institutes to validate the technology's potential. Prof. Lee highlighted the wheel as a practical and reliable solution for navigating the Moon's most difficult terrains, expressing optimism that this unique technology would position the team as leaders in future lunar missions despite remaining challenges involving communication and power.
From a scientific perspective, Dr. Chae Kyung Sim, Head of the Planetary Science Group at KASI (Korea Astronomy and Space Science Institute), emphasized the value of lunar pits as "natural geological heritages," noting that this research significantly lowers the technical barriers to accessing these sites and brings actual exploration missions closer to reality. Furthermore, Dr. Jongtae Jang, Principal Researcher at KARI (Korea Aerospace Research Institute), underscored the engineering rigor behind the design, explaining that the wheel was meticulously optimized and validated using mathematical thermal models to endure the Moon’s extreme 300-degree temperature fluctuations.
About KAIST
KAIST is the first and top science and technology university in Korea. KAIST has been the gateway to advanced science and technology, innovation, and entrepreneurship, and our graduates have been key ingredients behind Korea’s innovations.
About UEL
Unmanned Exploration Laboratory (UEL), Inc. develops cutting-edge planetary exploration mobility robotics in the Republic of Korea. UEL provides unmanned exploration systems, from designing and manufacturing mobility platforms to performing rover missions on Earth, the Moon, and beyond.
Journal Reference: Science Robotics, DOI: 10.1126/scirobotics.adx2549
AI Gets a Private Tutor, Learning Human Preferences More Accurately
< Professor Junmo Kim and Ph.D. candidate Minchan Kwon, School of Electrical Engineering >
No matter how much data they learn, why do Artificial Intelligence (AI) models often miss the mark on human intent? Conventional "comparison learning," designed to help AI understand human preferences, has frequently led to confusion rather than clarity. A KAIST research team has now presented a new learning solution that allows AI to accurately learn human preferences even with limited data by assigning it a "private tutor."
On December 17th, a research team led by Professor Junmo Kim of KAIST School of Electrical Engineering announced the development of "TVKD" (Teacher Value-based Knowledge Distillation), a reinforcement learning framework that significantly improves data efficiency and learning stability while effectively reflecting human preferences.
Existing AI training methods typically rely on collecting massive amounts of "preference comparison" data—simple structures like "A is better than B." However, this approach requires vast datasets and often causes the AI to become confused in ambiguous situations where the distinction is unclear.
To solve this problem, the research team proposed a method in which a ‘Teacher model’ that has first deeply understood human preferences delivers only the core information to a ‘Student model.’ This can be compared to a private tutor who organizes and teaches complex content, and the research team named this ‘Preference Distillation.’
The key feature of this technology is that, rather than simply imitating 'good or bad' labels, the teacher model learns a 'Value Function' that numerically judges how valuable each situation is, and then delivers this to the student model. This allows the AI to make comprehensive judgments about 'why this choice is better,' rather than relying on fragmentary comparisons, even in ambiguous situations.
< Conceptual diagram of TVKD: After teaching the human preference dataset to the teacher model, learning proceeds by delivering the teacher's information and the dataset to the student model >
The core of this technology is twofold. First, by reflecting value judgments that consider the entire context in the student model, learning that grasps the overall flow rather than fragmentary answers becomes possible. Second, a technique was introduced to adjust learning importance according to the reliability of the preference data: clear data is weighted heavily in training, while the influence of ambiguous or noisy data is reduced, allowing the AI to learn stably even in realistic environments.
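These two ingredients can be sketched in a few lines. The function and variable names below are hypothetical, chosen only to illustrate the idea of confidence-weighted value distillation; they are not the paper's actual loss:

```python
import numpy as np

# Hypothetical sketch: the student is pulled toward the teacher's value
# estimates (ingredient 1), and each example is weighted by a reliability
# score so ambiguous or noisy comparisons contribute less (ingredient 2).
def distillation_loss(student_values, teacher_values, confidence):
    """Confidence-weighted mean squared error between value estimates."""
    per_example = (np.asarray(student_values) - np.asarray(teacher_values)) ** 2
    return float(np.mean(np.asarray(confidence) * per_example))

student = np.array([0.2, 0.9, 0.4])
teacher = np.array([0.5, 1.0, 0.4])
conf = np.array([1.0, 1.0, 0.1])  # third example is ambiguous: down-weighted
print(distillation_loss(student, teacher, conf))
```

Down-weighting unreliable pairs is what lets training remain stable when the data contains the ambiguous comparisons described earlier.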
When the research team applied this technology to various AI models, it showed more accurate and stable performance than the previously best-performing methods. In particular, it stably outperformed existing top technologies on major evaluation benchmarks such as MT-Bench and AlpacaEval.
Professor Junmo Kim said, “In reality, human preference data is not always sufficient or perfect,” and added, “This technology will allow AI to learn consistently even under such constraints, so it will be highly practical in various fields.”
< Performance comparison results for each task of MT-Bench. It can be confirmed that the proposed TVKD framework records generally higher scores than existing methods. >
< Visualization results of the Shaping term. The top tokens (converted into words) judged as important by the teacher model within the response are displayed in red, intuitively showing which tokens have a greater influence during the value-based alignment process. >
Ph.D. candidate Minchan Kwon from our university’s School of Electrical Engineering participated as the first author, and the research results were accepted at ‘NeurIPS 2025’, the most prestigious international conference in the field of artificial intelligence. The research was presented at a poster session on December 3, 2025 (US Pacific Time).
※ Paper Title: Preference Distillation via Value based Reinforcement Learning, DOI: https://doi.org/10.48550/arXiv.2509.16965
Meanwhile, this research was carried out with support from the Information & Communications Technology Planning & Evaluation (IITP) funded by the government (Ministry of Science and ICT) in 2024 (No. RS-2024-00439020, Development of Sustainable Real-time Multimodal Interactive Generative AI, SW Star Lab).
KAIST Welcomes the Class of 2026: “Play Boldly, Learn Deeply” - President Kwang-Hyung Lee
< President Kwang-Hyung Lee pictured with NYU exchange students >
KAIST announced on December 15th that it has delivered a congratulatory message to the successful applicants of the 2026 undergraduate early admissions, sharing the university’s unique educational philosophy of encouraging challenge and failure, as well as its vision for cultivating global talent.
For the 2026 undergraduate admissions, KAIST selected future scientific leaders based on its core values and talent ideals: Creativity, Challenge, and Caring. KAIST plans to strengthen education focused on nurturing convergent talent who can cross disciplinary boundaries. The recent upward trend in applications to KAIST reflects the growing importance of scientific talent who will lead national competitiveness amidst intense global competition in AI, semiconductors, space, and biotechnology.
In his congratulatory message, President Kwang Hyung Lee emphasized, “KAIST is a place where you can play and study to your heart's content with friends, start your own business, and even experience failure. KAIST is a ‘playground for eccentrics’ where you can try anything.”
He specifically introduced a challenge-oriented academic culture, stating, “Do not fear failure. If you organize and share your experiences of failure well, you might even receive a ‘Failure Award.’”
President Lee further stressed, “KAIST is the perfect school for students who want to blaze new trails through creativity and inquiry, and for those who wish to change the world. If your goal is simply to get an ‘A’ in every subject or to secure a stable job, you do not need to come here. However, if you are a student who prefers defining your own problems over doing what others tell you and wants to challenge yourself beyond established frameworks, you must come to KAIST.”
He also highlighted the free, student-led environment by stating, “For a KAISTian, the only limit to challenge is imagination,” adding, “During my tenure as President, I have never once rejected an idea proposed by students.”
Regarding the global educational environment, President Lee explained, “KAIST is no longer just a domestic university; it is a platform where you can study, research, and be active on the world stage. We actively support students’ global experiences through the joint campus operation with New York University (NYU), the establishment of a Silicon Valley campus, and exchange programs with over 100 overseas universities.”
Meanwhile, to lead the AI era, KAIST recently established the nation’s first AI College and is building a full-scale education and research system covering all fields of artificial intelligence. The AI College plans to systematically foster next-generation AI leaders through a curriculum linked from undergraduate to graduate levels.
In addition, KAIST is strengthening education in humanities, culture, and the arts alongside science and technology. The university operates seven humanities and social science minor programs—Digital Humanities & Social Sciences, Economics, Culture Technology, Intellectual Property, Science & Technology Policy, Entrepreneurship, and Future Strategy. It also expands students' imagination and creativity through on-campus art museums, numerous galleries, and regular performances and cultural events.
Furthermore, KAIST encourages challenge and balanced growth through the “Mountaineering Scholarship,” which provides up to 700,000 KRW annually to students who complete designated hiking courses, regardless of grades or income level.
President Lee concluded his message of support by saying, “My heart is already racing at the thought of pioneering the 21st-century future with all of you. I look forward to seeing you grow into ‘stars,’ each with your own unique color, and shine on the global stage.”
< President Kwang Hyung Lee performing with the student lab club 'Gootos' at Innovate Korea 2024 >
AI-Engineered "Nasal Spray Antiviral Platform" Developed to Block Flu and COVID-19
<(From Left) Professor Hyun Jung Chung, Professor Ho Min Kim, Professor Ji Eun Oh>
<(From Left) Dr. Seungju Yang, Dr. Jeongwon Yun, Ph.D candidate Jae Hyuk Kwon>
Respiratory viruses that have diverse strains and mutate rapidly, such as influenza and COVID-19, are difficult to block perfectly with vaccines alone. To solve this problem, KAIST's research team has successfully developed a nasal (intranasal) antiviral platform using AI technology to overcome the existing limitations of interferon-lambda treatments—namely, being "weak against heat and disappearing quickly from the nasal mucosa."
KAIST announced on December 15th that a joint research team consisting of Professor Ho Min Kim and Professor Hyun Jung Chung from the Department of Biological Sciences and Professor Ji Eun Oh from the Graduate School of Medical Science and Engineering used AI to stably redesign the interferon-lambda protein and combined it with a delivery technology that ensures effective diffusion and long-term retention in the nasal mucosa, thereby implementing a universal prevention technology for various respiratory viruses.
Interferon-lambda is an innate immune protein produced by the body to block viral infections, playing a crucial role in stopping respiratory viruses like the common cold, flu, and COVID-19. However, when formulated as a treatment for nasal administration, its actual efficacy was limited by its vulnerability to heat, degrading enzymes, mucus, and ciliary motion.
The research team used AI protein design technology to precisely reinforce the structural weaknesses of interferon-lambda.
First, they significantly increased stability by changing the loose "loop" structures of the protein—which were prone to instability—into rigid "helix" structures that lock in place like a firm spring.
Additionally, to prevent "aggregation" (proteins sticking together to form lumps), they applied "surface engineering" to make the surface more water-compatible. They also introduced "glycoengineering," adding sugar chain (glycan) structures to the protein surface to make it even more robust and stable.
As a result, the newly produced interferon-lambda showed a massive improvement in stability, surviving for two weeks at 50℃, and demonstrated the ability to diffuse rapidly even through thick nasal mucus.
The research team further protected the protein by encapsulating it in microscopic "nanoliposomes" and coated the surface with "low-molecular-weight chitosan." This significantly enhanced "mucoadhesion," allowing the treatment to stick to the nasal lining for an extended period.
When this delivery platform was applied to animal models infected with influenza, a powerful inhibitory effect was confirmed, with the virus level in the nasal cavity decreasing by more than 85%.
This technology is a mucosal immune platform that can block viral infections in their early stages simply by spraying it into the nose. It is expected to be a new therapeutic strategy that can respond quickly not only to seasonal flu but also to unexpected new or mutant viruses.
Professor Ho Min Kim stated, "Through AI-based protein design and mucosal delivery technology, we have simultaneously overcome the stability and retention time limitations of existing interferon-lambda treatments. This platform, which is stable at high temperatures and stays in the mucosa for a long time, is an innovative technology that can be used even in developing countries lacking strict cold-chain infrastructure. It also has great scalability for developing various treatments and vaccines." He added, "This is a meaningful achievement resulting from multidisciplinary convergence research, covering everything from AI protein design to drug delivery optimization and immune evaluation through infection models."
This research involved Dr. Jeongwon Yun from the KAIST InnoCORE AI-CRED Institute (AI-Co-Research & Education for Innovative Drug Institute), Dr. Seungju Yang from the Department of Biological Sciences, and Ph.D. candidate Jae Hyuk Kwon from the Graduate School of Medical Science and Engineering as co-first authors. The results were published consecutively in the renowned international journals Advanced Science (November 20) and Biomaterials Research (November 21).
Paper 1: Computational Design and Glycoengineering of Interferon-Lambda for Nasal Prophylaxis against Respiratory Viruses, Advanced Science, DOI: 10.1002/advs.202506764
Paper 2: Intranasal Nanoliposomes Delivering Interferon Lambda with Enhanced Mucosal Retention as an Antiviral, Biomaterials Research, DOI: 10.34133/bmr.0287
This research was conducted with support from the KAIST InnoCORE Program, the Mid-Career Researcher Support Program and the Bio-Medical Technology Development Program of the National Research Foundation of Korea (NRF), the Healthcare Technology R&D Project of the Korea Health Industry Development Institute (KHIDI), the KAIST Convergence Research Institute Operation Program, and the Institute for Basic Science (IBS).
Uncovering brain’s secret to stable yet flexible learning – paving the way for human-like AI
<(From Left) Professor Sang Wan Lee, Ph.D candidate Yoondo Sung, (Upper Left) Dr. Mattia Rigotti>
Humans possess a remarkable balance between stability and flexibility, enabling them to quickly establish new plans and adjust goals even in the face of sudden changes. However, "Model-Free reinforcement learning," which is widely used in robotics and exemplified by AlphaGo's famous match against Lee Sedol, struggles to achieve these two capabilities simultaneously. KAIST's research team has discovered that the secret lies in the unique information processing method of the prefrontal cortex, a principle that could serve as the foundation for developing "Brain-like AI" that is both flexible and stable.
KAIST announced on December 14th that a research team led by Professor Sang Wan Lee from the Department of Brain and Cognitive Sciences, in collaboration with IBM AI Research, has deciphered how the human brain manages goal changes in uncertain situations, suggesting a new direction for next-generation reinforcement learning.
The research team highlighted a critical limitation of current reinforcement learning models: they lose the balance between flexibility in goal pursuit and stability in uncertain environments. Humans, however, achieve both simultaneously. The team hypothesized that this difference arises from how the prefrontal cortex represents information.
Using functional MRI (fMRI) experiments, reinforcement learning models, and advanced AI analyses, the team revealed that the human prefrontal cortex has a unique embedding structure that represents "goal information" and "uncertainty information" separately to prevent interference. Individuals with more distinct separation between these channels were able to adapt strategies when goals shifted, while maintaining stable judgment despite environmental uncertainty. The team likened this mechanism to "multiplexing" in communication technology, where multiple signals are transmitted simultaneously without interference.
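The "multiplexing" idea can be illustrated with a toy population code. This is only a conceptual sketch (the axes and readout below are hypothetical, not the actual fMRI analysis): when goal and uncertainty occupy orthogonal axes, a readout of one variable is untouched by changes in the other.

```python
import numpy as np

# Toy sketch of a factorized ("multiplexed") code: goal information and
# uncertainty information live on orthogonal axes of the population activity.
goal_axis = np.array([1.0, 0.0])         # channel 1: goal
uncertainty_axis = np.array([0.0, 1.0])  # channel 2: uncertainty

def encode(goal: float, uncertainty: float) -> np.ndarray:
    """Population activity as the sum of the two channel components."""
    return goal * goal_axis + uncertainty * uncertainty_axis

def read_goal(activity: np.ndarray) -> float:
    """Linear readout of the goal channel (projection onto its axis)."""
    return float(activity @ goal_axis)

# The decoded goal is identical no matter how uncertainty varies:
readouts = [read_goal(encode(goal=3.0, uncertainty=u)) for u in (0.0, 0.5, 2.0)]
print(readouts)  # every readout equals 3.0
```

If the two axes were not orthogonal, uncertainty would leak into the goal readout; that leakage is exactly the interference the observed separation prevents.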
In this way, the human prefrontal cortex operates through two "channels": one that sensitively tracks goal changes to ensure flexibility in decision-making, and another that isolates environmental uncertainty to maintain stable judgment.
An interesting point is that the prefrontal cortex goes beyond simply executing control guided by the first channel; it uses the second channel to actually choose which learning strategy to use depending on the situation.
This demonstrates the brain’s "meta-learning capabilities," meaning it learns not only what to learn but also how to learn – by choosing which learning strategy to use. This is why humans remain resilient in constantly changing situations.
The implications of this research extend across various fields, including the analysis of individual reinforcement learning and meta-learning abilities, personalized education design, cognitive diagnosis, and human-computer interaction (HCI). Moreover, embedding brain-inspired representation structures into AI could lead to "brain-like thinking AI" that better understands human intentions and values, reduces dangerous judgments, and cooperates more safely with humans.
<Figure 1. Balance between Flexibility and Stability in Humans and AI>
<Figure 2. Topological Structure of Goal Representation in the Prefrontal Cortex and Environmental Uncertainty Information>
Lead researcher Professor Sang Wan Lee emphasized the significance of the findings: "This study clarifies the brain's fundamental operating principles—from flexibly following changing goals to stably establishing plans—from an AI perspective. These principles will serve as a core foundation for next-generation AI, allowing it to adapt like a human and learn more safely and intelligently."
This study featured PhD candidate Yoondo Sung as the first author and Dr. Mattia Rigotti of IBM AI Research as the second author, with Professor Sang Wan Lee serving as the corresponding author. The research results were published on November 26 in the international academic journal Nature Communications.
(Paper Title: Factorized embedding of goal and uncertainty in the lateral prefrontal cortex guides stably flexible learning / DOI: 10.1038/s41467-025-66677-w)
Notably, this research was conducted with support from the "Frontier R&D Project" of the Ministry of Science and ICT.
KAIST-KakaoBank Speeds Up 'Explainable AI' by 11 Times: "Boosts Financial AI Reliability"
< (From left) Professor Jaesik Choi of the Kim Jaechul Graduate School of AI, Ph.D candidate Chanwoo Lee, Ph.D candidate Youngjin Park >
The research team led by Professor Jaesik Choi of KAIST's Kim Jaechul Graduate School of AI, in collaboration with KakaoBank Corp., announced that it has developed an accelerated explanation technology that can explain the basis of an Artificial Intelligence (AI) model's judgment in real time. By achieving an average processing speed 8.5 times faster, and up to 11 times faster, than existing explanation algorithms for AI model predictions, this achievement significantly increases the practical applicability of Explainable Artificial Intelligence (hereinafter XAI) in fields requiring real-time decision-making, such as financial services.
In the financial sector, a clear explanation for decisions made by AI systems is essential. Especially in services directly related to customer rights, such as loan screening and anomaly detection, regulatory demands to transparently present the basis for the AI model's judgment are increasingly stringent. However, conventional Explainable Artificial Intelligence (XAI) technologies required the repeated calculation of hundreds to thousands of baselines to generate accurate explanations, resulting in massive computational costs. This was a major factor limiting the application of XAI technology in real-time service environments.
To address this issue, Professor Choi's research team developed 'ABSQR (Amortized Baseline Selection via Rank-Revealing QR)', a framework for accelerating explanation algorithms. ABSQR exploits the observation that the value function matrix generated during the AI model explanation process has a low-rank structure, and introduces a method to select only a critical few baselines from the hundreds available. This reduces the computational complexity, previously proportional to the total number of baselines, to be proportional only to the number of selected critical baselines, maximizing computational efficiency while maintaining explanatory accuracy.
Specifically, ABSQR operates in two stages. In the first stage, important baselines are selected systematically using Singular Value Decomposition (SVD) and rank-revealing QR decomposition. Unlike existing random sampling methods, this deterministic selection is designed to preserve the information content of the full baseline set, guaranteeing explanation accuracy while significantly reducing computation. The second stage introduces an amortized inference mechanism that reuses pre-calculated baseline weights through a cluster-based search, allowing the system to explain a model's prediction in real-time service environments without repeatedly evaluating the model.

The research team verified the superiority of ABSQR through experiments on various real-world datasets. Tests on standard datasets across five sectors, including finance, marketing, and demographics, showed that ABSQR achieved an average processing speed 8.5 times faster than existing explanation algorithms that use all baselines, with a maximum speedup of more than 11 times. Moreover, the loss of explanatory accuracy caused by this acceleration was minimal: ABSQR maintained up to 93.5% of the accuracy of the baseline algorithm, a level sufficient to meet the explanation quality required in real-world applications.
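The first-stage selection can be sketched with standard linear algebra routines. The matrix shapes and names below are assumptions for illustration; the actual ABSQR pipeline is described in the paper:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n_features, n_baselines, rank = 50, 200, 5

# Synthetic stand-in for the value-function matrix (features x baselines):
# approximately low-rank, as the ABSQR analysis observes.
V = rng.standard_normal((n_features, rank)) @ rng.standard_normal((rank, n_baselines))
V += 1e-6 * rng.standard_normal(V.shape)

# Column-pivoted (rank-revealing) QR: the first pivots index the most
# informative baseline columns, a deterministic alternative to sampling.
_, _, pivots = qr(V, mode="economic", pivoting=True)
selected = pivots[:rank]

# The few selected baselines reconstruct the full matrix almost exactly.
coeffs, *_ = np.linalg.lstsq(V[:, selected], V, rcond=None)
rel_err = np.linalg.norm(V - V[:, selected] @ coeffs) / np.linalg.norm(V)
print(f"{len(selected)} of {n_baselines} baselines, relative error {rel_err:.2e}")
```

Because only the selected columns are ever evaluated afterwards, the per-explanation cost scales with the small selected set rather than with all baselines, which is the source of the reported speedup.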
< ABSQR Framework Overview. (1) The baseline selection stage utilizes the low-rank structure of the value function matrix to select only a small number of key baselines, and (2) the accelerated search stage reuses the pre-calculated baseline weight coefficients based on clusters. This dramatically reduces the computation complexity, which was proportional to the number of baselines, to be proportional only to the number of selected key baselines. >
A KakaoBank official stated, "We will continue relentless research and development to enhance the reliability and convenience of financial services and introduce innovative financial technologies that customers can experience."

Chanwoo Lee and Youngjin Park, co-first authors from KAIST, explained the significance of the research: "This methodology solves the crucial acceleration problem for real-time application in the financial sector, proving that it is possible to provide users with the reasons behind a learning model's decision in real time." They added, "This research provides new insights into what constitutes unnecessary computation and how to select important baselines in explanation algorithms, making a practical contribution to the efficiency of explanation technology."

The research, co-authored by Ph.D. candidates Chanwoo Lee and Youngjin Park of the KAIST Kim Jaechul Graduate School of AI and researchers Hyeongeun Lee and Yeeun Yoo of the KakaoBank Financial Technology Research Institute, was presented on November 12 at 'CIKM 2025 (ACM International Conference on Information and Knowledge Management)', one of the most prestigious international conferences in the field of information and knowledge management.

※ Paper Title: Amortized Baseline Selection via Rank-Revealing QR for Efficient Model Explanation
※ DOI: https://doi.org/10.1145/3746252.3761036
※ Author Information:
Co-First Authors: Chanwoo Lee (KAIST Kim Jaechul Graduate School of AI), Youngjin Park (KAIST Kim Jaechul Graduate School of AI), Hyeongeun Lee (KakaoBank), Yeeun Yoo (KakaoBank)
Co-Authors: Daehee Han (KakaoBank), Junho Choi (KAIST Kim Jaechul Graduate School of AI), Kunhyung Kim (KAIST Kim Jaechul Graduate School of AI)
Corresponding Authors: Nari Kim (KAIST Kim Jaechul Graduate School of AI), Jaesik Choi (KAIST Kim Jaechul Graduate School of AI)
Meanwhile, this research achievement was conducted through KakaoBank's industry-academia research project 'Advanced Research on Explainable Artificial Intelligence Algorithms in the Financial Sector' and the Ministry of Science and ICT/Institute for Information & Communications Technology Planning and Evaluation (IITP) supported project 'Development of Explainable Artificial Intelligence Technology Providing Explainability in a Plug-and-Play Manner and Verification of Explanation Provision for AI Systems.'