AI Developed to Locate Slums Worldwide... Wins Best Paper Award at AAAI 2026
<(From Left) Sumin Lee, Sungwon Park, Prof. Jihee Kim, Prof. Meeyoung Cha, Prof. Jeasurk Yang>
"Cities don't even know where their slums (impoverished areas) are located."
In many developing nations, the most vulnerable citizens are invisible to the state simply because their homes don't appear on any official map. Today, a breakthrough using Artificial Intelligence (AI) is changing that.
A joint research team from KAIST and Chonnam National University in South Korea and MPI-SP in Germany has developed an AI technology that autonomously identifies slum areas using nothing but satellite imagery. Expected to fundamentally transform urban policy-making and public resource allocation in developing countries where data is scarce, the work won the Best Paper Award in the 'AI for Social Impact' category at AAAI 2026 (the conference of the Association for the Advancement of Artificial Intelligence), one of the world's most prestigious AI conferences.
Why it Matters
While previous studies struggled to recognize slums across countries because architectural styles vary, the team introduced a "Mixture-of-Experts (MoE)" structure: multiple AI models each learn the characteristics of a different region, and when imagery of a new city is input, the system automatically selects the most appropriate model.
<Figure 1. Overview of the Mixture-of-Experts (MoE) structure to identify slum areas>
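As a rough illustration of the routing idea only (not the team's actual model: the expert functions, feature names, and gating rule below are invented), a gate can score how well a new image tile matches each regional expert's signature features and dispatch it to the best match:

```python
# Toy Mixture-of-Experts routing: each "expert" is a model trained on one
# region's building styles; a gate picks the expert whose signature features
# are strongest in the new tile. All names and thresholds are illustrative.

def expert_east_africa(tile):   # hypothetical regional expert
    return {"slum_score": 0.9 if tile.get("roof_density", 0) > 0.7 else 0.2}

def expert_south_asia(tile):    # hypothetical regional expert
    return {"slum_score": 0.8 if tile.get("road_irregularity", 0) > 0.6 else 0.1}

EXPERTS = {
    "east_africa": (expert_east_africa, {"roof_density": 0.8}),
    "south_asia": (expert_south_asia, {"road_irregularity": 0.7}),
}

def gate(tile):
    """Select the expert whose feature signature best matches the tile."""
    def affinity(signature):
        return sum(tile.get(k, 0.0) * w for k, w in signature.items())
    name = max(EXPERTS, key=lambda n: affinity(EXPERTS[n][1]))
    return name, EXPERTS[name][0]

tile = {"roof_density": 0.85, "road_irregularity": 0.3}
name, expert = gate(tile)
print(name, expert(tile)["slum_score"])
```

In the real system the gate and experts are neural networks trained end to end; this sketch only shows the dispatch pattern.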
The core of this research is "Test-Time Adaptation (TTA)" technology. Even if humans do not pre-mark slum locations in a new city, the AI reduces its own errors by comparing and verifying the prediction results of multiple models, trusting only the areas where they commonly agree. This ensures stable performance even in regions with insufficient data.
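The agreement mechanism can be sketched as follows; the probability maps, threshold, and vote count below are hypothetical, and the real system operates on full segmentation networks rather than tiny grids:

```python
# Toy test-time adaptation by consensus: keep only pixels where a majority
# of models agree, and ignore pixels where they disagree. Confident pixels
# could then serve as pseudo-labels for self-training on the new city.

def consensus_mask(predictions, threshold=0.5, agreement=2):
    """predictions: list of per-model slum-probability maps (2D lists)."""
    h, w = len(predictions[0]), len(predictions[0][0])
    mask = [[None] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            votes = sum(p[i][j] > threshold for p in predictions)
            if votes >= agreement:
                mask[i][j] = 1          # models agree: slum
            elif votes == 0:
                mask[i][j] = 0          # models agree: non-slum
            # otherwise left as None: models disagree, pixel is ignored
    return mask

model_a = [[0.9, 0.2], [0.8, 0.4]]
model_b = [[0.8, 0.1], [0.3, 0.6]]
model_c = [[0.7, 0.3], [0.9, 0.2]]
print(consensus_mask([model_a, model_b, model_c]))
```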
The research team applied this technology to major cities such as Kampala (Uganda) and Maputo (Mozambique) and confirmed that it distinguishes slum areas more precisely than existing state-of-the-art technologies.
This technology is expected to be utilized in various policy fields, including:
Establishing urban infrastructure expansion plans for developing countries.
Identifying areas vulnerable to disasters and infectious diseases in advance.
Selecting targets for housing environment improvement projects.
Monitoring the implementation of UN Sustainable Development Goals (SDGs).
<Figure2. Slum segmentation results in Kampala in 2015 (yellow) and 2023 (red). Over the eight-year period, the slum ratio in the city increased from 8.4% to 8.6%>
Meeyoung Cha, an AI researcher and author, stated, "This research proves that AI is no longer just a tool for analysis. It is a tool for action. Our technology can bridge the data gap to solve the world’s most pressing social challenges." Jihee Kim, an economist and author, added, "It will complement costly field surveys and help effectively allocate limited resources to the areas that need them most."
The research results were presented at AAAI 2026 in Singapore on January 25th.
Paper Title: Generalizable Slum Detection from Satellite Imagery with Mixture-of-Experts
Paper Link: https://aaai.org/about-aaai/aaai-awards/aaai-conference-paper-awards-and-recognition/
This research was supported by the National Research Foundation of Korea (NRF) through the Mid-career Researcher Support Program and the Data Science Convergence Human Resources Training Program.
Professor Youngjin Kwon's Team Wins Google Award for Technology That 'Catches Bugs Without a Real CPU'
< Professor Youngjin Kwon >
Modern CPUs have complex structures, and while handling multiple tasks simultaneously they can suffer an order-scrambling error known as a 'concurrency bug.' Although such bugs can lead to security issues, they have been extremely difficult to detect with conventional methods. Our university's research team has developed a world-first technology that automatically detects these bugs by precisely reproducing the internal operation of the CPU in a virtual environment, without needing a physical chip. Using it, they found and fixed 11 new bugs in the latest Linux kernel.
Our university announced on the 21st that the research team led by Professor Youngjin Kwon of the School of Computing has won the 'Research Scholar Award' (Systems category) presented by Google.
The Google Research Scholar Award is a global research support program, run since 2020, that supports early-career professors conducting innovative research in fields such as AI, systems, security, and data management.
It is known as a highly competitive program, with the selection process conducted directly by Google Research scientists, and only a tiny fraction of the hundreds of applicants worldwide are chosen. In particular, this award is recognized as one of the most prestigious industry research support programs globally in the field of AI and Computer Systems, and domestic recipients are rare.
■ Technology Developed to Detect Concurrency Bugs in the Latest Apple M3 and ARM Servers
Professor Kwon's team developed a technology that automatically detects concurrency bugs in the latest ARM (a CPU design method that uses less power and is highly efficient) based servers, such as the Apple M3 (Apple's latest-generation computer processor chip).
A concurrency bug is an error that occurs when the order of operations gets mixed up while the CPU handles multiple tasks simultaneously. This is a severe security vulnerability that can cause the computer to suddenly freeze or become a pathway for hackers to attack the system. However, these errors were extremely difficult to find with existing testing methods alone.
■ Automatically Detects Bugs by Reproducing CPU Internal Operations Without a Real CPU
The core achievement of Professor Kwon's team is the 'technology to reproduce the internal operation of the CPU exactly in a virtual environment without a physical chip.' Using this technology, it is possible to precisely analyze the order in which instructions are executed and where problems occur using only software, without having to disassemble the CPU or use the actual chip.
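A toy version of this idea, exhaustively exploring every interleaving of two threads in pure software to catch a lost-update race, might look like the sketch below (the real system models actual ARM memory-ordering behavior, which this sketch does not attempt):

```python
# Toy "virtual CPU" race hunt: enumerate all interleavings of two threads'
# load/store steps on a shared counter and flag any schedule whose final
# value differs from the sequential result (a lost update).
from itertools import permutations

STEPS = ["load", "store"]  # each thread: read counter, then write counter+1

def run(schedule):
    counter = 0
    regs = {0: None, 1: None}   # each thread's private register
    pc = {0: 0, 1: 0}           # each thread's program counter
    for t in schedule:
        step = STEPS[pc[t]]
        if step == "load":
            regs[t] = counter
        else:
            counter = regs[t] + 1
        pc[t] += 1
    return counter

# All distinct interleavings of thread 0 and thread 1, each running [load, store]
schedules = set(permutations([0, 0, 1, 1]))
buggy = [s for s in schedules if run(s) != 2]
print(len(schedules), len(buggy))
```

Even this two-instruction example has interleavings where both threads read the counter before either writes it, so one increment is lost; the team's system hunts for the same class of ordering error at the scale of a real kernel.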
By running the Linux operating system based on this system to automatically detect bugs, the research team discovered 11 new bugs in the latest Linux kernel* and reported them to the developer community, where they were all fixed.
*Linux kernel: The core operating system engine that forms the basis of servers, supercomputers, and smartphones (Android) worldwide. It acts as the 'heart' of the system, managing the CPU, memory, and storage devices.
Google recognized this technology as 'very important for its own infrastructure' and conferred the Award.
< Google Scholar Award Recipient Page >
This technology is evaluated to have general applicability, not only to Linux but also to various operating systems such as Android and Windows. The research team has released the software as open-source (GitHub) so that anyone in academia or industry can utilize it.
Professor Youngjin Kwon stated, "This award validates the international competitiveness of KAIST's systems research," and "We will continue our research to establish a safe and highly reliable computing environment."
※ Google Scholar Award Recipient Page: https://research.google/programs-and-events/research-scholar-program/recipients/
※ GitHub (Open-Source Release): https://github.com/casys-kaist/ozz
Automatic C to Rust Translation Technology Gains Global Attention for Accuracy Beyond AI
<(From Left) Professor Sukyoung Ryu, Researcher Jaemin Hong>
As C, the language underlying critical global software such as operating systems, runs up against its security limitations, a KAIST research team is pioneering core original technology for accurately and automatically converting C code to Rust. By mathematically proving the correctness of the conversion, something existing Artificial Intelligence (LLM) approaches cannot do, and resolving C's security issues through automatic conversion to Rust, the team presented a new direction and vision for future software security research. The work was selected as the cover story of CACM, the flagship journal of computing, demonstrating KAIST's global research leadership in computer science.
KAIST announced on the 9th of November that the paper by Professor Sukyoung Ryu's research team (Programming Language Research Group) from the School of Computing was selected as the cover story for the November issue of CACM (Communications of the ACM), the highest authority academic journal published by ACM (Association for Computing Machinery), the world's largest computer society.
<Photo of the Paper Selected for the Cover of Communications of the ACM>
This paper comprehensively addresses the technology developed by Professor Sukyoung Ryu's research team for the automatic conversion of C language to Rust, and it received high acclaim from the international research community for presenting the technical vision and academic direction this research should pursue in the future.
The C language has been widely used in the industry since the 1970s, but its structural limitations have continuously caused severe bugs and security vulnerabilities. Rust, on the other hand, is a secure programming language developed since 2015, used in the development of operating systems and web browsers, and has the characteristic of being able to detect and prevent bugs before program execution.
The US White House recommended moving away from the C language in a technology report released in February 2024, and the Defense Advanced Research Projects Agency (DARPA) has explicitly positioned Rust as the core alternative for resolving C's security issues by funding a project to develop automatic C-to-Rust conversion technology.
Professor Sukyoung Ryu's research team proactively raised the issues of C language safety and the importance of automatic conversion even before these movements began in earnest, and they have continuously developed core related technologies.
In May 2023, the research team presented Mutex conversion technology (needed for program synchronization) at ICSE (the International Conference on Software Engineering), the top conference in software engineering. In June 2024, they presented Output Parameter conversion technology (used for returning results) at PLDI (Programming Language Design and Implementation), the top conference in programming languages, and in October of the same year they presented Union conversion technology (for storing diverse data in one place) at ASE (Automated Software Engineering), the representative conference in software automation.
These three studies are all "world-first" achievements presented at top-tier international academic conferences, successfully implementing automatic conversion technology for each feature with high completeness.
Since 2023, the research team has consistently published papers in CACM every year, establishing themselves as global leading researchers who consistently solve important and challenging problems worldwide.
This paper was published in CACM (Communications of the ACM) on October 24, with Dr. Jaemin Hong (Postdoctoral Research Fellow at KAIST Information and Electronics Research Institute) as the first author. ※Paper Title: Automatically Translating C to Rust, DOI: https://doi.org/10.1145/3737696
Dr. Jaemin Hong stated, "The conversion technology we developed is an original technology based on programming language theory, and its biggest strength is that we can logically prove the 'correctness' of the conversion." He added, "While most research relies on Large Language Models (LLMs), our technology can mathematically guarantee the correctness of the conversion."
Dr. Hong is scheduled to be appointed as an Assistant Professor in the Computer Science Department at UNIST starting in March 2025.
Furthermore, Professor Ryu's research team has four papers accepted for presentation at ASE 2025, a top conference in automated software engineering, including work on C-to-Rust conversion technology.
Beyond automatic conversion, these papers span several cutting-edge areas of software engineering and are receiving high international acclaim: technology to verify that quantum computer programs operate correctly; 'WEST,' which automatically checks the correctness of WebAssembly programs (a technology for fast, efficient program execution on the web) and generates tests for them; and technology that automatically simplifies complex WebAssembly code to find errors quickly. Among these, the WEST paper received a Distinguished Paper Award.
This research was supported by the Leading Research Center/Mid-career Researcher Support Program of the National Research Foundation of Korea, the Institute of Information & Communications Technology Planning & Evaluation (IITP), and Samsung Electronics.
3D Worlds from Just a Few Phone Photos
<(From Left) Ph.D candidate Jumin Lee, Ph.D candidate Woo Jae Kim, Ph.D candidate Youngju Na, Ph.D candidate Kyu Beom Han, Professor Sung-eui Yoon>
Existing 3D scene reconstruction requires a cumbersome process: precisely measuring physical spaces with LiDAR or 3D scanners, or calibrating thousands of photos together with camera pose information. The KAIST research team has overcome these limitations, introducing a technology that reconstructs 3D scenes, from tabletop objects to outdoor environments, from just two to three ordinary photographs. The breakthrough suggests a new paradigm in which spaces captured by camera can be immediately transformed into virtual environments.
KAIST announced on November 6 that the research team led by Professor Sung-Eui Yoon from the School of Computing has developed a new technology called SHARE (Shape-Ray Estimation), which can reconstruct high-quality 3D scenes using only ordinary images, without precise camera pose information.
Existing 3D reconstruction technology has been limited by the requirement of precise camera position and orientation information at the time of shooting to reproduce 3D scenes from a small number of images. This has necessitated specialized equipment or complex calibration processes, making real-world applications difficult and slowing widespread adoption.
To solve these problems, the research team developed a technology that constructs accurate 3D models by simultaneously estimating the 3D scene and the camera orientation using just two to three standard photographs. The technology has been recognized for its high efficiency and versatility, enabling rapid and precise reconstruction in real-world environments without additional training or complex calibration processes.
While existing methods calculate 3D structures from known camera poses, SHARE autonomously extracts spatial information from images themselves and infers both camera pose and scene structure. This enables stable 3D reconstruction without shape distortion by aligning multiple images taken from different positions into a single unified space.
<Representative Image of SHARE Technology>
"The SHARE technology is a breakthrough that dramatically lowers the barrier to entry for 3D reconstruction,” said Professor Sung-Eui Yoon. “It will enable the creation of high-quality content in various industries such as construction, media, and gaming using only a smartphone camera. It also has diverse application possibilities, such as building low-cost simulation environments in the fields of robotics and autonomous driving."
<SHARE Technology, Precise Camera Information and 3D Scene Prediction Technology>
Ph.D. candidate Youngju Na and M.S. candidate Taeyeon Kim participated as co-first authors of the research. The results were presented on September 17th at the IEEE International Conference on Image Processing (ICIP 2025), where the paper received the Best Student Paper Award.
The award, given to only one paper among 643 accepted papers this year—a selection rate of 0.16 percent—once again underscores the excellent research capabilities of the KAIST research team.
Paper Title: Pose-free 3D Gaussian Splatting via Shape-Ray Estimation, Link: https://arxiv.org/abs/2505.22978
Award Information: https://www.linkedin.com/posts/ieeeicip_congratulations-to-the-icip-2025-best-activity-7374146976449335297-6hXz
This achievement was carried out with support from the Ministry of Science and ICT's SW Star Lab Project under the task 'Development of Perception, Action, and Interaction Algorithms for Unspecified Environments for Open World Robot Services.'
Refrigerator Use Increases with Stress, IoT Sensors Read Mental Health
<(From Left) Ph.D candidate Chanhee Lee, Professor Uichin Lee, Professor Hyunsoo Lee, Ph.D candidate Youngji Koh from School of Computing>
The number of single-person households in South Korea has exceeded 8 million, accounting for 36% of the total, marking an all-time high. A Seoul Metropolitan Government survey found that 62% of single-person households experience 'loneliness', deepening feelings of isolation and mental health issues. KAIST researchers have gone beyond the limitations of smartphones and wearables, utilizing in-home IoT data to reveal that a disruption in daily rhythm is a key indicator of worsening mental health. This research is expected to lay the foundation for developing personalized mental healthcare management systems.
KAIST (President Kwang Hyung Lee) announced on the 21st of October that a research team led by Professor Uichin Lee from the School of Computing has demonstrated the possibility of accurately tracking an individual's mental health status using in-home Internet of Things (IoT) sensor data.
Consistent self-monitoring is important for mental health management, but existing smartphone- or wearable-based tracking methods have the limitation of data loss when the user is not wearing or carrying the device inside the home.
The research team therefore focused on in-home environmental data. A 4-week pilot study was conducted on 20 young single-person households, installing appliances, sleep mats, motion sensors, and other devices to collect IoT data, which was then analyzed along with smartphone and wearable data.
The results confirmed that utilizing IoT data alongside existing methods allows for a significantly more accurate capture of changes in mental health. For instance, reduced sleep time was closely linked to increased levels of depression, anxiety, and stress, and increased indoor temperature also showed a correlation with anxiety and depression.
<Picture1. Heatmap of the Correlation Between Each User’s Mental Health Status and Sensor Data>
Participants' behavioral patterns varied, including a 'binge-eating type' with increased refrigerator use during stress and a 'lethargic type' with a sharp decrease in activity. However, a common trend clearly emerged: mental health deteriorated as daily routines became more irregular.
Variability in daily patterns was confirmed to be a more important factor than the frequency of specific behaviors, suggesting that a regular routine is essential for maintaining mental health.
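A minimal sketch of such a variability measure follows; the timing data below is invented for illustration, and the study's actual indicators came from richer IoT sensor streams:

```python
# Toy routine-irregularity score: the spread of a daily event's timing
# across a week. The finding in the text is that higher day-to-day
# variability, not the behavior's frequency, tracked worse mental health.
from statistics import pstdev

# Hour of first refrigerator use each day for two illustrative participants
regular   = [7.0, 7.2, 6.9, 7.1, 7.0, 7.3, 6.8]
irregular = [6.0, 11.5, 7.0, 14.0, 5.5, 12.0, 9.0]

def irregularity(hours):
    """Higher value = less regular daily routine."""
    return pstdev(hours)

print(irregularity(regular) < irregularity(irregular))
```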
When research participants viewed their life data through visualization software, they generally perceived the data as being genuinely helpful in understanding their mental health, rather than expressing concern about privacy invasion. This significantly enhanced the research acceptance and satisfaction with participation.
<Figure 2. Comparison of Average Mental Health Status Between the High Irregularity Group (Red) and the Low Irregularity Group (Blue)>
Professor Uichin Lee stated, "This research demonstrates that in-home IoT data can serve as an important clue for understanding mental health within the context of an individual's daily life," and added, "We plan to further develop this into a remote healthcare system that can predict individual lifestyle patterns and provide personalized coaching using AI."
Youngji Koh, a Ph.D candidate, participated as the first author in this research. The findings were published in the September issue of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, a prominent international journal in the field of human-computer interaction (HCI). ※ Harnessing Home IoT for Self-tracking Emotional Wellbeing: Behavioral Patterns, Self-Reflection, and Privacy Concerns DOI: https://dl.acm.org/doi/10.1145/3749485 ※ Youngji Koh (KAIST, 1st author), Chanhee Lee (KAIST, 2nd author), Eunki Joung (KAIST, 3rd author), Hyunsoo Lee (KAIST, corresponding author), Uichin Lee (KAIST, corresponding author)
This research was conducted with support from the LG Electronics-KAIST Digital Healthcare Research Center and the National Research Foundation of Korea, funded by the government (Ministry of Science and ICT).
KAIST Develops an AI Semiconductor Brain Combining Transformer's Intelligence and Mamba's Efficiency
<(From Left) Ph.D candidate Seongryong Oh, Ph.D candidate Yoonsung Kim, Ph.D candidate Wonung Kim, Ph.D candidate Yubin Lee, M.S candidate Jiyong Jung, Professor Jongse Park, Professor Divya Mahajan, Professor Chang Hyun Park>
As recent Artificial Intelligence (AI) models grow in their capacity to understand and process long, complex sentences, the need for new semiconductor technologies that simultaneously boost computation speed and memory efficiency is increasing. Against this backdrop, a joint team of KAIST researchers and international collaborators has developed a core AI semiconductor 'brain' technology based on a hybrid Transformer and Mamba structure. Implemented for the first time in the world in a form capable of computing directly inside memory, it increases the inference speed of Large Language Models (LLMs) up to four-fold while cutting power consumption by 2.2-fold.
KAIST (President Kwang Hyung Lee) announced on the 17th of October that the research team led by Professor Jongse Park from KAIST School of Computing, in collaboration with Georgia Institute of Technology in the United States and Uppsala University in Sweden, developed 'PIMBA,' a core technology based on 'AI Memory Semiconductor (PIM, Processing-in-Memory),' which acts as the brain for next-generation AI models.
Currently, LLMs such as ChatGPT, GPT-4, Claude, Gemini, and Llama operate on the 'Transformer' brain structure, which attends to all words in a sentence simultaneously. Consequently, as models grow and input sentences become longer, computational load and memory requirements surge, making slowdowns and high energy consumption major issues.
To overcome these problems with Transformer, the recently proposed sequential memory-based 'Mamba' structure introduced a method for processing information over time, increasing efficiency. However, memory bottlenecks and power consumption limits still remained.
Professor Jongse Park's research team designed 'PIMBA,' a new semiconductor structure that performs computations directly inside the memory, in order to maximize the performance of the 'Transformer–Mamba Hybrid Model,' which combines the advantages of both architectures.
While existing GPU-based systems move data out of the memory to perform computations, PIMBA performs calculations directly within the storage device without moving the data. This minimizes data movement time and significantly reduces power consumption.
<Analysis of Post-Transformer Models and Proposal of a Problem-Solving Acceleration System>
As a result, PIMBA showed up to a 4.1-fold improvement in processing performance and an average 2.2-fold decrease in energy consumption compared to existing GPU systems.
The research outcome is scheduled to be presented on October 20th at the '58th International Symposium on Microarchitecture (MICRO 2025),' a globally renowned computer architecture conference that will be held in Seoul. It was previously recognized for its excellence by winning the Gold Prize at the '31st Samsung Humantech Paper Award.' ※Paper Title: Pimba: A Processing-in-Memory Acceleration for Post-Transformer Large Language Model Serving, DOI: 10.1145/3725843.3756121
This research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP), the AI Semiconductor Graduate School Support Project, and the ICT R&D Program of the Ministry of Science and ICT and the IITP, with assistance from the Electronics and Telecommunications Research Institute (ETRI). The EDA tools were supported by IDEC (the IC Design Education Center).
KAIST Develops AI Crowd Prediction Technology to Prevent Disasters like the Itaewon Tragedy
<(From Left) Ph.D candidate Youngeun Nam from KAIST, Professor Jae-Gil Lee from KAIST, Ji-Hye Na from KAIST, (Top right, from left) Professor Soo-Sik Yoon from Korea University, Professor HwanJun Song from KAIST>
To prevent crowd crush incidents like the Itaewon tragedy, it is crucial to go beyond simply counting people and instead deploy technology that can detect the real-time inflow and movement patterns of crowds. A KAIST research team has successfully developed a new AI crowd prediction technology that can be used not only for managing large-scale events and mitigating urban traffic congestion but also for responding to infectious disease outbreaks.
On the 17th, KAIST (President Kwang Hyung Lee) announced that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new AI technology that can more accurately predict crowd density.
The dynamics of crowd gathering cannot be explained by a simple increase or decrease in the number of people. Even with the same number of people, the level of risk changes depending on where they are coming from and which direction they are heading.
Professor Lee's team expressed this movement using the concept of a "time-varying graph." This means that accurate prediction is only possible by simultaneously analyzing two types of information: "node information" (how many people are in a specific area) and "edge information" (the flow of people between areas).
In contrast, most previous studies focused on only one of these factors, either concentrating on "how many people are gathered right now" or "which paths are people moving along." However, the research team emphasized that combining both is necessary to truly capture a dangerous situation.
For example, a sudden increase in density in a specific alleyway, such as Alley A, is difficult to predict with just "current population" data. But by also considering the flow of people continuously moving from a nearby area, Area B, towards Area A (edge information), it's possible to pre-emptively identify the signal that "Area A will soon become dangerous."
To achieve this, the team developed a "bi-modal learning" method. This technology simultaneously considers population counts (node information) and population flow (edge information), while also learning spatial relationships (which areas are connected) and temporal changes (when and how movement occurs).
Specifically, the team introduced a 3D contrastive learning technique. This allows the AI to learn not only 2D spatial (geographical) information but also temporal information, creating a 3D relationship. As a result, the AI can understand not just whether the population is "large or small right now," but "what pattern the crowd is developing into over time." This allows for a much more accurate prediction of the time and place where congestion will occur than previous methods.
<Figure 1. Workflow of the bi-modal learning-based crowd congestion risk prediction developed by the research team.
The research team developed a crowd congestion risk prediction model based on bi-modal learning. The vertex-based time series represents indicator changes in a specific area (e.g., increases or decreases in crowd density), while the edge-based time series captures the flow of population movement between areas over time. Although these two types of data are collected from different sources, they are mapped onto the same network structure and provided together as input to the AI model. During training, the model simultaneously leverages both vertex and edge information based on a shared network, allowing it to capture complex movement patterns that might be overlooked when relying on only a single type of data. For example, a sudden increase in crowd density in a particular area may be difficult to predict using vertex information alone, but by additionally considering the steady inflow of people from adjacent areas (edge information), the prediction becomes more accurate. In this way, the model can precisely identify future changes based on past and present information, ultimately predicting high-risk crowd congestion areas in advance.>
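The intuition behind combining node and edge information can be sketched with a toy calculation (the area names and numbers are invented, and the actual model learns these relationships with neural networks rather than simple bookkeeping):

```python
# Toy illustration of why edge (flow) information helps: predict an area's
# next crowd count from its current count plus net inflow along edges.

counts = {"A": 50, "B": 300}               # node information: people per area
flows = {("B", "A"): 120, ("A", "B"): 10}  # edge information: people moving

def predict_next(counts, flows):
    nxt = dict(counts)
    for (src, dst), n in flows.items():
        nxt[src] -= n
        nxt[dst] += n
    return nxt

# Node data alone says alley A is quiet (50 people); adding the edge data
# reveals the surge about to arrive from area B.
print(predict_next(counts, flows))
```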
The research team built and publicly released six real-world datasets for their study, which were compiled from sources such as Seoul, Busan, and Daegu subway data, New York City transit data, and COVID-19 confirmed case data from South Korea and New York.
The proposed technology achieved up to a 76.1% improvement in prediction accuracy over recent state-of-the-art methods, demonstrating strong performance.
Professor Jae-Gil Lee stated, "It is important to develop technologies that can have a significant social impact," adding, "I hope this technology will greatly contribute to protecting public safety in daily life, such as in crowd management for large events, easing urban traffic congestion, and curbing the spread of infectious diseases."
Youngeun Nam, a Ph.D candidate in the KAIST School of Computing, was the first author of the study, and Jihye Na, another Ph.D candidate, was a co-author. The research findings were presented at the Knowledge Discovery and Data Mining (KDD) 2025 conference, a top international conference in the field of data mining, this past August.
※ Paper Title: Bi-Modal Learning for Networked Time Series ※ DOI: https://doi.org/10.1145/3711896.3736856
This technology is the result of research projects including the "Mid-Career Researcher Project" (RS-2023-NR077002, Core Technology Research for Crowd Management Systems Based on AI and Mobility Big Data) and the "Human-Centered AI Core Technology Development Project" (RS-2022-II220157, Robust, Fair, and Scalable Data-Centric Continuous Learning).
Bringing Down Tor for Just $2: A Solution to the Tor Vulnerability
<(From Left) Ph.D candidate Jinseo Lee, Hobin Kim, Professor Min Suk Kang>
A KAIST research team has reached a new milestone in global security research, becoming the first Korean research team to identify a security vulnerability in Tor, the world's largest anonymity network, and propose a solution.
On September 12, our university's Professor Min Suk Kang's research team from the School of Computing announced that they had received an Honorable Mention Award at the USENIX Security 2025 conference, held from August 13 to 15 in Seattle, USA.
The USENIX Security conference is one of the world's most prestigious conferences in information security, ranking first among all security and cryptography conferences and journals based on the Google Scholar h-5 index. The Honorable Mention Award is a highly regarded honor given to only about 6% of all papers.
The core of this research was the discovery of a new denial-of-service (DoS) attack vulnerability in Tor, the world's largest anonymous network, and the proposal of a method to resolve it. The Tor Onion Service, a key technology for various anonymity-based services, is a primary tool for privacy protection, used by millions of people worldwide every day.
The research team found that Tor's congestion-sensing mechanism is insecure and proved through a real-world network experiment that a website could be crippled for as little as $2. This is just 0.2% of the cost of existing attacks. The study is particularly notable as it was the first to show that the existing security measures implemented in Tor to prevent DoS attacks can actually make the attacks worse.
In addition, the team used mathematical modeling to uncover the principles behind this vulnerability and provided guidelines for Tor to maintain a balance between anonymity and availability. These guidelines have been shared with the Tor development team and are currently being applied through a phased patch.
A new attack model proposed by the research team shows that when an attacker sends a tiny, pre-designed amount of attack traffic to a Tor website, it confuses the congestion measurement system. This triggers an excessive congestion control, which ultimately prevents regular users from accessing the website. The research team proved through experiments that the cost of this attack is only 0.2% of existing methods.
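The article does not disclose Tor's actual congestion-sensing algorithm, so the following is only a toy sketch of the failure mode it describes: a rate estimator (here a simple exponential moving average, our assumption) that a small, well-timed burst of traffic can push past the throttling threshold, so that congestion control locks out regular users.

```python
# Toy model of a congestion-sensing DoS; NOT Tor's actual algorithm.
# A service estimates congestion with an exponential moving average (EMA)
# of per-window traffic counts and throttles clients when the estimate
# exceeds a threshold.

def ema_congestion(samples, alpha=0.5):
    """Exponential moving average of per-window traffic counts."""
    est = 0.0
    for s in samples:
        est = alpha * s + (1 - alpha) * est
    return est

def is_throttled(samples, threshold=50.0):
    """The service over-reacts once the congestion estimate is high."""
    return ema_congestion(samples) > threshold

# Normal load: ~10 cells per measurement window.
normal = [10] * 20

# Attacker concentrates a brief burst in the last few windows; the total
# extra volume is small, but it is timed to skew the EMA upward.
attacked = [10] * 17 + [130] * 3

print(is_throttled(normal))    # False: service stays available
print(is_throttled(attacked))  # True: congestion control locks users out
```

The point of the sketch is that the estimator reacts to recent, concentrated traffic rather than total volume, which is why a cheap, well-placed burst can be as disruptive as a sustained flood.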
In February, Tor founder Roger Dingledine visited KAIST and discussed collaboration with the research team. In June, the Tor Project paid a bug bounty of approximately $800 in appreciation of the team's proactive report.
"Tor anonymity system security is an area of active global research, but this is the first study on security vulnerabilities in Korea, which makes it very significant," said Professor Min Suk Kang. "The vulnerability we identified is very high-risk, so it received significant attention from many Tor security researchers at the conference. We will continue our comprehensive research, not only on enhancing the Tor system's anonymity but also on using Tor technology in the field of criminal investigation."
The research was conducted by Ph.D. candidate Jinseo Lee (first author) of the KAIST Graduate School of Information Security and Hobin Kim (second author), a former master's student at the same school who is now a Ph.D. candidate at Carnegie Mellon University.
The paper is titled "Onions Got Puzzled: On the Challenges of Mitigating Denial-of-Service Problems in Tor Onion Services." https://www.usenix.org/conference/usenixsecurity25/presentation/lee
This achievement was recognized as a groundbreaking, first-of-its-kind study on Tor security vulnerabilities in Korea and played a decisive role in the selection of Professor Kang's lab for the 2025 Basic Research Program (Global Basic Research Lab) by the Ministry of Science and ICT.
< Photo 2. Presentation photo of Ph.D. candidate Jinseo Lee from School of Computing>
Through this program, the research team plans to establish a domestic research collaboration system with Ewha Womans University and Sungshin Women's University and expand international research collaborations with researchers in the U.S. and U.K. to conduct in-depth research on Tor vulnerabilities and anonymity over the next three years.
KAIST Establishes 2 Billion KRW Scholarship Fund for the School of Computing through Matching Donation by Alumnus Byung-Gyu Chang
<(From Left) President Kwang Hyung Lee, Chairman Byung-Gyu Chang, and Professor Sukyoung Ryu, Head of the School of Computing>
KAIST (President Kwang Hyung Lee) announced on the 1st of September that the School of Computing has established a “School of Computing Scholarship Fund” (worth 2 billion KRW) to provide consistent support for students in urgent need of financial assistance.
Professor Sukyoung Ryu, head of the School of Computing, who led the fundraising initiative, said, “I have served on the KAIST Scholarship Committee since 2021 and saw how much the ‘Inseojeonggong Scholarship,’ also known as the ‘Emergency Relief Scholarship,’ helped financially struggling students, so I found it regrettable that we could no longer provide support once the principal was depleted. With this new School of Computing scholarship, we plan to begin providing aid from the Fall 2025 semester, and we hope the initiative will expand to the entire KAIST community.”
After launching its fundraising campaign in May 2023, the School of Computing raised 1 billion KRW from a total of 63 donors. Alumnus Byung-Gyu Chang, Chairman of Krafton, supported the purpose of the scholarship and expanded the fund to 2 billion KRW by donating an equivalent amount through a 1:1 matching grant system.
The fundraising campaign saw participation from current students, alumni, faculty, and both current and former professors. Among them, alumni couple Jungtaek Kim (entered KAIST in ’92) and So-Yeon Ahn donated 200 million KRW to help students facing financial difficulties in their studies or job preparation. Alumni couple Ha-Yeon Seo (entered KAIST in ’95) and Dong-Hun Hahn (entered KAIST in ’96), following their earlier donation for the expansion of the School of Computing building, contributed an additional 40 million KRW to the scholarship fund.
Professor Emeritus Kyu-Young Whang and Professor Kyunghyun Cho of NYU, who had previously donated to the Kyu-Young Whang Scholarship Fund (formerly the Odysseus Scholarship Fund) and the Lim Mi-Sook Scholarship Fund respectively, also joined this initiative. Alumnus Seung Hyun Lee donated the entire $220,000 reward he received for reporting a critical security vulnerability in the Chrome browser.
Alumnus Bum-Gyu Lee, who co-runs the non-degree program “SW Academy Jungle” with the School of Computing, expressed gratitude for the role the school played in the growth of both himself and his company. He asked whether it “would be okay if [he] covered the remaining amount of the 1 billion KRW target,” and became the final donor.
Professor Ryu emphasized, “Through this scholarship, I hope students who previously had to choose undesired paths due to financial reasons—despite wanting to pursue entrepreneurship or graduate studies—will have the chance to fully dedicate at least a semester or a year to the challenges they truly wish to take on.”
Chairman Byung-Gyu Chang stated, “I deeply resonate with the scholarship’s purpose of prioritizing support for students making career choices under financial strain. To accelerate its realization, I decided to make a matching donation equal to the fundraising amount. I hope this will serve as an opportunity to restructure the university-wide scholarship system.”
President Kwang Hyung Lee remarked that “KAIST’s greatest asset is its talented students who will lead the future, and no student should ever give up on studies, entrepreneurship, or dreams for financial reasons.” He added, “I hope this School of Computing scholarship will serve as a solid foundation for students to design and pursue their future challenges. I would like to thank all donors for their support and will actively review Chairman Chang’s proposal to ensure its realization.”
Meanwhile, the KAIST Development Foundation is actively promoting the “TeamKAIST” campaign for the general public and alumni to bring together more “KAIST benefactors.”
※ Related Website: https://giving.kaist.ac.kr/ko/sub01/sub0103_1.php
KAIST Develops New AI Inference-Scaling Method for Planning
<(From Left) Professor Sungjin Ahn, Ph.D. candidate Jaesik Yoon, M.S. candidate Hyeonseo Cho, M.S. candidate Doojin Baek, Professor Yoshua Bengio>
<Ph.D. candidate Jaesik Yoon from Professor Ahn's research team>
Diffusion models are widely used in many AI applications, but research on efficient inference-time scalability*, particularly for reasoning and planning (known as System 2 abilities), has been lacking. In response, the research team has developed a new technology that enables high-performance, efficient planning inference based on diffusion models. The technology demonstrated its performance by achieving a 100% success rate on a giant maze-solving task that no existing model had solved. The results are expected to serve as core technology in various fields requiring real-time decision-making, such as intelligent robotics and real-time generative AI.
*Inference-time scalability: Refers to an AI model’s ability to flexibly adjust performance based on the computational resources available during inference.
KAIST (President Kwang Hyung Lee) announced on the 20th that a research team led by Professor Sungjin Ahn in the School of Computing has developed a new technology that significantly improves the inference-time scalability of diffusion-based reasoning through joint research with Professor Yoshua Bengio of the University of Montreal, a world-renowned scholar in deep learning. This study was carried out as part of a collaboration between KAIST and Mila (Quebec AI Institute) through the Prefrontal AI Joint Research Center.
This technology is gaining attention as a core AI technology that, after training, allows the AI to efficiently utilize more computational resources during inference to solve complex reasoning and planning problems that cannot be addressed merely by scaling up data or model size. However, current diffusion models used across various applications lack effective methodologies for implementing such scalability, particularly for reasoning and planning.
To address this, Professor Ahn’s research team collaborated with Professor Bengio to propose a novel diffusion model inference technique based on Monte Carlo Tree Search. This method explores diverse generation paths during the diffusion process in a tree structure and is designed to efficiently identify high-quality outputs even with limited computational resources. As a result, it achieved a 100% success rate on the "giant-scale maze-solving" task, where previous methods had a 0% success rate.
In follow-up research, the team also succeeded in substantially addressing the major drawback of the proposed method, its slow speed. By efficiently parallelizing the tree search and optimizing computational cost, they achieved results of equal or superior quality up to 100 times faster than the previous version. This is highly meaningful as it demonstrates the method's inference capabilities and real-time applicability simultaneously.
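To give a flavor of the search component, here is a minimal Monte Carlo Tree Search with UCB1 on a toy planning task (finding a hidden 4-step binary plan, an invented example). It sketches only the generic tree-search skeleton; the paper couples such a search with diffusion-based denoising of plan trajectories, which is not reproduced here.

```python
# Minimal Monte Carlo Tree Search (MCTS) with UCB1 on a toy planning task.
import math
import random

random.seed(0)

TARGET = [1, 0, 1, 1]                 # hidden "best plan" (toy assumption)

def reward(path):
    """Fraction of the plan prefix that matches the hidden target."""
    r = 0
    for a, t in zip(path, TARGET):
        if a != t:
            break
        r += 1
    return r / len(TARGET)

class Node:
    def __init__(self, path):
        self.path, self.children = path, {}
        self.n, self.w = 0, 0.0       # visit count, accumulated reward

def ucb(parent, child):
    # UCB1: exploit high-mean children while still exploring rare ones.
    return child.w / child.n + math.sqrt(2 * math.log(parent.n) / child.n)

def mcts(iters=1000):
    root = Node([])
    for _ in range(iters):
        node, visited = root, [root]
        while len(node.path) < len(TARGET):
            if len(node.children) < 2:        # expand an untried action
                a = len(node.children)
                node.children[a] = Node(node.path + [a])
                node = node.children[a]
                visited.append(node)
                break
            node = max(node.children.values(), key=lambda c: ucb(node, c))
            visited.append(node)
        # Rollout: finish the plan randomly, then backpropagate the reward.
        tail = [random.randint(0, 1)
                for _ in range(len(TARGET) - len(node.path))]
        r = reward(node.path + tail)
        for v in visited:
            v.n, v.w = v.n + 1, v.w + r
    # Extract the plan by greedily following the highest-mean child.
    plan, node = [], root
    while node.children:
        a = max(node.children, key=lambda k: node.children[k].w / node.children[k].n)
        plan.append(a)
        node = node.children[a]
    return plan

print(mcts())
```

The follow-up speedup described above corresponds, in this skeleton, to running many such simulations in parallel and pruning the tree sparsely rather than iterating one rollout at a time.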
Professor Sungjin Ahn stated, “This research fundamentally overcomes the limitations of existing planning methods based on diffusion models, which required high computational cost,” adding, “It can serve as core technology in various areas such as intelligent robotics, simulation-based decision-making, and real-time generative AI.”
The research results were presented as Spotlight papers (top 2.6% of all accepted papers) by doctoral student Jaesik Yoon of the School of Computing at the 42nd International Conference on Machine Learning (ICML 2025), held in Vancouver, Canada, from July 13 to 19.
※ Paper titles: Monte Carlo Tree Diffusion for System 2 Planning (Jaesik Yoon, Hyeonseo Cho, Doojin Baek, Yoshua Bengio, Sungjin Ahn, ICML 25), Fast Monte Carlo Tree Diffusion: 100x Speedup via Parallel Sparse Planning (Jaesik Yoon, Hyeonseo Cho, Yoshua Bengio, Sungjin Ahn)
※ DOI: https://doi.org/10.48550/arXiv.2502.07202, https://doi.org/10.48550/arXiv.2506.09498
This research was supported by the National Research Foundation of Korea.
KAIST Turns an Unprecedented Idea into Reality: Quantum Computing with Magnets
What started as an idea under KAIST’s Global Singularity Research Project—"Can we build a quantum computer using magnets?"—has now become a scientific reality. A KAIST-led international research team has successfully demonstrated a core quantum computing technology using magnetic materials (ferromagnets) for the first time in the world.
KAIST (represented by President Kwang-Hyung Lee) announced on the 6th of May that a team led by Professor Kab-Jin Kim from the Department of Physics, in collaboration with the Argonne National Laboratory and the University of Illinois Urbana-Champaign (UIUC), has developed a “photon-magnon hybrid chip” and successfully implemented real-time, multi-pulse interference using magnetic materials—marking a global first.
< Photo 1. Dr. Moojune Song (left) and Professor Kab-Jin Kim (right) of KAIST Department of Physics >
In simple terms, the researchers developed a special chip that synchronizes light and internal magnetic vibrations (magnons), enabling the transmission of phase information between distant magnets. They succeeded in observing and controlling interference between multiple signals in real time. This is the first experimental evidence that magnets can serve as key components in quantum computing, a pivotal step toward magnet-based quantum platforms.
The N and S poles of a magnet stem from the spin of electrons inside atoms. When many atoms align, their collective spin vibrations create a quantum particle known as a “magnon.”
Magnons are especially promising because of their nonreciprocal nature—they can carry information in only one direction, which makes them suitable for quantum noise isolation in compact quantum chips. They can also couple with both light and microwaves, enabling the potential for long-distance quantum communication over tens of kilometers.
Moreover, using special materials like antiferromagnets could allow quantum computers to operate at terahertz (THz) frequencies, far surpassing today’s hardware limitations, and possibly enabling room-temperature quantum computing without the need for bulky cryogenic equipment.
To build such a system, however, one must be able to transmit, measure, and control the phase information of magnons—the starting point and propagation of their waveforms—in real time. This had not been achieved until now.
< Figure 1. Superconducting Circuit-Based Magnon-Photon Hybrid System. (a) Schematic diagram of the device. A NbN superconducting resonator circuit fabricated on a silicon substrate is coupled with spherical YIG magnets (250 μm diameter), and magnons are generated and measured in real-time via a vertical antenna. (b) Photograph of the actual device. The distance between the two YIG spheres is 12 mm, a distance at which they cannot influence each other without the superconducting circuit. >
Professor Kim’s team used two tiny magnetic spheres made of Yttrium Iron Garnet (YIG) placed 12 mm apart with a superconducting resonator in between—similar to those used in quantum processors by Google and IBM. They input pulses into one magnet and successfully observed lossless transmission of magnon vibrations to the second magnet via the superconducting circuit.
They confirmed that from single nanosecond pulses to four microwave pulses, the magnon vibrations maintained their phase information and demonstrated predictable constructive or destructive interference in real time—known as coherent interference.
By adjusting the pulse frequencies and their intervals, the researchers could also freely control the interference patterns of magnons, effectively showing for the first time that electrical signals can be used to manipulate magnonic quantum states.
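As a back-of-the-envelope illustration of the interference being controlled (with made-up numbers, not the device's actual parameters), two equal pulses driving the same magnon mode add like phasors, so the delay between them sets whether they reinforce or cancel:

```python
# Toy phasor picture of two-pulse magnon interference (illustrative only;
# the carrier frequency below is a stand-in, not the experiment's value).
import math

FREQ = 5e9  # assumed 5 GHz microwave carrier

def pulse_pair_amplitude(delay):
    """Peak amplitude of two equal unit pulses separated by `delay` seconds."""
    phase = 2 * math.pi * FREQ * delay
    # Sum of two unit phasors with relative phase `phase`:
    return abs(2 * math.cos(phase / 2))

half_period = 0.5 / FREQ
print(pulse_pair_amplitude(0.0))          # in phase: constructive, amplitude 2
print(pulse_pair_amplitude(half_period))  # half period apart: near-total cancellation
```

Sweeping the delay (or, equivalently, the carrier frequency) moves the relative phase continuously between these two extremes, which is the knob the researchers used to steer the interference pattern electrically.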
This work demonstrated that quantum gate operations using multiple pulses—a fundamental technique in quantum information processing—can be implemented using a hybrid system of magnetic materials and superconducting circuits. This opens the door for the practical use of magnet-based quantum devices.
< Figure 2. Experimental Data. (a) Measurement results of magnon-magnon band anticrossing via continuous wave measurement, showing the formation of a strong coupling hybrid system. (b) Magnon pulse exchange oscillation phenomenon between YIG spheres upon single pulse application. It can be seen that magnon information is coherently transmitted at regular time intervals through the superconducting circuit. (c,d) Magnon interference phenomenon upon dual pulse application. The magnon information state can be arbitrarily controlled by adjusting the time interval and carrier frequency between pulses. >
Professor Kab-Jin Kim stated, “This project began with a bold, even unconventional idea proposed to the Global Singularity Research Program: ‘What if we could build a quantum computer with magnets?’ The journey has been fascinating, and this study not only opens a new field of quantum spintronics, but also marks a turning point in developing high-efficiency quantum information processing devices.”
The research was co-led by postdoctoral researcher Moojune Song (KAIST), Dr. Yi Li and Dr. Valentine Novosad from Argonne National Lab, and Prof. Axel Hoffmann’s team at UIUC. The results were published in Nature Communications on April 17 and npj Spintronics on April 1, 2025.
Paper 1: Single-shot magnon interference in a magnon-superconducting-resonator hybrid circuit, Nat. Commun. 16, 3649 (2025)
DOI: https://doi.org/10.1038/s41467-025-58482-2
Paper 2: Single-shot electrical detection of short-wavelength magnon pulse transmission in a magnonic ultra-thin-film waveguide, npj Spintronics 3, 12 (2025)
DOI: https://doi.org/10.1038/s44306-025-00072-5
The research was supported by KAIST’s Global Singularity Research Initiative, the National Research Foundation of Korea (including the Mid-Career Researcher, Leading Research Center, and Quantum Information Science Human Resource Development programs), and the U.S. Department of Energy.
KAIST's Pioneering VR Precision Technology & Choreography Tool Take the Spotlight at CHI 2025
Accurate pointing in virtual spaces is essential for seamless interaction. If pointing is not precise, selecting the desired object becomes challenging, breaking user immersion and reducing overall experience quality. KAIST researchers have developed a technology that offers a vivid, lifelike experience in virtual space, alongside a new tool that assists choreographers throughout the creative process.
KAIST (President Kwang-Hyung Lee) announced on May 13th that a research team led by Professor Sang Ho Yoon of the Graduate School of Culture Technology, in collaboration with Professor Yang Zhang of the University of California, Los Angeles (UCLA), has developed the ‘T2IRay’ technology and the ‘ChoreoCraft’ platform, which enables choreographers to work more freely and creatively in virtual reality. These technologies received two Honorable Mention awards, recognizing the top 5% of papers, at CHI 2025*, the premier international conference in the field of human-computer interaction, hosted by the Association for Computing Machinery (ACM) from April 25 to May 1.
< (From left) PhD candidates Jina Kim and Kyungeun Jung, Master's candidate Hyunyoung Han, and Professor Sang Ho Yoon of KAIST Graduate School of Culture Technology, with Professor Yang Zhang (top) of UCLA >
T2IRay: Enabling Virtual Input with Precision
T2IRay introduces a novel input method that allows for precise object pointing in virtual environments by expanding traditional thumb-to-index gestures. This approach overcomes previous limitations, such as interruptions or reduced accuracy due to changes in hand position or orientation.
The technology uses a local coordinate system based on finger relationships, ensuring continuous input even as hand positions shift. It accurately captures subtle thumb movements within this coordinate system, integrating natural head movements to allow fluid, intuitive control across a wide range.
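The following sketch shows the general idea behind such a hand-local coordinate system (the joint names and frame construction are our assumptions, not the actual T2IRay model): expressing the thumb tip relative to a frame built from the index finger makes the measured input invariant to where the whole hand is, or how it is rotated.

```python
# Sketch of a hand-local coordinate frame (hypothetical joint names; not
# the actual T2IRay implementation). The thumb tip is expressed relative
# to a frame anchored on the index finger, so moving or rotating the
# whole hand leaves the measured thumb coordinates unchanged.
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def norm(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def thumb_in_index_frame(index_base, index_tip, palm_normal, thumb_tip):
    """Thumb-tip coordinates in a frame built from the index finger."""
    x = norm(sub(index_tip, index_base))   # axis along the index finger
    z = norm(cross(x, palm_normal))        # axis orthogonal to the hand plane
    y = cross(z, x)                        # completes a right-handed frame
    d = sub(thumb_tip, index_base)
    return [dot(d, x), dot(d, y), dot(d, z)]

p = thumb_in_index_frame([0, 0, 0], [1, 0, 0], [0, 0, 1], [0.5, 0.3, 0.0])
q = thumb_in_index_frame([5, 5, 5], [6, 5, 5], [0, 0, 1], [5.5, 5.3, 5.0])
print(p == q)  # same relative pose, translated hand: identical local coords
```

Translating (or rigidly rotating) all joints together leaves the returned coordinates unchanged, which is the property that keeps pointing stable while the hand moves through space.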
< Figure 1. T2IRay framework utilizing the delicate movements of the thumb and index fingers for AR/VR pointing >
Professor Sang Ho Yoon explained, “T2IRay can significantly enhance the user experience in AR/VR by enabling smooth, stable control even when the user’s hands are in motion.”
This study, led by first author Jina Kim, was supported by the Excellent New Researcher Support Project of the National Research Foundation of Korea under the Ministry of Science and ICT, as well as the University ICT Research Center (ITRC) Support Project of the Institute of Information and Communications Technology Planning and Evaluation (IITP).
▴ Paper title: T2IRay: Design of Thumb-to-Index Based Indirect Pointing for Continuous and Robust AR/VR Input
▴ Paper link: https://doi.org/10.1145/3706598.3713442
▴ T2IRay demo video: https://youtu.be/ElJlcJbkJPY
ChoreoCraft: Creativity Support through VR for Choreographers
In addition, Professor Yoon’s team developed ‘ChoreoCraft,’ a virtual reality tool designed to support choreographers by addressing the unique challenges they face, such as memorizing complex movements, overcoming creative blocks, and managing subjective feedback.
ChoreoCraft reduces reliance on memory by allowing choreographers to save and refine movements directly within a VR space, using a motion-capture avatar for real-time interaction. It also enhances creativity by suggesting movements that naturally fit with prior choreography and musical elements. Furthermore, the system provides quantitative feedback by analyzing kinematic factors like motion stability and engagement, helping choreographers make data-driven creative decisions.
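One plausible way to quantify a kinematic factor like motion stability (our assumption for illustration; the paper's actual features are not specified here) is mean jerk, the third derivative of position: smoother movement produces lower jerk, giving choreographers a comparable number per phrase.

```python
# Illustrative smoothness metric via finite differences (not ChoreoCraft's
# actual feature set): mean absolute jerk of a sampled joint trajectory.

def mean_jerk(positions, dt=1.0):
    """Average |jerk| of a 1-D joint trajectory sampled every dt seconds."""
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [abs(b - a) / dt for a, b in zip(acc, acc[1:])]
    return sum(jerk) / len(jerk)

smooth = [0.5 * t * t for t in range(10)]                  # constant acceleration
jittery = [t + (0.3 if t % 2 else -0.3) for t in range(10)]  # alternating wobble

print(mean_jerk(smooth))   # 0.0: perfectly smooth motion
print(mean_jerk(jittery))  # large: unstable, jittery motion
```

In practice a tool would compute such metrics per joint from motion-capture data and aggregate them, but the principle is the same: turning a subjective impression ("that move looks shaky") into a number that can be tracked across revisions.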
< Figure 2. ChoreoCraft's approaches to encourage creative process >
Professor Yoon noted, “ChoreoCraft is a tool designed to address the core challenges faced by choreographers, enhancing both creativity and efficiency. In user tests with professional choreographers, it received high marks for its ability to spark creative ideas and provide valuable quantitative feedback.”
This research was conducted in collaboration with doctoral candidate Kyungeun Jung and master’s candidate Hyunyoung Han, alongside the Electronics and Telecommunications Research Institute (ETRI) and One Million Co., Ltd. (CEO Hye-rang Kim), with support from the Cultural and Arts Immersive Service Development Project by the Ministry of Culture, Sports and Tourism.
▴ Paper title: ChoreoCraft: In-situ Crafting of Choreography in Virtual Reality through Creativity Support Tools
▴ Paper link: https://doi.org/10.1145/3706598.3714220
▴ ChoreoCraft demo video: https://youtu.be/Ms1fwiSBjjw
*CHI (Conference on Human Factors in Computing Systems): The premier international conference on human-computer interaction, organized by the ACM, was held this year from April 25 to May 1, 2025.