KAIST Predicts Human Group Behavior with AI! 1st Place at the World’s Top Conference… Major Success after 23 Years
<(From Left) Ph.D. candidate Geon Lee, Ph.D. candidate Minyoung Choe, M.S. candidate Jaewan Chun, Professor Kijung Shin, M.S. candidate Seokbum Yoon>
KAIST (President Kwang Hyung Lee) announced on the 9th of December that Professor Kijung Shin’s research team at the Kim Jaechul Graduate School of AI has developed a groundbreaking AI technology that predicts complex social group behavior by analyzing how individual attributes such as age and role influence group relationships.
With this technology, the research team achieved the remarkable feat of winning the Best Paper Award at the world-renowned data mining conference “IEEE ICDM,” hosted by the Institute of Electrical and Electronics Engineers (IEEE). This is the highest honor awarded to only one paper out of 785 submissions worldwide, and marks the first time in 23 years that a Korean university research team has received this award, once again demonstrating KAIST’s technological leadership on the global research stage.
Today, group interactions involving many participants at the same time—such as online communities, research collaborations, and group chats—are rapidly increasing across society. However, there has been a lack of technology that can precisely explain both how such group behavior is structured and how individual characteristics influence it.
To overcome this limitation, Professor Kijung Shin’s research team developed an AI model called “NoAH (Node Attribute-based Hypergraph Generator),” which realistically reproduces the interplay between individual attributes and group structure.
NoAH is an artificial intelligence that explains and imitates what kinds of group behaviors emerge when people’s characteristics come together. For example, it can analyze and faithfully reproduce how information such as a person’s interests and roles actually combine to form group behavior.
As such, NoAH is an AI that generates “realistic group behavior” by simultaneously reflecting human traits and relationships. It was shown to reproduce various real-world group behaviors—such as product purchase combinations in e-commerce, the spread of online discussions, and co-authorship networks among researchers—far more realistically than existing models.
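The interplay NoAH captures can be illustrated with a toy generator (a hypothetical Python sketch, not the actual NoAH model): nodes carry an attribute, and candidate groups are accepted with a probability that rises with how similar their members' attributes are.

```python
import random

random.seed(0)

# Hypothetical toy: each node has one attribute (e.g., a research topic).
# Groups (hyperedges) whose members share attributes form more readily --
# a crude stand-in for the structure-attribute interplay NoAH models.
attributes = {i: random.choice(["ml", "db", "vision"]) for i in range(30)}

def affinity(group):
    """Fraction of member pairs that share an attribute."""
    pairs = [(a, b) for i, a in enumerate(group) for b in group[i + 1:]]
    if not pairs:
        return 0.0
    return sum(attributes[a] == attributes[b] for a, b in pairs) / len(pairs)

def sample_hyperedges(n_trials=2000, size=3):
    edges = []
    for _ in range(n_trials):
        group = random.sample(list(attributes), size)
        # Accept attribute-homogeneous groups more often.
        if random.random() < 0.1 + 0.8 * affinity(group):
            edges.append(tuple(sorted(group)))
    return edges

edges = sample_hyperedges()
mean_affinity = sum(affinity(list(e)) for e in edges) / len(edges)
print(round(mean_affinity, 2))
```

In this toy setting the accepted groups end up markedly more homophilous than random triples (whose expected affinity is about 1/3), mirroring the attribute-driven group formation the model is designed to reproduce.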
< The process of generating group interactions using NoAH >
Professor Kijung Shin stated, “This study opens a new AI paradigm that enables a richer understanding of complex interactions by considering not only the structure of groups but also individual attributes together,” and added, “Analyses of online communities, messengers, and social networks will become far more precise.”
This research was conducted by a team consisting of Professor Kijung Shin and KAIST Kim Jaechul Graduate School of AI students: master’s students Jaewan Chun and Seokbum Yoon, and doctoral students Minyoung Choe and Geon Lee, and was presented at IEEE ICDM on November 18.
※ Paper title: “Attributed Hypergraph Generation with Realistic Interplay Between Structure and Attributes” Original paper: https://arxiv.org/abs/2509.21838
< Photo from the award ceremony held on November 14 at the International Spy Museum in Washington, D.C.>
Meanwhile, including this award-winning paper, Professor Shin’s research team presented a total of four papers at IEEE ICDM this year. In addition, in 2023, the team also received the Best Student Paper Runner-up (4th place) at the same conference.
This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-202400457882, AI Research Hub Project) (RS-2019-II190075, Artificial Intelligence Graduate School Program (KAIST)) (No. RS-2022-II220871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration).
How Does AI Think? KAIST Achieves First Visualization of the Internal Structure Behind AI Decision-Making
<(From Left) Ph.D. candidate Dahee Kwon, Ph.D. candidate Sehyun Lee, Professor Jaesik Choi>
Although deep learning–based image recognition technology is rapidly advancing, it still remains difficult to clearly explain the criteria AI uses internally to observe and judge images. In particular, technologies that analyze how large-scale models combine various concepts (e.g., cat ears, car wheels) to reach a conclusion have long been recognized as a major unsolved challenge.
KAIST (President Kwang Hyung Lee) announced on the 26th of November that Professor Jaesik Choi’s research team at the Kim Jaechul Graduate School of AI has developed a new explainable AI (XAI) technology that visualizes the concept-formation process inside a model at the level of circuits, enabling humans to understand the basis on which AI makes decisions.
The study is evaluated as a significant step forward that allows researchers to structurally examine “how AI thinks.”
Inside deep learning models, there exist basic computational units called neurons, which function similarly to those in the human brain. Neurons detect small features within an image—such as the shape of an ear, a specific color, or an outline—and compute a value (signal) that is transmitted to the next layer.
In contrast, a circuit refers to a structure in which multiple neurons are connected to jointly recognize a single meaning (concept). For example, to recognize the concept of cat ear, neurons detecting outline shapes, neurons detecting triangular forms, and neurons detecting fur-color patterns must activate in sequence, forming a functional unit (circuit).
Up until now, most explanation techniques have taken a neuron-centric approach based on the idea that “a specific neuron detects a specific concept.” However, in reality, deep learning models form concepts through cooperative circuit structures involving many neurons. Based on this observation, the KAIST research team proposed a technique that expands the unit of concept representation from “neuron → circuit.”
The research team’s newly developed technology, Granular Concept Circuits (GCC), is a novel method that analyzes and visualizes how an image-classification model internally forms concepts at the circuit level.
GCC automatically traces circuits by computing Neuron Sensitivity and Semantic Flow. Neuron Sensitivity indicates how strongly a neuron responds to a particular feature, while Semantic Flow measures how strongly that feature is passed on to the next concept. Using these metrics, the system can visualize, step-by-step, how basic features such as color and texture are assembled into higher-level concepts.
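The two metrics can be illustrated on a toy two-layer network. This is an illustrative sketch only: the definitions below are simplified stand-ins for the paper's Neuron Sensitivity and Semantic Flow, and the thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network -- a hypothetical stand-in for an image
# classifier; the real GCC method operates on trained vision models.
W1 = rng.normal(size=(8, 4))   # 4 input features -> 8 layer-1 neurons
W2 = rng.normal(size=(6, 8))   # 8 layer-1 neurons -> 6 layer-2 neurons
x = rng.normal(size=4)

a1 = np.maximum(W1 @ x, 0.0)
a2 = np.maximum(W2 @ a1, 0.0)

# Illustrative "neuron sensitivity": gradient magnitude of each active
# layer-1 neuron with respect to the input.
sensitivity = np.linalg.norm(W1, axis=1) * (a1 > 0)

# Illustrative "semantic flow": how strongly neuron i's activation is
# passed on to layer-2 neuron j (its contribution to j's pre-activation).
flow = W2 * a1            # flow[j, i] = W2[j, i] * a1[i]

# Trace a circuit for the most active layer-2 neuron: keep the layer-1
# neurons that are both sensitive and feed it strongly.
top = int(np.argmax(a2))
circuit = [i for i in range(8)
           if sensitivity[i] > 0 and flow[top, i] > 0.5 * flow[top].max()]
print(top, circuit)
```

The same tracing step, applied layer by layer in a deep model, yields the kind of step-by-step concept assembly the method visualizes.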
The team conducted experiments in which specific circuits were temporarily disabled (ablation). As a result, when the circuit responsible for a concept was deactivated, the AI’s predictions actually changed.
In other words, the experiment directly demonstrated that the corresponding circuit indeed performs the function of recognizing that concept.
This study is regarded as the first to reveal, at a fine-grained circuit level, the actual structural process by which concepts are formed inside complex deep learning models. Through this, the research suggests practical applicability across the entire explainable AI (XAI) domain—including strengthening transparency in AI decision-making, analyzing the causes of misclassification, detecting bias, improving model debugging and architecture, and enhancing safety and accountability.
The research team stated, “This technology shows the concept structures that AI forms internally in a way that humans can understand,” adding that “this study provides a scientific starting point for researching how AI thinks.”
Professor Jaesik Choi emphasized, “Unlike previous approaches that simplified complex models for explanation, this is the first approach to precisely interpret the model’s interior at the level of fine-grained circuits,” and added, “We demonstrated that the concepts learned by AI can be automatically traced and visualized.”
< Overview of the Conceptual Circuit Proposed by the Research Team >
This study, with Ph.D. candidates Dahee Kwon and Sehyun Lee from KAIST Kim Jaechul Graduate School of AI as co–first authors, was presented on October 21 at the International Conference on Computer Vision (ICCV).
Paper title: Granular Concept Circuits: Toward a Fine-Grained Circuit Discovery for Concept Representations
Paper link: https://openaccess.thecvf.com/content/ICCV2025/papers/Kwon_Granular_Concept_Circuits_Toward_a_Fine-Grained_Circuit_Discovery_for_Concept_ICCV_2025_paper.pdf
This research was supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the “Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation” project, the AI Research Hub Project, and the KAIST AI Graduate School Program, and was carried out with support from the Defense Acquisition Program Administration (DAPA) and the Agency for Defense Development (ADD) at the KAIST Center for Applied Research in Artificial Intelligence.
KAIST Develops AI ‘MARIOH’ to Uncover and Reconstruct Hidden Multi-Entity Relationships
<(From Left) Professor Kijung Shin, Ph.D. candidate Kyuhan Lee, and Ph.D. candidate Geon Lee>
Just like when multiple people gather simultaneously in a meeting room, higher-order interactions—where many entities interact at once—occur across various fields and reflect the complexity of real-world relationships. However, due to technical limitations, in many fields, only low-order pairwise interactions between entities can be observed and collected, which results in the loss of full context and restricts practical use. KAIST researchers have developed the AI model “MARIOH,” which can accurately reconstruct* higher-order interactions from such low-order information, opening up innovative analytical possibilities in fields like social network analysis, neuroscience, and life sciences.
*Reconstruction: Estimating/reconstructing the original structure that has disappeared or was not observed.
KAIST (President Kwang Hyung Lee) announced on the 5th that Professor Kijung Shin’s research team at the Kim Jaechul Graduate School of AI has developed an AI technology called “MARIOH” (Multiplicity-Aware Hypergraph Reconstruction), which can reconstruct higher-order interaction structures with high accuracy using only low-order interaction data.
Reconstructing higher-order interactions is challenging because a vast number of higher-order interactions can arise from the same low-order structure.
The key idea behind MARIOH, developed by the research team, is to utilize multiplicity information of low-order interactions to drastically reduce the number of candidate higher-order interactions that could stem from a given structure.
In addition, by employing efficient search techniques, MARIOH quickly identifies promising interaction candidates and uses multiplicity-based deep learning to accurately predict the likelihood that each candidate represents an actual higher-order interaction.
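The pruning idea can be sketched as follows (a simplified illustration, not MARIOH's actual algorithm; in particular, the deep-learning scoring step is omitted): any candidate group containing a pair that was never observed can be ruled out immediately, since every pair inside a real higher-order interaction must appear among the low-order observations.

```python
from itertools import combinations

# Pairwise (low-order) observations with multiplicities: each key is a pair,
# each value counts how often that pair was observed. (Hypothetical data --
# MARIOH itself scores the surviving candidates with multiplicity-based
# deep learning; this sketch only shows the pruning idea.)
pair_mult = {
    ("a", "b"): 2, ("a", "c"): 1, ("b", "c"): 1,
    ("a", "d"): 1, ("b", "d"): 1, ("c", "d"): 0,
}

nodes = sorted({v for pair in pair_mult for v in pair})

def support(group):
    """Minimum multiplicity over the group's internal pairs; zero rules the
    group out, because every pair inside a real hyperedge must be observed."""
    return min(pair_mult.get(tuple(sorted(p)), 0)
               for p in combinations(group, 2))

# Candidate higher-order interactions: only groups whose every internal
# pair was observed at least once survive the pruning step.
candidates = [g for size in (3, 4)
              for g in combinations(nodes, size) if support(g) > 0]
print(candidates)
```

Here the unobserved pair (c, d) eliminates every group containing both c and d, leaving only two candidate triples out of the five possible groups, which is the kind of drastic candidate reduction the multiplicity information enables.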
<Figure 1. An example of recovering higher-order relationships (right) from low-order paper co-authorship relationships (left) with 100% accuracy, using MARIOH technology.>
Through experiments on ten diverse real-world datasets, the research team showed that MARIOH reconstructed higher-order interactions with up to 74% greater accuracy compared to existing methods.
For instance, in a dataset on co-authorship relations (source: DBLP), MARIOH achieved a reconstruction accuracy of over 98%, significantly outperforming existing methods, which reached only about 86%. Furthermore, leveraging the reconstructed higher-order structures led to improved performance in downstream tasks, including prediction and classification.
Professor Kijung Shin said, “MARIOH moves beyond existing approaches that rely solely on simplified connection information, enabling precise analysis of the complex interconnections found in the real world.” He added, “It has broad potential applications in fields such as social network analysis for group chats or collaboration networks, life sciences for studying protein complexes or gene interactions, and neuroscience for tracking simultaneous activity across multiple brain regions.”
The research was conducted by Kyuhan Lee (Integrated M.S.–Ph.D. program at the Kim Jaechul Graduate School of AI at KAIST; currently a software engineer at GraphAI), Geon Lee (Integrated M.S.–Ph.D. program at KAIST), and Professor Kijung Shin. It was presented at the 41st IEEE International Conference on Data Engineering (IEEE ICDE), held in Hong Kong this past May.
※ Paper title: MARIOH: Multiplicity-Aware Hypergraph Reconstruction ※ DOI: https://doi.ieeecomputersociety.org/10.1109/ICDE65448.2025.00233
<Figure 2. An example of the process of recovering higher-order relationships using MARIOH technology>
This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) through the project “EntireDB2AI: Foundational technologies and software for deep representation learning and prediction using complete relational databases,” as well as by the National Research Foundation of Korea through the project “Graph Foundation Model: Graph-based machine learning applicable across various modalities and domains.”
KAIST Researchers Unveil an AI that Generates "Unexpectedly Original" Designs
< Photo 1. Professor Jaesik Choi, KAIST Kim Jaechul Graduate School of AI >
Recently, text-based image generation models have become able to create high-resolution, high-quality images automatically from natural language descriptions alone. However, even a representative model such as Stable Diffusion remains limited in its ability to generate truly creative images when simply given the text "creative." KAIST researchers have developed a technology that enhances the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary.
Professor Jaesik Choi's research team at KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training.
< Photo 2. Gayoung Lee, Researcher at NAVER AI Lab; Dahee Kwon, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Jiyeon Han, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Junho Kim, Researcher at NAVER AI Lab >
Professor Choi's research team developed a technology to enhance creative generation by amplifying the internal feature maps of text-based image generation models. They also discovered that shallow blocks within the model play a crucial role in creative generation. They confirmed that amplifying values in the high-frequency region after converting feature maps to the frequency domain can lead to noise or fragmented color patterns. Accordingly, the research team demonstrated that amplifying the low-frequency region of shallow blocks can effectively enhance creative generation.
Considering originality and usefulness as two key elements defining creativity, the research team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model.
Through the developed algorithm, appropriate amplification of the internal feature maps of a pre-trained Stable Diffusion model was able to enhance creative generation without additional classification data or training.
< Figure 1. Overview of the methodology researched by the development team. After converting the internal feature map of a pre-trained generative model into the frequency domain through Fast Fourier Transform, the low-frequency region of the feature map is amplified, then re-transformed into the feature space via Inverse Fast Fourier Transform to generate an image. >
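The pipeline of converting a feature map to the frequency domain, amplifying the low-frequency band, and transforming back can be sketched as follows. This is a minimal NumPy illustration: the `factor` and `cutoff` parameters are hypothetical, whereas the actual method selects the amplification value for each block automatically.

```python
import numpy as np

def amplify_low_freq(feature_map, factor=1.5, cutoff=0.25):
    """Amplify the low-frequency band of a 2-D feature map.

    Illustrative sketch of the FFT -> amplify -> inverse-FFT pipeline;
    `factor` and `cutoff` are hypothetical parameters.
    """
    h, w = feature_map.shape
    spectrum = np.fft.fftshift(np.fft.fft2(feature_map))

    # Circular low-frequency mask around the spectrum centre.
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius <= cutoff * min(h, w)

    spectrum[mask] *= factor
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

fmap = np.random.default_rng(0).normal(size=(16, 16))
out = amplify_low_freq(fmap)
print(out.shape)
```

Because only the band near the spectrum centre is scaled, coarse structure is boosted while the high-frequency content (which the team found produces noise or fragmented color patterns when amplified) is left untouched.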
The research team quantitatively proved, using various metrics, that their developed algorithm can generate images that are more novel than those from existing models, without significantly compromising utility.
In particular, they confirmed an increase in image diversity by mitigating the mode collapse problem that occurs in the SDXL-Turbo model, which was developed to significantly improve the image generation speed of the Stable Diffusion XL (SDXL) model. Furthermore, user studies showed that human evaluation also confirmed a significant improvement in novelty relative to utility compared to existing methods.
Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, "This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature map manipulation."
They added, "This research makes it easy to generate creative images using only text from existing trained models. It is expected to provide new inspiration in various fields, such as creative product design, and contribute to the practical and useful application of AI models in the creative ecosystem."
< Figure 2. Application examples of the methodology researched by the development team. Various Stable Diffusion models generate novel images compared to existing generations while maintaining the meaning of the generated object. >
This research, co-authored by Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
※ Paper title: Enhancing Creative Generation on Stable Diffusion-based Models
※ DOI: https://doi.org/10.48550/arXiv.2503.23538
This research was supported by the KAIST-NAVER Ultra-creative AI Research Center, the Innovation Growth Engine Project (Explainable AI), the AI Research Hub Project, and a project on developing flexible, evolving AI technology in line with increasingly strengthened ethical policies, all funded by the Ministry of Science and ICT through the Institute of Information & Communications Technology Planning & Evaluation (IITP). It also received support from the KAIST AI Graduate School Program and was carried out at the KAIST Future Defense AI Specialized Research Center with support from the Defense Acquisition Program Administration and the Agency for Defense Development.
KAIST Introduces ‘Virtual Teaching Assistant’ That can Answer Even in the Middle of the Night – Successful First Deployment in Classroom
- Research teams led by Prof. Yoonjae Choi (Kim Jaechul Graduate School of AI) and Prof. Hwajung Hong (Department of Industrial Design) at KAIST developed a Virtual Teaching Assistant (VTA) to support learning and class operations for a course with 477 students.
- The VTA responds 24/7 to students’ questions related to theory and practice by referencing lecture slides, coding assignments, and lecture videos.
- The system’s source code has been released to support future development of personalized learning support systems and their application in educational settings.
< Photo 1. (From left) PhD candidate Sunjun Kweon, Master's candidate Sooyohn Nam, PhD candidate Hyunseung Lim, Professor Hwajung Hong, Professor Yoonjae Choi >
“At first, I didn’t have high expectations for the Virtual Teaching Assistant (VTA), but it turned out to be extremely helpful—especially when I had sudden questions late at night, I could get immediate answers,” said Jiwon Yang, a Ph.D. student at KAIST. “I was also able to ask questions I would’ve hesitated to bring up with a human TA, which led me to ask even more and ultimately improved my understanding of the course.”
KAIST (President Kwang Hyung Lee) announced on June 5th that a joint research team led by Prof. Yoonjae Choi of the Kim Jaechul Graduate School of AI and Prof. Hwajung Hong of the Department of Industrial Design has successfully developed and deployed a Virtual Teaching Assistant (VTA) that provides personalized feedback to individual students even in large-scale classes.
This study marks one of the first large-scale, real-world deployments in Korea, where the VTA was introduced in the “Programming for Artificial Intelligence” course at the KAIST Kim Jaechul Graduate School of AI, taken by 477 master’s and Ph.D. students during the Fall 2024 semester, to evaluate its effectiveness and practical applicability in an actual educational setting.
The AI teaching assistant developed in this study is a course-specialized agent, distinct from general-purpose tools like ChatGPT or conventional chatbots. The research team implemented a Retrieval-Augmented Generation (RAG) architecture, which automatically vectorizes a large volume of course materials—including lecture slides, coding assignments, and video lectures—and uses them as the basis for answering students’ questions.
< Photo 2. Teaching Assistant demonstrating to the student how the Virtual Teaching Assistant works>
When a student asks a question, the system searches for the most relevant course materials in real time based on the context of the query, and then generates a response. This process is not merely a simple call to a large language model (LLM), but rather a material-grounded question answering system tailored to the course content—ensuring both high reliability and accuracy in learning support.
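The retrieve-then-generate loop can be sketched as follows. This is a toy illustration with a bag-of-words "embedding" and hypothetical snippets; the actual VTA indexes course materials in a vector database and uses an LLM to generate the grounded answer.

```python
import math
import re
from collections import Counter

# Toy course-material store (hypothetical snippets standing in for lecture
# slides, coding assignments, and video transcripts).
materials = [
    "Backpropagation computes gradients layer by layer using the chain rule.",
    "A tensor in PyTorch is a multi-dimensional array supporting autograd.",
    "The assignment requires implementing a two-layer perceptron.",
]

def embed(text):
    """Bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def retrieve(question, k=1):
    """Rank materials by similarity to the question; keep the top k."""
    q = embed(question)
    return sorted(materials, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

def answer(question):
    context = retrieve(question)[0]
    # In the real system an LLM generates the answer grounded in `context`;
    # here we simply return the retrieved snippet as the grounding.
    return f"Based on the course materials: {context}"

print(answer("How does backpropagation compute gradients?"))
```

Grounding each answer in the retrieved snippet, rather than in the language model's free-form generation alone, is what gives the material-grounded design its reliability.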
Sunjun Kweon, the first author of the study and head teaching assistant for the course, explained, “Previously, TAs were overwhelmed with repetitive and basic questions—such as concepts already covered in class or simple definitions—which made it difficult to focus on more meaningful inquiries.” He added, “After introducing the VTA, students began to reduce repeated questions and focus on more essential ones. As a result, the burden on TAs was significantly reduced, allowing us to concentrate on providing more advanced learning support.”
In fact, compared to the previous year’s course, the number of questions that required direct responses from human TAs decreased by approximately 40%.
< Photo 3. A student working with VTA. >
The VTA, which was operated over a 14-week period, was actively used by more than half of the enrolled students, with a total of 3,869 Q&A interactions recorded. Notably, students without a background in AI or with limited prior knowledge tended to use the VTA more frequently, indicating that the system provided practical support as a learning aid, especially for those who needed it most.
The analysis also showed that students tended to ask the VTA more frequently about theoretical concepts than they did with human TAs. This suggests that the AI teaching assistant created an environment where students felt free to ask questions without fear of judgment or discomfort, thereby encouraging more active engagement in the learning process.
According to surveys conducted before, during, and after the course, students reported increased trust, response relevance, and comfort with the VTA over time. In particular, students who had previously hesitated to ask human TAs questions showed higher levels of satisfaction when interacting with the AI teaching assistant.
< Figure 1. Internal structure of the AI Teaching Assistant (VTA) applied in this course. It follows a Retrieval-Augmented Generation (RAG) structure that builds a vector database from course materials (PDFs, recorded lectures, coding practice materials, etc.), searches for relevant documents based on student questions and conversation history, and then generates responses based on them. >
Professor Yoonjae Choi, the lead instructor of the course and principal investigator of the study, stated, “The significance of this research lies in demonstrating that AI technology can provide practical support to both students and instructors. We hope to see this technology expanded to a wider range of courses in the future.”
The research team has released the system’s source code on GitHub, enabling other educational institutions and researchers to develop their own customized learning support systems and apply them in real-world classroom settings.
< Figure 2. Initial screen of the AI Teaching Assistant (VTA) introduced in the "Programming for AI" course. It asks for a student ID along with brief guidelines, a mechanism that restricts use to enrolled students and blocks indiscriminate external access. >
The related paper, titled “A Large-Scale Real-World Evaluation of an LLM-Based Virtual Teaching Assistant,” was accepted on May 9, 2025, to the Industry Track of ACL 2025, one of the most prestigious international conferences in the field of Natural Language Processing (NLP), recognizing the excellence of the research.
< Figure 3. Example conversation with the AI Teaching Assistant (VTA). When a student inputs a class-related question, the system internally searches for relevant class materials and then generates an answer based on them. In this way, VTA provides learning support by reflecting class content in context. >
This research was conducted with the support of the KAIST Center for Teaching and Learning Innovation, the National Research Foundation of Korea, and the National IT Industry Promotion Agency.
KAIST Proposes a New Way to Circumvent a Long-standing Frustration in Neural Computing
The human brain begins learning through spontaneous random activity even before it receives sensory information from the external world. The technology developed by the KAIST research team mimics this: by pre-training a brain-inspired artificial neural network on random information, the network learns much faster and more accurately when later exposed to actual data. The result is expected to be a breakthrough for brain-based artificial intelligence and neuromorphic computing.
KAIST (President Kwang Hyung Lee) announced on the 16th of December that Professor Se-Bum Paik’s research team in the Department of Brain and Cognitive Sciences solved the weight transport problem*, a long-standing challenge in neural network learning, and through this, explained the principles that enable resource-efficient learning in biological brain neural networks.
*Weight transport problem: This is the biggest obstacle to the development of artificial intelligence that mimics the biological brain. It is the fundamental reason why large-scale memory and computational work are required in the learning of general artificial neural networks, unlike biological brains.
Over the past several decades, the development of artificial intelligence has been based on error backpropagation learning proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation learning was thought to be impossible in biological brains because it requires the unrealistic assumption that individual neurons must know all the connection information across multiple layers in order to calculate the error signal for learning.
< Figure 1. Illustration depicting the method of random noise training and its effects >
This difficult problem, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, shortly after Hinton proposed error backpropagation learning in 1986. Since then, it has been regarded as the reason the operating principles of natural and artificial neural networks are fundamentally different.
At the borderline of artificial intelligence and neuroscience, researchers including Hinton have continued to attempt to create biologically plausible models that can implement the learning principles of the brain by solving the weight transport problem.
In 2016, a joint research team from Oxford University and DeepMind in the UK first showed that error backpropagation learning is possible without weight transport, drawing attention from the academic community. However, this biologically plausible form of error backpropagation was inefficient, with slow learning and low accuracy, making it difficult to apply in practice.
The KAIST research team noted that the biological brain begins learning through internal, spontaneous random neural activity even before receiving any external sensory input. To mimic this, the team pre-trained a biologically plausible neural network without weight transport on meaningless random information (random noise).
As a result, they showed that such pre-training creates the symmetry between the network’s forward and backward connections that error backpropagation learning requires. In other words, learning without weight transport becomes possible through random pre-training.
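The setup can be sketched with a minimal feedback-alignment network. This is an illustrative simplification, not the study's implementation: the error is routed backward through a fixed random matrix instead of the transposed forward weights (so no weight transport occurs), the network is first pre-trained on pure noise, and only then learns an actual task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer linear network trained with feedback alignment: the error is fed
# back through a FIXED random matrix B instead of W2.T, so no weight
# transport is required. All sizes and learning rates are hypothetical.
n_in, n_hid, n_out, lr = 8, 16, 4, 0.01
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback path

def fa_step(x, y):
    global W1, W2
    h = W1 @ x
    e = W2 @ h - y
    W2 -= lr * np.outer(e, h)        # local update; exact gradient for W2
    W1 -= lr * np.outer(B @ e, x)    # error routed through B, not W2.T
    return float(np.mean(e ** 2))

# Phase 1: pre-train on meaningless random noise (random inputs AND targets).
for _ in range(500):
    fa_step(rng.normal(size=n_in), rng.normal(size=n_out))

# Phase 2: learn an actual task (a fixed linear mapping) after pre-training.
A = rng.normal(size=(n_out, n_in))
data = [rng.normal(size=n_in) for _ in range(20)]
losses = [np.mean([fa_step(x, A @ x) for x in data]) for _ in range(100)]
print(round(float(losses[0]), 3), round(float(losses[-1]), 3))
```

Even though the backward path is a frozen random matrix, the task loss falls during phase 2; the study's finding is that noise pre-training of this kind aligns the forward and backward connections and thereby speeds up later learning on real data.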
< Figure 2. Illustration depicting the meta-learning effect of random noise training >
The research team revealed that learning random information before learning actual data has the property of meta-learning, that is, ‘learning how to learn.’ Neural networks that pre-learned random noise were shown to learn much faster and more accurately when exposed to actual data, achieving high learning efficiency without weight transport.
< Figure 3. Illustration depicting research on understanding the brain's operating principles through artificial neural networks >
Professor Se-Bum Paik said, “It breaks the conventional understanding of existing machine learning that only data learning is important, and provides a new perspective that focuses on the neuroscience principles of creating appropriate conditions before learning,” and added, “It is significant in that it solves important problems in artificial neural network learning through clues from developmental neuroscience, and at the same time provides insight into the brain’s learning principles through artificial neural network models.”
This study, in which Jeonghwan Cheon, a Master’s candidate in the KAIST Department of Brain and Cognitive Sciences, participated as the first author and Professor Sang Wan Lee of the same department as a co-author, was presented at the 38th Conference on Neural Information Processing Systems (NeurIPS), one of the world’s top artificial intelligence conferences, on December 14th in Vancouver, Canada. (Paper title: Pretraining with random noise for fast and robust learning without weight transport)
This study was conducted with the support of the National Research Foundation of Korea's Basic Research Program in Science and Engineering, the Information and Communications Technology Planning and Evaluation Institute's Talent Development Program, and the KAIST Singularity Professor Program.
Professor Joseph J. Lim of KAIST receives the Best System Paper Award from RSS 2023, First in Korea
- Professor Joseph J. Lim from the Kim Jaechul Graduate School of AI at KAIST and his team receive an award for the most outstanding paper in the implementation of robot systems.
- Professor Lim works on AI-based perception, reasoning, and sequential decision-making to develop systems capable of intelligent decision-making, including robot learning
< Photo 1. RSS2023 Best System Paper Award Presentation >
The team of Professor Joseph J. Lim from the Kim Jaechul Graduate School of AI at KAIST has been honored with the 'Best System Paper Award' at "Robotics: Science and Systems (RSS) 2023".
The RSS conference is globally recognized as a leading event for showcasing the latest discoveries and advancements in the field of robotics. It is a venue where the greatest minds in robotics engineering and robot learning come together to share their research breakthroughs. The RSS Best System Paper Award is a prestigious honor granted to a paper that excels in presenting real-world robot system implementation and experimental results.
< Photo 2. Professor Joseph J. Lim of Kim Jaechul Graduate School of AI at KAIST >
The team led by Professor Lim, including two Master’s students and an alumnus (soon to be appointed at Yonsei University), received the prestigious RSS Best System Paper Award, the first time the award has gone to a Korean researcher or a Korean institution.
< Photo 3. Certificate of the Best System Paper Award presented at RSS 2023 >
This award is especially meaningful considering the broader challenges in the field. Although recent progress in artificial intelligence and deep learning algorithms has resulted in numerous breakthroughs in robotics, most of these achievements have been confined to relatively simple, short-horizon tasks such as walking or pick-and-place, and are typically demonstrated in simulated environments rather than on complex, long-horizon real-world tasks such as factory operations or household chores. These limitations stem primarily from the considerable challenge of acquiring the data required to develop and validate learning-based AI techniques on such real-world tasks.
In light of these challenges, this paper introduced a benchmark that employs 3D printing to simplify the reproduction of furniture assembly tasks in real-world environments. Furthermore, it proposed a standard benchmark for the development and comparison of algorithms for complex and long-horizon tasks, supported by teleoperation data. Ultimately, the paper suggests a new research direction of addressing complex and long-horizon tasks and encourages diverse advancements in research by facilitating reproducible experiments in real-world environments.
Professor Lim underscored the growing potential for integrating robots into daily life, driven by an aging population and an increase in single-person households. As robots become part of everyday life, testing their performance in real-world scenarios becomes increasingly crucial. He hoped this research would serve as a cornerstone for future studies in this field.
The Master's students, Minho Heo and Doohyun Lee, from the Kim Jaechul Graduate School of AI at KAIST, also shared their aspirations to become global researchers in the domain of robot learning. Meanwhile, the alumnus of Professor Lim's research lab, Dr. Youngwoon Lee, is set to be appointed to the Graduate School of AI at Yonsei University and will continue pursuing research in robot learning.
Paper title: FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation (Robotics: Science and Systems 2023)
< Image. Conceptual Summary of the 3D Printing Technology >