KAIST NEWS
Answering Calls for Help Even at Dawn: An AI TA Makes a Successful Debut at KAIST
- The research team of Professor Yoonjae Choi of the Kim Jaechul Graduate School of AI and Professor Hwajung Hong of the Department of Industrial Design developed an AI teaching assistant (VTA) that supports course operation and learning in a lecture with 477 students
- The VTA answers students' questions on theory and practice 24 hours a day by drawing on lecture slides, coding practice materials, and lecture videos
- The system's source code has been released to support the development of customized learning-assistance systems and their adoption in other educational settings
< Photo 1. (From left) PhD candidate Sunjun Kweon, Master's candidate Sooyohn Nam, PhD candidate Hyunseung Lim, Professor Hwajung Hong, Professor Yoonjae Choi >
"At first, I didn't have high expectations for the AI assistant (VTA), but it was very useful because I could get an immediate answer whenever a question about a concept suddenly came to mind late at night," said KAIST Ph.D. student Ji-won Yang. "In particular, I could ask about things I would have hesitated to bring to a human teaching assistant, and as I asked more questions, my understanding of the class deepened."
KAIST (President Kwang-Hyung Lee) announced on June 5th that a joint research team led by Professor Yoonjae Choi of the Kim Jaechul Graduate School of AI and Professor Hwajung Hong of the Department of Industrial Design had developed a Virtual Teaching Assistant (VTA) that can give each student personalized feedback even in large lectures, and had successfully deployed it in an actual course.
This is the first case in Korea in which a VTA was introduced into a regular course: 'Programming for Artificial Intelligence' at the Kim Jaechul Graduate School of AI, taken by 477 master's and doctoral students in the fall semester of 2024, where the system's effectiveness and practicality were verified at scale in a real educational setting.
Unlike general-purpose ChatGPT or conventional chatbots, the AI teaching assistant developed in this study is an agent specialized for the course. The research team automatically vectorized a large volume of class materials, such as lecture slides, coding practice materials, and lecture videos, and implemented a Retrieval-Augmented Generation (RAG) structure that answers questions based on these materials.
< Photo 2. Students demonstrating how the Virtual Teaching Assistant works >
When a student asks a question, the system retrieves the most relevant class materials in real time based on the context of the question and generates a response. Rather than simply calling a large language model (LLM), the question answering is grounded in course data, which secures both the reliability and the accuracy of the learning support (a minimal sketch of this retrieve-then-generate loop is shown below).
The first author of the study and head teaching assistant for the course, Ph.D. candidate Sunjun Kweon, said, "In the past there were many repetitive, basic questions, such as content already explained in class or simple definitions of concepts, which made it hard for teaching assistants to focus on the key questions." He added, "After the introduction of the VTA, students asked fewer repetitive questions and concentrated on essential ones, so the burden on the teaching assistants dropped noticeably and we could focus on higher-level learning support." In fact, the number of questions the teaching assistants had to answer directly decreased by about 40% compared to the previous year's offering of the course.
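For readers who want a concrete picture, the sketch below outlines the kind of RAG loop the article describes: vectorize course materials once, retrieve the chunks most relevant to the question and conversation history, and prompt an LLM with that context. It is a minimal illustration, not the team's released implementation (which is on GitHub); the `embed()` helper is a toy stand-in for a real embedding model and `call_llm()` is a hypothetical LLM wrapper.

```python
# Minimal sketch of a retrieve-then-generate (RAG) Q&A loop -- illustrative only.
import hashlib
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy stand-in for an embedding model: hashed bag-of-words, unit norm."""
    v = np.zeros(DIM)
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        v[h % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client in practice."""
    return f"[LLM response to a {len(prompt)}-character prompt]"

def build_index(chunks: list[str]) -> np.ndarray:
    # Vectorize course materials (slide text, practice code, transcripts) once.
    return np.stack([embed(c) for c in chunks])

def answer(question: str, history: str, chunks: list[str],
           index: np.ndarray, k: int = 3) -> str:
    # Retrieve the k chunks most similar to the conversation context + question.
    q = embed(history + " " + question)
    top = np.argsort(index @ q)[::-1][:k]        # cosine similarity (unit vectors)
    context = "\n\n".join(chunks[i] for i in top)
    prompt = ("Answer the student's question using only the course material below.\n"
              f"Material:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)

chunks = ["Lecture 3: backpropagation computes gradients layer by layer.",
          "Lab 2: implement a DataLoader and a training loop in PyTorch.",
          "Lecture 7: attention weights are softmax-normalized similarity scores."]
index = build_index(chunks)
print(answer("How are attention weights computed?", "", chunks, index, k=1))
```

Grounding the prompt in retrieved course material, rather than letting the model answer from its general knowledge alone, is what ties the VTA's responses to the class content.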
< Photo 3. A student working with the VTA >
More than half of all students used the VTA during the 14-week semester, and a total of 3,869 question-and-answer exchanges were recorded. Usage was higher among students who were not AI majors or lacked prior knowledge, suggesting that the VTA provided practical help as a learning aid.
The analysis also showed that students tended to ask the VTA about theoretical concepts more often than they asked human teaching assistants. This can be interpreted as the AI assistant providing an environment in which students can ask freely without feeling judged or uncomfortable, thereby encouraging active participation in learning.
Surveys conducted three times, before, during, and after the course, showed that students rated the VTA's reliability, response appropriateness, and comfort of use higher than at the beginning. Students who had previously hesitated to ask human teaching assistants questions reported particularly high satisfaction with their interactions with the AI assistant.
< Figure 1. Internal structure of the AI teaching assistant (VTA) used in the course. It follows a Retrieval-Augmented Generation (RAG) structure that builds a vector database from course materials (PDFs, recorded lectures, coding practice materials, etc.), retrieves relevant documents based on the student's question and conversation history, and generates a response grounded in them. >
Professor Yoonjae Choi, who led the research and taught the course, said, "The significance of this study lies in confirming that AI technology can provide practical help to both students and instructors. We hope the technology will be extended to a wider variety of courses in the future."
By releasing the system's source code on GitHub, the research team is supporting other educational institutions and researchers in building customized learning-assistance systems based on it and applying them in their own classrooms.
< Figure 2. Initial screen of the AI teaching assistant (VTA) introduced in the 'Programming for AI' course. It asks for a student ID along with brief guidelines, a mechanism that ensures only enrolled students can use the system and blocks indiscriminate external access. >
The related paper was accepted on May 9, 2025 to the ACL 2025 Industry Track, one of the most prestigious international conferences in natural language processing (NLP), in recognition of the research's excellence.
※ Paper title: A Large-Scale Real-World Evaluation of an LLM-Based Virtual Teaching Assistant
< Figure 3. Example conversation with the AI teaching assistant (VTA). When a student enters a course-related question, the system internally retrieves relevant class materials and generates an answer based on them, providing learning support that reflects the course content in context. >
This study was conducted with the support of the KAIST Center for Teaching and Learning Innovation, the National Research Foundation of Korea, and the National IT Industry Promotion Agency.
2025.06.05
KAIST Proposes a New Way to Circumvent a Long-time Frustration in Neural Computing
The human brain begins learning through spontaneous random activity even before it receives sensory information from the external world. The technology developed by the KAIST research team pre-trains a brain-mimicking artificial neural network on random information, enabling much faster and more accurate learning once the network is exposed to real data, and is expected to be a breakthrough for brain-inspired artificial intelligence and neuromorphic computing.
KAIST (President Kwang-Hyung Lee) announced on December 16th that the research team of Professor Se-Bum Paik in the Department of Brain and Cognitive Sciences has solved the weight transport problem*, a long-standing challenge in neural network learning, and in doing so explained the principles that enable resource-efficient learning in biological neural networks.
*Weight transport problem: the biggest obstacle to developing artificial intelligence that mimics the biological brain, and the fundamental reason why, unlike biological brains, ordinary artificial neural networks require large amounts of memory and computation to learn.
Over the past several decades, the development of artificial intelligence has rested on error backpropagation, proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation was thought to be impossible in biological brains because it requires the unrealistic assumption that each neuron knows all of the connection weights across multiple layers in order to compute its error signal for learning.
< Figure 1. Illustration depicting the method of random noise training and its effects >
This difficult problem, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, after Hinton proposed error backpropagation in 1986. It has since been regarded as the reason why the operating principles of natural and artificial neural networks would forever remain fundamentally different.
At the boundary of artificial intelligence and neuroscience, researchers including Hinton have continued to seek biologically plausible models that implement the learning principles of the brain by solving the weight transport problem. In 2016, a joint research team from the University of Oxford and DeepMind first proposed that error backpropagation learning is possible without weight transport, drawing attention from the academic world. However, biologically plausible error backpropagation without weight transport was inefficient, with slow learning and low accuracy, making it difficult to apply in practice.
The KAIST research team noted that the biological brain begins learning through internal, spontaneous random neural activity even before it has any external sensory experience. To mimic this, the team pre-trained a biologically plausible neural network without weight transport on meaningless random information (random noise). They showed that this creates the symmetry between the forward and backward connections of the network that is an essential condition for error backpropagation learning. In other words, learning without weight transport becomes possible through random pre-training.
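To make the mechanism concrete, the sketch below shows a two-layer network trained with a fixed random feedback matrix in place of the transposed forward weights (feedback alignment, the kind of no-weight-transport scheme in the Oxford/DeepMind line of work mentioned above) while being fed nothing but random noise, and measures how the forward weights align with the feedback weights. It is an illustrative sketch with assumed dimensions and learning rate, not the authors' code.

```python
# Illustrative sketch: feedback-alignment updates (fixed random feedback matrix B
# instead of W2.T) during pre-training on pure random noise, with a measure of
# forward/feedback weight alignment. Sizes and learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 20, 64, 5, 0.01

W1 = rng.standard_normal((n_hid, n_in)) * 0.1   # forward weights, layer 1
W2 = rng.standard_normal((n_out, n_hid)) * 0.1  # forward weights, layer 2
B  = rng.standard_normal((n_hid, n_out)) * 0.1  # fixed random feedback weights

def step(x, t):
    """One update: the hidden error is routed through B, never through W2.T."""
    global W1, W2
    h = np.maximum(W1 @ x, 0.0)        # ReLU hidden layer
    y = W2 @ h
    e = y - t                          # output error
    dh = (B @ e) * (h > 0)             # feedback alignment: no weight transport
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

def alignment():
    """Cosine similarity between forward weights W2 and feedback weights B.T."""
    a, b = W2.ravel(), B.T.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("alignment before noise pre-training:", round(alignment(), 3))
for _ in range(5000):                  # 'meaningless' random inputs and targets
    step(rng.standard_normal(n_in), rng.standard_normal(n_out))
print("alignment after noise pre-training: ", round(alignment(), 3))
```

In this setting, the team's claim is that the random-noise phase drives the forward and feedback weights toward the symmetry that error backpropagation normally has to assume, setting the stage for the faster, more accurate learning on real data described next.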
< Figure 2. Illustration depicting the meta-learning effect of random noise training >
The research team further showed that learning random information before learning real data has the character of meta-learning, that is, 'learning how to learn.' Networks pre-trained on random noise learned much faster and more accurately when exposed to real data, achieving high learning efficiency without weight transport.
< Figure 3. Illustration depicting research on understanding the brain's operating principles through artificial neural networks >
Professor Se-Bum Paik said, "This work breaks with the conventional view in machine learning that only data-driven learning matters, and offers a new perspective that focuses on the neuroscientific principle of creating the right conditions before learning. It is significant in that it solves an important problem in artificial neural network learning using clues from developmental neuroscience, and at the same time provides insight into the brain's learning principles through artificial neural network models."
The study, with Jeonghwan Cheon, a master's candidate in the KAIST Department of Brain and Cognitive Sciences, as first author and Professor Sang Wan Lee of the same department as a co-author, was presented on December 14th at the 38th Conference on Neural Information Processing Systems (NeurIPS), the world's top artificial intelligence conference, in Vancouver, Canada. (Paper title: Pretraining with random noise for fast and robust learning without weight transport)
This study was conducted with the support of the National Research Foundation of Korea's Basic Research Program in Science and Engineering, the Institute of Information & Communications Technology Planning & Evaluation's Talent Development Program, and the KAIST Singularity Professor Program.
2024.12.16
Professor Joseph J. Lim of KAIST receives the Best System Paper Award from RSS 2023, First in Korea
- Professor Joseph J. Lim of the Kim Jaechul Graduate School of AI at KAIST and his team received the award for the most outstanding paper on the implementation of robot systems
- Professor Lim works on AI-based perception, reasoning, and sequential decision-making to develop systems capable of intelligent decision-making, including robot learning
< Photo 1. RSS 2023 Best System Paper Award presentation >
The team of Professor Joseph J. Lim from the Kim Jaechul Graduate School of AI at KAIST has been honored with the Best System Paper Award at Robotics: Science and Systems (RSS) 2023. RSS is globally recognized as a leading venue for the latest discoveries and advances in robotics, where leading researchers in robotics engineering and robot learning share their breakthroughs. The RSS Best System Paper Award is a prestigious honor given to the paper that best presents a real-world robot system implementation and its experimental results.
< Photo 2. Professor Joseph J. Lim of the Kim Jaechul Graduate School of AI at KAIST >
The team led by Professor Lim, including two master's students and an alumnus (soon to be appointed at Yonsei University), received the award, the first time it has gone to a Korean researcher and a Korean institution.
< Photo 3. Certificate of the Best System Paper Award presented at RSS 2023 >
The award is especially meaningful in light of broader challenges in the field. Although recent progress in artificial intelligence and deep learning has produced numerous breakthroughs in robotics, most of these achievements have been confined to relatively simple, short-horizon tasks such as walking or pick-and-place, and are typically demonstrated in simulation rather than on complex, long-horizon real-world tasks such as factory operations or household chores. These limitations stem largely from the difficulty of acquiring the data needed to develop and validate learning-based AI techniques on complex real-world tasks.
To address this, the paper introduced a benchmark that uses 3D printing to make furniture-assembly tasks easy to reproduce in real-world environments, and proposed it, together with teleoperation data, as a standard benchmark for developing and comparing algorithms on complex, long-horizon tasks. The paper thus points to a new research direction of tackling such tasks and encourages broader progress by making real-world experiments reproducible.
Professor Lim underscored the growing potential for integrating robots into daily life, driven by an aging population and an increase in single-person households. As robots become part of everyday life, testing their performance in real-world scenarios becomes increasingly important, and he hoped this research would serve as a cornerstone for future studies in the field. The master's students Minho Heo and Doohyun Lee of the Kim Jaechul Graduate School of AI at KAIST shared their aspiration to become global researchers in robot learning. Meanwhile, Dr. Youngwoon Lee, an alumnus of Professor Lim's research lab, is set to join the Graduate School of AI at Yonsei University, where he will continue his research on robot learning.
Paper title: FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation. Robotics: Science and Systems.
< Image. Conceptual summary of the 3D printing technology >
2023.07.31