Robot Valley Project: Activation of the Korean-Style Robot and AI Startup Ecosystem Fully Underway
< From left: Top Excellence Award winner Robolight (Pre-startup Founder Han-seol Choi), Top Excellence Award winner Coils (CEO Seong-ryeol Heo), Professor Jung Kim of KAIST, Grand Prize winner Noman (CEO Jung-wook Moon), Professor Kyoungchul Kong of KAIST, CEO Dae-hee Park of Daejeon Creative Economy Innovation Center, Excellence Award winner Gigaflops (CEO Min-tae Kim), Excellence Award winner BLUE APEX (Pre-startup Founder Na-hyeon Kwon) >
KAIST announced on December 10th that KAIST Holdings (CEO Hyeonmin Bae), a specialized technology commercialization investment institution, successfully held the '2025 KAIST Hu-Robotics Startup Cup' on the 9th at the main building of Daejeon Startup Park. The event was organized as part of the Robot Valley Project, which aims to discover and foster promising startup teams in the robotics field and to establish a robot scale-up ecosystem based on a shared technology platform.
This competition was conducted as a core program of the Robot Valley Project (Deep-Tech Scale-up Valley Fostering Project), promoted by the Ministry of Science and ICT and supported by Daejeon Metropolitan City. It proceeded from a meet-up day, where participants met KAIST Mechanical Engineering researchers, robotics companies such as Angel Robotics and Twinny, and startup experts such as Bluepoint, through to the final round. Throughout this process, a support system for the scale-up of robot startups was established, linking technology verification, entrepreneurial capacity building, and investment.
KAIST Holdings and the Deep-Tech Valley Project Group (hereinafter referred to as the Project Group) stated that this competition marks the beginning of 'establishing a Korean-style Robot and AI startup ecosystem.' Their goal through the Robot Valley Project is to create a Korean-style robot scale-up ecosystem centered around Daejeon and KAIST, and furthermore, to build a technology circulation structure utilizing verified technology platforms.
KAIST has produced successful scale-up cases in the robotics field, such as Rainbow Robotics and Angel Robotics. However, the recent robotics industry has seen a rapid increase in technological difficulty due to the convergence of mechanical engineering, AI, and control software, creating structural limitations for early-stage founders to challenge alone.
To solve this, the Project Group proposed the 'Scale-up Valley Construction Strategy,' which opens up the verified technologies of established senior companies to junior founders. This strategy focuses on supporting startups to concentrate on developing market-ready robot services and applications on top of verified technology platforms, rather than consuming excessive time on developing basic hardware like motors and controllers.
The Angel Robotics technology platform, presented as the core underlying technology of this strategy, consists of actuators, control modules, and core software. KAIST plans to gradually open up these foundational technologies for use by early-stage startup teams.
The Project Group emphasized that enabling startup teams to utilize such technology platforms from the initial stage is the core infrastructure for accelerating the Korean-style robot startup ecosystem.
A total of 21 teams participated in this competition, including pre-startup founders (Track A) and early-stage startups established within 3 years (Track B), all possessing human-centered robotics technology and convergence business models.
After fierce preliminaries, 8 teams advanced to the final round, and a total of 5 teams received awards: one Grand Prize winner, two Top Excellence Award winners, and two Excellence Award winners.
The Grand Prize was awarded to 'Noman' for proposing an integrated system for a strawberry farm work robot and a rotating vertical cultivation module.
The Top Excellence Award went to 'Robolight' and 'Coils.'
The Excellence Award was awarded to BLUE APEX and Gigaflops.
Professor Jung Kim, Head of the KAIST Mechanical Engineering Department and General Manager of the Robot Valley Project, said, "This competition has become the starting point for discovering future robot unicorns. For the next three years, we will continue to provide practical support for the growth of robot startups, and KAIST will play a leading role in building and expanding the deep-tech robot ecosystem centered in Daejeon."
< Group Photo of Award Winners >
Meanwhile, this competition was jointly hosted and organized by the Ministry of Science and ICT, Daejeon Metropolitan City, and the Research and Business Development Special Zone Foundation, as well as startup support organizations including KAIST, KAIST Holdings, Daejeon Technopark, and Daejeon Creative Economy Innovation Center.
KAIST Develops Multimodal AI That Understands Text and Images Like Humans
<(From Left) M.S candidate Soyoung Choi, Ph.D candidate Seong-Hyeon Hwang, Professor Steven Euijong Whang>
Just as human eyes tend to focus on pictures before reading accompanying text, multimodal artificial intelligence (AI)—which processes multiple types of sensory data at once—also tends to depend more heavily on certain types of data. KAIST researchers have now developed a new multimodal AI training technology that enables models to recognize both text and images evenly, enabling far more accurate predictions.
KAIST (President Kwang Hyung Lee) announced on the 14th that a research team led by Professor Steven Euijong Whang from the School of Electrical Engineering has developed a novel data augmentation method that enables multimodal AI systems—those that must process multiple data types simultaneously—to make balanced use of all input data.
Multimodal AI combines various forms of information, such as text and video, to make judgments. However, AI models often show a tendency to rely excessively on one particular type of data, resulting in degraded prediction performance.
To solve this problem, the research team deliberately trained AI models using mismatched or incongruent data pairs. By doing so, the model learned to rely on all modalities—text, images, and even audio—in a balanced way, regardless of context.
The team further improved performance stability by incorporating a training strategy that compensates for low-quality data while emphasizing more challenging examples. The method is not tied to any specific model architecture and can be easily applied to various data types, making it highly scalable and practical.
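As a rough illustration of the misalignment idea described above, the sketch below deliberately breaks the text-image pairing for a fraction of a batch, so that a model trained on the combined batch cannot lean on one modality alone. This is a minimal hypothetical sketch: `misalignment_augment`, its arguments, and the cyclic-shift strategy are assumptions for illustration, not the authors' implementation.

```python
import random

def misalignment_augment(texts, images, labels, frac=0.5, seed=0):
    """Deliberately mismatch a fraction of text-image pairs (hypothetical sketch).

    A subset of examples keeps its original pairing; the rest have their
    image replaced with another selected example's image via a cyclic shift,
    producing incongruent pairs for balanced multimodal training.
    """
    rng = random.Random(seed)
    n = len(texts)
    swap = rng.sample(range(n), int(n * frac))  # indices to misalign
    perm = swap[1:] + swap[:1]                  # cyclic shift of their images
    new_images = list(images)
    for src, dst in zip(swap, perm):
        new_images[src] = images[dst]
    swapped = set(swap)
    aligned = [i not in swapped for i in range(n)]  # flags usable by the loss
    return texts, new_images, labels, aligned
```

A training loop could then weight or label examples by the `aligned` flag, encouraging the model to consult every modality rather than trust the dominant one.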
<Model Prediction Changes with a Data-Centric Multimodal AI Training Framework>
Professor Steven Euijong Whang explained, “Improving AI performance is not just about changing model architectures or algorithms—it’s much more important how we design and use the data for training.” He continued, “This research demonstrates that designing and refining the data itself can be an effective approach to help multimodal AI utilize information more evenly, without becoming biased toward a specific modality such as images or text.”
The study was co-led by doctoral student Seong-Hyeon Hwang and master’s student Soyoung Choi, with Professor Steven Euijong Whang serving as the corresponding author. The results will be presented at NeurIPS 2025 (Conference on Neural Information Processing Systems), the world’s premier conference in the field of AI, which will be held this December in San Diego, USA, and Mexico City, Mexico.
※ Paper title: “MIDAS: Misalignment-based Data Augmentation Strategy for Imbalanced Multimodal Learning,” Original paper: https://arxiv.org/pdf/2509.25831
The research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the projects “Robust, Fair, and Scalable Data-Centric Continual Learning” (RS-2022-II220157) and “AI Technology for Non-Invasive Near-Infrared-Based Diagnosis and Treatment of Brain Disorders” (RS-2024-00444862).
Next-Generation Humanoid Robot Capable of Moonwalk Developed
<From the middle of the back row, clockwise: Professor Hae-Won Park, Dongyun Kang (Ph.D. candidate), Hajun Kim (Ph.D. candidate), JongHun Choe (Ph.D. candidate), Min-su Kim (Research Professor)>
A KAIST research team's independently developed humanoid robot boasts world-class locomotion performance, reaching speeds of 12 km/h, along with excellent stability, maintaining balance even with its eyes closed or on rough terrain. It can also perform complex human-like movements such as the duck walk and moonwalk, drawing attention as a next-generation robot platform for use in real industrial settings.

Professor Hae-Won Park's research team at the Humanoid Robot Research Center (HuboLab) of KAIST's Department of Mechanical Engineering announced on the 19th that it has independently developed the lower-body platform for a next-generation humanoid robot. The humanoid is designed for human-centered environments, targeting a height (165 cm) and weight (75 kg) similar to a human's.

The significance of the new lower-body platform is considerable: the research team directly designed and manufactured all core components, including the motors, reducers, and motor drivers. By securing with their own technology the key components that determine a humanoid robot's performance, they achieved technological independence in hardware. The team also trained an AI controller with a self-developed reinforcement learning algorithm in a virtual environment and successfully applied it to the real world by overcoming the sim-to-real gap, thereby securing technological independence in algorithms as well.
<Developed 'KAIST Humanoid' Lower Body Platform>
Currently, the developed humanoid can run at a maximum speed of 3.25 m/s (approximately 12 km/h) on flat ground and can climb steps over 30 cm high (a performance indicator of how high a curb, stair, or obstacle it can clear). The team plans to further enhance its performance, targeting a running speed of 4.0 m/s (approximately 14 km/h), ladder climbing, and a step-climbing capability of over 40 cm.
<‘KAIST Humanoid’ Lower Body Platform running>
Professor Hae-Won Park's team is collaborating with Professor Jae-min Hwangbo's team (arms) from KAIST's Department of Mechanical Engineering, Professor Sangbae Kim's team (hands) from MIT, Professor Hyun Myung's team (localization and navigation) from KAIST's Department of Electrical Engineering, and Professor Jae-hwan Lim's team (vision-based manipulation intelligence) from KAIST's Kim Jaechul AI Graduate School to implement a complete humanoid hardware with an upper body and AI. Through this, they are developing technology to enable the robot to perform complex tasks such as carrying heavy objects, operating valves, cranks, and door handles, and simultaneously walking and manipulating when pushing carts or climbing ladders. The ultimate goal is to secure versatile physical abilities to respond to the complex demands of actual industrial sites.
<An Intermediate Result: A Single-Leg Hopping Robot Has Been Developed>
During this process, the research team also developed a single-leg 'hopping' robot. The robot demonstrated high-level movements, maintaining balance on one leg while hopping repeatedly, and even performed extreme maneuvers such as a 360-degree somersault. Because no biological reference motion was available for imitation learning, the team instead achieved these results by implementing an AI controller through reinforcement learning that optimizes the center-of-mass velocity while reducing landing impact.

Professor Hae-Won Park stated, "This achievement is an important milestone that secures independence in both the hardware and software aspects of humanoid research by realizing the core components and AI controllers with our own technology." He added, "We will develop it into a complete humanoid, including an upper body, to meet the complex demands of real industrial sites and, furthermore, foster it as a next-generation robot that can work alongside humans."
<Key Components of the Directly Developed Robot: (a) Reducer, (b) Motor Stator, (c) Motor Driver, (d) EtherCAT-CAN Converter Board>
The results of this research will be presented at two venues. JongHun Choe, a Ph.D. candidate in Mechanical Engineering and first author, will present the hardware development at 'Humanoids 2025,' an international conference specializing in humanoid robots, on October 1st. Ph.D. candidates Dongyun Kang, Gijeong Kim, and JongHun Choe from Mechanical Engineering will present the AI algorithm results as co-first authors at 'CoRL 2025,' the top conference in robot intelligence, on September 29th.

※ Paper titles and papers:
"Learning Impact-Rich Rotational Maneuvers via Centroidal Velocity Rewards and Sim-to-Real Techniques: A One-Leg Hopper Flip Case Study," Conference on Robot Learning (CoRL), Seoul, Korea, 2025. Dongyun Kang, Gijeong Kim, JongHun Choe, Hajun Kim, Hae-Won Park. arXiv version: https://arxiv.org/abs/2505.12222
"Design of a 3-DOF Hopping Robot with an Optimized Gearbox: An Intermediate Platform Toward Bipedal Robots," IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, 2025. JongHun Choe, Gijeong Kim, Hajun Kim, Dongyun Kang, Min-Su Kim, Hae-Won Park. arXiv version: https://arxiv.org/abs/2505.12231

This research was supported by research funding from the Ministry of Trade, Industry and Energy and the Korea Institute of Industrial Technology Planning and Evaluation (KEIT) (RS-2024-00427719).

※ Related video: https://youtu.be/ytWO7lldN4c
A Boom in Robot Startups: Global Ventures from the Legacy of HUBO's Creator
KAIST announced on September 16 that it is drawing attention as a "cradle of Korean robotics," as robot startups founded on campus have recently succeeded in attracting investment.
Rainbow Robotics, founded by Professor Jun-Ho Oh of the Department of Mechanical Engineering, set a new milestone in the robotics industry by successfully going public with its world-class humanoid technology. Following this, Angel Robotics, a company specializing in rehabilitation and medical robots founded by Professor Kyung-chul Kong of the Department of Mechanical Engineering, also went public, making the achievements of KAIST-born robot startups more visible.
Following in their footsteps, a number of other startups are on a rapid growth trajectory after their founding in various technological fields, including quadrupedal, collaborative, and wearable robots, as well as autonomous walking. These include Pureun Robotics (2021, Hyunchul Ham, MS from Mechanical Engineering), Wero Robotics (2021, Yeonbaek Lee, MS from Mechanical Engineering), Raion Robotics (2023, Professor Jaemin Hwangbo, Mechanical Engineering), Triangle Robotics (2023, Jinhyuk Choi, PhD candidate in Computer Science), URobotics (2024, Byungho Yoo, PhD from Electrical Engineering), and Diden Robotics (2024, Junha Kim, PhD from Mechanical Engineering).
In particular, Raion Robotics, founded by Professor Jaemin Hwangbo of the Department of Mechanical Engineering, recently secured a Series A investment of 23 billion KRW from leading domestic investors, including SBVA, Company K Partners, FuturePlay, KDB Capital, IBK, and IBK Venture Capital.
< (Left) Raibo1, (Right) Raibo2 participating in a marathon >
Raion Robotics' flagship product, the quadrupedal robot 'Raibo,' is equipped with reinforcement learning-based AI, enabling stable walking on uneven terrain. It also boasts a distinctive performance with an 8-hour operating time. Recently, it successfully completed a full marathon (42.195 km) alongside a human, proving its durability in real-world conditions and attracting attention from the global robotics industry.
This trend is also evident in URobotics, a startup from Professor Hyun Myung's lab in the Department of Electrical Engineering. URobotics recently secured a 3.5 billion KRW seed investment and was selected for the 1.5 billion KRW Deep Tech TIPS program, accelerating its growth in the field of autonomous walking robots. The company is preparing to apply its technology to various industrial sites, including defense, construction, logistics, and smart cities, by internalizing its control and autonomous walking technologies and applying them to humanoids. The industry is already taking note of its high growth potential from the early stages.
< (Left) URobotics' general-purpose autonomous walking solution being tested on a quadrupedal robot, (Right) Developing core spatial intelligence technology >
< URobotics' autonomous walking solution >
Diden Robotics, a startup from Professor Haewon Park's lab in the Department of Mechanical Engineering, is leading the industrial application and commercialization of walking mobile robot technology. The company's key competitive advantages lie in its hardware design capabilities through the internalization of core components, advanced Physical AI technology based on reinforcement learning, and a special magnetic foot technology. Robots developed with this technology can move freely on vertical steel walls and ceilings to perform high-difficulty tasks like welding and non-destructive testing. Based on this technology, Diden Robotics attracted a 7 billion KRW investment in a Pre-A round and has signed supply contracts with major shipyards, proving its commercial viability.
< (Left) Diden Robotics' mobile robot DIDEN30 for shipbuilding sites (Right) Various work scenarios inside a ship block >
KAIST recently secured 10.5 billion KRW in government funding by participating as the lead institution in the Deep Tech Scale-up Valley project. With this funding, it plans to create a virtuous cycle among companies, technology, and talent in the robotics industry and emerge as a next-generation robotics hub. URobotics and Angel Robotics are also participating in this project.
Bae Hyun-min, head of the Startup Center, said, "Researchers from KAIST are entering the global stage through challenging startups. The Startup Center will actively support them to help KAIST establish itself as a 'hub for deep tech startups'."
KAIST President Kwang Hyung Lee emphasized, "KAIST is a cradle of innovation that creates social value through startups, beyond education and research. The achievements of these robot startups show that KAIST is at the center of leading the paradigm of the global robotics industry. This also aligns with KAIST's vision of preparing for the era of 'Physical AI,' which fuses artificial intelligence with the physical world. KAIST will continue to strengthen its global technological leadership through innovation that connects academia and industry."
KAIST to Foster a 'Robot Valley' in Daejeon with $10 Million Initiative
<Group Photo of Kick-off Meeting>
On September 3, KAIST announced the official launch of the "2025 Deep Tech Scale-up Valley Nurturing Project" with a kick-off meeting at the KAIST Department of Mechanical Engineering.
KAIST was selected for this project by the Ministry of Science and ICT and the Research and Development Special District Foundation. With this selection, the university plans to create a "Robot Valley".
Over the next three and a half years, KAIST will receive a total of 13.65 billion won (approximately $10 million) in funding. The university's goal is to intensively nurture globally competitive, innovative robotics companies based on foundational technologies and to develop Daejeon into a global hub for the robotics industry.
The initiative will leverage Daejeon's exceptional research talent and its startup and investment ecosystem to create a model for regional revitalization and to cultivate the robotics industry as a next-generation strategic sector.
KAIST's vision for this project is to develop "Human-Friendly Robots (HFR)" that are more than just automated machines; they are collaborative partners that share space, roles, and emotions with people.
The project will implement a multi-stage strategy that includes promoting the commercialization of robotics technology, supporting the startup ecosystem, securing global technological competitiveness, and developing robot commercialization platforms. This will establish a virtuous cycle of technology development, startup and investment growth, and reinvestment.
Unlike traditional startup support and scale-up programs, this project aims for the simultaneous growth of the entire robotics industry, not just individual companies. A key element is an open innovation model in which leading robotics firms such as Angel Robotics Inc. and URobotics Inc. (led by Professor Byung-ho Yu and Professor Hyun Myung) will share common core technologies related to actuators, circuits, AI, and standardized data. This will allow startups to focus on developing robot products that directly meet customer needs.
The project team includes key KAIST robotics researchers. The project leader is Professor Jung Kim (President of the Korea Robotics Society) from the Department of Mechanical Engineering. Other participating professors include Geon-Jae Lee from the Department of Materials Science and Engineering (human augmentation sensors), Hyun Myung from the School of Electrical Engineering (winner of the QRC 2023 quadruped robot autonomous walking competition at IEEE ICRA), Kyung-Chul Kong from the Department of Mechanical Engineering (two-time champion of the Cybathlon International Competition and founder of Angel Robotics), and Suk-Hyung Bae from the Department of Industrial Design (winner of the ACM SIGGRAPH robot sketching competition).
In addition, the KAIST Technology Commercialization Office, KAIST Holdings, Global Techno Valley Lab (GTLAB), and the Daejeon Center for Creative Economy and Innovation will manage technology commercialization and valley construction. The Daejeon Technopark will also participate to provide comprehensive commercialization support.
"The strategic cooperation between Daejeon City's robotics industry nurturing plan and KAIST was the driving force behind the selection for this project," said Geon-Jae Lee, Director of the KAIST Technology Commercialization Office. "We will create a robotics innovation ecosystem based in Daejeon and systematically foster global companies to rival the likes of ABB in Switzerland and KUKA in Germany, which are considered among the top three robotics companies in the world."
< Kick-off Meeting Scene>
Project leader Jung Kim stated, "We will spearhead efforts to discover and nurture over 15 future unicorn companies by promoting the commercialization of deep-tech robotics developed at KAIST. The entire KAIST robotics research team will dedicate its full efforts to ensure that our research and development achievements lead to real-world industries and startups."
KAIST President Kwang-Hyung Lee emphasized, "As Korea's leading research-oriented university, KAIST will actively support Daejeon's growth into a global robotics hub. This project is more than just research and development; it will be a turning point for KAIST to stand at the center of the global robotics ecosystem and create a new growth engine for the region and the nation."
In collaboration with Daejeon City, KAIST plans to form an "HFR Valley Innovation Council" to share and review project outcomes, ultimately building a self-sustaining ecosystem. This initiative aims to establish Daejeon as a world-class robotics industry hub.
KAIST Wins Bid for ‘Physical AI Core Technology Demonstration’ Pilot Project
KAIST (President Kwang Hyung Lee) announced on the 28th of August that, together with Jeonbuk State, Jeonbuk National University, and Sungkyunkwan University, it has jointly won the Ministry of Science and ICT’s pilot project for the “Physical AI Core Technology Proof of Concept (PoC)”, with KAIST serving as the overall research lead. The consortium also plans to participate in a full-scale demonstration project that is expected to reach a total scale of 1 trillion KRW in the future.
In this project, KAIST led the research planning under the theme of "Collaborative Intelligence Physical AI." Based on this, Jeonbuk National University and Jeonbuk State will carry out joint research and establish a collaborative intelligence physical AI industrial ecosystem within the province. The pilot project will begin on September 1 this year and run until the end of the year, with the full-scale project expected to continue over the next five years. Through this effort, the partners aim to build Jeonbuk State into a global hub for physical AI.
KAIST will take charge of developing original research technologies, creating a research environment through the establishment of a testbed, and promoting industrial diffusion. Professor Young Jae Jang of the Department of Industrial and Systems Engineering at KAIST, who is the overall project director, has been leading research on collaborative intelligence physical AI since 2016. His “Collaborative Intelligence-Based Smart Manufacturing Innovation Technology” was selected as one of KAIST’s “Top 10 Research Achievements” in 2019.
“Physical AI” refers to cutting-edge artificial intelligence technology that enables physical devices such as robots, autonomous vehicles, and factory automation equipment to perform tasks without human instruction by understanding spatiotemporal concepts.
In particular, collaborative intelligence physical AI is a technology in which numerous robots and automated devices in a factory environment work together to achieve goals. It is attracting attention as a key foundation for realizing “dark factories” in industries such as semiconductors, secondary batteries, and automobile manufacturing.
Unlike existing manufacturing AI, this technology does not necessarily require massive amounts of historical data. Through real-time, simulation-based learning, it can quickly adapt even to manufacturing environments with frequent changes and has been deemed a next-generation technology that overcomes the limitations of data dependency.
Currently, the global AI industry is led by LLMs that simulate linguistic intelligence. However, physical AI must go beyond linguistic intelligence to include spatial intelligence and virtual environment learning, requiring the organic integration of hardware such as robots, sensors, and motors with software. As a manufacturing powerhouse, Korea is well-positioned to build such an ecosystem and seize the opportunity to lead global competition.
In fact, in April 2025, KAIST won first place at INFORMS (Institute for Operations Research and the Management Sciences), the world’s largest industrial engineering society, with its case study on collaborative intelligence physical AI, beating MIT and Amazon. This achievement is recognized as proof of Korea’s global competitiveness in the physical AI technology realm.
Professor Young Jae Jang, KAIST’s overall project director, said, “Winning this large-scale national project is the result of KAIST’s collaborative intelligence physical AI research capabilities accumulated over the past decade being recognized both domestically and internationally. This will be a turning point for establishing Korea’s manufacturing industry as a global leading ‘Physical AI Manufacturing Innovation Model.’”
KAIST President Kwang Hyung Lee emphasized that “KAIST is taking on the role of leading not only academic research but also the practical industrialization of national strategic technologies. Building on this achievement, we will collaborate with Jeonbuk National University and Jeonbuk State to develop Korea into a world-class hub for physical AI innovation.”
Through this project, KAIST, Jeonbuk National University, and Jeonbuk State plan to develop Korea into a global industrial hub for physical AI.
In KAIST, Robots Now Untie Rubber Bands and Insert Wires Like Humans
The technology that allows robots to handle deformable objects such as wires, clothing, and rubber bands has long been regarded as a key task in the automation of manufacturing and service industries. However, since such deformable objects do not have a fixed shape and their movements are difficult to predict, robots have faced great difficulties in accurately recognizing and manipulating them. KAIST researchers have developed a robot technology that can precisely grasp the state of deformable objects and handle them skillfully, even with incomplete visual information. This achievement is expected to contribute to intelligent automation in various industrial and service fields, including cable and wire assembly, manufacturing that handles soft components, and clothing organization and packaging.
KAIST (President Kwang Hyung Lee) announced on the 21st of August that the research team led by Professor Daehyung Park of the School of Computing developed an artificial intelligence technology called "INR-DOM (Implicit Neural-Representation for Deformable Object Manipulation)," which enables robots to skillfully handle objects, such as elastic bands, whose shape continuously changes and which are visually difficult to distinguish.
Professor Park’s research team developed a technology that allows robots to completely reconstruct the overall shape of a deformable object from partially observed three-dimensional information and to learn manipulation strategies based on it. Additionally, the team introduced a new two-stage learning framework that combines reinforcement learning and contrastive learning so that robots can efficiently learn specific tasks. The trained controller achieved significantly higher task success rates compared to existing technologies in a simulation environment, and in real robot experiments, it demonstrated a high level of manipulation capability, such as untying complicatedly entangled rubber bands, thereby greatly expanding the applicability of robots in handling deformable objects.
Deformable Object Manipulation (DOM) is one of the long-standing challenges in robotics. This is because deformable objects have infinite degrees of freedom, making their movements difficult to predict, and the phenomenon of self-occlusion, in which the object hides parts of itself, makes it difficult for robots to grasp their overall state.
To solve these problems, representation methods of deformable object states and control technologies based on reinforcement learning have been widely studied. However, existing representation methods could not accurately represent continuously deforming surfaces or complex three-dimensional structures of deformable objects, and since state representation and reinforcement learning were separated, there was a limitation in constructing a suitable state representation space needed for object manipulation.
To overcome these limitations, the research team utilized “Implicit Neural Representation.” This technology receives partial three-dimensional information (point cloud*) observed by the robot and reconstructs the overall shape of the object, including unseen parts, as a continuous surface (signed distance function, SDF). This enables robots to imagine and understand the overall shape of the object just like humans.
*Point cloud: a representation of an object's three-dimensional shape as a set of points sampled from its surface.
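The signed distance function mentioned above can be illustrated with a shape for which it has a closed form. The sketch below uses a sphere: the SDF is negative inside the object, zero exactly on its surface, and positive outside, so the surface is recoverable as the zero level set. INR-DOM instead trains a neural network to predict such values for arbitrary, partially observed deformable shapes; `sphere_sdf` is only a hypothetical illustration of the representation, not the paper's model.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from a 3D point p to a sphere's surface:
    negative inside the sphere, zero on the surface, positive outside."""
    return math.dist(p, center) - radius
```

Querying this function on a dense grid and extracting the points where it crosses zero reconstructs the continuous surface, which is the intuition behind representing an occluded object as an SDF rather than as raw points.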
Furthermore, the research team introduced a two-stage learning framework. In the first stage of pre-training, a model is trained to reconstruct the complete shape from incomplete point cloud data, securing a state representation module that is robust to occlusion and capable of well representing the surfaces of stretching objects. In the second stage of fine-tuning, reinforcement learning and contrastive learning are used together to optimize the control policy and state representation module so that the robot can clearly distinguish subtle differences between the current state and the goal state and efficiently find the optimal action required for task execution.
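The contrastive component of the fine-tuning stage described above can be sketched with a minimal InfoNCE-style loss: an anchor embedding (e.g. the current object state) is pulled toward a positive (e.g. the goal state) and pushed away from negatives (dissimilar states). This is a generic, hypothetical sketch of the technique, not the paper's exact loss or embedding networks.

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss over plain Python vectors.

    Returns the cross-entropy of classifying the positive among all
    candidates; it is low when the anchor is most similar to the positive."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    # Positive similarity first, then the negatives, scaled by temperature.
    logits = [cos(anchor, positive) / temperature] + \
             [cos(anchor, n) / temperature for n in negatives]
    m = max(logits)                               # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))         # positive sits at index 0
```

Minimizing such a loss alongside the reinforcement learning objective shapes the state-representation space so that subtle differences between the current and goal states remain distinguishable.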
When the INR-DOM technology developed by the research team was mounted on a robot and tested, it showed overwhelmingly higher success rates than the best existing technologies in three complex tasks in a simulation environment: inserting a rubber ring into a groove (sealing), installing an O-ring onto a part (installation), and untying tangled rubber bands (disentanglement). In particular, in the most challenging task, disentanglement, the success rate reached 75%, about 49 percentage points higher than the best existing technology (ACID, 26%).
The research team also verified that INR-DOM technology is applicable in real environments by combining sample-efficient robotic reinforcement learning with INR-DOM and performing reinforcement learning in a real-world environment.
As a result, in actual environments, the robot performed insertion, installation, and disentanglement tasks with a success rate of over 90%, and in particular, in the visually difficult bidirectional disentanglement task, it achieved a success rate 25 percentage points higher than existing image-based reinforcement learning methods, proving that robust manipulation is possible despite visual ambiguity.
Minseok Song, a master’s student and first author of this research, stated, “This research has shown the possibility that robots can understand the overall shape of deformable objects even with incomplete information and perform complex manipulation based on that understanding.” He added, “It will greatly contribute to the advancement of robot technology that performs sophisticated tasks in cooperation with humans or in place of humans in various fields such as manufacturing, logistics, and medicine.”
This study, with KAIST School of Computing master’s student Minseok Song as first author, was presented at the top international robotics conference, Robotics: Science and Systems (RSS) 2025, held June 21–25 at USC in Los Angeles.
※ Paper title: “Implicit Neural-Representation Learning for Elastic Deformable-Object Manipulations”
※ DOI: https://www.roboticsproceedings.org/ (to be released), currently https://arxiv.org/abs/2505.00500
This research was supported by the Ministry of Science and ICT through the Institute of Information & Communications Technology Planning & Evaluation (IITP)’s projects “Core Software Technology Development for Complex-Intelligence Autonomous Agents” (RS-2024-00336738; Development of Mission Execution Procedure Generation Technology for Autonomous Agents’ Complex Task Autonomy), “Core Technology Development for Human-Centered Artificial Intelligence” (RS-2022-II220311; Goal-Oriented Reinforcement Learning Technology for Multi-Contact Robot Manipulation of Everyday Objects), “Core Computing Technology” (RS-2024-00509279; Global AI Frontier Lab), as well as support from Samsung Electronics. More details can be found at https://inr-dom.github.io.
KAIST’s Wearable Robot Design Wins ‘2025 Red Dot Award Best of the Best’
<Professor Hyunjoon Park, M.S. candidate Eun-ju Kang, prospective M.S. candidate Jae-seong Kim, undergraduate student Min-su Kim>
A team led by Professor Hyunjoon Park from the Department of Industrial Design won the ‘Best of the Best’ award at the 2025 Red Dot Design Awards, one of the world's top three design awards, for their 'Angel Robotics WSF1 VISION Concept.'
The design for the next-generation wearable robot for people with paraplegia successfully implements functionality, aesthetics, and social inclusion. This latest achievement follows the team's iF Design Award win for the WalkON Suit F1 prototype, which also won a gold medal at the Cybathlon last year. This marks consecutive wins at top-tier international design awards.
KAIST (President Kwang-hyung Lee) announced on the 8th of August that Move Lab, a research team led by Professor Hyunjoon Park from the Department of Industrial Design, won the 'Best of the Best' award in the Design Concept-Professional category at the prestigious '2025 Red Dot Design Awards' for their next-generation wearable robot design, the ‘Angel Robotics WSF1 VISION Concept.’
The German 'Red Dot Design Awards' is one of the world's most well-known design competitions. It is considered one of the world's top three design awards along with Germany’s iF Design Awards and America’s IDEA. The ‘Best of the Best’ award is given to the best design in a category and is awarded only to a very select few of the top designs (within the top 1%) among all Red Dot Award winners.
Professor Hyunjoon Park’s team was honored with the ‘Best of the Best’ award for a user-friendly follow-up development of the ‘WalkON Suit F1 prototype,’ which won a gold medal at the 2024 Cybathlon and an iF Design Award in 2025.
<Figure 1. WSF1 Vision Concept Main Image>
This award-winning design is the result of industry-academic cooperation with Angel Robotics Inc., founded by Professor Kyoungchul Kong from the KAIST Department of Mechanical Engineering. It is a concept design that proposes a next-generation wearable robot (an ultra-personal mobility device) that can be used by people with paraplegia in their daily lives.
The research team focused on transforming Angel Robotics Inc.'s advanced engineering platform into an intuitive and emotional, user-centric experience, implementing a design solution that simultaneously possesses functionality, aesthetics, and social inclusion.
<Figure 2. WSF1 Vision Concept Full Exterior (Front View)>
The WSF1 VISION Concept includes innovative features implemented in Professor Kyoungchul Kong’s Exo Lab, such as:
- An autonomous access function where the robot finds the user on its own.
- A front-loading mechanism designed for the user to put it on alone while seated.
- Multi-directional walking functionality realized through 12 powerful torque actuators and the latest control algorithms.
- AI vision technology, along with a multi-visual display system that provides navigation and omnidirectional vision.
Together, these features provide users with a safer and more convenient mobility experience.
The strong yet elegant silhouette was achieved through a design process that pursued perfection in proportion, surfaces, and details not seen in existing wearable robots. In particular, the fabric cover that wraps around the entire thigh from the robot's hip joint is a stylish element that respects the wearer's self-esteem and individuality, like fashionable athletic wear. It also acts as a device for the wearer to psychologically feel safe in interacting with the robot and blending in with the general public. This presents a new aesthetic for wearable robots where function and form are harmonized.
<Figure 3. WSF1 Vision Concept's Operating Principle. It walks autonomously and is worn from the front while the user is seated.>
KAIST Professor Hyunjoon Park said of the award, "We are focusing on using technology, aesthetics, and human-centered innovation to present advanced technical solutions as easy, enjoyable, and cool experiences for users. Based on Angel Robotics Inc.'s vision of 'recreating human ability with technology,' the WSF1 VISION Concept aimed to break away from the traditional framework of wearable robots and deliver a design experience that adds dignity, independence, and new style to the user's life."
<Figure 4. WSF1 Vision Concept Detail Image>
A physical model of the WSF1 VISION Concept is scheduled to be unveiled in the Future Hall of the 2025 Gwangju Design Biennale from August 30 to November 2. The theme is 'Po-yong-ji-deok' (the virtue of inclusion), and it will showcase the role of design language in creating an inclusive future society.
<Figure 5. WSF1 Vision Concept: Image of a Person Wearing and Walking>
Approaches to Human-Robot Interaction Using Biosignals
<(From left) Dr. Hwa-young Jeong, Professor Kyung-seo Park, Dr. Yoon-tae Jeong, Dr. Ji-hoon Seo, Professor Min-kyu Je, Professor Jung Kim >
A joint research team led by Professor Jung Kim of KAIST Department of Mechanical Engineering and Professor Min-kyu Je of the Department of Electrical and Electronic Engineering recently published a review paper on the latest trends and advancements in intuitive Human-Robot Interaction (HRI) using bio-potential and bio-impedance in the internationally renowned academic journal 'Nature Reviews Electrical Engineering'.
This review paper is the result of a collaborative effort by Dr. Kyung-seo Park (DGIST, co-first author), Dr. Hwa-young Jeong (EPFL, co-first author), Dr. Yoon-tae Jeong (IMEC), and Dr. Ji-hoon Seo (UCSD), all doctoral graduates from the two laboratories. Nature Reviews Electrical Engineering is a specialized review journal in the fields of electrical, electronic, and artificial intelligence technology, newly launched by Nature Publishing Group last year. It is known to invite world-renowned scholars in the field through strict selection criteria. Professor Jung Kim's research team's paper, titled "Using bio-potential and bio-impedance for intuitive human-robot interaction," was published on July 18, 2025. (DOI: https://doi.org/10.1038/s44287-025-00191-5)
This review paper explains how biosignals can be used to quickly and accurately detect movement intentions and introduces advancements in movement prediction technology based on neural signals and muscle activity. It also focuses on the crucial role of integrated circuits (ICs) in maximizing low-noise performance and energy efficiency in biosignal sensing, covering the latest development trends in low-noise, low-power designs for accurately measuring bio-potential and impedance signals.
The review emphasizes the importance of hybrid and multi-modal sensing approaches, presenting the possibility of building robust, intuitive, and scalable HRI systems. The research team stressed that collaboration between sensor and IC design fields is essential for the practical application of biosignal-based HRI systems and stated that interdisciplinary collaboration will play a significant role in the development of next-generation HRI technology. Dr. Hwa-young Jeong, a co-first author of the paper, presented the potential of bio-potential and impedance signals to make human-robot interaction more intuitive and efficient, predicting that it will make significant contributions to the development of HRI technologies such as rehabilitation robots and robotic prostheses using biosignals in the future. This research was supported by several research projects, including the Human Plus Project of the National Research Foundation of Korea.
KAIST Develops Robots That React to Danger Like Humans
<(From left) Ph.D. candidate See-On Park, Professor Jongwon Lee, and Professor Shinhyun Choi>
In the midst of the co-development of artificial intelligence and robotic advancements, developing technologies that enable robots to efficiently perceive and respond to their surroundings like humans has become a crucial task. In this context, Korean researchers are gaining attention for newly implementing an artificial sensory nervous system that mimics the sensory nervous system of living organisms without the need for separate complex software or circuitry. This breakthrough technology is expected to be applied in fields such as ultra-small robots and robotic prosthetics, where intelligent and energy-efficient responses to external stimuli are essential.
KAIST (President Kwang Hyung Lee) announced on July 15th that a joint research team led by Endowed Chair Professor Shinhyun Choi of the School of Electrical Engineering at KAIST and Professor Jongwon Lee of the Department of Semiconductor Convergence at Chungnam National University (President Jung Kyum Kim) developed a next-generation neuromorphic semiconductor-based artificial sensory nervous system. This system mimics the functions of a living organism's sensory nervous system and enables a new type of robotic system that can efficiently respond to external stimuli.
In nature, animals — including humans — ignore safe or familiar stimuli and selectively react sensitively to important or dangerous ones. This selective response helps prevent unnecessary energy consumption while maintaining rapid awareness of critical signals. For instance, the sound of an air conditioner or the feel of clothing against the skin soon become familiar and are disregarded. However, if someone calls your name or a sharp object touches your skin, a rapid focus and response occur. These behaviors are regulated by the 'habituation' and 'sensitization' functions in the sensory nervous system. Attempts have been consistently made to apply these sensory nervous system functions of living organisms in order to create robots that efficiently respond to external environments like humans.
However, implementing complex neural characteristics such as habituation and sensitization in robots has faced difficulties in miniaturization and energy efficiency due to the need for separate software or complex circuitry. In particular, there have been attempts to utilize memristors, a type of neuromorphic semiconductor. A memristor is a next-generation electrical device that has been widely used as an artificial synapse because it can store analog values in the form of device resistance. However, existing memristors had limitations in mimicking the complex characteristics of the nervous system because they only allowed simple monotonic changes in conductivity.
To overcome these limitations, the research team developed a new memristor capable of reproducing complex neural response patterns such as habituation and sensitization within a single device. By introducing additional layers inside the memristor that alter conductivity in opposite directions, the device can more realistically emulate the dynamic synaptic behaviors of a real nervous system — for example, decreasing its response to repeated safe stimuli but quickly regaining sensitivity when a danger signal is detected.
<New memristor mimicking functions of sensory nervous system such as habituation/sensitization>
Using this new memristor, the research team built an artificial sensory nervous system capable of recognizing touch and pain, and applied it to a robotic hand to test its performance. When safe tactile stimuli were repeatedly applied, the robot hand, which initially reacted sensitively to unfamiliar tactile stimuli, gradually showed habituation characteristics by ignoring the stimuli. Later, when stimuli were applied along with an electric shock, it recognized this as a danger signal and showed sensitization characteristics by reacting sensitively again. Through this, it was experimentally proven that robots can efficiently respond to stimuli like humans without separate complex software or processors, verifying the possibility of developing energy-efficient neuro-inspired robots.
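The habituation/sensitization behavior described above can be caricatured in a few lines. This is a toy model with invented rates, not the device physics or the team's code; `sensory_response` and its parameters are hypothetical.

```python
def sensory_response(stimuli, habituation_rate=0.3, gain_floor=0.2):
    """Toy model: repeated 'safe' stimuli shrink the response gain
    (habituation); a 'danger' stimulus restores it fully (sensitization)."""
    gain, responses = 1.0, []
    for s in stimuli:
        if s == "danger":
            gain = 1.0                # sensitization: full alertness restored
        responses.append(gain)
        if s == "safe":
            gain = max(gain_floor, gain - habituation_rate)   # habituation
    return responses

# Response decays over repeated safe touches, then snaps back after a shock.
print(sensory_response(["safe", "safe", "safe", "danger", "safe"]))
```

In the memristor, this gain roughly corresponds to device conductance, which is why the behavior can emerge in hardware without a separate processor.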
<Robot arm with memristor-based artificial sensory nervous system>
See-On Park, a researcher at KAIST, stated, "By mimicking the human sensory nervous system with next-generation semiconductors, we have opened up the possibility of implementing a new concept of robots that are smarter and more energy-efficient in responding to external environments." He added, "This technology is expected to be utilized in various convergence fields of next-generation semiconductors and robotics, such as ultra-small robots, military robots, and medical robots like robotic prosthetics."
This research was published online on July 1st in the international journal 'Nature Communications,' with Ph.D. candidate See-On Park as the first author.
Paper Title: Experimental demonstration of third-order memristor-based artificial sensory nervous system for neuro-inspired robotics
DOI: https://doi.org/10.1038/s41467-025-60818-x
This research was supported by the Korea National Research Foundation's Next-Generation Intelligent Semiconductor Technology Development Project, the Mid-Career Researcher Program, the PIM Artificial Intelligence Semiconductor Core Technology Development Project, the Excellent New Researcher Program, and the Nano Convergence Technology Division, National Nanofab Center's (NNFC) Nano-Medical Device Project.
KAIST Professor Jee-Hwan Ryu Receives Global IEEE Robotics Journal Best Paper Award
- Professor Jee-Hwan Ryu of Civil and Environmental Engineering receives the Best Paper Award from the Institute of Electrical and Electronics Engineers (IEEE) Robotics Journal, officially presented at ICRA, a world-renowned robotics conference.
- This is the highest level of international recognition, awarded to only the top 5 papers out of approximately 1,500 published in 2024.
- Securing a new working channel technology for soft growing robots expands the practicality and application possibilities in the field of soft robotics.
< Professor Jee-Hwan Ryu (left), Nam Gyun Kim, Ph.D. Candidate (right) from the KAIST Department of Civil and Environmental Engineering and KAIST Robotics Program >
KAIST (President Kwang-Hyung Lee) announced on the 6th that Professor Jee-Hwan Ryu from the Department of Civil and Environmental Engineering received the 2024 Best Paper Award from the Robotics and Automation Letters (RA-L), a premier journal under the IEEE, at the '2025 IEEE International Conference on Robotics and Automation (ICRA)' held in Atlanta, USA, on May 22nd.
This Best Paper Award is a prestigious honor presented to only the top 5 papers out of approximately 1,500 published in 2024, boasting high international competition and authority.
The award-winning paper by Professor Ryu proposes a novel working channel securing mechanism that significantly expands the practicality and application possibilities of 'Soft Growing Robots,' which are based on soft materials that move or perform tasks through a growing motion similar to plant roots.
< IEEE Robotics Journal Award Ceremony >
Existing soft growing robots move by inflating or contracting their bodies through increasing or decreasing internal pressure, which can lead to blockages in their internal passages. In contrast, the newly developed soft growing robot achieves a growing function while maintaining the internal passage pressure equal to the external atmospheric pressure, thereby successfully securing an internal passage while retaining the robot's flexible and soft characteristics.
This structure allows various materials or tools to be freely delivered through the internal passage (working channel) within the robot and offers the advantage of performing multi-purpose tasks by flexibly replacing equipment according to the working environment.
The research team fabricated a prototype to prove the effectiveness of this technology and verified its performance through various experiments. Specifically, in the slide plate experiment, they confirmed whether materials or equipment could pass through the robot's internal channel without obstruction, and in the pipe pulling experiment, they verified if a long pipe-shaped tool could be pulled through the internal channel.
< Figure 1. Overall hardware structure of the proposed soft growing robot (left) and a cross-sectional view composing the inflatable structure (right) >
Experimental results demonstrated that the internal channel remained stable even while the robot was growing, serving as a key basis for supporting the technology's practicality and scalability.
Professor Jee-Hwan Ryu stated, "This award is very meaningful as it signifies the global recognition of Korea's robotics technology and academic achievements. Especially, it holds great significance in achieving technical progress that can greatly expand the practicality and application fields of soft growing robots. This achievement was possible thanks to the dedication and collaboration of the research team, and I will continue to contribute to the development of robotics technology through innovative research."
< Figure 2. Material supplying mechanism of the Soft Growing Robot >
This research was co-authored by Dongoh Seo, Ph.D. Candidate in Civil and Environmental Engineering, and Nam Gyun Kim, Ph.D. Candidate in Robotics. It was published in IEEE Robotics and Automation Letters on September 1, 2024.
(Paper Title: Inflatable-Structure-Based Working-Channel Securing Mechanism for Soft Growing Robots, DOI: 10.1109/LRA.2024.3426322)
This project was supported simultaneously by the National Research Foundation of Korea's Future Promising Convergence Technology Pioneer Research Project and Mid-career Researcher Project.
RAIBO Runs over Walls with Feline Agility... Ready for Effortless Search over Mountainous and Rough Terrains
< Photo 1. Research Team Photo (Professor Jemin Hwangbo, second from right in the front row) >
KAIST's quadrupedal robot, RAIBO, can now move at high speed across discontinuous and complex terrains such as stairs, gaps, walls, and debris. It has demonstrated its ability to run on vertical walls, leap over 1.3-meter-wide gaps, sprint at approximately 14.4 km/h over stepping stones, and move quickly and nimbly on terrain combining 30° slopes, stairs, and stepping stones. RAIBO is expected to be deployed soon for practical missions such as disaster site exploration and mountain searches.
Professor Jemin Hwangbo's research team in the Department of Mechanical Engineering at our university announced on June 3rd that they have developed a quadrupedal robot navigation framework capable of high-speed locomotion at 14.4 km/h (4 m/s) even on discontinuous and complex terrains such as walls, stairs, and stepping stones.
The research team developed a quadrupedal navigation system that enables the robot to reach its target destination quickly and safely in complex and discontinuous terrain.
To achieve this, they approached the problem by breaking it down into two stages: first, developing a planner for planning foothold positions, and second, developing a tracker to accurately follow the planned foothold positions.
First, the planner module quickly searches for physically feasible foothold positions using a sampling-based optimization method with neural network-based heuristics and verifies the optimal path through simulation rollouts.
While existing methods searched over various factors such as contact timing and robot posture in addition to foothold positions, this research significantly reduced computational complexity by setting only the foothold positions as the search space. Furthermore, inspired by how cats walk, the introduction of a structure in which the hind feet step on the same spots as the front feet reduced the search space even further.
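A minimal sketch of sampling-based foothold search under these simplifications follows. The toy grid terrain and Manhattan-distance heuristic are my own; the paper's planner additionally uses neural-network heuristics and verifies candidates with simulation rollouts.

```python
import random

# Stepping-stone cells that are safe to step on (toy terrain).
STONES = {(0, 0), (1, 0), (2, 1), (3, 1), (4, 0)}

def plan_front_footholds(start, goal, samples=200, seed=0, max_steps=20):
    """Greedy sampling-based search over front-foot positions only;
    the hind feet later reuse exactly these footholds."""
    rng = random.Random(seed)
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        # Sample candidate next footholds within one step of the current one.
        cands = [(pos[0] + rng.choice([0, 1]), pos[1] + rng.choice([-1, 0, 1]))
                 for _ in range(samples)]
        feasible = [c for c in cands if c in STONES and c != pos]
        if not feasible:
            break
        # Cheap heuristic score: Manhattan distance to the goal.
        pos = min(feasible, key=lambda c: abs(goal[0] - c[0]) + abs(goal[1] - c[1]))
        path.append(pos)
    return path

print(plan_front_footholds(start=(0, 0), goal=(4, 0)))
```

Restricting the search space to foothold positions, and reusing them for the hind feet, is what keeps this kind of search small enough to run quickly.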
< Figure 1. High-speed navigation across various discontinuous terrains >
Second, the tracker module is trained to step accurately on the planned foothold positions, with training conducted against a generative model that keeps the training environments at an appropriate difficulty.
The tracker is trained through reinforcement learning to accurately step on the planned footholds, and during this process, a generative model called the 'map generator' provides the distribution of target terrains.
This generative model is trained simultaneously and adversarially with the tracker to allow the tracker to progressively adapt to more challenging difficulties. Subsequently, a sampling-based planner was designed to generate feasible foothold plans that can reflect the characteristics and performance of the trained tracker.
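The adversarial curriculum idea can be sketched as a simple feedback loop. The numbers and update rules below are invented for illustration; the actual map generator is a learned generative model trained jointly with the reinforcement-learning tracker.

```python
import random

def train_with_curriculum(episodes=2000, seed=0):
    """Toy automatic curriculum: a 'map generator' raises terrain difficulty
    as the tracker succeeds, and eases off when the tracker fails, keeping
    training near the edge of the tracker's competence."""
    rng = random.Random(seed)
    skill, difficulty = 0.1, 0.1
    for _ in range(episodes):
        # Success is more likely when skill exceeds terrain difficulty.
        success = rng.random() < min(1.0, skill / max(difficulty, 1e-6))
        if success:
            skill += 0.002             # tracker improves on solved terrain
            difficulty += 0.004        # generator responds with harder maps
        else:
            difficulty = max(0.1, difficulty - 0.002)  # ease off when too hard
    return skill, difficulty

skill, difficulty = train_with_curriculum()
print(round(skill, 2), round(difficulty, 2))
```

Keeping the terrain difficulty just ahead of the tracker's ability is what lets difficulty ramp up steadily without stalling learning.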
This hierarchical structure showed superior performance in both planning speed and stability compared to existing techniques, and experiments proved its high-speed locomotion capabilities across various obstacles and discontinuous terrains, as well as its general applicability to unseen terrains.
Professor Jemin Hwangbo stated, "We approached the problem of high-speed navigation in discontinuous terrain, which previously required a significantly large amount of computation, from the simple perspective of how to select foothold positions. Inspired by the way cats place their paws, allowing the hind feet to step where the front feet stepped drastically reduced computation. We expect this to significantly expand the range of discontinuous terrain that walking robots can overcome and enable them to traverse it at high speeds, contributing to the robot's ability to perform practical missions such as disaster site exploration and mountain searches."
This research achievement was published in the May 2025 issue of the international journal Science Robotics.
Paper Title: High-speed control and navigation for quadrupedal robots on complex and discrete terrain (https://www.science.org/doi/10.1126/scirobotics.ads6192)
YouTube Link: https://youtu.be/EZbM594T3c4?si=kfxLF2XnVUvYVIyk