
KAIST (President Kwang Hyung Lee) announced on the 25th that a research team led by Professor Jemin Hwangbo of the Department of Mechanical Engineering has developed a quadrupedal robot control technology that walks robustly and with agility even on deformable terrain such as a sandy beach.

< Photo. RAI Lab Team with Professor Hwangbo in the middle of the back row. >
Professor Hwangbo's research team developed a technology for modeling the force a walking robot receives from ground made of granular materials such as sand and for simulating it with a quadrupedal robot. The team also designed an artificial neural network structure suited to making the real-time decisions needed to adapt to various types of ground, without prior information, while walking, and applied it to reinforcement learning. The trained neural network controller is expected to expand the scope of application of quadrupedal walking robots by proving its robustness on changing terrain, demonstrated by the ability to move at high speed even on a sandy beach and to walk and turn on soft ground like an air mattress without losing balance.
This research, with Ph.D. student Suyoung Choi of the KAIST Department of Mechanical Engineering as the first author, was published in January in Science Robotics. (Paper title: "Learning quadrupedal locomotion on deformable terrain")
Reinforcement learning is an AI learning method used to create a machine that collects data on the results of various actions in an arbitrary situation and utilizes that data to perform a task. Because the amount of data required for reinforcement learning is vast, a common approach is to collect it through simulations that approximate physical phenomena in the real environment.
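As a rough illustration of this data-collection idea, the following minimal Python sketch rolls a policy through a simulated environment and records the resulting transitions. The toy dynamics, reward, and policy here are entirely hypothetical placeholders, not the setup used in the paper:

```python
import random

def step_env(state, action):
    """Toy simulated dynamics: the next state drifts toward the action."""
    next_state = 0.9 * state + 0.1 * action
    reward = -abs(next_state)          # reward staying balanced near zero
    return next_state, reward

def collect_rollout(policy, length=100, seed=0):
    """Run the policy in simulation, recording (state, action, reward)."""
    rng = random.Random(seed)
    state, transitions = rng.uniform(-1.0, 1.0), []
    for _ in range(length):
        action = policy(state)
        next_state, reward = step_env(state, action)
        transitions.append((state, action, reward))
        state = next_state
    return transitions

# A trivial stabilizing policy standing in for a learned controller.
data = collect_rollout(lambda s: -s)
```

A real pipeline would run many such rollouts in parallel and feed the transitions into a policy-gradient or actor-critic update; the sketch only shows the data-collection half of the loop.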
In particular, learning-based controllers in the field of walking robots have been applied to real environments after learning from data collected in simulation, successfully performing walking control over various terrains.
However, since the performance of a learning-based controller drops rapidly when the actual environment differs from the simulated environment it was trained in, it is important to implement an environment similar to the real one in the data collection stage. Therefore, to create a learning-based controller that can maintain balance on deforming terrain, the simulator must provide a similar contact experience.
The research team defined a contact model that predicts the force generated upon contact from the motion dynamics of the walking body, building on a ground reaction force model from previous studies that accounts for the added-mass effect of granular media.
Furthermore, by calculating the force generated by one or several contacts at each time step, the deforming terrain could be simulated efficiently.
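The flavor of such a per-contact force computation can be sketched as below: a vertical ground reaction built from a depth-dependent resistance plus an added-mass term, and a horizontal force limited by Coulomb friction. All coefficients and the exact force law are illustrative assumptions for this sketch, not the model used in the paper:

```python
import math

def granular_contact_force(depth, v_z, a_z, v_x,
                           k_static=5000.0, k_added=2.0, mu=0.6):
    """Return (f_x, f_z) for a point foot penetrating granular ground.

    depth : penetration depth into the sand [m] (> 0 means in contact)
    v_z, a_z : vertical velocity / acceleration of the foot
    v_x   : horizontal (tangential) foot velocity
    """
    if depth <= 0.0:
        return 0.0, 0.0                       # no contact, no force
    # Vertical: depth-dependent resistance plus an added-mass reaction,
    # since the foot must also accelerate the sand it displaces.
    f_z = k_static * depth - k_added * depth * a_z
    f_z = max(f_z, 0.0)                       # ground can only push up
    # Horizontal: Coulomb friction opposing sliding, capped at mu * f_z.
    if abs(v_x) > 1e-6:
        f_x = -math.copysign(mu * f_z, v_x)
    else:
        f_x = 0.0
    return f_x, f_z
```

Evaluating such a closed-form model per contact point is what keeps the simulation fast enough for large-scale reinforcement learning, compared with simulating individual sand grains.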
The research team also introduced an artificial neural network structure that implicitly predicts ground characteristics by using a recurrent neural network that analyzes time-series data from the robot's sensors.
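A toy sketch of this idea follows, with a small Elman-style recurrent cell standing in for the paper's actual architecture: the cell compresses a stream of sensor readings into a hidden state, from which a linear head produces an implicit ground-property estimate. Sizes and weights are random placeholders:

```python
import math
import random

class TinyRNN:
    """Minimal recurrent cell over sensor time series (illustrative only)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        w = lambda rows, cols: [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                                for _ in range(rows)]
        self.W_in, self.W_h = w(n_hidden, n_in), w(n_hidden, n_hidden)
        self.w_out = [rng.uniform(-0.1, 0.1) for _ in range(n_hidden)]
        self.h = [0.0] * n_hidden             # hidden state carries history

    def step(self, x):
        """Consume one sensor sample; return the implicit ground estimate."""
        self.h = [math.tanh(sum(wi * xi for wi, xi in zip(row_in, x)) +
                            sum(wh * hj for wh, hj in zip(row_h, self.h)))
                  for row_in, row_h in zip(self.W_in, self.W_h)]
        return sum(wo * hj for wo, hj in zip(self.w_out, self.h))

rnn = TinyRNN(n_in=4, n_hidden=8)
estimates = [rnn.step([0.1, -0.2, 0.05, 0.3]) for _ in range(10)]
```

Because the hidden state accumulates evidence over successive footsteps, the controller never needs the ground parameters as an explicit input; it infers them from how the robot's own sensor history responds to contact.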
The learned controller was mounted on 'RaiBo', a robot built in-house by the research team, and demonstrated high-speed walking at up to 3.03 m/s on a sandy beach where the robot's feet were completely submerged in the sand. Even when applied to harder ground, such as grassy fields and a running track, it ran stably by adapting to the characteristics of the ground without any additional programming or revision of the control algorithm.
In addition, it rotated stably at 1.54 rad/s (approximately 90° per second) on an air mattress, demonstrating quick adaptability even when the terrain suddenly turned soft.
The research team demonstrated the importance of providing a suitable contact experience during the learning process by comparison with a controller that assumed the ground to be rigid, and proved that the proposed recurrent neural network modifies the controller's walking method according to the ground properties.
The simulation and learning methodology developed by the research team is expected to contribute to robots performing practical tasks as it expands the range of terrains that various walking robots can operate on.
The first author, Suyoung Choi, said, "It has been shown that providing a learning-based controller with a contact experience close to that of real deforming ground is essential for application to deforming terrain." He added, "The proposed controller can be used without prior information on the terrain, so it can be applied to various robot walking studies."
This research was carried out with the support of the Samsung Research Funding & Incubation Center of Samsung Electronics.

< Figure 1. Adaptability of the proposed controller to various ground environments. The controller learned from a wide range of randomized granular media simulations showed adaptability to various natural and artificial terrains, and demonstrated high-speed walking ability and energy efficiency. >

< Figure 2. Contact model definition for simulation of granular substrates. The research team used a model that considered the additional mass effect for the vertical force and a Coulomb friction model for the horizontal direction while approximating the contact with the granular medium as occurring at a point. Furthermore, a model that simulates the ground resistance that can occur on the side of the foot was introduced and used for simulation. >