
<(From Left) Ph.D. candidate Jihyun Lee, Professor Tae-Kyun Kim, M.S. candidate Changmin Lee>
The era has begun in which AI moves beyond merely drawing plausible images to understanding why clothes flutter and wrinkles form. A KAIST research team has developed a new generative AI that learns movement and interaction in 3D space according to physical laws. This technology, which overcomes the limitations of existing 2D-based video AI, is expected to enhance the realism of avatars in films, the metaverse, and games, and to significantly reduce the need for motion capture and manual 3D graphics work.
KAIST (President Kwang Hyung Lee) announced on the 22nd that the research team of Professor Tae-Kyun (T-K) Kim from the School of Computing has developed 'MPMAvatar,' a spatial and physics-based generative AI model that overcomes the limitations of existing 2D pixel-based video generation technology.
To solve the problems of conventional 2D technology, the research team proposed a new method that reconstructs multi-view images into 3D space using Gaussian Splatting and combines it with the Material Point Method (MPM), a physics simulation technique.
In other words, the AI was trained to learn physical laws on its own: videos taken from multiple viewpoints are reconstructed stereoscopically, and the objects within that space move and interact as if they were in the real physical world.
This enables the AI to compute movement from an object's material, shape, and the external forces acting on it, and then to learn physical laws by comparing the results with actual videos.
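The paper's actual pipeline optimizes the material parameters of a full MPM simulator against multi-view video; as a rough, hypothetical illustration of the idea of "learning physics by comparing simulation with observation," the toy sketch below recovers an unknown drag coefficient by minimizing the mismatch between a simulated trajectory and observed (here, synthetic) measurements. All names and numbers are illustrative, not from the paper's code.

```python
import numpy as np

# Toy version of "learn physics by comparing simulation with video":
# simulate a falling point with an unknown linear drag coefficient, then
# recover that coefficient by minimizing trajectory mismatch.

def simulate(drag, steps=50, dt=0.02, g=-9.8):
    """Return the height trajectory of a point dropped from y = 1.0."""
    y, v = 1.0, 0.0
    traj = []
    for _ in range(steps):
        v += dt * (g - drag * v)   # gravity plus linear drag force
        y += dt * v
        traj.append(y)
    return np.array(traj)

# Synthetic "observations" generated with the true (hidden) parameter.
observed = simulate(drag=0.7)

# Fit by grid search over candidate drag values (a stand-in for the
# gradient-based optimization a real system would use).
candidates = np.linspace(0.0, 2.0, 201)
losses = [np.sum((simulate(d) - observed) ** 2) for d in candidates]
best = candidates[int(np.argmin(losses))]   # recovered drag, close to 0.7
```

A real system replaces the scalar drag coefficient with per-material MPM parameters and the trajectory loss with a rendering loss against video frames, but the comparison-driven loop is the same in spirit.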
The research team represented the 3D space as a set of points, and by applying both a Gaussian and MPM to each point, they simultaneously achieved physically natural movement and realistic video rendering.
That is, they divided the 3D space into numerous small points, making each point move and deform like a real object, thereby realizing natural video that is nearly indistinguishable from reality.
In particular, to precisely express the interactions of thin, complex objects such as clothing, they computed both the object's surface (mesh) and its particle-level structure (points), using MPM to calculate the object's movement and deformation in 3D space according to physical laws.
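MPMAvatar's actual simulator is far more elaborate (elastic stress, cloth collision handling, learned material parameters), but the core MPM cycle the article describes (scatter particle momentum to a background grid, update the grid, gather velocities back to the particles) can be sketched minimally. The following stress-free, PIC-style step in Python is only an illustration under simplifying assumptions; every name in it is hypothetical.

```python
import numpy as np

# Minimal 2D Material Point Method (MPM) step: particles carry mass and
# velocity; a background grid mediates momentum exchange. Forces (stress,
# collisions) are omitted except gravity, and particles are assumed to stay
# well inside the unit domain so no boundary checks are needed.

def mpm_step(x, v, mass, dx, dt, gravity=-9.8, grid_res=16):
    """One PIC-style MPM substep.
    x, v: (N, 2) particle positions/velocities; mass: (N,) particle masses."""
    grid_m = np.zeros((grid_res, grid_res))
    grid_mv = np.zeros((grid_res, grid_res, 2))

    base = np.floor(x / dx - 0.5).astype(int)  # lower-left node of 3x3 stencil
    fx = x / dx - base                         # fractional offset in [0.5, 1.5)
    # quadratic B-spline weights for the 3 stencil nodes along each axis
    w = [0.5 * (1.5 - fx) ** 2, 0.75 - (fx - 1.0) ** 2, 0.5 * (fx - 0.5) ** 2]

    # P2G: scatter particle mass and momentum onto the grid
    for p in range(len(x)):
        for i in range(3):
            for j in range(3):
                node = (base[p, 0] + i, base[p, 1] + j)
                weight = w[i][p, 0] * w[j][p, 1]
                grid_m[node] += weight * mass[p]
                grid_mv[node] += weight * mass[p] * v[p]

    # Grid update: momentum -> velocity, then apply gravity at mass nodes
    nz = grid_m > 0
    grid_v = np.zeros_like(grid_mv)
    grid_v[nz] = grid_mv[nz] / grid_m[nz, None]
    grid_v[nz, 1] += dt * gravity

    # G2P: gather updated grid velocities back to particles, then advect
    v_new = np.zeros_like(v)
    for p in range(len(x)):
        for i in range(3):
            for j in range(3):
                node = (base[p, 0] + i, base[p, 1] + j)
                weight = w[i][p, 0] * w[j][p, 1]
                v_new[p] += weight * grid_v[node]
    return x + dt * v_new, v_new
```

Because the B-spline weights sum to one, a single free-falling particle in this sketch picks up exactly `dt * gravity` of vertical velocity per step, which is a quick sanity check that the transfer cycle conserves momentum.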
Furthermore, they developed a new collision-handling technique to realistically reproduce scenes in which clothes or objects move and collide at multiple contact points in complex ways.
The generative AI model MPMAvatar, to which this technology is applied, successfully reproduced the realistic movement and interaction of a person wearing loose clothing, and also succeeded in 'zero-shot' generation, in which the AI handles situations it never saw during training by inferring on its own.

<Figure 1. Modeling new human poses and clothing dynamics from multi-view video input, and zero-shot generation of novel physical interactions.>
The proposed method is applicable to various physical properties, such as rigid bodies, deformable objects, and fluids, allowing it to be used not only for avatars but also for the generation of general complex scenes.

<Figure 2. Depiction of graceful dance movements and soft clothing folds, like Navillera.>
Professor Tae-Kyun (T-K) Kim explained, "This technology goes beyond AI simply drawing a picture; it makes the AI understand 'why' the world in front of it looks the way it does. This research demonstrates the potential of 'Physical AI' that understands and predicts physical laws, marking an important turning point toward AGI (Artificial General Intelligence)." He added, "It is expected to be practically applied across the broader immersive content industry, including virtual production, films, short-form content, and advertisements, creating significant change."
The research team is currently expanding this technology to develop a model that can generate physically consistent 3D videos simply from a user's text input.
This research involved Changmin Lee, a Master's student at the KAIST Graduate School of AI, as the first author, and Jihyun Lee, a Ph.D. student at the KAIST School of Computing, as a co-author. The research results will be presented at NeurIPS, the most prestigious international academic conference in the field of AI, on December 2nd, and the program code will be fully released.
· Paper: C. Lee, J. Lee, T-K. Kim, MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics, Proc. of Thirty-Ninth Annual Conf. on Neural Information Processing Systems (NeurIPS), San Diego, US, 2025
· arXiv version: https://arxiv.org/abs/2510.01619
· Related Project Site: https://kaistchangmin.github.io/MPMAvatar/
· Related video links showing the 'Navillera'-like dancing drawn by AI:
o https://www.youtube.com/shorts/ZE2KoRvUF5c
o https://youtu.be/ytrKDNqACqM
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) through the Human-Oriented Next-Generation Challenging AGI Technology Project (RS-2025-25443318) and the Professional AI Talent Development Program for Multimodal AI Agents (RS-2025-25441313).