
<(From left) Ph.D. candidate Jihyun Lee, Professor Tae-Kyun Kim, M.S. candidate Changmin Lee>
The era has begun where AI moves beyond merely 'plausibly drawing' to understanding even why clothes flutter and wrinkles form. A KAIST research team has developed a new generative AI that learns movement and interaction in 3D space following physical laws. This technology, which overcomes the limitations of existing 2D-based video AI, is expected to enhance the realism of avatars in films, the metaverse, and games, and significantly reduce the need for motion capture or manual 3D graphics work.
KAIST (President Kwang Hyung Lee) announced on the 22nd that the research team of Professor Tae-Kyun (T-K) Kim from the School of Computing has developed 'MPMAvatar,' a spatial and physics-based generative AI model that overcomes the limitations of existing 2D pixel-based video generation technology.
To solve the problems of conventional 2D technology, the research team proposed a new method that reconstructs multi-view images into 3D space using Gaussian Splatting and combines it with the Material Point Method (MPM), a physics simulation technique.
In other words, the AI was trained to learn physical laws on its own by reconstructing videos taken from multiple viewpoints into a stereoscopic 3D scene and letting the objects within that space move and interact as if they were in the real physical world.
This enables the AI to compute movement based on an object's material, shape, and the external forces acting on it, and then to learn the physical laws by comparing the results with actual videos.
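The learning scheme described here (simulate under a guessed physical parameter, compare with real footage, adjust the guess) can be sketched with a deliberately tiny stand-in. The free-fall simulator, the parameter `theta`, and the finite-difference gradient below are illustrative assumptions, not the authors' method; the actual model uses MPM simulation and Gaussian rendering in place of these toy pieces.

```python
import numpy as np

# Toy illustration (not the authors' code) of the learning loop described
# above: simulate motion under an unknown physical parameter, compare the
# result with an "observed" trajectory, and recover the parameter by
# gradient descent on the difference.

def simulate(g, steps=200, dt=0.01):
    """Free fall under gravity g: a stand-in for the MPM simulation."""
    x, v, traj = 0.0, 0.0, []
    for _ in range(steps):
        v += dt * g          # external force updates velocity
        x += dt * v          # velocity updates position
        traj.append(x)
    return np.array(traj)

observed = simulate(g=9.8)   # pretend this is the trajectory seen on video

theta = 1.0                  # initial guess for the physical parameter
for _ in range(100):
    eps = 1e-4
    loss = lambda p: np.mean((simulate(p) - observed) ** 2)
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= 0.5 * grad      # step toward the value that matches observation

print(round(theta, 2))       # → 9.8
```

Because the mismatch with the observation is the training signal, the recovered parameter is exactly the one that makes the simulation reproduce what was filmed; the real system applies the same principle with far richer material parameters.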
The research team represented the 3D space as a set of points, and by attaching both a Gaussian and an MPM particle to each point, they simultaneously achieved physically natural movement and realistic video rendering.
That is, they divided the 3D space into numerous small points, making each point move and deform like a real object, thereby realizing natural video that is nearly indistinguishable from reality.
In particular, to precisely express the interactions of thin and complex objects such as clothing, they modeled both the object's surface (mesh) and its particle-level structure (points), and used the Material Point Method (MPM) to compute the object's movement and deformation in 3D space according to physical laws.
Furthermore, they developed a new collision-handling technology to realistically reproduce scenes in which clothes or objects move and collide with each other in multiple spots and in a complex manner.
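As a rough illustration of the Material Point Method mentioned above, the sketch below runs one particle-to-grid-to-particle transfer with a simple floor collision enforced on the grid. The grid resolution, time step, linear weights, and PIC-style transfer are illustrative choices for a minimal example, not the paper's implementation.

```python
import numpy as np

# Minimal 2D MPM-style step (illustrative, not the authors' code): particles
# carry mass and velocity; a background grid mediates forces and collisions.

GRID_N = 32                      # grid resolution (illustrative)
DX = 1.0 / GRID_N                # cell size
DT = 1e-3                        # time step
GRAVITY = np.array([0.0, -9.8])  # external force

def mpm_step(x, v, mass):
    """One particle->grid->particle transfer with bilinear weights."""
    grid_v = np.zeros((GRID_N + 1, GRID_N + 1, 2))
    grid_m = np.zeros((GRID_N + 1, GRID_N + 1))

    # Particle-to-grid: scatter mass and momentum to nearby grid nodes.
    for p in range(len(x)):
        base = (x[p] / DX).astype(int)
        frac = x[p] / DX - base
        for di in (0, 1):
            for dj in (0, 1):
                w = (frac[0] if di else 1 - frac[0]) * \
                    (frac[1] if dj else 1 - frac[1])
                i, j = base[0] + di, base[1] + dj
                grid_m[i, j] += w * mass[p]
                grid_v[i, j] += w * mass[p] * v[p]

    # Grid update: momentum -> velocity, apply gravity, then handle the
    # floor collision by forbidding downward velocity in the bottom rows.
    nz = grid_m > 0
    grid_v[nz] /= grid_m[nz, None]
    grid_v[nz] += DT * GRAVITY
    grid_v[:, :2, 1] = np.maximum(grid_v[:, :2, 1], 0.0)

    # Grid-to-particle: gather velocities back and advect positions.
    for p in range(len(x)):
        base = (x[p] / DX).astype(int)
        frac = x[p] / DX - base
        v_new = np.zeros(2)
        for di in (0, 1):
            for dj in (0, 1):
                w = (frac[0] if di else 1 - frac[0]) * \
                    (frac[1] if dj else 1 - frac[1])
                v_new += w * grid_v[base[0] + di, base[1] + dj]
        v[p] = v_new
        x[p] = np.clip(x[p] + DT * v[p], 0.0, 1.0 - 1e-6)
    return x, v
```

Because collisions are resolved on the shared grid rather than between particle pairs, many contact points between cloth and body can be handled in one pass, which is one reason MPM suits thin, wrinkling materials.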
The generative AI model MPMAvatar, which applies this technology, successfully reproduced the realistic movement and interaction of a person wearing loose clothing, and also achieved 'zero-shot' generation, in which the AI handles situations it never saw during training by inferring on its own.

<Figure 1. Modeling new human poses and clothing dynamics from multi-view video input, and zero-shot generation of novel physical interactions.>
The proposed method is applicable to various physical properties, such as rigid bodies, deformable objects, and fluids, allowing it to be used not only for avatars but also for the generation of general complex scenes.

<Figure 2. Depiction of graceful dance movements and soft clothing folds, reminiscent of Navillera.>
Professor Tae-Kyun (T-K) Kim explained, "This technology goes beyond AI simply drawing a picture; it makes the AI understand 'why' the world in front of it looks the way it does. This research demonstrates the potential of 'Physical AI' that understands and predicts physical laws, marking an important turning point toward AGI (Artificial General Intelligence)." He added, "It is expected to be practically applied across the broader immersive content industry, including virtual production, films, short-form content, and advertising, creating significant change."
The research team is currently expanding this technology to develop a model that can generate physically consistent 3D videos simply from a user's text input.
This research was led by Changmin Lee, a Master's student at the KAIST Graduate School of AI, as first author, with Jihyun Lee, a Ph.D. student at the KAIST School of Computing, as co-author. The results will be presented on December 2nd at NeurIPS, one of the most prestigious international academic conferences in AI, and the code will be fully released.
· Paper: C. Lee, J. Lee, T-K. Kim, MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics, Proc. of Thirty-Ninth Annual Conf. on Neural Information Processing Systems (NeurIPS), San Diego, US, 2025
· arXiv version: https://arxiv.org/abs/2510.01619
· Related Project Site: https://kaistchangmin.github.io/MPMAvatar/
· Related video links showing the 'Navillera'-like dancing drawn by AI:
o https://www.youtube.com/shorts/ZE2KoRvUF5c
o https://youtu.be/ytrKDNqACqM
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) through the Human-Oriented Next-Generation Challenging AGI Technology Project (RS-2025-25443318) and the Professional AI Talent Development Program for Multimodal AI Agents (RS-2025-25441313).