KAIST (President Kwang Hyung Lee) is leading the AI Transformation (AX) by advancing research topics grounded in the practical technological demands of industry, fostering AI talent, and demonstrating research outcomes in industrial settings. In this context, KAIST announced on August 13 that it is at the forefront of strengthening the nation's AI technology competitiveness by developing core AI technologies through the Ministry of Science and ICT's national R&D projects for generative AI.
In the 'Generative AI Leading Talent Cultivation Project,' KAIST was selected as the joint research institution for all three projects, two led by industry partners and one by a research institute, and will thus take on the dual task of developing core generative AI technologies and cultivating practice-ready core talent through industry-academia collaboration.
Moreover, in the 'Development of a Proprietary AI Foundation Model' project, KAIST faculty members are participating as key researchers in four out of five consortia, establishing the university as a central hub for domestic generative AI research.
Each project in the Generative AI Leading Talent Cultivation Project will receive 6.7 billion won, while each consortium in the proprietary AI foundation model development project will receive a total of 200 billion won in government support, including GPU infrastructure.
As part of the 'Generative AI Leading Talent Cultivation Project,' which runs until the end of 2028, KAIST is collaborating with LG AI Research. Professor Noseong Park from the School of Computing will serve as KAIST's principal investigator, conducting research in physics-based generative AI (Physical AI). The project focuses on image and video generation technologies grounded in physical laws and on the development of a 'World Model.'
<(From Left) Professor Noseong Park, Professor Jae-gil Lee, Professor Jiyoung Whang, Professor Sung-Eui Yoon, Professor Hyunwoo Kim>
In particular, the teams of Professor Noseong Park and Professor Sung-Eui Yoon are proposing a model architecture designed to help AI learn the rules of the physical world more precisely, which is regarded as a core technology for Physical AI.
Professors Noseong Park, Jae-gil Lee, Jiyoung Whang, Sung-Eui Yoon, and Hyunwoo Kim from the School of Computing, whose work in AI has been globally recognized, are jointly participating in this project. This year, they have presented work at top AI conferences such as ICLR, ICRA, ICCV, and ICML, including: ▲ Research on physics-based Ollivier-Ricci flow (ICLR 2025, Prof. Noseong Park) ▲ Technology to improve the navigation efficiency of quadruped robots (ICRA 2025, Prof. Sung-Eui Yoon) ▲ A multimodal large language model for text-video retrieval (ICCV 2025, Prof. Hyunwoo Kim) ▲ Structured representation learning for knowledge generation (ICML 2025, Prof. Jiyoung Whang).
In the collaboration with NC AI, Professor Tae-Kyun Kim from the School of Computing is participating as the principal investigator to develop multimodal AI agent technology. The research will explore technologies applicable across the gaming industry, such as 3D modeling, animation, avatar expression generation, and character AI. It is expected to make the game production pipeline more efficient and to help train practice-ready AI talent by giving students hands-on experience in industrial settings.
Professor Tae-Kyun Kim, a renowned scholar in 3D computer vision and generative AI, will lead the development of key technologies for creating immersive avatars for the virtual-world and gaming industries. He plans to apply the first-person full-body motion diffusion model he developed through a joint research project with Meta to VR and AR environments.
<Professors Tae-Kyun Kim, Minhyuk Sung, and Tae-Hyun Oh from the School of Computing; Professors Sung-Hee Lee, Woon-Tack Woo, Jun-Yong Noh, and Kyung-Tae Lim from the Graduate School of Culture Technology; and Professors Ki-min Lee and Seungryong Kim from the Kim Jae-chul Graduate School of AI>
Professors Tae-Kyun Kim, Minhyuk Sung, and Tae-Hyun Oh from the School of Computing, together with Professors Sung-Hee Lee, Woon-Tack Woo, Jun-Yong Noh, and Kyung-Tae Lim from the Graduate School of Culture Technology, are participating in the NC AI project. They have presented globally recognized work at top venues including CVPR 2025 and ICLR 2025, such as: ▲ A first-person full-body motion diffusion model (CVPR 2025, Prof. Tae-Kyun Kim) ▲ Stochastic diffusion synchronization technology for image generation (ICLR 2025, Prof. Minhyuk Sung) ▲ A large-scale 3D facial mesh video dataset (ICLR 2025, Prof. Tae-Hyun Oh) ▲ InterFaceRays, an object-adaptive agent motion generation technology (Eurographics 2025, Prof. Sung-Hee Lee) ▲ 3D neural face editing technology (CVPR 2025, Prof. Jun-Yong Noh) ▲ Research on selective retrieval augmentation for multilingual vision-language models (COLING 2025, Prof. Kyung-Tae Lim).
In the project led by the Korea Electronics Technology Institute (KETI), Professor Seungryong Kim from the Kim Jae-chul Graduate School of AI is participating in generative AI technology development. His team recently developed new technology for extracting robust point-tracking information from video data in collaboration with Adobe Research and Google DeepMind, proposing a key technology for clearly understanding and generating videos.
Each industry partner will open joint courses with KAIST and provide their generative AI foundation models for education and research. Selected outstanding students will be dispatched to these companies to conduct practical research, and KAIST faculty will also serve as adjunct professors at the in-house AI graduate school established by LG AI Research.
<Egocentric Whole-Body Motion Diffusion (CVPR 2025, Prof. Tae-Kyun Kim's Lab), Stochastic Diffusion Synchronization for Image Generation (ICLR 2025, Prof. Minhyuk Sung's Lab), A Large-Scale 3D Face Mesh Video Dataset (ICLR 2025, Prof. Tae-Hyun Oh's Lab), InterFaceRays: Object-Adaptive Agent Action Generation (Eurographics 2025, Prof. Sung-Hee Lee's Lab), 3D Neural Face Editing (CVPR 2025, Prof. Jun-Yong Noh's Lab), and Selective Retrieval Augmentation for Multilingual Vision-Language Models (COLING 2025, Prof. Kyung-Tae Lim's Lab)>
Meanwhile, KAIST demonstrated an unrivaled presence by participating in four of the five consortia for the Ministry of Science and ICT's 'Proprietary AI Foundation Model Development' project.
In the NC AI Consortium, Professors Tae-Kyun Kim, Sung-Eui Yoon, Noseong Park, Jiyoung Whang, and Minhyuk Sung from the School of Computing are participating, focusing on the development of multimodal foundation models (LMMs) and robot foundation models, with a particular emphasis on LMMs that learn common sense about space, physics, and time. Together they form a research team optimized for building next-generation multimodal AI models that can understand and interact with the physical world, equipped with an 'all-purpose AI brain' capable of simultaneously understanding and processing diverse information such as text, images, video, and sound.
In the Upstage Consortium, Professors Jae-gil Lee and Hyeon-eon Oh from the School of Computing, both renowned scholars in data AI and natural language processing (NLP), along with Professor Kyung-Tae Lim from the Graduate School of Culture Technology, an expert in LLMs, are responsible for developing vertical models for industries such as finance, law, and manufacturing. The KAIST researchers will focus on practical AI models that are tailored to each industry and directly applicable in industrial settings.
The Naver Consortium includes Professor Tae-Hyun Oh from the School of Computing, who has developed key technologies for multimodal learning and compositional language-vision models, Professor Hyunwoo Kim, who has proposed video reasoning and generation methods using language models, and faculty from the Kim Jae-chul Graduate School of AI and the Department of Electrical Engineering.
In the SKT Consortium, Professor Ki-min Lee from the Kim Jae-chul Graduate School of AI, who has achieved outstanding results in text-to-image generation, human preference modeling, and visual robotic manipulation, is participating. His research is expected to play a key role in developing personalized services and customized AI solutions for telecommunications companies.
This outcome is regarded as the fruit of KAIST's strategy of developing AI technology driven by industry demand and validated through on-site demonstration.
KAIST President Kwang Hyung Lee said, "For AI technology to go beyond academic achievements and be connected to and practical for industry, continuous government support, research, and education centered on industry-academia collaboration are essential. KAIST will continue to strive to solve problems in industrial settings and make a real contribution to enhancing the competitiveness of the AI ecosystem."