< Photo 1. Professor Jaesik Choi, KAIST Kim Jaechul Graduate School of AI >
Text-based image generation models can now automatically create high-resolution, high-quality images from natural language descriptions alone. However, when a typical model such as Stable Diffusion is given the prompt "creative," its ability to generate truly creative images remains limited. KAIST researchers have developed a technology that enhances the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary.
Professor Jaesik Choi's research team at KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training.
< Photo 2. Gayoung Lee, Researcher at NAVER AI Lab; Dahee Kwon, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Jiyeon Han, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Junho Kim, Researcher at NAVER AI Lab >
Professor Choi's research team developed a technology to enhance creative generation by amplifying the internal feature maps of text-based image generation models. They also discovered that shallow blocks within the model play a crucial role in creative generation. They confirmed that amplifying values in the high-frequency region after converting feature maps to the frequency domain can lead to noise or fragmented color patterns. Accordingly, the research team demonstrated that amplifying the low-frequency region of shallow blocks can effectively enhance creative generation.
Considering originality and usefulness as two key elements defining creativity, the research team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model.
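The per-block selection step described above can be sketched as a simple constrained search: among candidate amplification values, keep the one that maximizes a novelty score while keeping a usefulness score above a floor. This is a minimal illustrative sketch; `novelty_fn`, `utility_fn`, and `min_utility` are hypothetical stand-ins for the metrics the team actually uses, which are not detailed in this article.

```python
def select_amplification(gains, novelty_fn, utility_fn, min_utility):
    """Pick the amplification gain that maximizes novelty subject to
    a minimum-utility constraint.

    Hedged sketch: the real algorithm's originality/usefulness
    metrics are stand-ins here (novelty_fn / utility_fn).
    """
    best_gain, best_novelty = 1.0, float("-inf")
    for g in gains:
        # Discard gains that degrade usefulness below the floor.
        if utility_fn(g) < min_utility:
            continue
        # Among the remaining gains, keep the most novel one.
        if novelty_fn(g) > best_novelty:
            best_gain, best_novelty = g, novelty_fn(g)
    return best_gain

# Toy example: novelty grows with gain, utility shrinks with gain.
gain = select_amplification(
    gains=[1.0, 1.5, 2.0, 3.0],
    novelty_fn=lambda g: g,
    utility_fn=lambda g: 1.0 / g,
    min_utility=0.4,
)
print(gain)  # 2.0 -- the largest gain whose utility stays above 0.4
```

Running this search independently for each block of the generative model yields one amplification value per block, mirroring the per-block selection the article describes.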
With this algorithm, appropriately amplifying the internal feature maps of a pre-trained Stable Diffusion model enhanced creative generation without any additional classification data or training.
< Figure 1. Overview of the methodology researched by the development team. After converting the internal feature map of a pre-trained generative model into the frequency domain through Fast Fourier Transform, the low-frequency region of the feature map is amplified, then re-transformed into the feature space via Inverse Fast Fourier Transform to generate an image. >
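The pipeline in Figure 1 can be sketched in a few lines of NumPy: transform a feature map with a Fast Fourier Transform, boost only the low-frequency coefficients, and transform back. This is a minimal sketch of the idea, not the team's implementation; the mask radius (`radius_ratio`) and amplification factor (`gain`) are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

def amplify_low_freq(feature_map, radius_ratio=0.25, gain=1.5):
    """Amplify the low-frequency region of a 2D feature map.

    Sketch of the Figure 1 pipeline: FFT -> amplify low
    frequencies -> inverse FFT back to feature space.
    radius_ratio and gain are illustrative assumptions.
    """
    h, w = feature_map.shape

    # Move to the frequency domain; fftshift centers the DC component.
    spectrum = np.fft.fftshift(np.fft.fft2(feature_map))

    # Circular mask selecting the low-frequency region around the center.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_freq = dist <= radius_ratio * min(h, w)

    # Amplify only the low-frequency coefficients.
    spectrum[low_freq] *= gain

    # Return to the feature space via the inverse FFT.
    out = np.fft.ifft2(np.fft.ifftshift(spectrum))
    return out.real

# Example on a random stand-in for a shallow-block feature map.
fmap = np.random.randn(64, 64)
boosted = amplify_low_freq(fmap)
print(boosted.shape)  # (64, 64)
```

In the actual model the same operation would be applied channel-wise to the feature maps of the selected shallow blocks during sampling, with the amplification value chosen per block by the team's selection algorithm.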
The research team quantitatively demonstrated, using various metrics, that their algorithm generates more novel images than existing models without significantly compromising utility.
In particular, they confirmed an increase in image diversity by mitigating the mode collapse problem that occurs in the SDXL-Turbo model, which was developed to significantly improve the image generation speed of the Stable Diffusion XL (SDXL) model. Furthermore, user studies confirmed a significant improvement in novelty relative to utility compared to existing methods.
Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, "This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature map manipulation."
They added, "This research makes it possible to generate creative images from existing trained models using text alone. It is expected to provide new inspiration in various fields, such as creative product design, and contribute to the practical and useful application of AI models in the creative ecosystem."
< Figure 2. Application examples of the methodology researched by the development team. Various Stable Diffusion models generate novel images compared to existing generations while maintaining the meaning of the generated object. >
This research, co-authored by Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), a premier international academic conference.
* Paper Title: Enhancing Creative Generation on Stable Diffusion-based Models
* DOI: https://doi.org/10.48550/arXiv.2503.23538
This research was supported by the KAIST-NAVER Ultra-creative AI Research Center, the Innovation Growth Engine Project Explainable AI, the AI Research Hub Project, and research on flexible evolving AI technology development in line with increasingly strengthened ethical policies, all funded by the Ministry of Science and ICT through the Institute for Information & Communications Technology Promotion. It also received support from the KAIST AI Graduate School Program and was carried out at the KAIST Future Defense AI Specialized Research Center with support from the Defense Acquisition Program Administration and the Agency for Defense Development.