
<(From Left) Ph.D. candidate Dahee Kwon, Ph.D. candidate Sehyun Lee, Professor Jaesik Choi>
Although deep learning–based image recognition technology is rapidly advancing, it remains difficult to clearly explain the criteria AI uses internally to observe and judge images. In particular, analyzing how large-scale models combine various concepts (e.g., cat ears, car wheels) to reach a conclusion has long been recognized as a major unsolved challenge.
KAIST (President Kwang Hyung Lee) announced on the 26th of November that Professor Jaesik Choi’s research team at the Kim Jaechul Graduate School of AI has developed a new explainable AI (XAI) technology that visualizes the concept-formation process inside a model at the level of circuits, enabling humans to understand the basis on which AI makes decisions.
The study is evaluated as a significant step forward that allows researchers to structurally examine “how AI thinks.”
Inside deep learning models, there exist basic computational units called neurons, which function similarly to those in the human brain. Neurons detect small features within an image—such as the shape of an ear, a specific color, or an outline—and compute a value (signal) that is transmitted to the next layer.
In contrast, a circuit refers to a structure in which multiple neurons are connected to jointly recognize a single meaning (concept). For example, to recognize the concept of cat ear, neurons detecting outline shapes, neurons detecting triangular forms, and neurons detecting fur-color patterns must activate in sequence, forming a functional unit (circuit).
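The distinction between a single neuron and a circuit can be illustrated with a toy sketch. The code below is purely illustrative (the weights, features, and threshold are invented for this example, not taken from any real model): each "neuron" computes a weighted sum passed through a nonlinearity, and the higher-level concept signal fires only when the upstream neurons activate in sequence.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A toy "neuron": weighted sum of inputs plus bias, through a nonlinearity,
# producing a signal that is transmitted to the next layer.
def neuron(inputs, weights, bias):
    return relu(np.dot(inputs, weights) + bias)

# A toy "circuit": three neurons chained so that the higher-level concept
# signal is strong only when the upstream detectors fire (illustrative only).
features = np.array([0.9, 0.8, 0.7])  # e.g. outline, triangle, fur-color cues
edge = neuron(features, np.array([1.0, 0.0, 0.0]), 0.0)
shape = neuron(np.array([edge, features[1]]), np.array([0.5, 0.5]), 0.0)
concept = neuron(np.array([shape, features[2]]), np.array([0.6, 0.4]), 0.0)
print(concept > 0.5)  # the "cat ear" concept fires only if upstream neurons fire
```

Zeroing out any upstream feature weakens the chained signal, which is exactly why a circuit, not a single neuron, is the natural unit of concept representation.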
Up until now, most explanation techniques have taken a neuron-centric approach based on the idea that “a specific neuron detects a specific concept.” However, in reality, deep learning models form concepts through cooperative circuit structures involving many neurons. Based on this observation, the KAIST research team proposed a technique that expands the unit of concept representation from “neuron → circuit.”
The research team’s newly developed technology, Granular Concept Circuits (GCC), is a novel method that analyzes and visualizes how an image-classification model internally forms concepts at the circuit level.
GCC automatically traces circuits by computing Neuron Sensitivity and Semantic Flow. Neuron Sensitivity indicates how strongly a neuron responds to a particular feature, while Semantic Flow measures how strongly that feature is passed on to the next concept. Using these metrics, the system can visualize, step-by-step, how basic features such as color and texture are assembled into higher-level concepts.
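The tracing idea described above can be sketched with simple stand-in metrics. Note that the proxies below (mean activation as "sensitivity", activation-weighted connection strength as "flow") are hypothetical simplifications for illustration; the paper's actual definitions of Neuron Sensitivity and Semantic Flow differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations for two adjacent layers on a batch of 16 images.
acts_l = rng.random((16, 4))           # layer L: 16 images x 4 neurons
W = rng.random((4, 3))                 # connections from layer L to layer L+1

# Illustrative "sensitivity" proxy: how strongly each neuron in layer L
# responds, measured as its mean activation over the batch.
sensitivity = acts_l.mean(axis=0)      # shape (4,)

# Illustrative "semantic flow" proxy: how much of neuron i's signal is
# passed on to neuron j downstream (response strength x connection weight).
flow = sensitivity[:, None] * W        # shape (4, 3)

# A circuit can then be traced greedily: start from the most responsive
# neuron and follow the strongest flow edge into the next layer.
start = int(np.argmax(sensitivity))
next_neuron = int(np.argmax(flow[start]))
print(f"layer L neuron {start} -> layer L+1 neuron {next_neuron}")
```

Repeating this step layer by layer yields a path of cooperating neurons, which is the intuition behind visualizing how color- and texture-level features are assembled into higher-level concepts.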
The team conducted experiments in which specific circuits were temporarily disabled (ablation). As a result, when the circuit responsible for a concept was deactivated, the AI’s predictions actually changed.
In other words, the experiment directly demonstrated that the corresponding circuit indeed performs the function of recognizing that concept.
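A minimal sketch of such an ablation test, assuming access to a model's hidden activations and a linear readout (this is not the team's actual experimental procedure, and the neuron indices chosen as the "circuit" are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy classifier: 8 hidden "neurons" feeding a linear readout over 3 classes.
hidden = rng.random(8)
W_out = rng.random((8, 3))

def predict(h):
    return int(np.argmax(h @ W_out))

circuit = [2, 5, 7]            # hypothetical neurons forming one concept circuit
before = predict(hidden)

ablated = hidden.copy()
ablated[circuit] = 0.0         # "disable" the circuit by zeroing its neurons
after = predict(ablated)

print(before, after)  # if the prediction changes, the circuit carried the concept
```

If the prediction flips only when that circuit is disabled, and not when random neurons are zeroed, that is evidence the circuit is causally responsible for recognizing the concept.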
This study is regarded as the first to reveal, at a fine-grained circuit level, the actual structural process by which concepts are formed inside complex deep learning models. Through this, the research suggests practical applicability across the entire explainable AI (XAI) domain—including strengthening transparency in AI decision-making, analyzing the causes of misclassification, detecting bias, improving model debugging and architecture, and enhancing safety and accountability.
The research team stated, “This technology shows the concept structures that AI forms internally in a way that humans can understand,” adding that “this study provides a scientific starting point for researching how AI thinks.”
Professor Jaesik Choi emphasized, “Unlike previous approaches that simplified complex models for explanation, this is the first approach to precisely interpret the model’s interior at the level of fine-grained circuits,” and added, “We demonstrated that the concepts learned by AI can be automatically traced and visualized.”

< Overview of the Conceptual Circuit Proposed by the Research Team >

This study, with Ph.D. candidates Dahee Kwon and Sehyun Lee from KAIST Kim Jaechul Graduate School of AI as co–first authors, was presented on October 21 at the International Conference on Computer Vision (ICCV).
Paper title: Granular Concept Circuits: Toward a Fine-Grained Circuit Discovery for Concept Representations
Paper link: https://openaccess.thecvf.com/content/ICCV2025/papers/Kwon_Granular_Concept_Circuits_Toward_a_Fine-Grained_Circuit_Discovery_for_Concept_ICCV_2025_paper.pdf
This research was supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the “Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation” project, the AI Research Hub Project, and the KAIST AI Graduate School Program, and was carried out with support from the Defense Acquisition Program Administration (DAPA) and the Agency for Defense Development (ADD) at the KAIST Center for Applied Research in Artificial Intelligence.