How Does AI Think? KAIST Achieves First Visualization of the Internal Structure Behind AI Decision-Making
<(From Left) Ph.D. candidate Dahee Kwon, Ph.D. candidate Sehyun Lee, Professor Jaesik Choi>
Although deep learning–based image recognition technology is rapidly advancing, it remains difficult to clearly explain the criteria AI uses internally to observe and judge images. In particular, technologies that analyze how large-scale models combine various concepts (e.g., cat ears, car wheels) to reach a conclusion have long been recognized as a major unsolved challenge.
KAIST (President Kwang Hyung Lee) announced on the 26th of November that Professor Jaesik Choi’s research team at the Kim Jaechul Graduate School of AI has developed a new explainable AI (XAI) technology that visualizes the concept-formation process inside a model at the level of circuits, enabling humans to understand the basis on which AI makes decisions.
The study is evaluated as a significant step forward that allows researchers to structurally examine “how AI thinks.”
Inside deep learning models, there exist basic computational units called neurons, which function similarly to those in the human brain. Neurons detect small features within an image—such as the shape of an ear, a specific color, or an outline—and compute a value (signal) that is transmitted to the next layer.
In contrast, a circuit refers to a structure in which multiple neurons are connected to jointly recognize a single meaning (concept). For example, to recognize the concept of cat ear, neurons detecting outline shapes, neurons detecting triangular forms, and neurons detecting fur-color patterns must activate in sequence, forming a functional unit (circuit).
Up until now, most explanation techniques have taken a neuron-centric approach based on the idea that “a specific neuron detects a specific concept.” However, in reality, deep learning models form concepts through cooperative circuit structures involving many neurons. Based on this observation, the KAIST research team proposed a technique that expands the unit of concept representation from “neuron → circuit.”
The research team’s newly developed technology, Granular Concept Circuits (GCC), is a novel method that analyzes and visualizes how an image-classification model internally forms concepts at the circuit level.
GCC automatically traces circuits by computing Neuron Sensitivity and Semantic Flow. Neuron Sensitivity indicates how strongly a neuron responds to a particular feature, while Semantic Flow measures how strongly that feature is passed on to the next concept. Using these metrics, the system can visualize, step-by-step, how basic features such as color and texture are assembled into higher-level concepts.
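To make this idea concrete, the toy sketch below traces a small "circuit" backward through a miniature convolutional network: it picks a strongly responding later-layer neuron and scores how much each earlier-layer channel feeds it. The network, the activation-based sensitivity score, and the gradient-times-activation flow score are illustrative stand-ins for the team's Neuron Sensitivity and Semantic Flow, not the paper's actual definitions.

# Toy circuit-tracing sketch in the spirit of Granular Concept Circuits (GCC).
# The scores below are illustrative proxies, not the paper's exact formulation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny two-stage CNN with random weights, standing in for a trained classifier.
layer1 = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
layer2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())

x = torch.randn(1, 3, 32, 32)            # a stand-in input image
a1 = layer1(x)                           # earlier-layer feature maps
a1.retain_grad()
a2 = layer2(a1)                          # later-layer feature maps

# "Neuron sensitivity" proxy: how strongly each later-layer channel responds.
sensitivity = a2.mean(dim=(0, 2, 3))
concept_ch = int(sensitivity.argmax())   # the most responsive "concept" neuron

# "Semantic flow" proxy: gradient-times-activation of earlier channels w.r.t. that neuron.
a2[0, concept_ch].mean().backward()
flow = (a1.grad * a1).mean(dim=(0, 2, 3)).abs()

circuit_edges = torch.topk(flow, k=3).indices.tolist()
print(f"concept channel in layer 2: {concept_ch}")
print(f"layer-1 channels feeding it most strongly: {circuit_edges}")

Repeating this scoring layer by layer, and keeping only edges whose flow exceeds a threshold, yields the kind of fine-grained, concept-specific subgraph the article describes.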
The team conducted experiments in which specific circuits were temporarily disabled (ablation). As a result, when the circuit responsible for a concept was deactivated, the AI’s predictions actually changed.
In other words, the experiment directly demonstrated that the corresponding circuit indeed performs the function of recognizing that concept.
This study is regarded as the first to reveal, at a fine-grained circuit level, the actual structural process by which concepts are formed inside complex deep learning models. Through this, the research suggests practical applicability across the entire explainable AI (XAI) domain—including strengthening transparency in AI decision-making, analyzing the causes of misclassification, detecting bias, improving model debugging and architecture, and enhancing safety and accountability.
The research team stated, “This technology shows the concept structures that AI forms internally in a way that humans can understand,” adding that “this study provides a scientific starting point for researching how AI thinks.”
Professor Jaesik Choi emphasized, “Unlike previous approaches that simplified complex models for explanation, this is the first approach to precisely interpret the model’s interior at the level of fine-grained circuits,” and added, “We demonstrated that the concepts learned by AI can be automatically traced and visualized.”
< Overview of the Conceptual Circuit Proposed by the Research Team >
This study, with Ph.D. candidates Dahee Kwon and Sehyun Lee from the KAIST Kim Jaechul Graduate School of AI as co-first authors, was presented on October 21 at the International Conference on Computer Vision (ICCV 2025).
Paper title: Granular Concept Circuits: Toward a Fine-Grained Circuit Discovery for Concept Representations
Paper link: https://openaccess.thecvf.com/content/ICCV2025/papers/Kwon_Granular_Concept_Circuits_Toward_a_Fine-Grained_Circuit_Discovery_for_Concept_ICCV_2025_paper.pdf
This research was supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the “Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation” project, the AI Research Hub Project, and the KAIST AI Graduate School Program, and was carried out with support from the Defense Acquisition Program Administration (DAPA) and the Agency for Defense Development (ADD) at the KAIST Center for Applied Research in Artificial Intelligence.
Efficient Quantum Process Tomography for Enabling Scalable Optical Quantum Computing
<(From Left) Ph.D. candidate Geunhee Gwak, Professor Young-Sik Ra, Dr. Chan Roh, Ph.D. candidate Young-Do Yoon from KAIST, (Top Left) Professor M.S. Kim from Imperial College London>
Optical quantum computers are gaining attention as a next-generation computing technology with high speed and scalability. However, accurately characterizing complex optical processes, where multiple optical modes interact to generate quantum entanglement, has been considered an extremely challenging task. A KAIST research team has overcome this limitation, developing a highly efficient technique that enables the complete characterization of complex multimode quantum operations in experiments. This technology, which can analyze large-scale operations with less data, represents an important step toward scalable quantum computing and quantum communication technologies.
KAIST announced on November 17th that a research team led by Professor Young-Sik Ra from the Department of Physics has developed a Multimode Quantum Process Tomography technique capable of efficiently identifying the characteristics of second-order nonlinear optical quantum processes that are essential for optical quantum computing.
Efficient 'CT Scan' Technology for Quantum Computers
'Tomography' is a technique, similar to a medical CT scan, that reconstructs an invisible internal structure from diverse measurements. Similarly, quantum computing requires a method that reconstructs the internal workings of quantum operations using various measurement data. To outperform conventional computers, a quantum computer must be capable of manipulating a large number of quantum units (qubits or qumodes) at the same time. However, as the number of qubits or quantum optical modes (qumodes) increases, the resources required for tomography grow exponentially, so existing techniques could not analyze systems with more than about five optical modes.
With the newly developed technique, the research team is now able to clearly determine what actually happens inside an optical quantum computer, as if taking a CT scan.
Introducing a New Mathematical Framework Based on Amplification and Noise Matrices
Inside a quantum computer, multiple optical modes interact in a highly complex and entangled way. The research team has introduced a new mathematical framework that precisely describes multimode second-order nonlinear optical quantum processes.
This method analyzes how input states change under a given operation using two key components: the 'Amplification matrix,' which describes how the mean fields of light are transformed, and the 'Noise matrix,' which captures the noise or loss introduced through environmental interactions.
Together, these components create a 'quantum state map' that enables accurate and simultaneous observation of both the ideal quantum evolution of light (unitary changes) and the unavoidable noise (non-unitary changes) present in real devices. This leads to a much more realistic characterization of how an optical quantum computer actually operates.
Reducing the Required Measurement Data and Expanding Analysis to 16 Modes
To determine how a quantum operation works, the research team input several types of quantum states and observed how the outputs changed. They then applied a statistical method known as Maximum Likelihood Estimation to reconstruct the internal operation that most accurately explains the collected data while satisfying the necessary physical conditions.
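As a rough numerical illustration of this probing-and-fitting idea, the sketch below estimates an amplification matrix from how coherent-state probe amplitudes are transformed, then reads off a noise matrix from the output covariance of a vacuum input, using the Gaussian relation Sigma_out = A Sigma_in A^T + N. The matrix sizes, probe values, and the plain least-squares fit (a stand-in for the team's Maximum Likelihood Estimation) are illustrative assumptions, not the experimental procedure.

# Toy sketch of amplification/noise-matrix estimation for a Gaussian (quadrature)
# description of an optical process. The "true" process below is made up for
# illustration; a real experiment fits measured quadrature data instead.
import numpy as np

rng = np.random.default_rng(1)
d = 4                                                     # two modes -> four quadratures (x1, p1, x2, p2)

A_true = np.eye(d) + 0.3 * rng.standard_normal((d, d))    # unknown amplification matrix
N_true = 0.05 * np.eye(d)                                 # unknown added-noise matrix
sigma_vac = 0.5 * np.eye(d)                               # vacuum covariance (hbar = 1 units)

# Step 1: probe with coherent states (known mean quadratures) and record output means.
probes = rng.standard_normal((20, d))
outputs = probes @ A_true.T + 0.01 * rng.standard_normal((20, d))   # "measured" output means

# Least-squares fit of A from input/output means (stand-in for maximum likelihood).
X, *_ = np.linalg.lstsq(probes, outputs, rcond=None)
A_est = X.T

# Step 2: send in vacuum, measure the output covariance, and solve
# Sigma_out = A Sigma_vac A^T + N for the noise matrix N.
sigma_out = A_true @ sigma_vac @ A_true.T + N_true        # "measured" vacuum-output covariance
N_est = sigma_out - A_est @ sigma_vac @ A_est.T

print("amplification-matrix error:", np.linalg.norm(A_est - A_true))
print("noise-matrix error:        ", np.linalg.norm(N_est - N_true))

Because only a modest set of probe states is needed to pin down the two matrices, this style of description grows far more gently with the number of modes than brute-force tomography, which is the kind of efficiency gain the article describes.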
Using this approach, the research team dramatically reduced the amount of measurement data required. Whereas existing methods quickly become impractical—requiring enormous datasets even for systems with slightly more than a few modes and typically limiting analysis to about five modes—the new technique overcomes this bottleneck. The team successfully performed the world’s first experimental characterization of a large-scale optical quantum operation involving 16 modes, an unprecedented milestone in the field.
< Figure 1. Experimental scheme. (Left) Various coherent states are used as input probes to determine the amplification matrix. (Right) A vacuum input state is used to additionally determine the noise matrix. >
< Figure 2. Characterization results. (a) 16-mode second-order nonlinear optical quantum process. (b) Cluster state generation. (c) Mode-dependent loss with nonlinear interaction. (d) Quantum noise channel. Left and right columns show the amplification and noise matrices, respectively. >
Professor Young-Sik Ra stated, "This research significantly increases the efficiency of Quantum Process Tomography, a foundational technology essential for quantum computing. The acquired technology will greatly contribute to enhancing the scalability and reliability of various quantum technologies, including quantum computing, quantum communication, and quantum sensing."
The study, in which Geunhee Gwak (Integrated M.S./Ph.D. Candidate, Department of Physics) participated as the first author, and Dr. Chan Roh (Postdoctoral Researcher), Young-Do Yoon (Integrated M.S./Ph.D. Candidate), and Professor Myungshik Kim (Imperial College London) participated as co-authors, was formally published online in the prominent international academic journal 'Nature Photonics' on November 11, 2025.
※ Article Title: Completely characterizing multimode second-order nonlinear optical quantum processes, DOI:10.1038/s41566-025-01787-x
This research was supported by the National Research Foundation of Korea (Quantum Computing Technology Development Project, Mid-career Researcher Support Project, Quantum Simulator Development for Material Innovation Project, Quantum Technology R&D Flagship Project, Basic Research Lab Support Project), the Institute of Information & Communications Technology Planning & Evaluation (Core Source Technology for Quantum Internet Project, University ICT Research Center Support Project), and the US Air Force Research Laboratory.
KAIST Develops Neuromorphic Semiconductor Chip that Learns and Corrects Itself
< Photo. The research team of the School of Electrical Engineering posing with the newly developed processor. (From center to the right) Professor Young-Gyu Yoon, Integrated Master's and Doctoral Program Students Seungjae Han and Hakcheon Jeong, and Professor Shinhyun Choi >
- Professor Shinhyun Choi and Professor Young-Gyu Yoon’s Joint Research Team from the School of Electrical Engineering developed a computing chip that can learn, correct errors, and process AI tasks
- Equipping a computing chip with high-reliability memristor devices with self-error correction functions for real-time learning and image processing
Existing computer systems have separate data-processing and storage devices, making them inefficient for processing complex data such as AI workloads. A KAIST research team has developed a memristor-based integrated system that works similarly to the way our brain processes information. It is now ready for application in various devices, including smart security cameras that can recognize suspicious activity immediately without relying on remote cloud servers, and medical devices that can help analyze health data in real time.
KAIST (President Kwang Hyung Lee) announced on the 17th of January that the joint research team of Professor Shinhyun Choi and Professor Young-Gyu Yoon of the School of Electrical Engineering has developed a next-generation neuromorphic semiconductor-based ultra-small computing chip that can learn and correct errors on its own.
< Figure 1. Scanning electron microscope (SEM) image of a computing chip equipped with a highly reliable selector-less 32×32 memristor crossbar array (left). Hardware system developed for real-time artificial intelligence implementation (right). >
What is special about this computing chip is that it can learn and correct errors that occur due to non-ideal characteristics that were difficult to solve in existing neuromorphic devices. For example, when processing a video stream, the chip learns to automatically separate a moving object from the background, and it becomes better at this task over time.
This self-learning ability has been proven by achieving accuracy comparable to ideal computer simulations in real-time image processing. The research team's main achievement is that it has completed a system that is both reliable and practical, beyond the development of brain-like components.
The research team has developed the world's first memristor-based integrated system that can adapt to immediate environmental changes, and has presented an innovative solution that overcomes the limitations of existing technology.
< Figure 2. Background and foreground separation results of an image containing non-ideal characteristics of memristor devices (left). Real-time image separation results through on-device learning using the memristor computing chip developed by our research team (right). >
At the heart of this innovation is a next-generation semiconductor device called a memristor*. The variable resistance characteristics of this device can replace the role of synapses in neural networks, and by utilizing it, data storage and computation can be performed simultaneously, just like our brain cells.
*Memristor: A compound word of memory and resistor, next-generation electrical device whose resistance value is determined by the amount and direction of charge that has flowed between the two terminals in the past.
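To see why this matters for computing, the sketch below models a memristor crossbar performing a neural-network layer's weighted sums in place: voltages applied to the rows produce column currents that are already the matrix-vector product. The conductance range and device-variation level are illustrative assumptions, not the chip's actual parameters; the 32x32 array size simply mirrors the figure above.

# Toy model of in-memory matrix-vector multiplication on a memristor crossbar.
# Conductance values, variation levels, and voltages are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 32, 32                                 # a 32x32 crossbar, as in Figure 1
G_target = rng.uniform(1e-6, 1e-4, (rows, cols))    # programmed conductances (siemens)

# Non-ideal devices: each cell deviates a little from its programmed value.
G_actual = G_target * (1 + 0.05 * rng.standard_normal((rows, cols)))

v_in = rng.uniform(0, 0.2, rows)                    # input voltages on the rows (volts)

# Kirchhoff's current law: each column current is the sum of V_i * G_ij,
# i.e., the weighted sum a neural-network layer needs.
i_out_ideal = v_in @ G_target
i_out_real = v_in @ G_actual

rel_err = np.abs(i_out_real - i_out_ideal) / np.abs(i_out_ideal)
print(f"mean column-current error from device variation: {rel_err.mean():.2%}")
# On-chip learning can absorb such errors by adjusting the conductances themselves,
# which is the kind of self-calibration the KAIST chip demonstrates.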
The research team designed a highly reliable memristor that can precisely control resistance changes and developed an efficient system that excludes complex compensation processes through self-learning. This study is significant in that it experimentally verified the commercialization possibility of a next-generation neuromorphic semiconductor-based integrated system that supports real-time learning and inference.
This technology will revolutionize the way artificial intelligence is used in everyday devices, allowing AI tasks to be processed locally without relying on remote cloud servers, making them faster, more privacy-protected, and more energy-efficient.
“This system is like a smart workspace where everything is within arm’s reach instead of having to go back and forth between desks and file cabinets,” explained KAIST researchers Hakcheon Jeong and Seungjae Han, who led the development of this technology. “This is similar to the way our brain processes information, where everything is processed efficiently at once at one spot.”
The research was conducted with Hakcheon Jeong and Seungjae Han, students in the Integrated Master's and Doctoral Program at the KAIST School of Electrical Engineering, as co-first authors, and the results were published online in the international academic journal Nature Electronics on January 8, 2025.
*Paper title: Self-supervised video processing with self-calibration on an analogue computing platform based on a selector-less memristor array ( https://doi.org/10.1038/s41928-024-01318-6 )
This research was supported by the Next-Generation Intelligent Semiconductor Technology Development Project, the Excellent New Researcher Project and the PIM AI Semiconductor Core Technology Development Project of the National Research Foundation of Korea, and the Electronics and Telecommunications Research Institute Research and Development Support Project of the Institute of Information & Communications Technology Planning & Evaluation.
KAIST Proposes a New Way to Circumvent a Long-time Frustration in Neural Computing
The human brain begins learning through spontaneous random activities even before it receives sensory information from the external world. The technology developed by the KAIST research team enables much faster and more accurate learning when exposed to actual data by pre-learning random information in a brain-mimicking artificial neural network, and is expected to be a breakthrough in the development of brain-based artificial intelligence and neuromorphic computing technology in the future.
KAIST (President Kwang Hyung Lee) announced on the 16th of December that Professor Se-Bum Paik's research team in the Department of Brain and Cognitive Sciences solved the weight transport problem*, a long-standing challenge in neural network learning, and through this, explained the principles that enable resource-efficient learning in biological brain neural networks.
*Weight transport problem: This is the biggest obstacle to the development of artificial intelligence that mimics the biological brain. It is the fundamental reason why large-scale memory and computational work are required in the learning of general artificial neural networks, unlike biological brains.
Over the past several decades, the development of artificial intelligence has been based on error backpropagation learning proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation learning was thought to be impossible in biological brains because it requires the unrealistic assumption that individual neurons must know all the connected information across multiple layers in order to calculate the error signal for learning.
< Figure 1. Illustration depicting the method of random noise training and its effects >
This difficult problem, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, after the error backpropagation learning was proposed by Hinton in 1986. Since then, it has been considered the reason why the operating principles of natural neural networks and artificial neural networks will forever be fundamentally different.
At the borderline of artificial intelligence and neuroscience, researchers including Hinton have continued to attempt to create biologically plausible models that can implement the learning principles of the brain by solving the weight transport problem.
In 2016, a joint research team from Oxford University and DeepMind in the UK first showed that error backpropagation learning is possible without weight transport, drawing attention from the academic world. However, biologically plausible error backpropagation learning without weight transport was inefficient, with slow learning speeds and low accuracy, making it difficult to apply in practice.
The KAIST research team noted that the biological brain begins learning through internal spontaneous random neural activity even before receiving external sensory input. To mimic this, the research team pre-trained a biologically plausible neural network without weight transport on meaningless random information (random noise).
As a result, they showed that the symmetry of the forward and backward neural cell connections of the neural network, which is an essential condition for error backpropagation learning, can be created. In other words, learning without weight transport is possible through random pre-training.
< Figure 2. Illustration depicting the meta-learning effect of random noise training >
The research team revealed that learning random information before learning actual data has the property of meta-learning, which is ‘learning how to learn.’ It was shown that neural networks that pre-learned random noise perform much faster and more accurate learning when exposed to actual data, and can achieve high learning efficiency without weight transport.
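A minimal sketch of the underlying mechanism is shown below, assuming a feedback-alignment-style network in which error signals are carried by a fixed random matrix rather than the transposed forward weights (one standard way to avoid weight transport). The layer sizes, learning rate, and use of random targets are illustrative choices, not the paper's exact setup; pretraining such a network on random noise typically pulls the forward weights toward the fixed feedback pathway, which is the forward-backward symmetry described above.

# Minimal feedback-alignment sketch: error signals are carried by a fixed random
# matrix B instead of the transposed forward weights (no weight transport).
# Pretraining on random noise tends to align the forward weights with B,
# mirroring the symmetry the article refers to. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 64, 5
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B = 0.1 * rng.standard_normal((n_hid, n_out))    # fixed random feedback pathway

def alignment(W2, B):
    """Cosine similarity between the forward weights W2 and the feedback path B^T."""
    a, b = W2.ravel(), B.T.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"alignment before pretraining: {alignment(W2, B):+.3f}")

lr = 0.02
for _ in range(3000):
    x = rng.standard_normal(n_in)                # random noise input
    t = rng.standard_normal(n_out)               # random target ("meaningless" label)
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - t                                    # output error
    dh = (B @ e) * (1 - h**2)                    # error routed through fixed B, not W2.T
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

print(f"alignment after pretraining:  {alignment(W2, B):+.3f}")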
< Figure 3. Illustration depicting research on understanding the brain's operating principles through artificial neural networks >
Professor Se-Bum Paik said, “It breaks the conventional understanding of existing machine learning that only data learning is important, and provides a new perspective that focuses on the neuroscience principles of creating appropriate conditions before learning,” and added, “It is significant in that it solves important problems in artificial neural network learning through clues from developmental neuroscience, and at the same time provides insight into the brain’s learning principles through artificial neural network models.”
This study, in which Jeonghwan Cheon, a Master's candidate in the KAIST Department of Brain and Cognitive Sciences, participated as the first author and Professor Sang Wan Lee of the same department as a co-author, was presented at the 38th Conference on Neural Information Processing Systems (NeurIPS), the world's top artificial intelligence conference, on December 14th in Vancouver, Canada. (Paper title: Pretraining with random noise for fast and robust learning without weight transport)
This study was conducted with the support of the National Research Foundation of Korea's Basic Research Program in Science and Engineering, the Information and Communications Technology Planning and Evaluation Institute's Talent Development Program, and the KAIST Singularity Professor Program.
KAIST Awarded Presidential Commendation for Contributions in Software Industry
- At the “25th Software Industry Day” celebration held in the afternoon on Monday, December 2nd, 2024 at Yangjae L Tower in Seoul
- KAIST was awarded the “Presidential Commendation” for its contributions for the advancement of the Software Industry in the Group Category
- Korea’s first AI master’s and doctoral degree program opened at KAIST Kim Jaechul Graduate School of AI
- Focused on training non-major developers through the SW Officer Training Academy "Jungle", the Machine Learning Engineer Bootcamp, etc., fostering talents who can integrate development and collaboration as well as advanced talents in the latest AI technologies.
- Professor Minjoon Seo of KAIST Kim Jaechul Graduate School of AI received Prime Minister’s Commendation for his contributions for the advancement of the software industry.
< Photo 1. Professor Kyung-soo Kim, the Senior Vice President for Planning and Budget (second from the left) and the Manager of Planning Team, Mr. Sunghoon Jung, stand at the stage after receiving the Presidential Commendation as KAIST was selected as one of the groups that contributed to the advancement of the software industry at the "25th Software Industry Day" celebration. >
“KAIST has been leading the way toward the grand goal of fostering 1 million AI talents in Korea by providing a wide span of educational opportunities, from developing the capabilities of those with no computer science background to fostering advanced professionals. I would like to thank all members of the KAIST community who worked hard to achieve the great feat of receiving the Presidential Commendation.” (KAIST President Kwang Hyung Lee)
KAIST (President Kwang Hyung Lee) announced on December 3rd that it was selected as a group that contributed to the advancement of the software industry at the “2024 Software Industry Day” celebration held at the Yangjae L Tower in Seoul on the 2nd of December and received a Presidential Commendation.
The “Software Industry Day”, hosted by the Ministry of Science and ICT and organized by the National IT Industry Promotion Agency and the Korea Software Industry Association, is an event designed to promote the status of software industry workers in Korea and to honor their achievements.
Every year, those who have made significant contributions to policy development, human resource development, and export growth for industry revitalization are selected and awarded the ‘Software Industry Development Contribution Award.’
KAIST was recognized for its contribution to developing a demand-based, industrial field-centric curriculum and fostering non-major developers and convergence talents with the goal of expanding software value and fostering excellent human resources.
< Photo 2. Senior Vice President for Planning and Budget Kyung-soo Kim receiving the commendation as the representative of KAIST >
Specifically, it first opened the SW Officer Training Academy "Jungle" to foster convergence-oriented program developers equipped to handle both computer coding and the human interactions needed for collaboration. This is a non-degree program that provides five months of intensive study and assignments for graduates and others without prior knowledge of computer science.
KAIST Kim Jaechul Graduate School of AI opened and operated Korea’s first master's and doctoral degree program in the field of artificial intelligence. In addition, it planned a “Machine Learning Engineers’ Boot Camp” and conducted lectures and practical training for a total of 16 weeks on the latest AI technologies such as deep learning basics and large language models. It aims to strengthen the practical capabilities of start-up companies while lowering the threshold for companies to introduce AI technology.
Also, KAIST was selected to participate in the 1st and 2nd stages of the Software-centered University Project and has been taking part in the project since 2016. Through this, it was highly evaluated for promoting a curriculum based on the latest technologies, an autonomous system in which students directly select their convergence education, and the expansion of internships.
< Photo 3. Professor Minjoon Seo of Kim Jaechul Graduate School of AI, who received the Prime Minister's Commendation for his contribution to the advancement of the software industry on the same day >
At the awards ceremony that day, Professor Minjoon Seo of KAIST Kim Jaechul Graduate School of AI also received the Prime Minister's Commendation for his contribution to the advancement of the software industry. Professor Seo was recognized for his leading research achievements in the fields of AI and natural language processing by publishing 28 papers in top international AI conferences over the past four years.
At the same time, he was noted for his contributions to enhancing the originality and innovation of language model research, such as △knowledge encoding, △knowledge access and utilization, and △high-dimensional inference performance, and for demonstrating leadership in the international academic community.
President Kwang Hyung Lee of KAIST stated, “Our university will continue to do its best to foster software talents with global competitiveness through continuous development of cutting-edge curriculum and innovative degree systems.”
Phage-resistant Escherichia coli strains developed to reduce fermentation failure
A genome engineering-based systematic strategy for developing phage-resistant Escherichia coli strains has been successfully developed through the collaborative efforts of a team led by Professor Sang Yup Lee, Professor Shi Chen, and Professor Lianrong Wang. This study by Xuan Zou et al. was published in Nature Communications in August 2022 and featured in the Nature Communications Editors’ Highlights. The collaboration by the School of Pharmaceutical Sciences at Wuhan University, the First Affiliated Hospital of Shenzhen University, and the KAIST Department of Chemical and Biomolecular Engineering marks an important advance for the metabolic engineering and fermentation industry, as it addresses the major problem of phage infection causing fermentation failure.
Systems metabolic engineering is a highly interdisciplinary field that has made the development of microbial cell factories to produce various bioproducts including chemicals, fuels, and materials possible in a sustainable and environmentally friendly way, mitigating the impact of worldwide resource depletion and climate change. Escherichia coli is one of the most important chassis microbial strains, given its wide applications in the bio-based production of a diverse range of chemicals and materials. With the development of tools and strategies for systems metabolic engineering using E. coli, a highly optimized and well-characterized cell factory will play a crucial role in converting cheap and readily available raw materials into products of great economic and industrial value.
However, the consistent problem of phage contamination in fermentation imposes a devastating impact on host cells and threatens the productivity of bacterial bioprocesses in biotechnology facilities, which can lead to widespread fermentation failure and immeasurable economic loss. Host-controlled defense systems can be developed into effective genetic engineering solutions to address bacteriophage contamination in industrial-scale fermentation; however, most of the resistance mechanisms only narrowly restrict phages and their effect on phage contamination will be limited.
Bacteria possess diverse abilities and systems for environmental adaptation and antiviral defense. The team’s collaborative efforts developed a new type II single-stranded DNA phosphorothioation (Ssp) defense system derived from E. coli 3234/A, which can be used in multiple industrial E. coli strains (e.g., E. coli K-12, B and W) to provide broad protection against various types of dsDNA coliphages. Furthermore, they developed a systematic genome engineering strategy involving the simultaneous genomic integration of the Ssp defense module and mutations in components that are essential to the phage life cycle. This strategy can be used to transform E. coli hosts that are highly susceptible to phage attack into strains with powerful restriction effects on the tested bacteriophages. This endows hosts with strong resistance against a wide spectrum of phage infections without affecting bacterial growth and normal physiological function. More importantly, the resulting engineered phage-resistant strains maintained the capabilities of producing the desired chemicals and recombinant proteins even under high levels of phage cocktail challenge, which provides crucial protection against phage attacks.
This is a major step forward, as it provides a systematic solution for engineering phage-resistant bacterial strains, especially industrial bioproduction strains, to protect cells from a wide range of bacteriophages. Considering the functionality of this engineering strategy with diverse E. coli strains, the strategy reported in this study can be widely extended to other bacterial species and industrial applications, which will be of great interest to researchers in academia and industry alike.
Fig. A schematic model of the systematic strategy for engineering phage-sensitive industrial E. coli strains into strains with broad antiphage activities. Through the simultaneous genomic integration of a DNA phosphorothioation-based Ssp defense module and mutations of components essential for the phage life cycle, the engineered E. coli strains show strong resistance against diverse phages tested and maintain the capabilities of producing example recombinant proteins, even under high levels of phage cocktail challenge.
T-GPS Processes a Graph with Trillion Edges on a Single Computer
Trillion-scale graph processing simulation on a single computer presents a new concept of graph processing
A KAIST research team has developed a new technology that enables a large-scale graph algorithm to be processed without storing the graph in main memory or on disks. Named T-GPS (Trillion-scale Graph Processing Simulation) by its developer, Professor Min-Soo Kim from the School of Computing at KAIST, it can process a graph with one trillion edges using a single computer.
Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever before. Nowadays, many industrial applications require a graph algorithm to process a large-scale graph (e.g., one trillion edges). So, when developing and testing graph algorithms for such large-scale graphs, a synthetic graph is usually used instead of a real graph, because sharing and utilizing large-scale real graphs is very limited: they are either proprietary or practically impossible to collect.
Conventionally, developing and testing graph algorithms is done via the following two-step approach: generating and storing a graph and executing an algorithm on the graph using a graph processing engine.
The first step generates a synthetic graph and stores it on disks. The synthetic graph is usually generated by either parameter-based generation methods or graph upscaling methods. The former extracts a small number of parameters that can capture some properties of a given real graph and generates the synthetic graph with the parameters. The latter upscales a given real graph to a larger one so as to preserve the properties of the original real graph as much as possible.
The second step loads the stored graph into the main memory of the graph processing engine such as Apache GraphX and executes a given graph algorithm on the engine. Since the size of the graph is too large to fit in the main memory of a single computer, the graph engine typically runs on a cluster of several tens or hundreds of computers. Therefore, the cost of the conventional two-step approach is very high.
The research team solved the problem of the conventional two-step approach. T-GPS does not generate and store a large-scale synthetic graph. Instead, it just loads the initial small real graph into main memory. Then, T-GPS processes a graph algorithm on the small real graph as if the large-scale synthetic graph that would be generated from the real graph existed in main memory. After the algorithm is done, T-GPS returns exactly the same result as the conventional two-step approach.
The key idea of T-GPS is generating only the part of the synthetic graph that the algorithm needs to access on the fly and modifying the graph processing engine to recognize the part generated on the fly as the part of the synthetic graph actually generated.
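The toy sketch below conveys that idea in miniature: the synthetic graph is never materialized, and a traversal algorithm asks a generator for a vertex's neighbors only at the moment it needs them. The hash-based edge generator here is a simple illustrative stand-in, not the graph-upscaling model T-GPS actually uses.

# Toy illustration of the on-the-fly idea: neighbors are synthesized deterministically
# the moment an algorithm asks for them, so the full graph is never stored.
import hashlib
from collections import deque

NUM_NODES = 10**9          # nominal size of the synthetic graph (never materialized)
DEGREE = 4                 # edges generated per vertex in this toy model

def neighbors(v: int):
    """Deterministically generate v's out-neighbors on the fly from a hash of v."""
    digest = hashlib.sha256(str(v).encode()).digest()
    for i in range(DEGREE):
        chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
        yield chunk % NUM_NODES

def bfs_frontier_sizes(source: int, depth: int):
    """Run BFS over the implicit graph; only visited vertices ever occupy memory."""
    visited, frontier, sizes = {source}, deque([source]), []
    for _ in range(depth):
        next_frontier = deque()
        for v in frontier:
            for u in neighbors(v):
                if u not in visited:
                    visited.add(u)
                    next_frontier.append(u)
        sizes.append(len(next_frontier))
        frontier = next_frontier
    return sizes

print(bfs_frontier_sizes(source=0, depth=4))

Only the vertices the algorithm actually touches ever occupy memory, so the nominal size of the synthetic graph can vastly exceed what a single machine could store.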
The research team showed that T-GPS can process a graph of 1 trillion edges using a single computer, while the conventional two-step approach can only process a graph of 1 billion edges using a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by 10,000 times in terms of computing resources. The team also showed that the speed of processing an algorithm in T-GPS is up to 43 times faster than the conventional approach. This is because T-GPS has no network communication overhead, while the conventional approach has a lot of communication overhead among computers.
Professor Kim believes that this work will have a large impact on the IT industry where almost every area utilizes graph data, adding, “T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm.”
This work was supported by the National Research Foundation (NRF) of Korea and the Institute of Information & Communications Technology Planning & Evaluation (IITP).
Publication:
Park, H., et al. (2021) “Trillion-scale Graph Processing Simulation based on Top-Down Graph Upscaling,” Presented at the IEEE ICDE 2021 (April 19-22, 2021, Chania, Greece)
Profile:
Min-Soo Kim
Associate Professor
minsoo.k@kaist.ac.kr
http://infolab.kaist.ac.kr
School of Computing
KAIST
A Comprehensive Review of Biosynthesis of Inorganic Nanomaterials Using Microorganisms and Bacteriophages
There are diverse methods for producing numerous inorganic nanomaterials involving many experimental variables. Among the numerous possible matches, finding the best pair for synthesizing in an environmentally friendly way has been a longstanding challenge for researchers and industries.
A KAIST bioprocess engineering research team led by Distinguished Professor Sang Yup Lee conducted a summary of 146 biosynthesized single and multi-element inorganic nanomaterials covering 55 elements in the periodic table synthesized using wild-type and genetically engineered microorganisms. Their research highlights the diverse applications of biogenic nanomaterials and gives strategies for improving the biosynthesis of nanomaterials in terms of their producibility, crystallinity, size, and shape.
The research team described a 10-step flow chart for developing the biosynthesis of inorganic nanomaterials using microorganisms and bacteriophages. The research was published in Nature Reviews Chemistry as a cover and hero paper on December 3.
“We suggest general strategies for microbial nanomaterial biosynthesis via a step-by-step flow chart and give our perspectives on the future of nanomaterial biosynthesis and applications. This flow chart will serve as a general guide for those wishing to prepare biosynthetic inorganic nanomaterials using microbial cells,” explained Dr. Yoojin Choi, a co-author of this research.
Most inorganic nanomaterials are produced using physical and chemical methods, but these conventional synthesis processes have drawbacks in terms of high energy consumption and environmental impact, so biological synthesis has been gaining more and more attention. Microorganisms such as microalgae, yeasts, fungi, bacteria, and even viruses can be utilized as biofactories to produce single and multi-element inorganic nanomaterials under mild conditions.
After conducting a massive survey, the research team summed up that the development of genetically engineered microorganisms with increased inorganic-ion-binding affinity, inorganic-ion-reduction ability, and nanomaterial biosynthetic efficiency has enabled the synthesis of many inorganic nanomaterials.
Among the strategies, the team introduced their analysis of a Pourbaix diagram for controlling the size and morphology of a product. The research team said this Pourbaix diagram analysis can be widely employed for biosynthesizing new nanomaterials with industrial applications.

Professor Sang Yup Lee added, “This research provides extensive information and perspectives on the biosynthesis of diverse inorganic nanomaterials using microorganisms and bacteriophages and their applications. We expect that biosynthetic inorganic nanomaterials will find more diverse and innovative applications across diverse fields of science and technology.”
Dr. Choi started this research in 2018, and her interview about completing this extensive research was featured in a Nature Careers article on December 4.
Profile:
Distinguished Professor Sang Yup Lee
leesy@kaist.ac.kr
Metabolic & Biomolecular Engineering National Research Laboratory
http://mbel.kaist.ac.kr
Department of Chemical and Biomolecular Engineering
KAIST
Before Eyes Open, They Get Ready to See
- Spontaneous retinal waves can generate long-range horizontal connectivity in visual cortex. -
A KAIST research team’s computational simulations demonstrated that the waves of spontaneous neural activity in the retinas of still-closed eyes in mammals develop long-range horizontal connections in the visual cortex during early developmental stages.
This new finding, featured as a cover article in the August 19 edition of the Journal of Neuroscience, resolves a long-standing puzzle in visual neuroscience regarding the early organization of functional architectures in the mammalian visual cortex before eye-opening, especially the long-range horizontal connectivity known as “feature-specific” circuitry.
To prepare the animal to see when its eyes open, neural circuits in the brain’s visual system must begin developing earlier. However, the proper development of many brain regions involved in vision generally requires sensory input through the eyes.
In the primary visual cortex of the higher mammalian taxa, cortical neurons of similar functional tuning to a visual feature are linked together by long-range horizontal circuits that play a crucial role in visual information processing.
Surprisingly, these long-range horizontal connections in the primary visual cortex of higher mammals emerge before the onset of sensory experience, and the mechanism underlying this phenomenon has remained elusive.
To investigate this mechanism, a group of researchers led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering at KAIST implemented computational simulations of early visual pathways using data obtained from the retinal circuits in young animals before eye-opening, including cats, monkeys, and mice.
From these simulations, the researchers found that spontaneous waves propagating in ON and OFF retinal mosaics can initialize the wiring of long-range horizontal connections by selectively co-activating cortical neurons of similar functional tuning, whereas equivalent random activities cannot induce such organizations.
The simulations also showed that the emerging long-range horizontal connections can induce patterned cortical activities matching the topography of underlying functional maps, even in the salt-and-pepper type organizations observed in rodents. This result implies that the model developed by Professor Paik and his group can provide a universal principle for the developmental mechanism of long-range horizontal connections in higher mammals as well as rodents.
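A minimal sketch of this wiring principle, under strong simplifying assumptions (a population of cortical units with orientation preferences, each spontaneous wave modeled as co-activation of similarly tuned units, and a plain Hebbian rule instead of the team's retinal-wave model), is shown below: correlated, wave-like activity builds tuning-specific connections, while random activity of comparable strength does not.

# Toy sketch: wave-like (correlated) spontaneous activity wires up tuning-specific
# horizontal connections under a Hebbian rule, whereas random activity does not.
# The ring model, tuning curves, and learning rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_events = 200, 2000
pref = rng.uniform(0, np.pi, n_units)            # each unit's preferred orientation

def hebbian_matrix(correlated: bool):
    W = np.zeros((n_units, n_units))
    for _ in range(n_events):
        if correlated:
            theta = rng.uniform(0, np.pi)        # one spontaneous "wave" orientation
            a = np.exp(2 * np.cos(2 * (pref - theta)))   # co-activates similarly tuned units
        else:
            a = rng.exponential(1.0, n_units)    # random activity of comparable strength
        W += np.outer(a, a)                      # Hebbian: cells that fire together wire together
    np.fill_diagonal(W, 0)
    return W / n_events

def tuning_specificity(W):
    """Correlation between connection strength and similarity of preferred orientation."""
    sim = np.cos(2 * (pref[:, None] - pref[None, :]))
    mask = ~np.eye(n_units, dtype=bool)
    return float(np.corrcoef(W[mask], sim[mask])[0, 1])

print(f"wave-like activity: tuning specificity = {tuning_specificity(hebbian_matrix(True)):+.2f}")
print(f"random activity:    tuning specificity = {tuning_specificity(hebbian_matrix(False)):+.2f}")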
Professor Paik said, “Our model provides a deeper understanding of how the functional architectures in the visual cortex can originate from the spatial organization of the periphery, without sensory experience during early developmental periods.”
He continued, “We believe that our findings will be of great interest to scientists working in a wide range of fields such as neuroscience, vision science, and developmental biology.”
This work was supported by the National Research Foundation of Korea (NRF). Undergraduate student Jinwoo Kim participated in this research project and presented the findings as the lead author as part of the Undergraduate Research Participation (URP) Program at KAIST.
Figures and image credit: Professor Se-Bum Paik, KAIST
Image usage restrictions: News organizations may use or redistribute these figures and image, with proper attribution, as part of news coverage of this paper only.
Publication:
Jinwoo Kim, Min Song, and Se-Bum Paik. (2020). Spontaneous retinal waves generate long-range horizontal connectivity in visual cortex. Journal of Neuroscience. Available online at https://www.jneurosci.org/content/early/2020/07/17/JNEUROSCI.0649-20.2020
Profile: Se-Bum Paik
Assistant Professor
sbpaik@kaist.ac.kr
http://vs.kaist.ac.kr/
VSNN Laboratory
Department of Bio and Brain Engineering
Program of Brain and Cognitive Engineering
http://kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST) Daejeon, Republic of Korea
Profile: Jinwoo Kim
Undergraduate Student
bugkjw@kaist.ac.kr
Department of Bio and Brain Engineering, KAIST
Profile: Min Song
Ph.D. Candidate
night@kaist.ac.kr
Program of Brain and Cognitive Engineering, KAIST
(END)
A Study Finds Neuropeptide Somatostatin Enhances Visual Processing
Researchers have confirmed that neuropeptide somatostatin can improve cognitive function in the brain. A research group of Professor Seung-Hee Lee from the Department of Biological Sciences at KAIST found that the application of neuropeptide somatostatin improves visual processing and cognitive behaviors by reducing excitatory inputs to parvalbumin-positive interneurons in the cortex.
This study, reported in Science Advances on April 22nd (EST), sheds new light on therapeutics for neurodegenerative diseases. According to a recent study in Korea, one in ten seniors over 65 is experiencing dementia-related symptoms in their daily lives, such as memory loss, cognitive decline, and motor function disorders. Professor Lee believes that somatostatin treatment can be directly applied to the recovery of cognitive functions in Alzheimer’s disease patients.
Professor Lee started this study noting the fact that the level of somatostatin expression is dramatically decreased in the cerebral cortex and cerebrospinal fluid of Alzheimer’s disease patients.
Somatostatin-expressing neurons in the cortex are known to exert the dendritic inhibition of pyramidal neurons via GABAergic transmission. Previous studies focused on their inhibitory effects on cortical circuits, but somatostatin-expressing neurons can co-release somatostatin upon activation. Despite the abundant expression of somatostatin and its receptors in the cerebral cortex, it was not known if somatostatin could modulate cognitive processing in the cortex.
The research team demonstrated that the somatostatin treatment into the cerebral cortex could enhance visual processing and cognitive behaviors in mice. The research team combined behaviors, in vivo and in vitro electrophysiology, and electron microscopy techniques to reveal how the activation of somatostatin receptors in vivo enhanced the ability of visual recognition in animals. Interestingly, somatostatin release can reduce excitatory synaptic transmission to another subtype of GABAergic interneurons, parvalbumin (PV)-expressing neurons.
As somatostatin is a stable and safe neuropeptide expressed naturally in the mammalian brain, it could be safely injected into the cortex and cerebrospinal fluid, showing its potential for drug development to treat cognitive disorders in humans.
Professor Lee said, “Our research confirmed the key role of the neuropeptide SST in modulating cortical function and enhancing cognitive ability in the mammalian brain. I hope new drugs can be developed based on the function of somatostatin to treat cognitive disabilities in many patients suffering from neurological disorders.”
This study was supported by the National Research Foundation of Korea.
Publication:
Song, Y. H et al. (2020) ‘Somatostatin enhances visual processing and perception by suppressing excitatory inputs to parvalbumin-positive interneurons in V1’, Science Advances, 6(17). Available online at https://doi.org/10.1126/sciadv.aaz0517
Profile:
Seung-Hee Lee
Associate Professor
shlee1@kaist.ac.kr
https://sites.google.com/site/leelab2013/
Sensory Processing Lab (SPL)
Department of Biological Sciences (BIO)
Korea Advanced Institute of Science and Technology (KAIST)
Profile:
You-Hyang Song
Researcher (Ph.D.)
dbgidtm17@kaist.ac.kr
SPL, KAIST BIO
Profile:
Yang-Sun Hwang
Researcher (M.S.)
hys940129@kaist.ac.kr
SPL, KAIST BIO
(END)
Scientists Observe the Elusive Kondo Screening Cloud
Scientists ended a 50-year quest by directly observing a quantum phenomenon
An international research group led by Professor Heung-Sun Sim has ended a 50-year quest by directly observing a quantum phenomenon known as a Kondo screening cloud. This research, published in Nature on March 11, opens a novel way to engineer spin screening and entanglement. According to the research, the cloud can mediate interactions between distant spins confined in quantum dots, which is a necessary protocol for semiconductor spin-based quantum information processing. This spin-spin interaction mediated by the Kondo cloud is unique since both its strength and sign (two spins favor either parallel or anti-parallel configuration) are electrically tunable, while conventional schemes cannot reverse the sign.
This phenomenon, which is important for many physical phenomena such as dilute magnetic impurities and spin glasses, is essentially a cloud that masks magnetic impurities in a material. It was known to exist but its spatial extension had never been observed, creating controversy over whether such an extension actually existed.
Magnetism arises from a property of electrons known as spin, meaning that they have angular momentum aligned in one of either two directions, conventionally known as up and down. However, due to a phenomenon known as the Kondo effect, the spins of conduction electrons—the electrons that flow freely in a material—become entangled with a localized magnetic impurity, and effectively screen it. The strength of this spin coupling, calibrated as a temperature, is known as the Kondo temperature.
The size of the cloud is another important parameter for a material containing multiple magnetic impurities because the spins in the cloud couple with one another and mediate the coupling between magnetic impurities when the clouds overlap. This happens in various materials such as Kondo lattices, spin glasses, and high temperature superconductors.
Although the Kondo effect for a single magnetic impurity is now a text-book subject in many-body physics, detection of its key object, the Kondo cloud and its length, has remained elusive despite many attempts during the past five decades. Experiments using nuclear magnetic resonance or scanning tunneling microscopy, two common methods for understanding the structure of matter, have either shown no signature of the cloud, or demonstrated a signature only at a very short distance, less than 1 nanometer, so much shorter than the predicted cloud size, which was in the micron range.
In the present study, the authors observed a Kondo screening cloud formed by an impurity defined as a localized electron spin in a quantum dot—a type of “artificial atom”—coupled to quasi-one-dimensional conduction electrons, and then used an interferometer to measure changes in the Kondo temperature, allowing them to investigate the presence of a cloud at the interferometer end.
Essentially, they slightly perturbed the conduction electrons at a location away from the quantum dot using an electrostatic gate. The wave of conducting electrons scattered by this perturbation returned back to the quantum dot and interfered with itself. This is similar to how a wave on a water surface being scattered by a wall forms a stripe pattern. The Kondo cloud is a quantum mechanical object which acts to preserve the wave nature of electrons inside the cloud.
Even though there is no direct electrostatic influence of the perturbation on the quantum dot, this interference modifies the Kondo signature measured by electron conductance through the quantum dot if the perturbation is present inside the cloud. In the study, the researchers found that the length as well as the shape of the cloud is universally scaled by the inverse of the Kondo temperature, and that the cloud’s size and shape were in good agreement with theoretical calculations.
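For reference, a textbook estimate, rather than a result specific to this experiment, relates the two scales as ξ_K ≈ ħv_F/(k_B T_K), where v_F is the Fermi velocity of the conduction electrons; for a Kondo temperature of about 1 K in a semiconductor, this indeed gives a cloud extending on the order of a micrometer, consistent with the size quoted above.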
Professor Sim at the Department of Physics proposed the method for detecting the Kondo cloud in the co-research with the RIKEN Center for Emergent Matter Science, the City University of Hong Kong, the University of Tokyo, and Ruhr University Bochum in Germany.
Professor Sim said, “The observed spin cloud is a micrometer-size object that has quantum mechanical wave nature and entanglement. This is why the spin cloud has not been observed despite a long search. It is remarkable in a fundamental and technical point of view that such a large quantum object can now be created, controlled, and detected.”
Dr. Michihisa Yamamoto of the RIKEN Center for Emergent Matter Science also said, “It is very satisfying to have been able to obtain a real-space image of the Kondo cloud, as it is a real breakthrough for understanding various systems containing multiple magnetic impurities. The size of the Kondo cloud in semiconductors was found to be much larger than the typical size of semiconductor devices.”
Publication:
Borzenets et al. (2020) Observation of the Kondo screening cloud. Nature, 579. pp.210-213. Available online at https://doi.org/10.1038/s41586-020-2058-6
Profile:
Heung-Sun Sim, PhD
Professor
hssim@kaist.ac.kr
https://qet.kaist.ac.kr/
Quantum Electron Correlation & Transport Theory Group (QECT Lab)
https://qc.kaist.ac.kr/index.php/group1/
Center for Quantum Coherence in Condensed Matter
Department of Physics
https://www.kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
Professor Hojong Chang’s Research Team Wins ISIITA 2020 Best Paper Award
The paper written by Professor Hojong Chang’s research team from KAIST Institute for IT Convergence won the best paper award from the International Symposium on Innovation in Information Technology Application (ISIITA) 2020, held this month at Ton Duc Thang University in Vietnam.
ISIITA is a networking symposium where leading researchers from various fields, including information and communications, biotechnology, and computer systems, come together to share ideas on the convergence of technologies.
Professor Chang’s team won the best paper award at this year’s symposium with its paper, “A Study of Single Photon Counting System for Quantitative Analysis of Luminescence”. The awarded paper discusses the realization of a signal processing system for silicon photomultipliers.
The silicon photomultiplier is the core of a urinalysis technique that tests for sodium and potassium in the body using simple chemical reactions. If our bodily sodium and potassium levels exceed a certain amount, it can lead to high blood pressure, cardiovascular problems, and kidney damage.
Through this research, the team has developed a core technique that quantifies the sodium and potassium discharged in the urine. When the reagent is injected into the urine, a very small amount of light is emitted as a result of the chemical reaction. However, if there is a large amount of sodium and potassium, they interrupt the reaction and reduce the emission. The key to this measurement technique is digitizing the strength of this very fine emission of light. Professor Chang’s team developed a system that uses a photomultiplier to measure the chemiluminescence.
Professor Chang said, “I look forward to this signal processing system greatly helping prevent diseases caused by the excessive consumption of sodium and potassium through quick and easy detection.”
Researcher Byunghun Han, who carried out the central research for the system design, added, “We are planning to focus on miniaturizing the developed technique, so that anyone can carry our device around like a cellphone.”
The research was supported by the Ministry of Science and ICT.
(END)