KAIST NEWS
Engineering
KAIST Develops AI-Driven Performance Prediction Model to Advance Space Electric Propulsion Technology
< (From left) PhD candidate Youngho Kim, Professor Wonho Choe, and PhD candidate Jaehong Park from the Department of Nuclear and Quantum Engineering >

Hall thrusters, a key space technology for missions like SpaceX's Starlink constellation and NASA's Psyche asteroid mission, are high-efficiency electric propulsion devices using plasma technology*. The KAIST research team announced that the AI-designed Hall thruster developed for CubeSats will be installed on the KAIST-Hall Effect Rocket Orbiter (K-HERO) CubeSat to demonstrate its in-orbit performance during the fourth launch of the Korean launch vehicle Nuri (KSLV-2), scheduled for November this year.

*Plasma is one of the four states of matter, in which a gas is heated to high energies and separates into charged ions and electrons. Plasma is used not only in space electric propulsion but also in semiconductor manufacturing, display processes, and sterilization devices.

On February 3rd, the research team from the KAIST Department of Nuclear and Quantum Engineering's Electric Propulsion Laboratory, led by Professor Wonho Choe, announced the development of an AI-based technique to accurately predict the performance of Hall thrusters, the engines of satellites and space probes.

Hall thrusters provide high fuel efficiency, requiring minimal propellant to achieve significant acceleration of spacecraft or satellites while producing substantial thrust relative to power consumption. Due to these advantages, Hall thrusters are widely used in various space missions, including the formation flight of satellite constellations, deorbiting maneuvers for space debris mitigation, and deep space missions such as asteroid exploration.

As the space industry continues to grow in the NewSpace era, the demand for Hall thrusters suited to diverse missions is increasing. To rapidly develop highly efficient, mission-optimized Hall thrusters, it is essential to predict thruster performance accurately from the design phase. However, conventional methods have limitations: they struggle to handle the complex plasma phenomena within Hall thrusters, or they are applicable only under specific conditions, leading to lower prediction accuracy.

The research team developed an AI-based performance prediction technique with high accuracy, significantly reducing the time and cost associated with the iterative design, fabrication, and testing of thrusters. Since 2003, Professor Wonho Choe's team has been leading research on electric propulsion development in Korea. The team applied a neural network ensemble model to predict thruster performance, using 18,000 Hall thruster training data points generated from their in-house numerical simulation tool. The in-house tool, developed to model plasma physics and thrust performance, played a crucial role in providing high-quality training data. The simulation's accuracy was validated through comparisons with experimental data from ten KAIST in-house Hall thrusters, with an average prediction error of less than 10%.

< Figure 1. This research was selected as the cover article for the March 2025 issue (Volume 7, Issue 3) of the AI interdisciplinary journal Advanced Intelligent Systems. >

The trained neural network ensemble model acts as a digital twin, accurately predicting Hall thruster performance within seconds based on thruster design variables. Notably, it offers detailed analyses of performance parameters such as thrust and discharge current, accounting for Hall thruster design variables like propellant flow rate and magnetic field, factors that are challenging to evaluate using traditional scaling laws. The AI model demonstrated an average prediction error of less than 5% for the in-house 700 W and 1 kW KAIST Hall thrusters and less than 9% for a 5 kW high-power Hall thruster developed by the University of Michigan and the U.S. Air Force Research Laboratory, confirming the broad applicability of the AI prediction method across different power levels of Hall thrusters.

Professor Wonho Choe stated, "The AI-based prediction technique developed by our team is highly accurate and is already being utilized in the analysis of thrust performance and the development of highly efficient, low-power Hall thrusters for satellites and spacecraft. This AI approach can also be applied beyond Hall thrusters to various industries, including semiconductor manufacturing, surface processing, and coating, through ion beam sources."

< Figure 2. The AI-based prediction technique developed by the research team accurately predicts thrust performance based on design variables, making it highly valuable for the development of high-efficiency Hall thrusters. The neural network ensemble processes design variables, such as channel geometry and magnetic field information, and outputs key performance metrics like thrust and prediction accuracy, enabling efficient thruster design and performance analysis. >

Additionally, Professor Choe mentioned, "The CubeSat Hall thruster, developed using the AI technique in collaboration with our lab startup Cosmo Bee, an electric propulsion company, will be tested in orbit this November aboard the K-HERO 3U (30 x 10 x 10 cm) CubeSat, scheduled for launch on the fourth flight of the KSLV-2 Nuri rocket."

This research was published online in Advanced Intelligent Systems on December 25, 2024, with PhD candidate Jaehong Park as the first author, and was selected as the journal's cover article in recognition of its innovation.

< Figure 3. Image of the 150 W low-power Hall thruster for small and micro satellites, developed in collaboration with Cosmo Bee and the KAIST team. The thruster will be tested in orbit on the K-HERO CubeSat during the KSLV-2 Nuri rocket's fourth launch in Q4 2025. >

This research was supported by the National Research Foundation of Korea's Space Pioneer Program (200 mN High Thrust Electric Propulsion System Development). (Paper title: "Predicting Performance of Hall Effect Ion Source Using Machine Learning," DOI: https://doi.org/10.1002/aisy.202400555 )

< Figure 4. Graphs of the predicted thrust and discharge current of KAIST's 700 W Hall thruster using the AI model (HallNN). The left image shows the Hall thruster operating in the KAIST Electric Propulsion Laboratory's vacuum chamber, while the center and right graphs present the prediction results for thrust and discharge current based on anode mass flow rate. The red lines represent AI predictions, and the blue dots represent experimental results, with a prediction error of less than 5%. >
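The general pattern the article describes, training an ensemble of neural networks on simulator-generated design-to-performance data and reading the ensemble mean as the prediction (with the spread as a rough confidence estimate), can be illustrated with a short sketch. Everything below, including the toy "simulator," the variable names, and the parameter ranges, is an illustrative assumption for demonstration, not the team's HallNN model or its training data.

```python
# Minimal sketch of an ensemble surrogate model for thruster performance.
# The toy physics and design-variable ranges are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_simulator(x):
    """Placeholder physics mapping design variables to (thrust, current).
    x columns: [mass_flow_mg_s, B_field_mT, channel_len_mm, voltage_V]"""
    thrust = 0.05 * x[:, 0] * x[:, 3] / (1 + 0.01 * x[:, 1])   # mN
    current = 0.002 * x[:, 0] * x[:, 3] / x[:, 2]              # A
    return np.column_stack([thrust, current])

# Generate synthetic training data (the paper used ~18,000 simulated points).
X = rng.uniform([2, 10, 20, 150], [8, 30, 40, 400], size=(2000, 4))
Y = toy_simulator(X) + rng.normal(0, 0.01, size=(2000, 2))

# Train an ensemble of small MLPs; disagreement between members gives a
# rough uncertainty estimate alongside the mean prediction.
ensemble = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=seed).fit(X, Y) for seed in range(5)]

x_new = np.array([[5.0, 20.0, 30.0, 300.0]])       # one candidate design
preds = np.stack([m.predict(x_new) for m in ensemble])
print("thrust/current prediction:", preds.mean(axis=0))
print("ensemble spread (uncertainty):", preds.std(axis=0))
```

Once trained, such a surrogate evaluates a candidate design in milliseconds, which is what allows a designer to sweep design variables before committing to fabrication and testing.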
2025.02.03
KAIST Uncovers the Principles of Gene Expression Regulation in Cancer and Cellular Functions
< (From left) Professor Seyun Kim, Professor Gwangrog Lee, Dr. Hyoungjoon Ahn, Dr. Jeongmin Yu, Professor Won-Ki Cho, and (below) PhD candidate Kwangmin Ryu of the Department of Biological Sciences >

A research team at KAIST has identified the core gene expression networks regulated by key proteins that fundamentally drive phenomena such as cancer development, metastasis, tissue differentiation from stem cells, and neural activation. This discovery lays the foundation for developing innovative therapeutic technologies.

On the 22nd of January, KAIST (represented by President Kwang Hyung Lee) announced that the joint research team led by Professors Seyun Kim, Gwangrog Lee, and Won-Ki Cho from the Department of Biological Sciences had uncovered essential mechanisms controlling gene expression in animal cells.

Inositol phosphate metabolites produced by inositol metabolism enzymes serve as vital secondary messengers in eukaryotic cell signaling systems and are broadly implicated in cancer, obesity, diabetes, and neurological disorders. The research team demonstrated that the inositol polyphosphate multikinase (IPMK) enzyme, a key player in the inositol metabolism system, acts as a critical transcriptional activator within the core gene expression networks of animal cells.

Notably, although IPMK was previously reported to play an important role in the transcription process governed by serum response factor (SRF), a representative transcription factor in animal cells, the precise mechanism of its action was unclear. SRF is a transcription factor directly controlling the expression of at least 200–300 genes, regulating cell growth, proliferation, apoptosis, and motility, and is indispensable for organ development, such as in the heart.

The team discovered that IPMK binds directly to SRF, altering the three-dimensional structure of the SRF protein. This interaction facilitates the transcriptional activity of various genes through the SRF activated by IPMK, demonstrating that IPMK acts as a critical regulatory switch that enhances SRF's protein activity.

< Figure 1. The serum response factor (SRF) protein, a key transcription factor in animal cells, directly binds to the inositol polyphosphate multikinase (IPMK) enzyme and undergoes a structural change to acquire DNA-binding ability, precisely regulating the growth and differentiation of animal cells through transcriptional activation. >

The team further verified that disruptions in the direct interaction between IPMK and SRF lead to reduced functionality and activity of SRF, causing severe impairments in gene expression. By highlighting the significance of the intrinsically disordered region (IDR) in SRF, the researchers underscored the biological importance of intrinsically disordered proteins (IDPs). Unlike most proteins, which adopt distinct structures through folding, IDPs, including those with IDRs, do not exhibit specific structures yet play crucial biological roles, attracting significant attention in the scientific community.

Professor Seyun Kim commented, "This study provides a vital mechanism proving that IPMK, a key enzyme in the inositol metabolism system, is a major transcriptional activator in the core gene expression network of animal cells. By understanding fundamental processes such as cancer development and metastasis, tissue differentiation from stem cells, and neural activation through SRF, we hope this discovery will lead to the broad application of innovative therapeutic technologies."
The findings were published on January 7th in the international journal Nucleic Acids Research (IF=16.7, top 1.8% in Biochemistry and Molecular Biology), under the title “Single-molecule analysis reveals that IPMK enhances the DNA-binding activity of the transcription factor SRF" (DOI: 10.1093/nar/gkae1281). This research was supported by the National Research Foundation of Korea's Mid-career Research Program, Leading Research Center Program, and Global Research Laboratory Program, as well as by the Suh Kyungbae Science Foundation and the Samsung Future Technology Development Program.
2025.01.24
A Way for Smartwatches to Detect Depression Risks Devised by KAIST and U of Michigan Researchers
- An international joint research team from KAIST and the University of Michigan developed a digital biomarker for predicting symptoms of depression based on data collected by smartwatches
- It has the potential to be used as a medical technology to replace the economically burdensome fMRI measurement test
- It is expected to expand the scope of digital health data analysis

The coronavirus pandemic also brought about a pandemic of mental illness. Approximately one billion people worldwide suffer from various psychiatric conditions. Korea is one of the more serious cases, with approximately 1.8 million patients exhibiting depression and anxiety disorders, and the total number of patients with clinical mental diseases has increased by 37% in five years to approximately 4.65 million. A joint research team from Korea and the US has developed a technology that uses biometric data collected through wearable devices to predict tomorrow's mood and, further, the possibility of developing symptoms of depression.

< Figure 1. Schematic diagram of the research results. Based on the biometric data collected by a smartwatch, a mathematical algorithm was developed that solves the inverse problem of estimating the brain's circadian phase and sleep stages. This algorithm can estimate the degree of circadian disruption, and these estimates can be used as digital biomarkers to predict depression risks. >

KAIST (President Kwang Hyung Lee) announced on the 15th of January that the research team under Professor Dae Wook Kim from the Department of Brain and Cognitive Sciences and the team under Professor Daniel B. Forger from the Department of Mathematics at the University of Michigan in the United States have developed a technology to predict symptoms of depression, such as sleep disorders, depressed mood, loss of appetite, overeating, and decreased concentration, in shift workers from the activity and heart rate data collected from smartwatches.

According to the WHO, a promising new treatment direction for mental illness focuses on the sleep and circadian timekeeping system located in the hypothalamus of the brain, which directly affects impulsivity, emotional responses, decision-making, and overall mood. However, measuring endogenous circadian rhythms and sleep states currently requires drawing blood or saliva every 30 minutes throughout the night to track changes in the concentration of the hormone melatonin, together with polysomnography (PSG). As these tests require hospitalization, while most psychiatric patients only visit for outpatient treatment, there has been no significant progress in developing treatment methods that take these two factors into account. In addition, the cost of the PSG test, approximately $1,000, leaves mental health treatment that considers sleep and circadian rhythms out of reach for the socially disadvantaged.

The solution to these problems is to employ wearable devices, which allow easier collection of biometric data such as heart rate, body temperature, and activity level in real time without spatial constraints. However, current wearable devices have the limitation of providing only indirect information on the biomarkers required by medical staff, such as the phase of the circadian clock.

The joint research team developed a filtering technology that accurately estimates the phase of the circadian clock, which changes daily, from heart rate and activity time-series data collected from a smartwatch. This is an implementation of a digital twin that precisely describes the circadian rhythm in the brain, and it can be used to estimate circadian rhythm disruption.

< Figure 2. The suprachiasmatic nucleus located in the hypothalamus of the brain is the central biological clock that regulates the 24-hour physiological rhythm and plays a key role in maintaining the body's circadian rhythm. If the phase of this biological clock is disrupted, it affects various parts of the brain, which can cause psychiatric conditions such as depression. >

The possibility of using this digital twin of the circadian clock to predict the symptoms of depression was verified through collaboration with the research teams of Professor Srijan Sen of the Michigan Neuroscience Institute and Professor Amy Bohnert of the Department of Psychiatry at the University of Michigan. The collaborative research team conducted a large-scale prospective cohort study involving approximately 800 shift workers and showed that the circadian rhythm disruption digital biomarker estimated through the technology can predict tomorrow's mood as well as six representative symptoms of depression, including sleep problems, appetite changes, decreased concentration, and suicidal thoughts.

< Figure 3. The circadian rhythm of hormones such as melatonin regulates various physiological functions and behaviors such as heart rate and activity level. These physiological and behavioral signals can be measured in daily life through wearable devices. Estimating the body's circadian rhythm inversely from the measured biometric signals requires a mathematical algorithm, which plays a key role in accurately identifying the characteristics of circadian rhythms by extracting hidden physiological patterns from biosignals. >

Professor Dae Wook Kim said, "It is very meaningful to be able to conduct research that provides a clue for applying wearable biometric data to actual disease management using mathematics that has not previously been utilized for this purpose." He added, "We expect that this research will present a continuous and non-invasive mental health monitoring technology, and with it a new paradigm for mental health care. By resolving some of the major problems socially disadvantaged people face in current treatment practices, they may be able to take more active steps when experiencing symptoms of depression, such as seeking counsel before things get out of hand."

< Figure 4. A mathematical algorithm was devised to circumvent the problems of estimating the phase of the brain's biological clock and sleep stages inversely from the biodata collected by a smartwatch. This algorithm can estimate the degree of daily circadian rhythm disruption, and this estimate can be used as a digital biomarker to predict depression symptoms. >

The results of this study, in which Professor Dae Wook Kim of the Department of Brain and Cognitive Sciences at KAIST participated as joint first author and corresponding author, were published in the online edition of the international academic journal npj Digital Medicine on December 5, 2024. (Paper title: The real-world association between digital markers of circadian disruption and mental health risks) DOI: 10.1038/s41746-024-01348-6

This study was conducted with the support of KAIST's Research Support Program for New Faculty Members, the US National Science Foundation, the US National Institutes of Health, and the US Army Research Institute MURI Program.
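The inverse problem at the heart of the article, recovering the phase of a roughly 24-hour rhythm from noisy wearable signals and scoring day-to-day phase shifts as disruption, can be sketched very simply. The toy below fits a sinusoid to synthetic heart-rate data by least squares; the published method is a model-based filtering (state-estimation) approach, so treat this only as an illustration of the idea, with all numbers invented.

```python
# Minimal sketch: estimate circadian phase from wearable time series and
# use the day-to-day phase shift as a disruption score. Toy data only.
import numpy as np

def estimate_phase(t_hours, signal):
    """Fit signal ~ m + a*cos(wt) + b*sin(wt); return peak time (hours)."""
    w = 2 * np.pi / 24.0
    A = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours), np.sin(w * t_hours)])
    _, a, b = np.linalg.lstsq(A, signal, rcond=None)[0]
    return (np.arctan2(b, a) / w) % 24.0   # acrophase in hours

# Two synthetic days of minute-resolution heart rate; day 2's rhythm is
# shifted by 1.5 h, mimicking circadian disruption from shift work.
rng = np.random.default_rng(1)
t = np.arange(0, 24, 1 / 60.0)
w = 2 * np.pi / 24.0
day1 = 70 + 8 * np.cos(w * (t - 16.0)) + rng.normal(0, 2, t.size)
day2 = 70 + 8 * np.cos(w * (t - 17.5)) + rng.normal(0, 2, t.size)

shift = (estimate_phase(t, day2) - estimate_phase(t, day1) + 12) % 24 - 12
print(f"estimated day-to-day phase shift: {shift:+.2f} h")  # about +1.5 h
```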
2025.01.20
KAIST Develops Neuromorphic Semiconductor Chip that Learns and Corrects Itself
< Photo. The research team of the School of Electrical Engineering with the newly developed processor. (From center to right) Professor Young-Gyu Yoon, Integrated Master's and Doctoral Program students Seungjae Han and Hakcheon Jeong, and Professor Shinhyun Choi >

- Professor Shinhyun Choi and Professor Young-Gyu Yoon's joint research team from the School of Electrical Engineering developed a computing chip that can learn, correct errors, and process AI tasks
- The computing chip is equipped with high-reliability memristor devices with self-error-correction functions for real-time learning and image processing

Existing computer systems have separate data processing and storage devices, making them inefficient for processing complex data like AI. A KAIST research team has developed a memristor-based integrated system similar to the way our brain processes information. It is now ready for application in various devices, including smart security cameras, allowing them to recognize suspicious activity immediately without having to rely on remote cloud servers, and medical devices that can help analyze health data in real time.

KAIST (President Kwang Hyung Lee) announced on the 17th of January that the joint research team of Professor Shinhyun Choi and Professor Young-Gyu Yoon of the School of Electrical Engineering has developed a next-generation neuromorphic semiconductor-based ultra-small computing chip that can learn and correct errors on its own.

< Figure 1. Scanning electron microscope (SEM) image of a computing chip equipped with a highly reliable selector-less 32×32 memristor crossbar array (left). Hardware system developed for real-time artificial intelligence implementation (right). >

What is special about this computing chip is that it can learn and correct errors that arise from the non-ideal characteristics that were difficult to address in existing neuromorphic devices. For example, when processing a video stream, the chip learns to automatically separate a moving object from the background, and it becomes better at this task over time. This self-learning ability has been proven by achieving accuracy comparable to ideal computer simulations in real-time image processing.

The research team's main achievement is that it has completed a system that is both reliable and practical, going beyond the development of brain-like components. The team has developed the world's first memristor-based integrated system that can adapt to immediate environmental changes, and has presented an innovative solution that overcomes the limitations of existing technology.

< Figure 2. Background and foreground separation results for an image containing non-ideal characteristics of memristor devices (left). Real-time image separation results through on-device learning using the memristor computing chip developed by the research team (right). >

At the heart of this innovation is a next-generation semiconductor device called a memristor*. The variable resistance characteristics of this device can replace the role of synapses in neural networks, and by utilizing it, data storage and computation can be performed simultaneously, just like in our brain cells.

*Memristor: A portmanteau of "memory" and "resistor," a next-generation electrical device whose resistance value is determined by the amount and direction of charge that has flowed between its two terminals in the past.

The research team designed a highly reliable memristor that can precisely control resistance changes and developed an efficient system that, through self-learning, does away with complex compensation processes. This study is significant in that it experimentally verified the commercial viability of a next-generation neuromorphic semiconductor-based integrated system that supports real-time learning and inference.

This technology will revolutionize the way artificial intelligence is used in everyday devices, allowing AI tasks to be processed locally without relying on remote cloud servers, making them faster, more privacy-protected, and more energy-efficient.

"This system is like a smart workspace where everything is within arm's reach, instead of having to go back and forth between desks and file cabinets," explained KAIST researchers Hakcheon Jeong and Seungjae Han, who led the development of this technology. "This is similar to the way our brain processes information, where everything is processed efficiently at once in one spot."

The research was conducted with Hakcheon Jeong and Seungjae Han, students in the Integrated Master's and Doctoral Program at the KAIST School of Electrical Engineering, as co-first authors, and the results were published online in the international academic journal Nature Electronics on January 8, 2025.

*Paper title: Self-supervised video processing with self-calibration on an analogue computing platform based on a selector-less memristor array ( https://doi.org/10.1038/s41928-024-01318-6 )

This research was supported by the Next-Generation Intelligent Semiconductor Technology Development Project, the Excellent New Researcher Project, and the PIM AI Semiconductor Core Technology Development Project of the National Research Foundation of Korea, and by the Electronics and Telecommunications Research Institute Research and Development Support Project of the Institute of Information & Communications Technology Planning & Evaluation.
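Why a memristor crossbar computes "in memory" can be shown in a few lines: with input voltages on the rows and device conductances at the crosspoints, Ohm's law plus Kirchhoff's current law makes each column current a weighted sum of the inputs, i.e. a matrix-vector product in a single step. The sketch below also adds a toy gradient-based "self-calibration" loop that compensates device variation from the measured output error; the 5% variation figure and the update rule are illustrative assumptions, not the chip's actual learning mechanism.

```python
# Minimal numeric sketch of in-memory computing on a memristor crossbar,
# plus a toy self-calibration loop. Device values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 32, 32                                  # 32x32 array, as in Fig. 1
G_target = rng.uniform(1e-6, 1e-4, (rows, cols))     # programmed conductances (S)
G = G_target * rng.normal(1.0, 0.05, (rows, cols))   # non-ideal actual devices

v = rng.uniform(0.0, 0.2, rows)                      # input voltages (V)
i_ideal = G_target.T @ v                             # intended column currents
print("mean current error before:", np.abs(G.T @ v - i_ideal).mean())

# Toy on-device correction: measure the column-current error and nudge the
# conductances down its gradient until the array reproduces the target output.
for _ in range(50):
    err = G.T @ v - i_ideal                          # measured output error
    G -= 1.0 * np.outer(v, err)                      # gradient step on ||err||^2
print("mean current error after: ", np.abs(G.T @ v - i_ideal).mean())
```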
2025.01.17
KAIST Develops Insect-Eye-Inspired Camera Capturing 9,120 Frames Per Second
< (From left) Bio and Brain Engineering PhD student Jae-Myeong Kwon, Professor Ki-Hun Jeong, PhD student Hyun-Kyung Kim, PhD student Young-Gil Cha, and Professor Min H. Kim of the School of Computing >

The compound eyes of insects can detect fast-moving objects in parallel and, in low-light conditions, enhance sensitivity by integrating signals over time to determine motion. Inspired by these biological mechanisms, KAIST researchers have successfully developed a low-cost, high-speed camera that overcomes the limitations of frame rate and sensitivity faced by conventional high-speed cameras.

KAIST (represented by President Kwang Hyung Lee) announced on the 16th of January that a research team led by Professors Ki-Hun Jeong (Department of Bio and Brain Engineering) and Min H. Kim (School of Computing) has developed a novel bio-inspired camera capable of ultra-high-speed imaging with high sensitivity by mimicking the visual structure of insect eyes.

High-quality imaging under high-speed and low-light conditions is a critical challenge in many applications. While conventional high-speed cameras excel at capturing fast motion, their sensitivity decreases as frame rates increase because the time available to collect light is reduced. To address this issue, the research team adopted an approach similar to insect vision, utilizing multiple optical channels and temporal summation. Unlike traditional monocular camera systems, the bio-inspired camera employs a compound-eye-like structure that allows the parallel acquisition of frames from different time intervals.

< Figure 1. (A) Vision in a fast-eyed insect. Reflected light from swiftly moving objects sequentially stimulates the photoreceptors along the individual optical channels called ommatidia, whose visual signals are processed separately and in parallel via the lamina and medulla. Each neural response is temporally summed to enhance the visual signals. The parallel processing and temporal summation allow fast imaging in dim light. (B) High-speed and high-sensitivity microlens array camera (HS-MAC). A rolling-shutter image sensor is utilized to simultaneously acquire multiple frames by channel division, and temporal summation is performed in parallel to realize high speed and high sensitivity even in a low-light environment. In addition, the frame components of a single fragmented array image are stitched into a single blurred frame, which is subsequently deblurred by compressive image reconstruction. >

During this process, light is accumulated over overlapping time periods for each frame, increasing the signal-to-noise ratio. The researchers demonstrated that their bio-inspired camera could capture objects up to 40 times dimmer than those detectable by conventional high-speed cameras.

The team also introduced a "channel-splitting" technique to significantly enhance the camera's speed, achieving frame rates thousands of times higher than those natively supported by the packaged image sensor. Additionally, a "compressed image restoration" algorithm was employed to eliminate the blur caused by frame integration and reconstruct sharp images. The resulting bio-inspired camera is less than one millimeter thick and extremely compact, capable of capturing 9,120 frames per second while providing clear images in low-light conditions.

< Figure 2. A high-speed, high-sensitivity biomimetic camera packaged on an image sensor. It is small enough to fit on a fingertip, with a thickness of less than 1 mm. >

The research team plans to extend this technology to develop advanced image processing algorithms for 3D imaging and super-resolution imaging, aiming for applications in biomedical imaging, mobile devices, and various other camera technologies.

Hyun-Kyung Kim, a doctoral student in the Department of Bio and Brain Engineering at KAIST and the study's first author, stated, "We have experimentally validated that the insect-eye-inspired camera delivers outstanding performance in high-speed and low-light imaging despite its small size. This camera opens up possibilities for diverse applications in portable camera systems, security surveillance, and medical imaging."

< Figure 3. A rotating plate and a flame captured using the high-speed, high-sensitivity biomimetic camera. The plate rotating at 1,950 rpm was accurately captured at 9,120 fps. In addition, the pinch-off of a flame with a faint intensity of 880 µlux was accurately captured at 1,020 fps. >

This research was published in the international journal Science Advances in January 2025 (Paper title: "Biologically-inspired microlens array camera for high-speed and high-sensitivity imaging"). DOI: https://doi.org/10.1126/sciadv.ads3389

This study was supported by the Korea Research Institute for Defense Technology Planning and Advancement (KRIT) of the Defense Acquisition Program Administration (DAPA), the Ministry of Science and ICT, and the Ministry of Trade, Industry and Energy (MOTIE).
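The temporal-summation principle the camera borrows from insect vision is easy to verify numerically: averaging N noisy looks at the same dim scene improves the signal-to-noise ratio by roughly the square root of N. The intensity and noise figures below are arbitrary toy values, not measurements from the HS-MAC hardware.

```python
# Quick numeric check of temporal summation: SNR grows ~sqrt(N) with the
# number of summed frames. All values are toy numbers.
import numpy as np

rng = np.random.default_rng(0)
signal, noise = 5.0, 10.0          # dim scene vs. per-frame read noise (a.u.)
trials = 10_000

print(f"single-frame SNR: {signal / noise:.2f}")
for n in (4, 16, 64):              # number of temporally summed frames
    frames = signal + rng.normal(0, noise, size=(trials, n))
    summed = frames.mean(axis=1)
    print(f"SNR with {n:2d}-frame summation: {summed.mean() / summed.std():.2f}")
# The ~sqrt(N) gain is what lets overlapping summation windows reveal scenes
# tens of times dimmer at the same effective frame rate.
```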
2025.01.16
KAIST Research Team Develops Stretchable Microelectrode Array for Organoid Signal Monitoring
< Photo 1. (From top left) Professor Hyunjoo J. Lee, Dr. Mi-Young Son, Dr. Mi-Ok Lee; (front row, from left) doctoral student Kiup Kim, doctoral student Youngsun Lee >

On January 14th, the KAIST research team led by Professor Hyunjoo J. Lee from the School of Electrical Engineering, in collaboration with Dr. Mi-Young Son and Dr. Mi-Ok Lee at the Korea Research Institute of Bioscience and Biotechnology (KRIBB), announced the development of a highly stretchable microelectrode array (sMEA) designed for non-invasive electrophysiological signal measurement of organoids.

Organoids* are highly promising models for human biology and are expected to replace many animal experiments. Because they closely mimic the structure and function of human organs, their potential applications include disease modeling, drug screening, and personalized medicine.

*Organoids: three-dimensional in vitro tissue models derived from human stem cells

Despite these advantages, existing organoid research has primarily focused on genetic analysis, with limited studies on organoid functionality. For effective drug evaluation and precise biological research, technology is needed that preserves the three-dimensional structure of organoids while enabling real-time monitoring of their functions. However, it is challenging to evaluate these functionalities non-invasively, without damaging the tissue. The challenge is particularly significant for electrophysiological signal measurement in cardiac and brain organoids, since the sensor needs to be in direct contact with organoids of varying size and irregular shape. Achieving tight contact between electrodes and the external surface of the organoids without damaging them has been a persistent challenge.

< Figure 1. Schematic image of the highly stretchable MEA (sMEA) with protruding microelectrodes. >

The KAIST research team developed a highly stretchable microelectrode array with a unique serpentine structure that contacts the surface of organoids in a highly conformal fashion. They successfully demonstrated real-time measurement and analysis of electrophysiological signals from two types of electrogenic organoids (heart and brain). By employing a micro-electromechanical systems (MEMS)-based process, the team fabricated the serpentine-structured microelectrode array and used an electrochemical deposition process to develop PEDOT:PSS-based protruding microelectrodes. These innovations demonstrated exceptional stretchability and close surface adherence across various organoid sizes. The protruding microelectrodes improved contact between the organoids and the electrodes, ensuring stable and reliable electrophysiological signal measurements with high signal-to-noise ratios (SNR).

< Figure 2. Conceptual illustration, optical image, and fluorescence images of an organoid captured by the sMEA with protruding microelectrodes. >

Using this technology, the team successfully monitored and analyzed electrophysiological signals from cardiac spheroids of various sizes, revealing three-dimensional signal propagation patterns and identifying changes in signal characteristics according to size. They also measured electrophysiological signals in midbrain organoids, demonstrating the versatility of the technology. Additionally, they monitored signal modulations induced by various drugs, showcasing the potential of this technology for drug screening applications.

< Figure 3. SNR improvement effect of the protruding PEDOT:PSS microelectrodes. >

Professor Hyunjoo Jenny Lee stated, "By integrating MEMS technology and electrochemical deposition techniques, we successfully developed a stretchable microelectrode array adaptable to organoids of diverse sizes and shapes. High practicality is a major advantage of this system, since the fabrication is based on semiconductor processes offering high-volume production, reliability, and accuracy. This technology, which enables in situ, real-time analysis of the states and functionalities of organoids, will be a game changer in high-throughput drug screening."

This study, led by PhD candidate Kiup Kim from KAIST and PhD candidate Youngsun Lee from KRIBB, with significant contributions from Dr. Kwang Bo Jung, was published online on December 15, 2024 in Advanced Materials (IF: 27.4).

< Figure 4. Drug screening using cardiac spheroids and midbrain organoids. >

This research was supported by a grant from the 3D-TissueChip Based Drug Discovery Platform Technology Development Program (No. 20009209) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea); by the Commercialization Promotion Agency for R&D Outcomes (COMPA) funded by the Ministry of Science and ICT (MSIT) (RS-2024-00415902); by the K-Brain Project of the National Research Foundation (NRF) funded by the Korean government (MSIT) (RS-2023-00262568); by BK21 FOUR (Connected AI Education & Research Program for Industry and Society Innovation, KAIST EE, No. 4120200113769); and by the Korea Research Institute of Bioscience and Biotechnology (KRIBB) Research Initiative Program (KGM4722432).
2025.01.14
KAIST Develops ‘Hoverbike’ to Roam the Future Skies
< Photo 1. A group photo of the research team >

The hoverbike is a type of next-generation mobility that can complement the existing transportation system, serving as a means of air transportation free from traffic congestion thanks to its high payload capacity and long-distance flight. By developing a high-performance hoverbike domestically, the researchers are expected to reduce dependence on foreign technology and contribute to the growth of Korea's PAV* and UAM markets.

*PAV: Personal Aerial Vehicle. A key element of future urban air mobility (UAM) and an important part of the next-generation transportation system.

KAIST (President Kwang-Hyung Lee) announced on the 27th of December that the research team of Professor Hyochoong Bang of the Department of Aerospace Engineering has successfully developed the core technology for a highly reliable multipurpose vertical takeoff and landing hoverbike that can be operated both manned and unmanned. The research was conducted with the teams of Professor Jae-Hung Han, Professor Ji-yun Lee, Professor Jae-myung Ahn, Professor Han-Lim Choi, and Professor Chang-Hun Lee of the Department of Aerospace Engineering at KAIST, Professor Dongjin Lee of the Department of Unmanned Aerial Vehicles at Hanseo University, and Professor Jong-Oh Park of the Department of Electronics Engineering at Dong-A University.

The research team secured key technologies for the development of a high-performance hoverbike, covering the optimal design of a multipurpose aircraft, a hybrid propulsion system, a highly reliable precision navigation and flight control system, autonomous flight, and fault detection.

< Figure 1. Key features of the high-reliability multipurpose hoverbike >

The hoverbike platform introduced a gasoline-engine-based hybrid system to overcome the shortcomings of battery-based drones, achieving approximately 60% better performance and maximum payload weight compared to overseas technology levels. It is therefore expected to be utilized in various fields, such as emergency supply delivery, logistics, and rescue activities for civilian use, and transport and mission support for military use.

The navigation system implements DGPS/INS*-based multi-sensor fusion technology, enabling stable flight even in environments where GPS is unavailable or its signal is weak.

*DGPS/INS: A navigation solution combining the high accuracy of Differential GPS (DGPS) with an Inertial Navigation System (INS)

In addition, high-reliability flight control technology was developed to enable dependable maneuvering even under external factors such as payload, wind, and model uncertainty, together with fault detection technology. As part of the high-reliability autonomous flight system, a guidance technique that selects a safe landing area and then lands automatically on a helipad was implemented with high accuracy. Stable operation is possible even in complex environments through obstacle avoidance and automatic-landing autonomous flight technology.

< Figure 2. Hoverbike prototype model >

Professor Hyochoong Bang, the research director, emphasized, "We have proven the high practicality of the hoverbike in various environments through high-reliability flight control and precision navigation technology." He added, "The hoverbike is a promising research result that can not only provide a major path leading to PAVs and future aircraft, but also surpass existing drone technology by several levels. This achievement is even more meaningful because it is the result of five years of effort by eight joint research teams, including the project's practitioners, PhD students Kwangwoo Jang and Hyungjoo Ahn."

This study aimed to secure core technologies for manned/unmanned multipurpose hoverbikes that can serve as a new class of aircraft in the defense and civilian sectors. It started as the Defense Acquisition Program Administration's Defense Technology for Future Challenge Research and Development Project in 2019 and was completed in 2024 under the management of the Agency for Defense Development. The hoverbike is scheduled to be exhibited for the first time at the 2025 Drone Show Korea (DSK2025), which will be held at BEXCO in Busan from February 26 to 28, 2025.
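Why combining DGPS and INS yields robust navigation can be illustrated with a one-dimensional toy: an inertial solution (integrated accelerometer) is available at a high rate but drifts, while occasional GPS fixes are drift-free but noisy and slow. A simple complementary filter uses each GPS fix to trim both position and velocity drift. All gains, noise levels, and the constant accelerometer bias below are illustrative assumptions, not parameters of the hoverbike's actual navigation system.

```python
# Minimal 1-D sketch of GPS/INS fusion with a complementary filter.
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 5000                        # 100 Hz IMU, 50 s of flight
t = np.arange(steps) * dt
true_acc = 0.2 * np.sin(0.1 * t)              # gentle manoeuvre (m/s^2)

true_pos = true_vel = 0.0
pos = vel = 0.0                               # fused estimate
ins_pos = ins_vel = 0.0                       # dead reckoning only
for i in range(steps):
    true_vel += true_acc[i] * dt
    true_pos += true_vel * dt
    acc = true_acc[i] + 0.05 + rng.normal(0, 0.02)   # biased, noisy IMU
    vel += acc * dt; pos += vel * dt
    ins_vel += acc * dt; ins_pos += ins_vel * dt
    if i % 100 == 99:                         # 1 Hz GPS fix (0.5 m noise)
        innov = (true_pos + rng.normal(0, 0.5)) - pos
        pos += 0.2 * innov                    # correct position...
        vel += 0.1 * innov                    # ...and bleed off velocity drift

print(f"INS-only position error: {abs(ins_pos - true_pos):6.1f} m")
print(f"fused position error:    {abs(pos - true_pos):6.1f} m")
```

Dead reckoning alone accumulates tens of meters of error from the accelerometer bias, while the fused estimate stays bounded at the meter level; the same logic, extended to full state estimation, is what keeps flight stable when GPS quality degrades.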
2024.12.27
KAIST Develops Foundational Technology to Revert Cancer Cells to Normal Cells
Despite the development of numerous cancer treatment technologies, the common goal of current cancer therapies is to eliminate cancer cells. This approach, however, faces fundamental limitations, including cancer cells developing resistance and returning, as well as severe side effects from the destruction of healthy cells.

< (From top left) Bio and Brain Engineering PhD candidates Juhee Kim, Jeong-Ryeol Gong, Chun-Kyung Lee, and Hoon-Min Kim posed for a group photo with Professor Kwang-Hyun Cho >

KAIST (represented by President Kwang Hyung Lee) announced on the 20th of December that a research team led by Professor Kwang-Hyun Cho from the Department of Bio and Brain Engineering has developed a groundbreaking technology that can treat colon cancer by converting cancer cells into a state resembling normal colon cells without killing them, thus avoiding side effects.

The research team focused on the observation that during oncogenesis, normal cells regress along their differentiation trajectory. Building on this insight, they developed a technology to create a digital twin of the gene network associated with the differentiation trajectory of normal cells.

< Figure 1. Technology for creating a digital twin of a gene network from single-cell transcriptome data of a normal cell differentiation trajectory. Professor Kwang-Hyun Cho's research team developed a digital twin creation technology that precisely observes the dynamics of gene regulatory relationships during the differentiation of normal cells along a trajectory and analyzes the relationships among key genes to build a mathematical model that can be simulated (A–F). In addition, they developed a technology to discover the key regulatory factors that control the differentiation trajectory of normal cells by simulating and analyzing this digital twin. >

< Figure 2. Digital twin simulation of the differentiation trajectory of normal colon cells. The dynamics of single-cell transcriptome data for the differentiation trajectory of normal colon cells were analyzed (A), and a digital twin of the gene network was developed representing the regulatory relationships of key genes in this differentiation trajectory (B). The simulation results of the digital twin confirm that it readily reproduces the dynamics of the single-cell transcriptome data (C, D). >

Through simulation analysis, the team systematically identified master molecular switches that induce normal cell differentiation. When these switches were applied to colon cancer cells, the cancer cells reverted to a normal-like state, a result confirmed through molecular and cellular experiments as well as animal studies.

< Figure 3. Discovery of top-level key control factors that induce differentiation of normal colon cells. By applying control-factor discovery technology to the digital twin model, three genes, HDAC2, FOXA2, and MYB, were identified as key control factors that induce differentiation of normal colon cells (A, B). Simulation analysis of the regulatory effects of the discovered control factors through the digital twin confirmed that they could induce complete differentiation of colon cells (C). >

< Figure 4. Verification of the cancer-reverting effect of the discovered key control factors in colon cancer cells and animal experiments. The key control factors of the normal colon cell differentiation trajectory discovered through digital twin simulation analysis were applied to actual colon cancer cells and colon cancer mouse models to experimentally verify the cancer reversion effect. The key control factors significantly reduced the proliferation of three colon cancer cell lines (A), and this was confirmed in the same way in animal models (B–D). >

This research demonstrates that cancer cell reversion can be systematically achieved by analyzing and utilizing a digital twin of the cancer cell gene network, rather than relying on serendipitous discoveries. The findings hold significant promise for developing reversible cancer therapies that can be applied to various types of cancer.

< Figure 5. The change in overall gene expression resulting from regulation of the identified key regulatory factors, which converted the state of colon cancer cells to that of normal colon cells. The transcriptomes of colon cancer tissues and normal colon tissues from more than 400 colon cancer patients were compared with the transcriptomes of colon cancer cell lines and reverted colon cancer cell lines, respectively. The comparison confirmed that regulation of the identified key regulatory factors converted all three colon cancer cell lines to a state similar to the transcriptome expression of normal colon tissue. >

Professor Kwang-Hyun Cho remarked, "The fact that cancer cells can be converted back to normal cells is an astonishing phenomenon. This study proves that such reversion can be systematically induced." He further emphasized, "This research introduces the novel concept of reversible cancer therapy by reverting cancer cells to normal cells. It also develops foundational technology for identifying targets for cancer reversion through the systematic analysis of normal cell differentiation trajectories."

This research included contributions from Jeong-Ryeol Gong, Chun-Kyung Lee, Hoon-Min Kim, Juhee Kim, and Jaeog Jeon, and was published in the online edition of the international journal Advanced Science by Wiley on December 11. (Title: "Control of Cellular Differentiation Trajectories for Cancer Reversion") DOI: https://doi.org/10.1002/advs.202402132

< Figure 6. Schematic diagram of the research results. Professor Kwang-Hyun Cho's research team developed a source technology to systematically discover key control factors that can induce reversion of colon cancer cells, through a systems biology approach and digital twin simulation analysis of the differentiation trajectory of normal colon cells, and verified the reversion effects on actual colon cancer through molecular cell experiments and animal experiments. >

The study was supported by the Ministry of Science and ICT and the National Research Foundation of Korea through the Mid-Career Researcher Program and the Basic Research Laboratory Program. The research findings have been transferred to BioRevert Inc., where they will be used for the development of practical cancer reversion therapies.
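The screening logic the article describes, simulating a gene-network model and searching for switches whose forced activation drives every state into the normal attractor, can be sketched with a tiny Boolean network. The four-gene network and its logic rules below are entirely hypothetical, chosen only to make the mechanics visible; the actual study built its model from single-cell transcriptome data and identified HDAC2, FOXA2, and MYB.

```python
# Minimal sketch of an in-silico "master switch" screen on a Boolean
# gene-network model. The network itself is hypothetical.
from itertools import product

GENES = ("A", "B", "C", "D")

def step(s, clamp=None):
    a, b, c, d = s
    nxt = (int((a or b) and not d),   # A: self/B-activated, repressed by D
           int(a),                    # B follows A
           int(d),                    # C follows D
           int(d and not a))          # D: self-sustaining, repressed by A
    if clamp is not None:             # clamp = (gene index, forced value)
        i, v = clamp
        nxt = nxt[:i] + (v,) + nxt[i + 1:]
    return nxt

def attractor(s, clamp=None):
    seen = []
    while s not in seen:              # iterate until a state repeats
        seen.append(s)
        s = step(s, clamp)
    return s

NORMAL = (1, 1, 0, 0)                 # "differentiated" stable state

# The untreated network is multistable: besides NORMAL it has a cancer-like
# attractor (0, 0, 1, 1) in which A is silenced by D, plus an all-off state.
print({attractor(s) for s in product((0, 1), repeat=4)})

# In-silico screen: which single ON-clamp reverts every state to NORMAL?
for i, gene in enumerate(GENES):
    if all(attractor(s, clamp=(i, 1)) == NORMAL
           for s in product((0, 1), repeat=4)):
        print(f"clamping {gene} ON reverts all states -> master switch")
```

In this toy, forcing the top-level regulator A on drives even the cancer-like attractor back to the normal state, while forcing downstream genes does not, mirroring the distinction between master switches and their targets.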
2024.12.23
KAIST Proposes a New Way to Circumvent a Long-time Frustration in Neural Computing
The human brain begins learning through spontaneous random activity even before it receives sensory information from the external world. The technology developed by the KAIST research team enables much faster and more accurate learning when exposed to actual data by pre-training a brain-mimicking artificial neural network on random information, and it is expected to be a breakthrough in the future development of brain-based artificial intelligence and neuromorphic computing technology.

KAIST (President Kwang-Hyung Lee) announced on the 16th of December that Professor Se-Bum Paik's research team in the Department of Brain and Cognitive Sciences solved the weight transport problem*, a long-standing challenge in neural network learning, and through this explained the principles that enable resource-efficient learning in biological brain neural networks.

*Weight transport problem: The biggest obstacle to the development of artificial intelligence that mimics the biological brain. It is the fundamental reason why, unlike biological brains, general artificial neural networks require large-scale memory and computation for learning.

Over the past several decades, the development of artificial intelligence has been based on error backpropagation learning, proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation learning was thought to be impossible in biological brains because it requires the unrealistic assumption that individual neurons must know all the connection information across multiple layers in order to calculate the error signal for learning.

< Figure 1. Illustration depicting the method of random noise training and its effects >

This difficult problem, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, after error backpropagation learning was proposed by Hinton in 1986. Since then, it has been considered the reason why the operating principles of natural neural networks and artificial neural networks would forever be fundamentally different.

At the borderline of artificial intelligence and neuroscience, researchers including Hinton have continued to attempt to create biologically plausible models that can implement the learning principles of the brain by solving the weight transport problem. In 2016, a joint research team from Oxford University and DeepMind in the UK first proposed the concept of error backpropagation learning without weight transport, drawing attention from the academic world. However, biologically plausible error backpropagation learning without weight transport was inefficient, with slow learning speeds and low accuracy, making it difficult to apply in practice.

The KAIST research team noted that the biological brain begins learning through internal spontaneous random neural activity even before experiencing external sensory input. To mimic this, the research team pre-trained a biologically plausible neural network, one without weight transport, on meaningless random information (random noise). As a result, they showed that the symmetry of the forward and backward neural connections of the network, an essential condition for error backpropagation learning, can be created. In other words, learning without weight transport becomes possible through random pre-training.

< Figure 2. Illustration depicting the meta-learning effect of random noise training >

The research team revealed that learning random information before learning actual data has the property of meta-learning, that is, "learning how to learn." Neural networks pre-trained on random noise learn much faster and more accurately when exposed to actual data, achieving high learning efficiency without weight transport.

< Figure 3. Illustration depicting research on understanding the brain's operating principles through artificial neural networks >

Professor Se-Bum Paik said, "It breaks the conventional understanding of existing machine learning that only data learning is important, and provides a new perspective that focuses on the neuroscience principle of creating appropriate conditions before learning." He added, "It is significant in that it solves important problems in artificial neural network learning through clues from developmental neuroscience, and at the same time provides insight into the brain's learning principles through artificial neural network models."

This study, in which Jeonghwan Cheon, a Master's candidate in the KAIST Department of Brain and Cognitive Sciences, participated as first author and Professor Sang Wan Lee of the same department as co-author, was presented at the 38th Conference on Neural Information Processing Systems (NeurIPS), the world's top artificial intelligence conference, on December 14th in Vancouver, Canada. (Paper title: Pretraining with random noise for fast and robust learning without weight transport)

This study was conducted with the support of the National Research Foundation of Korea's Basic Research Program in Science and Engineering, the Information and Communications Technology Planning and Evaluation Institute's Talent Development Program, and the KAIST Singularity Professor Program.
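The setting can be made concrete with a short sketch of feedback alignment, the standard family of learning rules without weight transport: the backward pass uses a fixed random matrix B instead of the transpose of the forward weights. The paper reports that pretraining such a network on pure random noise builds up forward-backward weight symmetry, and the alignment score printed below tracks exactly that quantity. The layer sizes, learning rate, and iteration count are arbitrary toy choices, not the paper's experimental setup.

```python
# Minimal sketch of feedback alignment with random-noise pretraining.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 64, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))    # fixed random feedback pathway

def alignment(W2, B):
    """Cosine similarity between W2^T and B; exact backprop corresponds
    to B = W2^T, i.e. alignment 1."""
    return float((W2.T * B).sum() /
                 (np.linalg.norm(W2) * np.linalg.norm(B)))

relu = lambda x: np.maximum(x, 0.0)
lr = 0.01
print("alignment before noise pretraining:", round(alignment(W2, B), 3))

# Pretraining phase: random inputs paired with random targets.
for _ in range(5000):
    x = rng.normal(size=(n_in, 1))
    y = rng.normal(size=(n_out, 1))
    h = relu(W1 @ x)
    e = (W2 @ h) - y                      # output error
    W2 -= lr * e @ h.T                    # local delta rule at output layer
    W1 -= lr * ((B @ e) * (h > 0)) @ x.T  # error routed through fixed B
print("alignment after noise pretraining: ", round(alignment(W2, B), 3))
```

Because the hidden layer is shaped by B while the output layer regresses onto the hidden activity, the forward weights drift toward the fixed feedback weights during noise pretraining, which is the symmetry condition that later makes error-driven learning on real data fast and accurate.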
2024.12.16
KAIST Extends Lithium Metal Battery Lifespan by 750% Using Water
Lithium metal, a next-generation anode material, has been highlighted as a way to overcome the performance limitations of commercial batteries. However, issues inherent to lithium metal have caused shortened battery lifespans and increased fire risks. KAIST researchers have achieved a world-class breakthrough, extending the lifespan of lithium metal anodes by approximately 750% using only water.

KAIST (represented by President Kwang Hyung Lee) announced on the 2nd of December that Professor Il-Doo Kim from the Department of Materials Science and Engineering, in collaboration with Professor Jiyoung Lee from Ajou University, successfully stabilized lithium growth and significantly enhanced the lifespan of next-generation lithium metal batteries using eco-friendly hollow nanofibers as protective layers.

Conventional protective layer technologies, which apply a surface coating onto lithium metal to create an artificial interface with the electrolyte, have relied on toxic processes and expensive materials, with limited improvements in the lifespan of lithium metal anodes.

< Figure 1. Schematic illustration of the fabrication process of the newly developed protective membrane by an eco-friendly electrospinning process using water >

To address these limitations, Professor Kim's team proposed a hollow nanofiber protective layer capable of controlling lithium-ion growth through both physical and chemical means. This protective layer was manufactured through an environmentally friendly electrospinning process*, using guar gum** extracted from plants as the primary material and water as the sole solvent.

*Electrospinning process: A method in which polymer solutions are subjected to an electric field, producing continuous fibers with diameters ranging from tens of nanometers to several micrometers.

**Guar gum: A natural polymer extracted from guar beans, composed mainly of monosaccharides. Its oxidized functional groups regulate interactions with lithium ions.

< Figure 2. Physical and chemical control of lithium dendrites by the newly developed protective membrane >

The nanofiber protective layer effectively controlled reversible chemical reactions between the electrolyte and lithium ions. The hollow spaces within the fibers suppressed the random accumulation of lithium ions on the metal surface, stabilizing the interface between the lithium metal surface and the electrolyte.

< Figure 3. Performance of lithium metal battery full cells with the newly developed protective membrane >

As a result, lithium metal anodes with this protective layer demonstrated approximately a 750% increase in lifespan compared to conventional lithium metal anodes. The battery retained 93.3% of its capacity even after 300 charge-discharge cycles, achieving world-class performance. The researchers also verified that this natural protective layer decomposes entirely within about a month in soil, proving its eco-friendly nature throughout manufacturing and disposal.

< Figure 4. Excellent decomposition rate of the newly developed protective membrane >

Professor Il-Doo Kim explained, "By leveraging both physical and chemical protective functions, we were able to guide reversible reactions between lithium metal and the electrolyte more effectively and suppress dendrite growth, resulting in lithium metal anodes with unprecedented lifespan characteristics." He added, "As the environmental burden caused by battery production and disposal becomes a pressing issue due to surging battery demand, this water-based manufacturing method with biodegradable properties will significantly contribute to the commercialization of next-generation eco-friendly batteries."

This study was led by Dr. Jiyoung Lee (now a professor in the Department of Chemical Engineering at Ajou University) and Dr. Hyunsub Song (currently at Samsung Electronics), both graduates of KAIST's Department of Materials Science and Engineering. The findings were published as a front cover article in Advanced Materials, Volume 36, Issue 47, on November 21. (Paper title: "Overcoming Chemical and Mechanical Instabilities in Lithium Metal Anodes with Sustainable and Eco-Friendly Artificial SEI Layer")

The research was supported by the KAIST-LG Energy Solution Frontier Research Lab (FRL), the Alchemist Project funded by the Ministry of Trade, Industry and Energy, and the Top-Tier Research Support Program of the Ministry of Science and ICT.
2024.12.12
KAIST Scientifically Identifies a Method to Prevent Dental Erosion from Carbonated Drinks
A Korean research team, which had previously used nanotechnology to visualize and scientifically prove the harmful effects of carbonated drinks like cola on dental health, has now identified the mechanism behind an effective method to prevent tooth damage caused by these beverages.

KAIST (represented by President Kwang Hyung Lee) announced on the 5th of December that a team led by Professor Seungbum Hong from the Department of Materials Science and Engineering, in collaboration with Seoul National University's School of Dentistry (Departments of Pediatric Dentistry and Oral Microbiology) and Professor Hye Ryung Byon's research team from the Department of Chemistry, has revealed through nanotechnology that silver diamine fluoride (SDF)* forms a fluoride-containing protective layer on the tooth surface, effectively inhibiting cola-induced erosion.

*SDF (Silver Diamine Fluoride): A dental agent primarily used for the treatment and prevention of tooth decay. SDF strengthens carious lesions, suppresses bacterial growth, and halts the progression of cavities.

The team analyzed the surface morphology and mechanical properties of tooth enamel at the nanoscale using atomic force microscopy (AFM). They also examined the chemical properties of the nano-film formed by SDF treatment using X-ray photoelectron spectroscopy (XPS)* and Fourier-transform infrared spectroscopy (FTIR)**.

*XPS (X-ray Photoelectron Spectroscopy): A powerful surface analysis technique used to investigate the chemical composition and electronic structure of materials.

**FTIR (Fourier-Transform Infrared Spectroscopy): An analytical method that identifies the molecular structure and composition of materials by analyzing how they absorb or transmit infrared light.

The findings showed significant differences in surface roughness and elastic modulus between teeth exposed to cola with and without SDF treatment. Teeth treated with SDF exhibited minimal changes in surface roughness due to erosion (from 64 nm to 70 nm) and maintained a high elastic modulus (from 215 GPa to 205 GPa). This was attributed to the formation of a fluoroapatite* layer by SDF, which acted as a protective shield.

*Fluoroapatite: A phosphate mineral with the chemical formula Ca₅(PO₄)₃F (calcium fluorophosphate). It can occur naturally or be synthesized biologically or artificially, and it plays a crucial role in strengthening teeth and bones.

< Figure 1. Schematic of the workflow. The surface morphology and mechanical properties of untreated and silver diamine fluoride (SDF)-treated enamel exposed to cola were analyzed over time using atomic force microscopy (AFM). >

Professor Young J. Kim from Seoul National University's Department of Pediatric Dentistry noted, "This technology could be applied to prevent dental erosion and strengthen teeth for both children and adults. It is a cost-effective and accessible dental treatment."

< Figure 2. Changes in surface roughness and elastic modulus over time of exposure to cola for SDF-untreated and SDF-treated teeth. After 1 hour, the surface of the untreated teeth rapidly roughened from 83 nm to 287 nm and the elastic modulus weakened from 125 GPa to 13 GPa, whereas the surface roughness of the SDF-treated teeth changed only slightly, from 64 nm to 70 nm, and the elastic modulus barely changed, from 215 GPa to 205 GPa, remaining close to the initial state. >

Professor Seungbum Hong emphasized, "Dental health significantly impacts quality of life. This research offers an effective non-invasive method to prevent early dental erosion, moving beyond traditional surgical treatments. By simply applying SDF, dental erosion can be prevented and enamel strengthened, potentially reducing the pain and costs associated with treatment."

This study, led by first author Aditi Saha, a PhD student in KAIST's Department of Materials Science and Engineering, was published in the international journal Biomaterials Research on November 7 under the title "Nanoscale Study on Noninvasive Prevention of Dental Erosion of Enamel by Silver Diamine Fluoride". The research was supported by the National Research Foundation of Korea.
2024.12.11
KAIST Develops a Multifunctional Structural Battery Capable of Energy Storage and Load Support
Structural batteries are used in industries such as eco-friendly vehicles, mobility, and aerospace, and must simultaneously meet the requirements of high energy density for energy storage and high load-bearing capacity. Conventional structural battery technology has struggled to enhance both functions at once, but KAIST researchers have now developed foundational technology that addresses this issue.

< Photo 1. (From left) Professor Seong Su Kim and PhD candidates Sangyoon Bae and Su Hyun Lim of the Department of Mechanical Engineering >

< Photo 2. (From left) Professor Seong Su Kim and Master's graduate Mohamad A. Raja of the KAIST Department of Mechanical Engineering >

KAIST (represented by President Kwang Hyung Lee) announced on the 19th of November that Professor Seong Su Kim's team from the Department of Mechanical Engineering has developed a thin, uniform, high-density, multifunctional structural carbon fiber composite battery* that is capable of supporting loads and free from fire risk while offering high energy density.

*Multifunctional structural battery: A composite in which each material simultaneously serves as a load-bearing structure and an energy storage element.

Early structural batteries involved embedding commercial lithium-ion batteries into layered composite materials. These batteries suffered from poor integration of their mechanical and electrochemical properties, leading to challenges in material processing, assembly, and design optimization that made commercialization difficult.

To overcome these challenges, Professor Kim's team explored the concept of "energy-storing composite materials," focusing on the interface and curing properties that are critical in traditional composite design. This led to the development of high-density multifunctional structural carbon fiber composite batteries that maximize multifunctionality.

The team analyzed the curing mechanisms of epoxy resin, known for its strong mechanical properties, combined with ionic-liquid- and carbonate-electrolyte-based solid polymer electrolytes. By controlling temperature and pressure, they were able to optimize the curing process.

The newly developed structural battery was manufactured through vacuum compression molding, increasing the volume fraction of the carbon fibers, which serve as both electrodes and current collectors, by over 160% compared to previous carbon-fiber-based batteries. This greatly increased the contact area between electrodes and electrolytes, resulting in a high-density structural battery with improved electrochemical performance. Furthermore, the team effectively controlled air bubbles within the structural battery during the curing process, simultaneously enhancing the battery's mechanical properties.

Professor Seong Su Kim, the lead researcher, explained, "We proposed a framework for designing solid polymer electrolytes, a core material for high-stiffness, ultra-thin structural batteries, from both material and structural perspectives. These material-based structural batteries can serve as internal components in cars, drones, airplanes, and robots, significantly extending their operating time on a single charge. This represents a foundational technology for next-generation multifunctional energy storage applications."

< Figure 2. Supplementary cover of ACS Applied Materials & Interfaces >

Mohamad A. Raja, a master's graduate of KAIST's Department of Mechanical Engineering, participated as the first author of this research, which was published in the journal ACS Applied Materials & Interfaces on September 10. The paper was recognized for its excellence and selected as a supplementary cover article. (Paper title: "Thin, Uniform, and Highly Packed Multifunctional Structural Carbon Fiber Composite Battery Lamina Informed by Solid Polymer Electrolyte Cure Kinetics." https://doi.org/10.1021/acsami.4c08698)

This research was supported by the National Research Foundation of Korea's Mid-Career Researcher Program and the National Semiconductor Research Laboratory Development Program.
2024.11.27