KAIST NEWS
College of Engineering
Professor Lik-Hang Lee Offers Metaverse Course for Hong Kong Productivity Council
Professor Lik-Hang Lee from the Department of Industrial and Systems Engineering will offer a metaverse course in partnership with the Hong Kong Productivity Council (HKPC) to Hong Kong-based professionals starting in the Spring 2022 semester. "The Metaverse Course for Professionals" aims to nurture world-class metaverse talent in response to surging demand for virtual worlds and virtual-physical blended environments.

The HKPC's R&D scientists, consultants, software engineers, and related professionals will attend the course, and will receive a professional certificate in managing and developing metaverse skills upon completing this intensive course. The course will provide essential skills and knowledge about the parallel virtual universe and how to leverage digitalization and industrialization in the metaverse era. It includes comprehensive modules such as designing and implementing virtual-physical blended environments, metaverse technology and ecosystems, immersive smart cities, token economies, and intelligent industrialization in the metaverse era.

Professor Lee believes that in the decades to come we will see rising numbers of virtual worlds in cyberspace, known as the 'Immersive Internet', characterized by high levels of immersiveness, user interactivity, and user-machine collaboration. "Consumers in virtual worlds will create novel content as well as personalized products and services, becoming a catalyst for 'hyperpersonalization' in the next industrial revolution," he said.

Professor Lee said he will continue offering world-class education related to the metaverse to students at KAIST and professionals from various industrial sectors, as his Augmented Reality and Media Lab will focus on a variety of metaverse topics such as metaverse campuses and industrial metaverses.

The HKPC has worked to deliver innovative solutions for Hong Kong industries and enterprises since 1967, helping them achieve optimized resource utilization, effectiveness, and cost reduction, as well as enhanced productivity and competitiveness in both local and international markets. The HKPC has advocated for Hong Kong's reindustrialization powered by Industry 4.0 and e-commerce 4.0, with a strong emphasis on R&D, IoT, AI, and digital manufacturing.

The Augmented Reality and Media Lab led by Professor Lee will continue its close partnerships with the HKPC and its other partners to help build the epicentre of the metaverse in the region. The lab will also fully leverage its well-established research niches in user-centric, virtual-physical cyberspace (https://www.lhlee.com/projects-8) to serve upcoming projects related to industrial metaverses, in line with the departmental focus on smart factories and artificial intelligence.
2022.04.06
Baemin CEO Endows a Scholarship in Honor of the Late Professor Chwa
Beom-Jun Kim, CEO of Woowa Brothers, the leading meal delivery app company known as 'Baemin,' made a donation of 100 million KRW in honor of the late Professor Kyong-Yong Chwa from the School of Computing, who passed away last year. The fund will establish the "Kyong-Yong Chwa - Beom-Jun Kim Scholarship," which will provide scholarships for four students over five years. Kim finished his BS in 1997 and his MS in 1999 at the School of Computing, where Professor Chwa was his advisor.

The late Professor Chwa was a pioneering scholar who brought the concept of computer algorithms to Korea. After graduating from Seoul National University in electrical engineering, he earned his PhD at Northwestern University and began teaching at KAIST in 1980. He served as President of the Korean Institute of Information Scientists and Engineers and as a fellow emeritus of the Korean Academy of Science and Technology.

Professor Chwa encouraged younger students to participate in international computer programming contests. Under his wing, Team Korea, comprised of four high school students including Kim, placed fourth in the International Olympiad in Informatics (IOI). Kim, who participated as a high school junior, won an individual gold medal in the fourth IOI competition in 1992. Since then, Korean students have actively participated in many competitions, including the International Collegiate Programming Contest (ICPC) hosted by the Association for Computing Machinery.

Kim said, "I feel fortunate to have met so many good friends and distinguished professors. With them, I had opportunities to grow. I would like to provide such opportunities to my juniors at KAIST. Professor Chwa was a larger-than-life figure in the field of computer programming. He was always caring and supported us with a warm heart. I want this donation to help carry on his legacy for our students and for them to seek greater challenges and bigger dreams."
2022.03.25
Decoding Brain Signals to Control a Robotic Arm
Advanced brain-machine interface system successfully interprets arm movement directions from neural signals in the brain

Researchers have developed a mind-reading system for decoding neural signals from the brain during arm movement. The method, described in the journal Applied Soft Computing, can be used by a person to control a robotic arm through a brain-machine interface (BMI).

A BMI is a device that translates nerve signals into commands to control a machine, such as a computer or a robotic limb. There are two main techniques for monitoring neural signals in BMIs: electroencephalography (EEG) and electrocorticography (ECoG).

EEG records signals from electrodes on the surface of the scalp and is widely employed because it is non-invasive, relatively cheap, safe, and easy to use. However, EEG has low spatial resolution and picks up irrelevant neural signals, which makes it difficult to interpret an individual's intentions from it. ECoG, on the other hand, is an invasive method that involves placing electrodes directly on the surface of the cerebral cortex below the scalp. Compared with EEG, ECoG can monitor neural signals with much higher spatial resolution and less background noise. However, the technique has several drawbacks.

"ECoG is primarily used to find potential sources of epileptic seizures, meaning the electrodes are placed in different locations for different patients and may not be in the optimal regions of the brain for detecting sensory and movement signals," explained Professor Jaeseung Jeong, a brain scientist at KAIST. "This inconsistency makes it difficult to decode brain signals to predict movements."

To overcome these problems, Professor Jeong's team developed a new method for decoding ECoG neural signals during arm movement. The system is based on a machine-learning model for analysing and predicting neural signals called an 'echo state network' and a mathematical probability model, the Gaussian distribution.

In the study, the researchers recorded ECoG signals from four individuals with epilepsy while they performed a reach-and-grasp task. Because the ECoG electrodes were placed according to the potential sources of each patient's epileptic seizures, only 22% to 44% of the electrodes were located in the regions of the brain responsible for controlling movement.

During the movement task, the participants were given visual cues, either by placing a real tennis ball in front of them or via a virtual reality headset showing a clip of a human arm reaching forward in first-person view. They were asked to reach forward, grasp an object, then return their hand and release the object, while wearing motion sensors on their wrists and fingers. In a second task, they were instructed to imagine reaching forward without moving their arms.

The researchers monitored the signals from the ECoG electrodes during real and imaginary arm movements and tested whether the new system could predict the direction of movement from the neural signals. They found that the novel decoder successfully classified arm movements in 24 directions in three-dimensional space, in both the real and virtual tasks, and that the results were at least five times more accurate than chance. They also used a computer simulation to show that the novel ECoG decoder could control the movements of a robotic arm.
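The decoder couples an echo state network, a recurrent network whose input and internal weights are fixed at random so that only the readout is ever fit, with a Gaussian readout over the reservoir states. The minimal Python sketch below illustrates that general technique under stated assumptions (the reservoir size, weight scaling, and simple class-conditional Gaussian readout are illustrative choices, not the authors' implementation):

```python
import numpy as np

class ESNDirectionDecoder:
    """Minimal echo state network sketch: a fixed random reservoir
    plus a class-conditional Gaussian readout over final states."""

    def __init__(self, n_inputs, n_reservoir=300, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Scale the recurrent weights so the reservoir has the
        # fading-memory "echo state" property.
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))

    def _final_state(self, signal):
        # signal: (timesteps, n_inputs) array, e.g. per-electrode
        # ECoG features recorded over the course of one reach.
        x = np.zeros(self.W.shape[0])
        for u in signal:
            x = np.tanh(self.W_in @ u + self.W @ x)
        return x

    def fit(self, signals, labels):
        # Fit one Gaussian (class mean, shared diagonal variance)
        # per movement direction; no gradient training anywhere.
        labels = np.asarray(labels)
        states = np.array([self._final_state(s) for s in signals])
        self.classes = np.unique(labels)
        self.means = np.array([states[labels == c].mean(axis=0)
                               for c in self.classes])
        self.var = states.var(axis=0) + 1e-6

    def predict(self, signal):
        # Choose the direction whose Gaussian assigns the final
        # reservoir state the highest log-likelihood.
        x = self._final_state(signal)
        ll = -0.5 * np.sum((x - self.means) ** 2 / self.var, axis=1)
        return self.classes[np.argmax(ll)]
```

Because only the readout statistics are estimated, training is cheap and data-efficient, one reason reservoir methods suit small clinical ECoG datasets.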
Overall, the results suggest that the new machine learning-based BMI system successfully used ECoG signals to interpret the direction of the intended movements. The next steps will be to improve the accuracy and efficiency of the decoder. In the future, it could be used in a real-time BMI device to help people with movement or sensory impairments.

This research was supported by the KAIST Global Singularity Research Program of 2021, the Brain Research Program of the National Research Foundation of Korea funded by the Ministry of Science, ICT, and Future Planning, and the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education.

-Publication
Hoon-Hee Kim and Jaeseung Jeong, "An electrocorticographic decoder for arm movement for brain-machine interface using an echo state network and Gaussian readout," Applied Soft Computing, online December 31, 2021 (doi.org/10.1016/j.asoc.2021.108393)

-Profile
Professor Jaeseung Jeong
Department of Bio and Brain Engineering
College of Engineering
KAIST
2022.03.18
CXL-Based Memory Disaggregation Technology Opens Up a New Direction for Big Data Solution Frameworks
A KAIST team's compute express link (CXL) solution provides new insights into memory disaggregation and ensures direct access and high performance

A team from the Computer Architecture and Memory Systems Laboratory (CAMEL) at KAIST presented a new compute express link (CXL) solution whose directly accessible, high-performance memory disaggregation opens new directions for big data memory processing. Professor Myoungsoo Jung said the team's technology significantly improves performance compared to existing remote direct memory access (RDMA)-based memory disaggregation.

CXL is a new dynamic multi-protocol built on peripheral component interconnect express (PCIe) for efficiently utilizing memory devices and accelerators. Many enterprise data centers and memory vendors are paying attention to it as the next-generation multi-protocol for the era of big data.

Emerging big data applications such as machine learning, graph analytics, and in-memory databases require large memory capacities. However, scaling out the memory capacity via a prior memory interface like double data rate (DDR) is limited by the number of central processing units (CPUs) and memory controllers. Therefore, memory disaggregation, which allows connecting a host to another host's memory or to standalone memory nodes, has emerged.

RDMA is a technique by which a host can directly access another host's memory via InfiniBand, the network protocol commonly used in data centers. Today, most existing memory disaggregation technologies employ RDMA to obtain a large memory capacity: a host shares another host's memory by transferring data between local and remote memory.

Although RDMA-based memory disaggregation provides a large memory capacity to a host, two critical problems remain. First, scaling out the memory still requires adding an extra CPU, because passive memory such as dynamic random-access memory (DRAM) cannot operate by itself and must be controlled by a CPU. Second, redundant data copies and software fabric interventions in RDMA-based memory disaggregation cause long access latency; remote memory access latency in RDMA-based memory disaggregation is multiple orders of magnitude longer than local memory access.

To address these issues, Professor Jung's team developed a CXL-based memory disaggregation framework, including CXL-enabled customized CPUs, CXL devices, CXL switches, and CXL-aware operating system modules. The team's CXL device is a purely passive, directly accessible memory node that contains multiple DRAM dual inline memory modules (DIMMs) and a CXL memory controller. Since the CXL memory controller manages the memory in the CXL device, a host can utilize the memory node without processor or software intervention. The team's CXL switch scales out a host's memory capacity by hierarchically connecting multiple CXL devices, allowing hundreds of devices. Atop the switches and devices, the team's CXL-enabled operating system removes the redundant data copies and protocol conversion exhibited by conventional RDMA, which significantly decreases access latency to the memory nodes.

In a test loading 64 B (cacheline) data from memory pooling devices, CXL-based memory disaggregation showed 8.2 times higher data load performance than RDMA-based memory disaggregation and performance similar to local DRAM.
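The practical upshot of this design is the access model: a CXL memory node is mapped into the host's address space and reached with ordinary loads and stores, with no RDMA verbs and no bounce-buffer copies. Below is a minimal sketch of that model, assuming the memory node is exposed to Linux as a device-DAX file (the /dev/dax0.0 path and the 1 GiB size are hypothetical examples, not part of the team's framework):

```python
import mmap
import os

# Hypothetical device-DAX node backed by a CXL-attached memory device.
fd = os.open("/dev/dax0.0", os.O_RDWR)

# Map 1 GiB of disaggregated memory straight into this process's
# address space.
size = 1 << 30
mem = mmap.mmap(fd, size, flags=mmap.MAP_SHARED,
                prot=mmap.PROT_READ | mmap.PROT_WRITE)

# From here on, plain memory accesses reach the remote DIMMs: the CPU
# issues cacheline (64 B) reads and writes over CXL.mem, with no RDMA
# verbs and no software data copies.
mem[0:8] = b"\xab" * 8
print(mem[0:8].hex())

mem.close()
os.close(fd)
```

The contrast with RDMA is that there is no explicit transfer step at all; as in the team's CXL-aware operating system modules, the software's only job is to map the device and then stay out of the data path.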
In the team's evaluations with big data benchmarks such as a machine learning-based test, the CXL-based memory disaggregation technology also showed up to 3.7 times higher performance than prior RDMA-based memory disaggregation technologies.

"Escaping from conventional RDMA-based memory disaggregation, our CXL-based memory disaggregation framework can provide high scalability and performance for diverse datacenters and cloud service infrastructures," said Professor Jung. He went on to stress, "Our CXL-based memory disaggregation research will bring about a new paradigm for memory solutions that will lead the era of big data."

-Profile
Professor Myoungsoo Jung
Computer Architecture and Memory Systems Laboratory (CAMEL)
http://camelab.org
School of Electrical Engineering
KAIST
2022.03.16
'Fingerprint' Machine Learning Technique Identifies Different Bacteria in Seconds
A synergistic combination of surface-enhanced Raman spectroscopy and deep learning serves as an effective platform for separation-free detection of bacteria in arbitrary media

Bacterial identification can take hours and often longer, precious time when diagnosing infections and selecting appropriate treatments. There may be a quicker, more accurate process, according to researchers at KAIST. By teaching a deep learning algorithm to identify the "fingerprint" spectra of the molecular components of various bacteria, the researchers could classify various bacteria in different media with accuracies of up to 98%. Their results were made available online on January 18 in Biosensors and Bioelectronics, ahead of publication in the journal's April issue.

Bacteria-induced illnesses, those caused by direct bacterial infection or by exposure to bacterial toxins, can induce painful symptoms and even lead to death, so the rapid detection of bacteria is crucial to prevent the intake of contaminated foods and to diagnose infections from clinical samples such as urine.

"By using surface-enhanced Raman spectroscopy (SERS) analysis boosted with a newly proposed deep learning model, we demonstrated a markedly simple, fast, and effective route to classify the signals of two common bacteria and their resident media without any separation procedures," said Professor Sungho Jo from the School of Computing.

Raman spectroscopy sends light through a sample to see how it scatters. The results reveal structural information about the sample, its spectral fingerprint, allowing researchers to identify its molecules. The surface-enhanced version places sample cells on noble metal nanostructures that help amplify the sample's signals. However, it is challenging to obtain consistent and clear spectra of bacteria due to numerous overlapping peak sources, such as proteins in cell walls.

"Moreover, strong signals from the surrounding media are also enhanced and overwhelm the target signals, requiring time-consuming and tedious bacterial separation steps," said Professor Yeon Sik Jung from the Department of Materials Science and Engineering.

To parse the noisy signals, the researchers implemented an artificial intelligence method called deep learning that can hierarchically extract features of the spectral information to classify data. They specifically designed their model, named the dual-branch wide-kernel network (DualWKNet), to efficiently learn the correlations between spectral features, an ability that is critical for analyzing one-dimensional spectral data, according to Professor Jo.

"Despite interfering signals and noise from the media, which make the general shapes of different bacterial spectra and their residing media signals look similar, high classification accuracies of bacterial types and their media were achieved," Professor Jo said, explaining that DualWKNet allowed the team to identify key peaks in each class that were almost indiscernible in individual spectra, enhancing the classification accuracies. "Ultimately, with the use of DualWKNet replacing the bacteria and media separation steps, our method dramatically reduces analysis time."

The researchers plan to use their platform to study more bacteria and media types, using the information to build a training data library of various bacterial types in additional media to reduce the collection and detection times for new samples.
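The paper specifies the exact DualWKNet architecture; as a rough illustration of the general idea (two parallel 1D convolution branches with different wide kernels applied to the same spectrum), the PyTorch sketch below may help. All layer sizes, kernel widths, and the class count are illustrative assumptions, not the published model:

```python
import torch
import torch.nn as nn

class DualBranchWideKernel1DNet(nn.Module):
    """Sketch: two parallel 1D conv branches with wide kernels capture
    correlations between spectral peaks at different scales."""

    def __init__(self, n_classes):
        super().__init__()
        self.branch_a = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, padding=32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32))
        self.branch_b = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=128, padding=64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32))
        self.head = nn.Linear(2 * 16 * 32, n_classes)

    def forward(self, x):                 # x: (batch, 1, spectrum_length)
        a = self.branch_a(x).flatten(1)   # wide spectral context
        b = self.branch_b(x).flatten(1)   # even wider spectral context
        return self.head(torch.cat([a, b], dim=1))

# Usage: classify a batch of 8 spectra, each 1,000 points long,
# into 6 bacteria-plus-medium classes.
model = DualBranchWideKernel1DNet(n_classes=6)
logits = model(torch.randn(8, 1, 1000))   # -> (8, 6)
```

Wide kernels matter here because a Raman peak's meaning depends on its broad neighborhood; large receptive fields let each filter see correlated peaks that narrow kernels would treat independently.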
"We developed a meaningful universal platform for rapid bacterial detection through the collaboration of SERS and deep learning," Professor Jo said. "We hope to extend the use of our deep learning-based SERS analysis platform to detect numerous types of bacteria in additional media that are important for food or clinical analysis, such as blood."

The National R&D Program, through a National Research Foundation of Korea grant funded by the Ministry of Science and ICT, supported this research.

-Publication
Eojin Rho, Minjoon Kim, Seunghee H. Cho, Bongjae Choi, Hyungjoon Park, Hanhwi Jang, Yeon Sik Jung, and Sungho Jo, "Separation-free bacterial identification in arbitrary media via deep neural network-based SERS analysis," Biosensors and Bioelectronics, online January 18, 2022 (doi.org/10.1016/j.bios.2022.113991)

-Profile
Professor Yeon Sik Jung
Department of Materials Science and Engineering
KAIST

Professor Sungho Jo
School of Computing
KAIST
2022.03.04
Thermal Superconductor Lab Becomes the 7th Cross-Generation Collaborative Lab
The Thermal Superconductor Lab led by Senior Professor Sung Jin Kim from the Department of Mechanical Engineering will team up with Junior Professor Youngsuk Nam to develop next-generation thermal superconductors. The two-professor team was selected as the 7th Cross-Generation Collaborative Lab last week and will sustain the academic legacy of Professor Kim's three decades of research on thermal superconductors.

The team will continue to develop thin, next-generation thermal superconductors that deliver super thermal conductivity using phase transition control technology and thin-film packaging. Thin-film, next-generation thermal superconductors can be used in a variety of high-temperature flexible electronic devices. Built into semiconductor device packages, they will also be used for the low-power, high-performance thermal management of semiconductors and electronic equipment.

Professor Kim said, "I am very pleased that my research, know-how, and knowledge from over 30 years of work will continue through the Cross-Generation Collaborative Lab system with Professor Nam. We will spare no effort to advance thermal superconductor technology and play a part in KAIST leading global technology fields." Junior Professor Nam also stressed that the team is excited to continue its research on crucial technology for managing the temperatures of semiconductors and other electronic equipment.

KAIST started this innovative research system in 2018, and in 2021 it established a steering committee to select new labs based on: originality, differentiation, and excellence; academic, social, and economic impact; the urgency of cross-generation research; the senior professor's academic excellence and international reputation; and the senior professor's research vision. Selected labs receive 500 million KRW in research funding over five years.
2022.01.27
Eco-Friendly Micro-Supercapacitors Using Fallen Leaves
Green micro-supercapacitors fabricated on a single leaf could easily be applied in wearable electronics, smart homes, and IoT devices

A KAIST research team has developed graphene-inorganic-hybrid micro-supercapacitors made of fallen leaves using femtosecond laser direct writing.

The rapid development of wearable electronics requires breakthrough innovations in flexible energy storage devices, among which micro-supercapacitors have drawn a great deal of interest due to their high power density, long lifetimes, and short charging times. Recently, there has been an enormous increase in waste batteries owing to growing demand and shortened replacement cycles in consumer electronics. The safety and environmental issues involved in the collection, recycling, and processing of such waste batteries pose a number of challenges.

Forests cover about 30 percent of the Earth's surface and produce a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is completely biodegradable, which makes it an attractive sustainable resource. Nevertheless, if fallen leaves are left neglected instead of being used efficiently, they can contribute to fire hazards, air pollution, and global warming.

To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a novel technology that can create 3D porous graphene microelectrodes with high electrical conductivity by irradiating femtosecond laser pulses onto the leaves in ambient air. This one-step fabrication requires no additional materials or pre-treatment.

They showed that this technique could quickly and easily produce porous graphene electrodes at low cost, and demonstrated potential applications by fabricating graphene micro-supercapacitors to power an LED and an electronic watch. These results open up new possibilities for the mass production of flexible and green graphene-based electronic devices.

Professor Young-Jin Kim said, "Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle."

This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture, Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.

-Publication
Truong-Son Dinh Le, Yeong A. Lee, Han Ku Nam, Kyu Yeon Jang, Dongwook Yang, Byunggi Kim, Kanghoon Yim, Seung Woo Kim, Hana Yoon, and Young-Jin Kim, "Green Flexible Graphene-Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses," Advanced Functional Materials, December 5, 2021 (doi.org/10.1002/adfm.202107768)

-Profile
Professor Young-Jin Kim
Ultra-Precision Metrology and Manufacturing (UPM2) Laboratory
Department of Mechanical Engineering
KAIST
2022.01.27
AI Light-Field Camera Reads 3D Facial Expressions
Machine-learned light-field camera reads facial expressions from high-contrast, illumination-invariant 3D facial images

A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology.

Unlike a conventional camera, a light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention because it can reconstruct images in a variety of ways, including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications. However, optical crosstalk between the micro-lenses and shadows cast by external light sources in the environment has kept existing light-field cameras from providing accurate image contrast and 3D reconstruction.

The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source was shone on a face at 0-, 30-, and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team could minimize optical crosstalk while increasing the image contrast by 2.1 times. Through this technique, the team overcame the limitations of existing light-field cameras and developed a near-IR-based light-field camera (NIR-LFC) optimized for the 3D image reconstruction of facial expressions.

Using the NIR-LFC, the team acquired high-quality 3D reconstructed images of facial expressions expressing various emotions regardless of the lighting conditions of the surrounding environment. The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85%, a statistically significant improvement over 2D images. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in the 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.

Professor Ki-Hun Jeong said, "The sub-miniature light-field camera developed by the research team has the potential to become a new platform to quantitatively analyze the facial expressions and emotions of humans." To highlight the significance of this research, he added, "It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions."

This research was published in Advanced Intelligent Systems online on December 16 under the title "Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images." It was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.
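For readers unfamiliar with how a micro-lens array encodes directional information, the sketch below shows the elementary rearrangement common to light-field pipelines: pixels at the same offset under every micro-lens share a viewing direction, so gathering them across the array yields one sub-aperture view of the scene. This is a generic illustration of the principle (the sensor size and the 8-pixel lens pitch are assumptions), not the team's NIR-LFC processing code:

```python
import numpy as np

def sub_aperture_views(lenslet_img, pitch):
    """Rearrange a lenslet image into sub-aperture views.

    lenslet_img: 2D array in which each pitch x pitch pixel block sits
    behind one micro-lens. Returns an array of shape
    (pitch, pitch, H // pitch, W // pitch): one low-resolution view
    per viewing direction (u, v)."""
    H, W = lenslet_img.shape
    h, w = H // pitch, W // pitch
    img = lenslet_img[:h * pitch, :w * pitch]
    # Pixel (u, v) inside every block samples the same direction;
    # grouping those pixels across blocks forms one view.
    blocks = img.reshape(h, pitch, w, pitch)
    return blocks.transpose(1, 3, 0, 2)

# Example: a 480x640 sensor behind 8x8-pixel micro-lenses yields an
# 8x8 grid of 60x80 views; the disparity between these views is what
# enables refocusing and the 3D reconstruction described above.
views = sub_aperture_views(np.random.rand(480, 640), pitch=8)
print(views.shape)  # (8, 8, 60, 80)
```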
-Publication
Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, Kyung-Won Jang, Doheon Lee, and Ki-Hun Jeong, "Machine-learned light-field camera that reads facial expression from high-contrast and illumination invariant 3D facial images," Advanced Intelligent Systems, December 16, 2021 (doi.org/10.1002/aisy.202100182)

-Profile
Professor Ki-Hun Jeong
Biophotonic Laboratory
Department of Bio and Brain Engineering
KAIST

Professor Doheon Lee
Department of Bio and Brain Engineering
KAIST
2022.01.21
Perigee-KAIST Rocket Research Center Launches Scientific Rocket
Undergraduate startup Perigee Aerospace develops a suborbital rocket called Blue Whale 0.1

On December 29, Perigee Aerospace, an undergraduate startup, launched a test rocket with a length of 3.2 m, a diameter of 19 cm, and a weight of 51 kg, using ethanol and liquid oxygen as propellants. The launch took place off Jeju Island. It was aimed at building experience and checking the combustion of the liquid propulsion engine and the performance of the pre-set flight and trajectory, communication, and navigation devices. It was also one of the projects marking the 50th anniversary of KAIST in 2021.

However, after flying for several seconds, the rocket veered off track due to a gust of wind, which activated the rocket's automatic flight suspension system. "At the moment the rocket took off, there was a much stronger gust than expected," said Dong-Yoon Shin, CEO of Perigee. "The wind sent it flying off course and the automatic flight suspension system stopped its engine."

However, Shin was not disappointed, saying the launch, conducted in collaboration with the Perigee-KAIST Rocket Research Center, provided valuable experience. "Some people say that Blue Whale 0.1 is like a toy because of its small size. Of course, it's much smaller than the rockets I've dreamed of, but like other rockets, it has all the technology needed for launch," said Shin, who established his company in 2018 as a KAIST aerospace engineering student to develop small liquid-propellant orbital rockets. Perigee Aerospace aims to develop the world's lightest launch vehicle using high-powered engines, with the goal of leading the global market for small launch vehicles in the new space generation.

The Perigee-KAIST Rocket Research Center was founded in 2019 for the research and development of rocket propellants and has been testing the combustion of rocket engines of various sizes in its liquid propellant rocket combustion lab on the KAIST Munji Campus. The research center initiated the 50th anniversary rocket launch project in late April of last year, finished the examination of its preliminary design in late May, and secured a tentative launch site through the KAIST-Jejudo agreement in early July. The ethanol engine combustion was tested in late July, and an examination meeting on the detailed design in late August was followed by two months of static firing tests of the assembled rocket in October and November.

This was a very meaningful trial in which a domestic private enterprise founded by a college student collaborated with a university to develop and launch a technically challenging liquid propellant rocket. Shin's near-term goal is to launch a two-stage orbital rocket that uses liquid methane as fuel and weighs 1.8 tons. To secure competitiveness in the small launch vehicle market, KAIST and Perigee Aerospace have set up a joint research center to test rocket engines of various sizes and develop the world's lightest launch vehicle using a high-performance engine.

Professor Jae-Hung Han, head of the Department of Aerospace Engineering, said, "The scientific rocket system secured through the launch of this celebratory rocket will be utilized for design- and system-oriented education, and for carrying out various scientific missions." He added, "It is very rare, both domestically and globally, that a scientific rocket designed at the initiative of a department should be incorporated as part of a regular aerospace system design curriculum.
This will be an exemplary case we can boast about to the rest of the world."

Perigee Aerospace will improve the technology developed through the course of this project to build subminiature vehicles it can use to launch small satellites into low Earth orbit. Shin said, "I am happy just with the fact that we have participated in a rocket project to celebrate the 50th anniversary of KAIST, and I would like to thank the engineers at my company and the members of the KAIST Department of Aerospace Engineering." He added, "I'm looking forward to the day that we develop a space launch vehicle that can deliver satellites even higher."
2022.01.14
Team KAIST Makes Its Presence Felt in the Self-Driving Tech Industry
Team KAIST finishes 4th at the inaugural CES Autonomous Racing Competition

Team KAIST, led by Professor Hyunchul Shim and the Unmanned Systems Research Group (USRG), placed fourth in an autonomous race car competition in Las Vegas last week, making its presence felt in the self-driving automotive tech industry.

Team KAIST beat its first competitor, Auburn University, with speeds of up to 131 mph at the Autonomous Challenge at CES, held at the Las Vegas Motor Speedway. However, the team failed to advance to the final round when it lost to PoliMOVE, a team comprised of the Polytechnic University of Milan and the University of Alabama and the eventual winner of the $150,000 race.

A total of eight teams competed in the self-driving race, which was conducted as a single-elimination tournament consisting of multiple rounds of matches. Two cars took turns playing the roles of defender and attacker, and each car attempted to outpace the other until one of them was unable to complete the mission. Each team designed the algorithm to control its racecar, the Dallara-built AV-21, which can reach a speed of up to 173 mph, and make it drive safely around the track at high speeds without crashing into the other car.

The event is the CES version of the Indy Autonomous Challenge, a competition that took place for the first time in October last year to encourage university students from around the world to develop complicated software for autonomous driving and advance relevant technologies. Team KAIST placed 4th at the Indy Autonomous Challenge, which qualified it to participate in this race.

"The technical level of the CES race is much higher than last October's and we had a very tough race. We advanced to the semifinals for two consecutive races. I think our autonomous vehicle technology is proving itself to the world," said Professor Shim.

Professor Shim's research group has been working on the development of autonomous aerial and ground vehicles for the past 12 years. A self-driving car developed by the lab was certified by the South Korean government to run on public roads. The vehicle the team used cost more than 1 million USD to build. Many of the other teams had to repair their vehicles more than once due to accidents and had to spend a lot on repairs. "We are the only ones who did not have any accidents, and this is a testament to our technological prowess," said Professor Shim.

He said that securing the funding to purchase pricey parts and equipment for the racecar is always a challenge given the very tight research budget and the absence of corporate sponsorships. However, Professor Shim and his research group plan to participate in the next race in September and in the 2023 CES race. "I think we need more systematic and proactive research and support systems to earn better results, but there is nothing better than the group of passionate students who are taking part in this project with us," Shim added.
2022.01.12
AI Weather Forecasting Research Center Opens
The Kim Jaechul Graduate School of AI, in collaboration with the National Institute of Meteorological Sciences (NIMS) under the Korea Meteorological Administration, launched the AI Weather Forecasting Research Center last month. The KAIST AI Weather Forecasting Research Center, headed by Professor Seyoung Yoon, was established with funding from the AlphaWeather Development Research Project of the National Institute of Meteorological Sciences, with KAIST selected as the project facilitator. AlphaWeather is an AI system that utilizes and analyzes approximately 150,000 pieces of weather information per hour to help weather forecasters produce accurate forecasts.

The research center is composed of three research teams with the following goals: (a) develop AI technology for precipitation nowcasting, (b) develop AI technology for accelerating physical process-based numerical models, and (c) develop AI technology for supporting weather forecasters. The teams consist of 15 staff members from NIMS and 61 researchers from the Kim Jaechul Graduate School of AI at KAIST.

The research center is developing an AI algorithm for precipitation nowcasting (with up to six hours of lead time) that uses satellite images, radar reflectivity, and data collected from weather stations. It is also developing an AI algorithm for correcting biases in the prediction results from multiple numerical models. Finally, it is developing AI technology that supports weather forecasters by standardizing and automating repetitive manual processes. After verification, the results will be used by the Korea Meteorological Administration as the intelligence engine of a next-generation forecasting/special-reporting system from 2026.
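Precipitation nowcasting of this kind is commonly framed as a sequence-to-sequence image prediction problem: a stack of recent radar-reflectivity frames goes in, and frames for the next few time steps come out. The toy PyTorch sketch below shows only that framing (the architecture, frame counts, and sizes are illustrative assumptions with no relation to AlphaWeather's actual models):

```python
import torch
import torch.nn as nn

class ToyNowcastNet(nn.Module):
    """Toy sketch: map the last t_in radar frames to the next t_out."""

    def __init__(self, t_in=6, t_out=6):
        super().__init__()
        # Past frames enter as input channels; predicted future frames
        # leave as output channels, one per lead-time step.
        self.net = nn.Sequential(
            nn.Conv2d(t_in, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, t_out, kernel_size=3, padding=1))

    def forward(self, frames):        # frames: (batch, t_in, H, W)
        return self.net(frames)       # -> (batch, t_out, H, W)

# Usage: predict six future reflectivity frames from the last six.
model = ToyNowcastNet()
future = model(torch.randn(1, 6, 128, 128))  # -> (1, 6, 128, 128)
```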
2022.01.10
Face Detection in Untrained Deep Neural Networks
A KAIST team shows that primitive visual selectivity for faces can arise spontaneously in completely untrained deep neural networks

Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity for facial images can arise even in completely untrained deep neural networks. This new finding provides revelatory insights into the mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, and it has significant implications for our understanding of the origin of early brain functions before sensory experience.

The study, published in Nature Communications on December 16, demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that they show the characteristics of those observed in biological brains.

The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuronal level. Neurons that selectively respond to faces are observed in young animals of various species, which has raised intense debate over whether face-selective neurons can arise innately in the brain or whether they require visual experience.

Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wiring in untrained deep neural networks. The team showed that the character of this innate face-selectivity is comparable to that observed in face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face detection tasks. These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.

Professor Paik said, "Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning." He continued, "Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence."

This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.
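The experimental logic, measuring stimulus selectivity in a network that has never been trained, can be sketched in a few lines. The code below is a schematic stand-in (the architecture, the random tensors standing in for face and non-face image sets, and the selectivity threshold are all illustrative assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A randomly initialized feedforward network; its weights are never trained.
net = nn.Sequential(
    nn.Conv2d(3, 32, 7, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

faces = torch.randn(100, 3, 96, 96)    # stand-ins for face images
objects = torch.randn(100, 3, 96, 96)  # stand-ins for non-face images

with torch.no_grad():
    r_face = net(faces).mean(0)        # mean response of each unit
    r_obj = net(objects).mean(0)

# Per-unit face-selectivity index: positive values mean a unit responds
# more strongly to faces than to other objects.
fsi = (r_face - r_obj) / (r_face + r_obj + 1e-8)
print("face-selective units (FSI > 0.33):", int((fsi > 0.33).sum()))
```

With real face and object photographs in place of the random tensors, counting units with a high selectivity index is how one would quantify the spontaneous face-selectivity the study reports.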
-Publication
Seungdae Baek, Min Song, Jaeson Jang, Gwangsu Kim, and Se-Bum Paik, "Face detection in untrained deep neural networks," Nature Communications 12, 7328, December 16, 2021 (https://doi.org/10.1038/s41467-021-27606-9)

-Profile
Professor Se-Bum Paik
Visual System and Neural Network Laboratory
Program of Brain and Cognitive Engineering
Department of Bio and Brain Engineering
College of Engineering
KAIST

2021.12.21