Professor Jaehyouk Choi, IT Young Engineer of the Year
Professor Jaehyouk Choi from the KAIST School of Electrical Engineering won the ‘IT Young Engineer Award’ for 2020. The award was co-presented by the Institute of Electrical and Electronics Engineers (IEEE) and the Institute of Electronics Engineers of Korea (IEIE), and sponsored by the Haedong Science and Culture Foundation. Each year, the ‘IT Young Engineer Award’ recognizes a single mid-career scientist or engineer, aged 40 or younger, who has made a great contribution to academic or technological advancements in the field of IT. Professor Choi’s research topics include high-performance semiconductor circuit design for ultrahigh-speed communication systems, including 5G communications. In particular, he is widely known for his work on ultra-low-noise, high-frequency signal generation circuits, a key technology for next-generation wired and wireless communications as well as for memory systems. He has published 64 papers in SCI journals and at international conferences, and has applied for and registered 25 domestic and international patents. Professor Choi is also an active member of the Technical Program Committees of international symposia in the field of semiconductor circuits, including the International Solid-State Circuits Conference (ISSCC) and the European Solid-State Circuits Conference (ESSCIRC). Beginning this year, he also serves as a distinguished lecturer of the IEEE Solid-State Circuits Society (SSCS). (END)
Professor Jee-Hwan Ryu Receives IEEE ICRA 2020 Outstanding Reviewer Award
Professor Jee-Hwan Ryu from the Department of Civil and Environmental Engineering was selected as this year’s winner of the Outstanding Reviewer Award presented by the Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation (IEEE ICRA). The award ceremony took place on June 5 during the conference, which was held online for three months, from May 31 through August 31. The IEEE ICRA Outstanding Reviewer Award is given every year to the top reviewers who have provided constructive, high-quality paper reviews and contributed to improving the quality of the papers published in the conference proceedings. Professor Ryu was one of the four winners of this year’s award. He was selected from 9,425 candidates, a pool approximately three times larger than in previous years, and was strongly recommended by the conference’s editorial committee. (END)
Professor Jong Chul Ye Appointed as Distinguished Lecturer of IEEE EMBS
Professor Jong Chul Ye from the Department of Bio and Brain Engineering was appointed as a distinguished lecturer by the Institute of Electrical and Electronics Engineers (IEEE) Engineering in Medicine and Biology Society (EMBS). Professor Ye was invited to lecture on his leading research on artificial intelligence (AI) technology for medical image restoration. He will serve a term of two years beginning in 2020. The IEEE EMBS distinguished lecturer program is designed to educate researchers around the world on the latest trends and technology in biomedical engineering; sponsored by IEEE, it offers society members lectures on each distinguished lecturer’s research subject. Professor Ye said, “We are at a time when the importance of AI in medical imaging is increasing.” He added, “I am proud to be appointed as a distinguished lecturer of the IEEE EMBS in recognition of my contributions to this field.” (END)
Professor Junil Choi Receives Stephen O. Rice Prize
< Professor Junil Choi (second from the left) > Professor Junil Choi from the School of Electrical Engineering received the Stephen O. Rice Prize at the Global Communications Conference (GLOBECOM) hosted by the Institute of Electrical and Electronics Engineers (IEEE) in Hawaii on December 10, 2019. The Stephen O. Rice Prize is awarded to only one paper of exceptional merit every year. The IEEE Communications Society evaluates all papers published in the journal IEEE Transactions on Communications within the last three years, scoring each paper on originality, number of citations, impact, and peer evaluation. Professor Choi won the prize for his 2016 paper on one-bit analog-to-digital converters (ADCs) for multiuser massive multiple-input multiple-output (MIMO) antenna systems. In the paper, Professor Choi proposed a technology that can drastically reduce the power consumption of multiuser massive MIMO antenna systems, a core technology for 5G and future wireless communications. The paper has been cited more than 230 times in academic journals and conference papers since its publication, and multiple follow-up studies are actively ongoing. In 2015, Professor Choi received the IEEE Signal Processing Society Best Paper Award, an award of comparable standing to the Stephen O. Rice Prize. He was also selected as the winner of the 15th Haedong Young Engineering Researcher Award, presented by the Korean Institute of Communications and Information Sciences (KICS) on December 6, 2019, for his outstanding academic achievements, including 34 international journal publications and 26 US patent registrations. (END)
New IEEE Fellow, Professor Jong Chul Ye
Professor Jong Chul Ye from the Department of Bio and Brain Engineering was named a new fellow of the Institute of Electrical and Electronics Engineers (IEEE). IEEE announced the appointment on December 1 in recognition of Professor Ye’s contributions to the development of signal processing and artificial intelligence (AI) technology in the field of biomedical imaging. As the world’s largest society in the electrical and electronics field, IEEE names the top 0.1% of its members as fellows based on their research achievements. Professor Ye has published more than 100 research papers in world-leading journals in the biomedical imaging field, including those affiliated with IEEE. He also gave a keynote talk on medical AI technology at the annual conference of the International Society for Magnetic Resonance in Medicine (ISMRM). In addition, Professor Ye has been appointed to serve as the next chair of the Computational Imaging Technical Committee of the IEEE Signal Processing Society, and as the chair of the IEEE International Symposium on Biomedical Imaging (ISBI) 2020, to be held in April in Iowa, USA. Professor Ye said, “The importance of AI technology is growing in the biomedical imaging field. I feel proud that my contributions have been internationally recognized and allowed me to be named an IEEE fellow.”
New Anisotropic Conductive Film for Ultra-Fine Pitch Assembly Applications
< Professor Paik (right) and PhD Candidate Yoon > Higher-resolution display electronics increasingly need ultra-fine pitch assemblies. As a result, display driver interconnection technology has become a major challenge in upscaling display electronics. Researchers have now moved one step closer to realizing ultra-fine resolution displays with a novel thermoplastic anchoring polymer layer structure. The new structure significantly improves ultra-fine pitch interconnection by effectively suppressing the movement of conductive particles. The film is expected to be applied to various mobile devices, large-sized OLED panels, and VR devices, among others. A research team under Professor Kyung-Wook Paik in the Department of Materials Science and Engineering developed an anchoring polymer layer structure that can effectively suppress the movement of conductive particles during the bonding process of anisotropic conductive films (ACFs). The new structure significantly improves the conductive particle capture rate, addressing electrical short problems in the ultra-fine pitch assembly process. During ultra-fine pitch bonding, the conductive particles of conventional ACFs agglomerate between bumps and cause electrical short circuits. To overcome the short-circuit problem caused by the free movement of conductive particles, anchoring polymer layers with higher tensile strength, incorporating the conductive particles, were introduced into the ACFs. The team used nylon to produce a single-layer film with well-distributed, incorporated conductive particles. The higher tensile strength of nylon completely suppressed the movement of the conductive particles, raising their capture rate from 33% with conventional ACFs to 90%. The nylon films showed no short-circuit problems during Chip-on-Glass assembly.
Moreover, the new ACFs showed excellent electrical conductivity, high reliability, and low cost in ultra-fine pitch applications. Professor Paik believes this new type of ACF can further be applied not only to VR devices and 4K and 8K UHD display products, but also to large-sized OLED panels and mobile devices. His team completed a prototype of the film with the support of H&S High-Tech, a domestic company, and the Innopolis Foundation. The study, whose first author is PhD candidate Dal-Jin Yoon, is described in the October issue of IEEE TCPMT. Figure 1: Schematic of the APL structure fabrication process. Figure 2: Prototype production of APL ACFs.
Robotic Herding of a Flock of Birds Using Drones
A joint team from KAIST, Caltech, and Imperial College London presented a drone with a new algorithm for shepherding birds safely away from airports. The researchers developed an algorithm that enables a single robotic unmanned aerial vehicle (UAV) to herd a flock of birds away from a designated airspace. This novel approach allows a single autonomous quadrotor drone to herd an entire flock of birds away without breaking their formation. Professor David Hyunchul Shim at KAIST, in collaboration with Professor Soon-Jo Chung of Caltech and Professor Aditya Paranjape of Imperial College London, investigated the problem of diverting a flock of birds away from a prescribed area, such as an airport, using a robotic UAV. A novel boundary control strategy called the m-waypoint algorithm was introduced to enable a single pursuer UAV to safely herd the flock without fragmenting it. The team developed the herding algorithm on the basis of the macroscopic properties of the flocking model and the response of the flock. They tested their autonomous drone by successfully shepherding an entire flock of birds out of a designated airspace near KAIST’s campus in Daejeon, South Korea. The study was published in IEEE Transactions on Robotics. “It is quite interesting, and even awe-inspiring, to monitor how birds react to threats and collectively behave against threatening objects through the flock. We made careful observations of flock dynamics and interactions between flocks and the pursuer. This allowed us to create a new herding algorithm for ideal flight paths for incoming drones to move the flock away from a protected airspace,” said Professor Shim, who leads the Unmanned Systems Research Group at KAIST. Bird strikes can threaten the safety of airplanes and their passengers. Korean civil aircraft suffered more than 1,000 bird strikes between 2011 and 2016. In the US, 142,000 bird strikes destroyed 62 civilian airplanes, injured 279 people, and killed 25 between 1990 and 2013.
In the UK in 2016, there were 1,835 confirmed bird strikes, about eight for every 10,000 flights. Bird and other wildlife collisions with aircraft cause well over 1.2 billion USD in damages to the aviation industry worldwide annually. In one of the worst cases, Canada geese knocked out both engines of a US Airways jet in January 2009, forcing an emergency landing on the Hudson River. Airports and researchers have long worked to reduce the risk of bird strikes through a variety of methods. They scare birds away using predators such as falcons, or loud noises from small cannons or guns. Some airports try to keep birds away by ridding the surrounding areas of crops that birds eat and hide in. However, birds are smart. “I was amazed by the birds’ capability to interact with flying objects. We thought that only birds of prey have a strong sense of maneuvering with prey. But our observation of hundreds of migratory birds such as egrets and loons led us to the hypothesis that they all have similar levels of maneuvering with flying objects. It will be very interesting to collaborate with ornithologists to further study birds’ behaviors with aerial objects,” said Professor Shim. “Airports are trying to transform into smart airports. This algorithm will help improve safety for the aviation industry. In addition, it will also help control the avian influenza that plagues farms nationwide every year,” he stressed. For this study, two drones were deployed. One drone performed various types of maneuvers around the flocks as the pursuer, or herding drone, while a surveillance drone hovered at a high altitude with a camera pointing down to record the trajectories of the pursuer drone and the birds. During the experiments on egrets, the birds made frequent visits to a nearby hunting area, and a large number of egrets were found to return to their nests at sunset.
During this time, the team attempted to fly the herding drone in various directions with respect to the flock. The drone approached the flock from the side. When the birds noticed the drone, they diverted from their original paths and flew at a 45˚ angle to their right. When the birds noticed the drone while it was still far away, they adjusted their paths horizontally and made smaller changes in the vertical direction. In the second round of experiments, on loons, the drone flew almost parallel to the flight path of a flock, starting from an initial position located just off the nominal flight path. The birds’ nominal flight speed was considerably higher than that of the drone, so the interaction took place over a relatively short period of time. Professor Shim said, “I think we have just completed the first step of the research. For the next step, more systems will be developed and integrated for bird detection, ranging, and the automatic deployment of drones.” “Professor Chung at Caltech is a KAIST graduate, and his first student was Professor Paranjape, who now teaches at Imperial. It is pretty interesting that this research was carried out by a KAIST faculty member, an alumnus, and his student on three different continents,” he said. (Figure A. Case 1: the drone approaches the flock with sufficient distance to induce horizontal deviation) (Figure B. Case 2: the drone approaches the flock abruptly, causing vertical deviation)
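The paper's m-waypoint algorithm derives pursuer waypoints from the flock's dynamic model; as a rough illustration of the underlying geometric idea only (not the published algorithm), a pursuer can be placed on the far side of the flock along the line from the protected airspace, at an assumed standoff distance chosen so the flock is pushed away rather than fragmented:

```python
import math

def herding_waypoint(flock_centroid, protected_center, standoff=40.0):
    """Toy sketch of the herding geometry (not the paper's m-waypoint
    algorithm): place the pursuer on the far side of the flock, on the
    line running from the protected area through the flock centroid,
    at a standoff distance large enough not to fragment the flock."""
    fx, fy = flock_centroid
    px, py = protected_center
    dx, dy = fx - px, fy - py            # direction: protected area -> flock
    d = math.hypot(dx, dy) or 1.0        # avoid division by zero
    ux, uy = dx / d, dy / d
    # The waypoint sits 'standoff' metres beyond the flock, so the drone
    # pushes the flock further away from the protected airspace.
    return (fx + standoff * ux, fy + standoff * uy)

wp = herding_waypoint((100.0, 0.0), (0.0, 0.0), standoff=40.0)
print(wp)  # (140.0, 0.0): the drone approaches from the far side
```

In the real system the standoff distance and approach angle would have to be tuned to the flock's observed response, which is exactly what the team's field observations of egrets and loons were for.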
Professor Suh Chosen for IT Young Engineer Award
< Professor Changho Suh at the award ceremony > Professor Changho Suh from the School of Electrical Engineering received the IT Young Engineer Award on June 28. The award is hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Institute of Electronics and Information Engineers (IEIE), and funded by the Haedong Science Foundation. The IT Young Engineer Award is given to researchers in Korea under the age of 40. The selection criteria include the research’s technical practicability, its social and environmental contributions, and its creativity. Professor Suh has shown outstanding academic performance in the fields of telecommunications, distributed storage, and artificial intelligence, and has also contributed to technological commercialization. He has published 23 papers in SCI journals and ten papers at top-level international conferences, including the Conference on Neural Information Processing Systems and the International Conference on Machine Learning. His papers have been cited more than 4,100 times, and he holds 30 international patent registrations. Currently, he is developing an autonomous driving system using AI-tutor and deep learning technology. Professor Suh said, “It is my great honor to receive the IT Young Engineer Award. I will strive to continue guiding students and carrying out research in order to contribute to the fields of IT and AI.”
Recognizing Seven Different Face Emotions on a Mobile Platform
< Professor Hoi-Jun Yoo > A KAIST research team succeeded in performing face emotion recognition on a mobile platform by developing an AI semiconductor IC that processes two neural networks on a single chip. Professor Hoi-Jun Yoo and his team (primary researcher: PhD student Jinmook Lee) from the School of Electrical Engineering developed a unified deep neural network processing unit (UNPU). Deep learning is a machine learning technology based on artificial neural networks, which allows a computer to learn by itself, much like a human. The developed chip adjusts the weight precision (from 1 bit to 16 bits) of a neural network inside the semiconductor in order to trade off energy efficiency against accuracy. With a single chip, it can process a convolutional neural network (CNN) and a recurrent neural network (RNN) simultaneously. CNNs are used for categorizing and recognizing images, while RNNs are suited to time-series information, such as action recognition and speech recognition. Moreover, the chip can adjust energy efficiency and accuracy dynamically while recognizing objects. To realize mobile AI technology, a chip must process operations at high speed with low energy; otherwise, the battery runs out quickly from processing massive amounts of information at once. According to the team, this chip outperforms world-class AI chips such as Google’s TPU, with energy efficiency four times higher than the TPU. In order to demonstrate its high performance, the team installed the UNPU in a smartphone to facilitate automatic face emotion recognition, displaying a user’s emotions in real time. The research results were presented at the 2018 International Solid-State Circuits Conference (ISSCC) in San Francisco on February 13. Professor Yoo said, “We have developed a semiconductor that accelerates with low power requirements in order to realize AI on mobile platforms.
We are hoping that this technology will be applied in various areas, such as object recognition, emotion recognition, action recognition, and automatic translation. Within one year, we will commercialize this technology.”
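The precision-scaling idea behind the UNPU can be illustrated with a simple uniform weight quantizer: fewer bits mean smaller, cheaper arithmetic but larger rounding error. This is a sketch only; the chip's actual bit-serial datapath is described in the team's ISSCC paper and is not reproduced here.

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniformly quantize a weight array to the given bit width (1-16),
    illustrating the energy/accuracy trade-off: lower precision means
    less computation on-chip, at the cost of larger quantization error."""
    assert 1 <= bits <= 16
    # One sign bit plus (bits - 1) magnitude bits; 1-bit weights keep sign only.
    levels = 2 ** (bits - 1) - 1 if bits > 1 else 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

w = np.array([0.8, -0.3, 0.05, -0.6])
for bits in (2, 4, 8):
    err = np.max(np.abs(w - quantize_weights(w, bits)))
    print(bits, err)  # quantization error shrinks as precision grows
```

Dynamically picking `bits` per layer or per task is the software analogue of what the chip does in hardware when it adjusts precision while recognizing objects.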
Highly Sensitive and Fast Indoor GNSS Signal Acquisition Technology
< Professor Seung-Hyun Kong (right) and Research Fellow Tae-Sun Kim > A research team led by Professor Seung-Hyun Kong at the Cho Chun Shik Graduate School of Green Transportation, KAIST, developed high-speed, high-sensitivity Global Navigation Satellite System (GNSS) signal acquisition (search and detection) technology that can produce GNSS positioning fixes indoors. Using the team’s new technology, GNSS signals will be sufficient to identify locations anywhere in the world, both indoors and outdoors. The finding was published in the international journal IEEE Signal Processing Magazine (IEEE SPM) this September. The Global Positioning System (GPS), developed by the U.S. Department of Defense in the 1990s, is the most widely used satellite-based navigation system; GNSS is an umbrella term for conventional satellite-based navigation systems, such as GPS and the Russian GLONASS, as well as satellite-based navigation systems under development, such as the European GALILEO, the Chinese COMPASS, and other regional systems. GNSS signals are transmitted across the globe from about 20,000 km above the Earth, so a GNSS signal received by a small antenna outdoors already has weak signal power. In addition, GNSS signals penetrating building walls become extremely weak, with signal power that can be less than 1/1000th of that received outdoors. Acquiring such an extremely weak GNSS signal with conventional techniques, including frequency-domain correlation, increases the computational cost by over a million times, and the processing time for acquisition grows tremendously. Because of this, indoor positioning using GNSS signals was considered practically impossible for the last 20 years.
To resolve these limitations, the research team developed a Synthesized Doppler-frequency Hypothesis Testing (SDHT) technique that dramatically reduces the acquisition time and computational load for extremely weak indoor GNSS signals. In general, GNSS signal acquisition is a search process that identifies the instantaneous code phase and Doppler frequency of the incoming GNSS signal. However, the number of Doppler-frequency hypotheses grows in proportion to the coherent correlation time, which must be increased to detect weak signals. In practice, the coherent correlation time must be more than 1,000 times longer for extremely weak GNSS signals, so the number of Doppler-frequency hypotheses exceeds 20,000. The SDHT algorithm, in contrast, tests Doppler-frequency hypotheses indirectly, utilizing the coherent correlation results of neighboring hypotheses. With SDHT, only around 20 hypotheses are tested using conventional correlation techniques, and the remaining 19,980 are obtained with simple mathematical operations. As a result, SDHT reduces the computational cost by about 1,000 times and acquires signals 800 times faster than conventional techniques, meaning that only about 15 seconds is required to detect extremely weak GNSS signals in buildings using a personal computer. The team predicts that further studies strengthening SDHT and developing positioning systems robust to multipath in indoor environments will allow indoor GNSS fixes within several seconds inside most buildings using GNSS signals alone.
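The arithmetic behind this hypothesis explosion can be sketched as follows. The bin spacing and search range used here are illustrative assumptions (real receivers choose both as design parameters), but they reproduce the orders of magnitude quoted above:

```python
def doppler_hypotheses(search_range_hz=10_000, coherent_time_s=0.001):
    """Number of Doppler-frequency hypotheses to search: the usable bin
    width is roughly the inverse of the coherent correlation time, so a
    longer coherent time means proportionally more hypotheses.
    (Illustrative half-bin spacing assumed; real spacings vary.)"""
    bin_width_hz = 1.0 / (2 * coherent_time_s)
    return int(search_range_hz / bin_width_hz)

# Strong outdoor signal: ~1 ms coherent correlation is enough.
outdoor = doppler_hypotheses(coherent_time_s=0.001)
# Extremely weak indoor signal: ~1000x longer coherent correlation.
indoor = doppler_hypotheses(coherent_time_s=1.0)
print(outdoor, indoor)  # 20 vs 20000, matching the figures in the text
```

Under these numbers, testing only ~20 hypotheses by full correlation and synthesizing the other ~19,980 algebraically, as SDHT does, is where the roughly 1,000-fold cost reduction comes from.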
Professor Kong said, “This development made us the leader in indoor GNSS positioning technology in the world.” He continued, “We hope to commercialize indoor GNSS systems to create a new market.” The research team is currently registering a patent in Korea and applying for patents overseas, as well as planning to commercialize the technology with the help of the Institute for Startup KAIST. (Figure 1. Positioning results of the GPS indoor positioning system using SDHT technology)
Face Recognition System 'K-Eye' Presented by KAIST
Artificial intelligence (AI) is one of the key emerging technologies. Global IT companies are competitively launching their newest technologies, and competition is heating up more than ever. However, most AI technologies focus on software, and their operating speeds are low, making them a poor fit for mobile devices. Therefore, many big companies are investing in the development of semiconductor chips that run AI programs with low power requirements but at high speeds. A research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. The system was made in collaboration with a start-up company, UX Factory Co. The K-Eye series comes in two types: a wearable type and a dongle type. The wearable device can be used with a smartphone via Bluetooth, and it can operate for more than 24 hours on its internal battery. Users wearing K-Eye around their necks can conveniently check information about people through a smartphone or smart watch, which connects to K-Eye and gives users access to a database on their smart devices. A smartphone with K-EyeQ, the dongle-type device, can recognize and share information about users at any time. When it recognizes that an authorized user is looking at its screen, the smartphone turns on automatically without a passcode, fingerprint, or iris authentication. Since it can distinguish whether an input face comes from a saved photograph or a real person, the smartphone cannot be tricked with the user’s photograph. The K-Eye series carries other distinct features: it first detects a face and then recognizes it, and it can maintain an “always-on” status with a power consumption of less than 1 mW. To accomplish this, the research team proposed two key technologies: an image sensor with “always-on” face detection, and the CNNP face recognition chip.
The first key technology, the “always-on” image sensor, can determine whether there is a face in its camera range. It captures frames and sets the device to operate only when a face is present, significantly reducing standby power. The face detection sensor combines analog and digital processing to reduce power consumption: the analog processor, combined with the CMOS image sensor array, distinguishes the background from areas likely to include a face, and the digital processor then detects faces only in the selected areas. Hence, the sensor is efficient in terms of frame capture, face detection processing, and memory usage. The second key technology, CNNP, achieves remarkably low power consumption by optimizing a convolutional neural network (CNN) at the levels of circuitry, architecture, and algorithms. First, the on-chip memory integrated in CNNP is specially designed so that data can be read in a vertical direction as well as a horizontal one. Second, the chip has immense computational power, with 1,024 multipliers and accumulators operating in parallel, and it can transfer temporary results directly between units without accessing external memory or an on-chip communication network. Third, the two-dimensional filter convolutions in the CNN algorithm are approximated by two sequential one-dimensional filter calculations to achieve higher speed and lower power consumption. With these new technologies, CNNP achieved 97% accuracy while consuming only 1/5000th the power of a GPU. Face recognition can be performed with only 0.62 mW of power consumption, and the chip can exceed GPU performance when allowed more power. The chips were developed by Kyeongryeol Bong, a PhD student under Professor Yoo, and presented at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in February.
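The third optimization, replacing a 2-D convolution with two sequential 1-D passes, can be illustrated in NumPy. For a rank-1 (separable) kernel, the decomposition via the SVD is exact; the Sobel-like kernel and random image below are arbitrary examples, not taken from the chip:

```python
import numpy as np

def conv2d(img, k):
    """Naive 'valid' 2-D correlation, for illustration only."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernel = np.outer([1, 2, 1], [1, 0, -1]) / 4.0   # exactly rank-1 (separable)

# Factor the 2-D kernel into a column filter and a row filter via SVD.
U, s, Vt = np.linalg.svd(kernel)
col = U[:, 0] * np.sqrt(s[0])        # vertical 1-D filter
row = Vt[0, :] * np.sqrt(s[0])       # horizontal 1-D filter

direct = conv2d(img, kernel)                               # one 2-D pass
twopass = conv2d(conv2d(img, col[:, None]), row[None, :])  # two 1-D passes
print(np.allclose(direct, twopass))  # True
```

For a k×k kernel this cuts the multiplies per output pixel from k² to 2k, which is the kind of saving that lets CNNP trade a small approximation error (for kernels that are not exactly separable) for speed and power.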
CNNP, which has the lowest power consumption reported in the world, has attracted a great deal of attention and led to the development of the present K-Eye series for face recognition. Professor Yoo said, “AI processors will lead the era of the Fourth Industrial Revolution. With the development of this AI chip, we expect Korea to take the lead in global AI technology.” The research team and UX Factory Co. are preparing to commercialize the K-Eye series by the end of this year. According to market researcher IDC, the AI industry will grow from $127 billion last year to $165 billion this year. (Photo caption: Schematic diagram of the K-Eye system)
Crowdsourcing-Based Global Indoor Positioning System
A research team led by Professor Dong-Soo Han of the Intelligent Service Lab in the School of Computing at KAIST has developed a system that provides global indoor localization using Wi-Fi signals. The technology uses numerous smartphones to collect Wi-Fi fingerprints of location data and label them automatically, significantly reducing the cost of constructing an indoor localization system while maintaining high accuracy. The method can be used in any building in the world, provided the floor plan is available and there are Wi-Fi fingerprints to collect. To accurately collect and label the location information of the Wi-Fi fingerprints, the research team analyzed how indoor space is used. This led to technology that classifies indoor spaces into places used for stationary tasks (resting spaces) and spaces used to reach those places (transient spaces), and applies separate algorithms to optimally and automatically collect location-labelling data. Years ago, the team implemented a way to automatically label resting-space locations from signals collected in various contexts such as homes, shops, and offices, via the users’ home or office address information. The latest method allows the automatic labelling of transient-space locations such as hallways, lobbies, and stairs using unsupervised learning, without any additional location information. Testing in KAIST’s N5 building and on the 7th floor of the N1 building demonstrated that the technology achieves accuracy of three to four meters given enough training data, a level comparable to technology using manually labeled location information. Google, Microsoft, and other multinational corporations have collected tens of thousands of floor plans for their indoor localization projects. These firms also attempted indoor radio map construction, but it proved more difficult. As a result, existing indoor localization services were often plagued by inaccuracies.
In Korea, COEX, Lotte World Tower, and other landmarks provide comparatively accurate indoor localization, but most buildings lack radio maps, preventing indoor localization services. Professor Han said, “This technology allows the easy deployment of highly accurate indoor localization systems in any building in the world. In the near future, most indoor spaces will be able to provide localization services, just like outdoor spaces.” He further noted that smartphone-collected Wi-Fi fingerprints have gone unutilized and are often discarded, but should now be treated as invaluable resources that create a new big-data field of Wi-Fi fingerprints. This new indoor navigation technology is likely to be valuable to Google, Apple, and other global firms providing indoor positioning services worldwide, as well as to domestic firms providing positioning services. Professor Han added that the new global indoor localization deployment technology will be added to KAILOS, KAIST’s indoor localization system. KAILOS was released in 2014 as KAIST’s open platform for indoor localization services, allowing anyone in the world to add floor plans to KAILOS and collect a building’s Wi-Fi fingerprints for a universal indoor localization service. As localization accuracy improves in indoor environments, despite the absence of GPS signals, applications such as location-based SNS, location-based IoT, and location-based O2O services are expected to take off, leading to various improvements in convenience and safety. Integrated indoor-outdoor navigation services are also on the horizon, fusing vehicular navigation technology with indoor navigation. Professor Han’s research was published in IEEE Transactions on Mobile Computing (TMC) in November 2016. For more, please visit http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7349230 and http://ieeexplore.ieee.org/document/7805133/
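As a generic illustration of how Wi-Fi fingerprint localization works (a sketch of the common k-nearest-neighbor approach, not the KAILOS algorithm; the access points, RSSI values, and positions below are all hypothetical):

```python
import numpy as np

def locate(scan, fingerprints, positions, k=3):
    """Estimate a position by averaging the known positions of the k
    stored fingerprints whose RSSI vectors are closest to the new scan."""
    d = np.linalg.norm(fingerprints - scan, axis=1)   # RSSI-space distance
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

# Hypothetical radio map: RSSI (dBm) from 3 access points, measured at
# 4 labelled positions (x, y) in metres.
fps = np.array([[-40.0, -70.0, -80.0],
                [-45.0, -65.0, -85.0],
                [-80.0, -45.0, -50.0],
                [-85.0, -40.0, -55.0]])
pos = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 0.0], [10.0, 2.0]])

est = locate(np.array([-42.0, -68.0, -82.0]), fps, pos, k=2)
print(est)  # near (0, 1): the scan resembles the first two fingerprints
```

The crowdsourcing contribution described above is about obtaining the `pos` labels automatically from everyday smartphone scans, rather than by costly manual surveying of each building.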
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.