KAIST NEWS
deep-learning
Research Day Highlights the Most Impactful Technologies of the Year
Technology Converting Full HD Images into UHD with Four Times the Resolution via Deep Learning Cited as the Research of the Year

The technology that converts a full HD image into a UHD image with four times the resolution in real time via AI deep learning was recognized as the Research of the Year. Professor Munchurl Kim from the School of Electrical Engineering, who developed the technology, won the Research of the Year Grand Prize during the 2021 KAIST Research Day ceremony on May 25. Professor Kim was lauded for his creative research on machine learning and deep learning-based image processing.

KAIST's Research Day recognizes the most notable research outcomes of the year while creating opportunities for researchers to immerse themselves in interdisciplinary research projects with their peers. The ceremony was broadcast online due to COVID-19 and announced the Ten R&D Achievements of the Year that are expected to make a significant impact.

To celebrate the award, Professor Kim gave a lecture on "Computational Imaging through Deep Learning for the Acquisition of High-Quality Images." Noting that advances in artificial intelligence deliver superior performance when converting low-quality videos into higher quality ones, he introduced some of the AI technologies currently being applied to image restoration and quality improvement (a generic sketch of this kind of deep learning-based upscaling appears at the end of this article).

Professors Eui-Cheol Shin from the Graduate School of Medical Science and Engineering and In-Cheol Park from the School of Electrical Engineering each received Research Awards, and Professor Junyong Noh from the Graduate School of Culture Technology was selected for the Innovation Award. Professors Dong Ki Yoon from the Department of Chemistry and Hyungki Kim from the Department of Mechanical Engineering were awarded the Interdisciplinary Award as a team for their joint research.

Meanwhile, among KAIST's ten most notable R&D achievements, those from the field of natural and biological sciences included research on rare earth element-platinum nanoparticle catalysts by Professor Ryong Ryoo from the Department of Chemistry, real-time observations of the positional changes of all the atoms in a molecule by Professor Hyotcherl Ihee from the Department of Chemistry, and an investigation by Professor Won-Suk Chung from the Department of Biological Sciences of how memories are retained after synapses are eliminated by astrocytes.

Awardees from the engineering field were a wearable robot for paraplegics with world-leading functionality and walking speed by Professor Kyoungchul Kong from the Department of Mechanical Engineering, fair machine learning by Professor Changho Suh from the School of Electrical Engineering, and a generative adversarial networks processing unit (GANPU), an AI semiconductor that can learn on-device, even on mobile devices, by processing multiple deep neural networks, by Professor Hoi-Jun Yoo from the School of Electrical Engineering.
Others selected among the ten research achievements were the development of epigenetic reprogramming technology in tumors by Professor Pilnam Kim from the Department of Bio and Brain Engineering, the development of an original technology for reversing cell aging by Professor Kwang-Hyun Cho from the Department of Bio and Brain Engineering, a heterogeneous metal element catalyst for atmospheric purification by Professor Hyunjoo Lee from the Department of Chemical and Biomolecular Engineering, and the Mobile Clinic Module (MCM), a negative-pressure ward for epidemic hospitals by Professor Taek-jin Nam from the Department of Industrial Design (reported in The Wall Street Journal).
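The article does not detail Professor Kim's network, but the general idea behind this kind of deep-learning upscaling can be illustrated with a small sub-pixel convolution (pixel-shuffle) model that turns a Full HD frame (1920x1080) into a UHD frame (3840x2160), i.e., four times the pixel count. The following is a minimal, hypothetical PyTorch sketch; the layer sizes, names, and training details are assumptions and not taken from the KAIST research.

# A minimal, hypothetical sketch of deep-learning super-resolution (sub-pixel convolution),
# illustrating the general Full HD-to-UHD upscaling idea. This is NOT the actual KAIST model;
# layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class UpscaleNet(nn.Module):
    def __init__(self, scale: int = 2, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            # Predict scale^2 * channels feature maps, then rearrange them into a larger image.
            nn.Conv2d(32, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

if __name__ == "__main__":
    model = UpscaleNet(scale=2)
    fhd_frame = torch.rand(1, 3, 1080, 1920)   # Full HD input (1920x1080)
    uhd_frame = model(fhd_frame)                # -> (1, 3, 2160, 3840): UHD, four times the pixels
    print(uhd_frame.shape)

Doubling the height and width in this way yields four times as many pixels, which is the Full HD-to-UHD conversion the article refers to; real-time performance in practice depends on a far more carefully engineered network and hardware.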
2021.05.31
Deep Learning-Based Cough Recognition Model Helps Detect the Location of Coughing Sounds in Real Time
The Center for Noise and Vibration Control at KAIST announced that its coughing detection camera recognizes where coughing happens and visualizes the locations. The resulting cough recognition camera can track and record the person who coughed, their location, and the number of coughs in real time.

Professor Yong-Hwa Park from the Department of Mechanical Engineering developed a deep learning-based cough recognition model that classifies coughing sounds in real time. The cough event classification model is combined with a sound camera that visualizes the cough event and indicates its location in the video image. The research team reported a best test accuracy of 87.4%.

Professor Park said the system will be useful as medical equipment during epidemics in public places such as schools, offices, and restaurants, and for constantly monitoring patients' conditions in hospital rooms. Fever and coughing are the most relevant respiratory disease symptoms; of these, fever can already be recognized remotely using thermal cameras. The new technology is expected to be very helpful for detecting epidemic transmission in a non-contact way.

To develop the cough recognition model, supervised learning was conducted with a convolutional neural network (CNN). The model performs binary classification on a one-second sound profile feature, labeling it as either a cough event or something else. For training and evaluation, datasets were collected from AudioSet, DEMAND, ETSI, and TIMIT. Coughing and other sounds were extracted from AudioSet, and the remaining datasets were used as background noise for data augmentation so that the model could generalize to the various background noises found in public places. The dataset was augmented by mixing coughing and other sounds from AudioSet with background noise at ratios of 0.15 to 0.75, and the overall volume was then scaled by factors of 0.25 to 1.0 to generalize the model across various distances. The training and evaluation datasets were constructed by splitting the augmented dataset 9:1, and the test dataset was recorded separately in a real office environment.

To optimize the network, training was conducted with combinations of five acoustic features, including the spectrogram, Mel-scaled spectrogram, and Mel-frequency cepstral coefficients, and seven optimizers, and the performance of each combination was compared on the test dataset. The best test accuracy of 87.4% was achieved with the Mel-scaled spectrogram as the acoustic feature and ASGD as the optimizer.

The trained cough recognition model was then combined with a sound camera composed of a microphone array and a camera module. A beamforming process is applied to the collected acoustic data to determine the direction of the incoming sound source, and the integrated cough recognition model determines whether the sound is a cough. If it is, the location is visualized as a contour overlay with a 'cough' label at the position of the sound source in the video image.

A pilot test of the cough recognition camera in an office environment showed that it successfully distinguishes cough events from other events even in a noisy environment. It can also track the location of the person who coughed and count the number of coughs in real time.
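As described above, the model takes a one-second acoustic feature such as a Mel-scaled spectrogram and makes a binary cough/non-cough decision, with ASGD used as the optimizer. The following is a minimal sketch of that pipeline in PyTorch and torchaudio; the sample rate, mel-filter count, and layer sizes are illustrative assumptions and do not come from the published model.

# Illustrative sketch of a cough/non-cough CNN on Mel-scaled spectrograms, trained with ASGD.
# Sample rate, mel-filter count, and layer sizes are assumptions, not values from the article.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16000  # assumed; the article does not state the sample rate
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class CoughCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # two classes: cough / not cough

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = mel(waveform).unsqueeze(1)   # (batch, 1, n_mels, time)
        return self.classifier(self.features(spec).flatten(1))

model = CoughCNN()
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-3)  # ASGD, as in the article
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of one-second clips.
clips = torch.rand(8, SAMPLE_RATE)          # 8 clips of 1 second each
labels = torch.randint(0, 2, (8,))          # 1 = cough, 0 = other
loss = criterion(model(clips), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the actual study, training clips were built by mixing cough and non-cough sounds with background noise at ratios of 0.15 to 0.75 and rescaling the volume by 0.25 to 1.0 times, which the dummy batch above does not reproduce.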
The performance will be improved further with additional training data collected in other real environments such as hospitals and classrooms.

Professor Park said, "In a pandemic situation like the one we are experiencing with COVID-19, a cough detection camera can contribute to the prevention and early detection of epidemics in public places. Especially when applied to a hospital room, the patient's condition can be tracked 24 hours a day, supporting more accurate diagnoses while reducing the effort of the medical staff."

This study was conducted in collaboration with SM Instruments Inc.

Profile: Yong-Hwa Park, Ph.D.
Associate Professor
yhpark@kaist.ac.kr
http://human.kaist.ac.kr/
Human-Machine Interaction Laboratory (HuMaN Lab.)
Department of Mechanical Engineering (ME)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr/en/
Daejeon 34141, Korea

Profile: Gyeong Tae Lee
PhD Candidate
hansaram@kaist.ac.kr
HuMaN Lab., ME, KAIST

Profile: Seong Hu Kim
PhD Candidate
tjdgnkim@kaist.ac.kr
HuMaN Lab., ME, KAIST

Profile: Hyeonuk Nam
PhD Candidate
frednam@kaist.ac.kr
HuMaN Lab., ME, KAIST

Profile: Young-Key Kim
CEO
sales@smins.co.kr
http://en.smins.co.kr/
SM Instruments Inc.
Daejeon 34109, Korea

(END)
2020.08.13
A Deep-Learned E-Skin Decodes Complex Human Motion
A deep learning-powered electronic skin with a single strain sensor can capture human motion from a distance. The single strain sensor placed on the wrist decodes complex five-finger motions in real time with a virtual 3D hand that mirrors the original motions. The deep neural network, boosted by rapid situation learning (RSL), ensures stable operation regardless of the sensor's position on the surface of the skin.

Conventional approaches require sensor networks that cover the entire curvilinear surface of the target area. Unlike conventional wafer-based fabrication, this laser fabrication provides a new sensing paradigm for motion tracking.

The research team, led by Professor Sungho Jo from the School of Computing, collaborated with Professor Seunghwan Ko from Seoul National University to design this new measuring system, which extracts signals corresponding to multiple finger motions by generating cracks in metal nanoparticle films using laser technology. The sensor patch was then attached to a user's wrist to detect the movement of the fingers.

The concept of this research started from the idea that pinpointing a single area would be more efficient for identifying movements than affixing sensors to every joint and muscle. For this targeting strategy to work, the system needs to accurately capture the signals from different areas at the point where they all converge, and then decouple the information entangled in the converged signals. To maximize usability and mobility for users, the research team used a single-channel sensor to generate the signals corresponding to complex hand motions.

The rapid situation learning (RSL) system collects data from arbitrary parts of the wrist and automatically trains the model in a real-time demonstration with a virtual 3D hand that mirrors the original motions. To enhance the sensitivity of the sensor, the researchers used laser-induced nanoscale cracking.

This sensory system can track the motion of the entire body with a small sensory network and facilitates the indirect remote measurement of human motions, which is applicable to wearable VR/AR systems.

The research team said they focused on two tasks while developing the sensor: first, encoding the sensor signal patterns into a latent space that encapsulates the temporal sensor behavior, and second, mapping the latent vectors to finger motion metric spaces.

Professor Jo said, "Our system is expandable to other body parts. We already confirmed that the sensor is also capable of extracting gait motions from the pelvis. This technology is expected to provide a turning point in health monitoring, motion tracking, and soft robotics."

This study was featured in Nature Communications.

Publication: Kim, K. K., et al. (2020) "A deep-learned skin sensor decoding the epicentral human motions." Nature Communications 11, 2149. https://doi.org/10.1038/s41467-020-16040-y

Link to download the full-text paper: https://www.nature.com/articles/s41467-020-16040-y.pdf

Profile: Professor Sungho Jo
shjo@kaist.ac.kr
http://nmail.kaist.ac.kr
Neuro-Machine Augmented Intelligence Lab
School of Computing
College of Engineering
KAIST
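The two-stage decoding described above, encoding the single-channel strain signal into a latent space that captures its temporal behavior and then mapping latent vectors to a finger motion metric space, can be sketched roughly as follows. The GRU encoder, layer sizes, window length, and joint count below are assumptions for illustration only; the paper's actual RSL architecture is not described in this article.

# Rough, assumed sketch of the two-stage decoding idea: a temporal encoder that turns the
# single-channel wrist strain signal into a latent vector, and a regressor that maps the
# latent vector to finger-joint angles. Not the actual RSL network from the paper.
import torch
import torch.nn as nn

class StrainDecoder(nn.Module):
    def __init__(self, latent_dim: int = 32, n_joints: int = 15):
        super().__init__()
        # Stage 1: encode the temporal sensor behavior into a latent space.
        self.encoder = nn.GRU(input_size=1, hidden_size=latent_dim, batch_first=True)
        # Stage 2: map latent vectors to a finger motion metric space (joint angles).
        self.regressor = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_joints),
        )

    def forward(self, strain: torch.Tensor) -> torch.Tensor:
        # strain: (batch, time, 1) single-channel signal from the wrist sensor
        _, h = self.encoder(strain)          # h: (1, batch, latent_dim)
        return self.regressor(h.squeeze(0))  # (batch, n_joints) predicted joint angles

if __name__ == "__main__":
    model = StrainDecoder()
    window = torch.rand(4, 200, 1)   # 4 windows of 200 samples each (assumed window length)
    print(model(window).shape)       # torch.Size([4, 15])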
2020.06.10