KAIST NEWS
KAIST Leads AI-Based Analysis of Drug-Drug Interactions Involving Paxlovid
KAIST (President Kwang Hyung Lee) announced on March 16 that a research team led by Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering has published a paper analyzing, with an advanced AI-based drug interaction prediction technology, the interactions between the ingredients of the COVID-19 treatment Paxlovid™ and other prescription drugs. The paper appeared in the online edition of the Proceedings of the National Academy of Sciences (PNAS), an internationally renowned academic journal, on March 13.

* Paper title: Computational prediction of interactions between Paxlovid and prescription drugs (authored by Yeji Kim (KAIST, co-first author), Jae Yong Ryu (Duksung Women's University, co-first author), Hyun Uk Kim (KAIST, co-first author), and Sang Yup Lee (KAIST, corresponding author))

In this study, the research team developed DeepDDI2, an advanced version of DeepDDI, the AI-based drug interaction prediction model they first developed in 2018. DeepDDI2 can compute and process a total of 113 drug-drug interaction (DDI) types, up from the 86 DDI types covered by the original DeepDDI.

The team used DeepDDI2 to predict possible interactions between the ingredients of Paxlovid*, ritonavir and nirmatrelvir, and other prescription drugs. Among COVID-19 patients, high-risk patients with chronic diseases such as high blood pressure and diabetes are likely to be taking other medications, yet the drug-drug interactions and adverse drug reactions involving Paxlovid had not been sufficiently analyzed. The study was pursued out of concern that continued use of the drug without such analysis could lead to serious, unwanted complications.

* Paxlovid: a COVID-19 treatment developed by the American pharmaceutical company Pfizer; it received emergency use authorization (EUA) from the US Food and Drug Administration (FDA) in December 2021.

Using DeepDDI2, the team predicted how Paxlovid's components, ritonavir and nirmatrelvir, would interact with 2,248 prescription drugs. Ritonavir was predicted to interact with 1,403 of the drugs and nirmatrelvir with 673. Based on these predictions, the team proposed, for the prescription drugs at high risk of adverse drug events (ADEs), alternative drugs with the same mechanism of action but lower interaction potential: 124 such alternatives for ritonavir and 239 for nirmatrelvir were identified.

This achievement shows that deep learning can accurately predict drug-drug interactions (DDIs), and it is expected to play an important role in digital healthcare, precision medicine, and the pharmaceutical industry by providing useful information for developing new drugs and writing prescriptions.

Distinguished Professor Sang Yup Lee said, "The results of this study are meaningful in urgent situations like the COVID-19 pandemic, when we must resort to drugs developed in a hurry: it is now possible to identify adverse drug reactions caused by drug-drug interactions very quickly and to take the necessary actions."

This research was supported by the KAIST New-Deal Project for COVID-19 Science and Technology and the Bio·Medical Technology Development Project of the Ministry of Science and ICT.
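The article describes the screening workflow rather than code, but its shape, pairing each Paxlovid ingredient with a library of prescription drugs, predicting a DDI type for each pair, and proposing a same-mechanism alternative with a benign predicted interaction, can be sketched as follows. Everything here is an illustrative stand-in: `toy_predict_ddi` replaces the actual DeepDDI2 network, the truncated SMILES strings are placeholders, and the statin example merely reflects the well-known ritonavir-simvastatin interaction.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Drug:
    name: str
    smiles: str      # structural input the model consumes (truncated placeholder)
    mechanism: str   # mechanism-of-action class, used to pick alternatives

def toy_predict_ddi(a: Drug, b: Drug) -> str:
    """Toy stand-in for DeepDDI2, which maps a drug pair (as structures)
    to one of 113 human-readable DDI types."""
    table = {
        ("ritonavir", "simvastatin"): "increased_serum_concentration",
        ("ritonavir", "pravastatin"): "no_significant_interaction",
    }
    return table.get((a.name, b.name), "no_significant_interaction")

ADVERSE = {"increased_serum_concentration"}  # DDI types treated as adverse here

def screen(ingredient: Drug, formulary: list,
           predict: Callable[[Drug, Drug], str]) -> dict:
    """For each formulary drug predicted to interact adversely with the
    ingredient, propose a same-mechanism alternative whose predicted
    interaction is benign (None if no such alternative exists)."""
    out = {}
    for drug in formulary:
        if predict(ingredient, drug) in ADVERSE:
            out[drug.name] = next(
                (alt for alt in formulary
                 if alt.mechanism == drug.mechanism and alt is not drug
                 and predict(ingredient, alt) not in ADVERSE),
                None,
            )
    return out

ritonavir = Drug("ritonavir", "CC(C)...", "protease inhibitor")
formulary = [
    Drug("simvastatin", "CCC(C)...", "statin"),
    Drug("pravastatin", "CC(O)...", "statin"),
]
print(screen(ritonavir, formulary, toy_predict_ddi))
# simvastatin is flagged and pravastatin proposed as the safer statin
```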
Figure 1. Results of drug interaction prediction between Paxlovid ingredients and representative approved drugs using DeepDDI2
2023.03.16
Scientists Rewrite the FDA-Recommended Equation to Improve Estimation of Drug-Drug Interactions
Drugs absorbed into the body are metabolized, and thereby removed, by enzymes in several organs, such as the liver. How quickly a drug is cleared from the system can be affected by other drugs taken with it, because an added substance can increase the production of the metabolizing enzymes. This can dramatically lower a drug's concentration, reducing its efficacy and often leaving it with no effect at all. Accurately predicting the clearance rate in the presence of drug-drug interactions* is therefore critical in drug prescription and in the development of new drugs, in order to ensure efficacy and avoid unwanted side effects.

*Drug-drug interaction: in terms of metabolism, the phenomenon in which one drug changes the metabolism of another drug taken with it, promoting or inhibiting its excretion from the body. As a result, it can increase a drug's toxicity or cause a loss of efficacy.

Since it is practically impossible to evaluate all interactions between a new drug candidate and every marketed drug during development, the FDA recommends evaluating drug interactions indirectly using a formula given in its guidance, first published in 1997 and revised in January 2020, to minimize the side effects of taking more than one drug at once. The formula relies on the 110-year-old Michaelis-Menten (MM) model, which rests on a broad and, in this setting, unjustified assumption about the enzymes that metabolize the drug. The MM equation is one of the most widely known equations in biochemistry, used in more than 220,000 published papers, but it is accurate only when the concentration of the metabolizing enzyme is nearly zero, which makes the accuracy of the FDA formula highly unsatisfactory: only 38 percent of its predictions fell within a two-fold error.

"To make up for the gap, researchers resorted to plugging scientifically unjustified constants into the equation," Professor Jung-woo Chae of the Chungnam National University College of Pharmacy said. "It is comparable to introducing epicycles to explain the motion of the planets in the days of the Ptolemaic theory, simply because that was 'THE' theory back then."

< (From left) Ph.D. student Yun Min Song (KAIST, co-first author), Professor Sang Kyum Kim (Chungnam National University, co-corresponding author), Jae Kyoung Kim, CI (KAIST, co-corresponding author), Professor Jung-woo Chae (Chungnam National University, co-corresponding author), and Ph.D. students Quyen Thi Tran and Ngoc-Anh Thi Vu (Chungnam National University, co-first authors) >

A joint research team of mathematicians from the Biomedical Mathematics Group within the Institute for Basic Science (IBS) and the Korea Advanced Institute of Science and Technology (KAIST), together with pharmacological scientists from Chungnam National University, reported that they identified the major causes of the FDA-recommended equation's inaccuracy and presented a solution. When estimating the gut bioavailability (Fg), the key parameter of the equation, the fraction absorbed from the gut lumen (Fa) is usually assumed to be 1. Many experiments have shown, however, that Fa is less than 1, as one obviously cannot expect every orally taken drug to be completely absorbed by the intestines.
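For reference, the standard relations behind the quantities named above can be written out. The first is the Michaelis-Menten rate law the guidance builds on; the second is the textbook decomposition of oral bioavailability into absorbed, gut, and hepatic fractions; the third is a commonly used approximation for the absorbed fraction, included here as an assumption about the kind of "estimated Fa" the team describes below, not as the paper's exact formula.

```latex
% Michaelis-Menten rate law: accurate only when the metabolizing-enzyme
% concentration is negligible.
v = \frac{V_{\max}\,[S]}{K_{M} + [S]}

% Oral bioavailability decomposed into the fraction absorbed from the gut
% lumen (Fa), the gut fraction (Fg), and the hepatic fraction (Fh);
% the FDA formula in effect sets Fa = 1.
F = F_{a}\, F_{g}\, F_{h}

% A common estimate of Fa from effective permeability P_eff,
% small-intestinal transit time t_si, and intestinal radius R.
F_{a} \approx 1 - \exp\!\left(-\frac{2\,P_{\mathrm{eff}}\,t_{\mathrm{si}}}{R}\right)
```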
To solve this problem, the research team computed an estimated Fa based on factors such as the drug's intestinal transit time, the intestinal radius, and permeability values, and used it to recalculate Fg. Departing from the MM equation, the team also used an alternative model they had derived in a previous study in 2020, which predicts the drug metabolism rate accurately regardless of the enzyme concentration. With these two changes combined, the modified equation with the recalculated Fg was dramatically more accurate: where the existing FDA formula predicted drug interactions within a two-fold margin of error 38% of the time, the revised formula reached 80%.

"Such a drastic improvement in the accuracy of drug-drug interaction prediction is expected to contribute greatly to the success rate of new drug development and to drug efficacy in clinical trials. As the results of this study were published in one of the top clinical pharmacology journals, we expect the FDA guidance to be revised accordingly," said Professor Sang Kyum Kim of the Chungnam National University College of Pharmacy.

The study also highlights the importance of collaboration between research groups in vastly different disciplines in a field as dynamic as drug interactions. "Thanks to this collaboration between mathematics and pharmacy, we were able to rectify a formula that had long been accepted as the right answer, and move a step closer to healthier lives for mankind," said Professor Jae Kyoung Kim. He added, "I hope to see a 'K-formula' enter the US FDA guidance one day."

The results of this study were published in the online edition of Clinical Pharmacology and Therapeutics (IF 7.051), an authoritative journal in the field of clinical pharmacology, on December 15, 2022 (Korean time).

Paper title: Beyond the Michaelis-Menten: Accurate Prediction of Drug Interactions through Cytochrome P450 3A4 Induction (doi: 10.1002/cpt.2824)

< Figure 1. The formula proposed by the FDA guidance for predicting drug-drug interactions (top) and the formula newly derived by the researchers (bottom). AUCR, the ratio of the substrate's area under the plasma concentration-time curve, represents the change in drug concentration due to drug interactions. The research team more than doubled the accuracy of drug interaction prediction compared with the existing formula. >

< Figure 2. The existing FDA formula (gray dots) tends to underestimate the extent of drug-drug interactions relative to the measured values, while the newly derived equation (red dots) keeps its predictions within a two-fold error range (0.5 to 2 times the measured value) at more than twice the rate of the existing formula. The solid line marks predictions that match the measured values; the dotted lines mark predictions with 0.5- to 2-fold error. >

For further information or to request media assistance, please contact Jae Kyoung Kim at the Biomedical Mathematics Group, Institute for Basic Science (IBS) (jaekkim@ibs.re.kr) or William I. Suh at the IBS Communications Team (willisuh@ibs.re.kr).

- About the Institute for Basic Science (IBS)
IBS was founded in 2011 by the government of the Republic of Korea with the sole purpose of driving forward the development of basic science in South Korea.
As of January 2023, IBS has four research institutes and 33 research centers: eleven in physics, three in mathematics, five in chemistry, nine in life science, two in earth science, and three in interdisciplinary fields.
2023.01.18
Professor Juho Kim’s Team Wins Best Paper Award at ACM CHI 2022
The research team led by Professor Juho Kim from the KAIST School of Computing won a Best Paper Award and an Honorable Mention Award at the ACM Conference on Human Factors in Computing Systems (ACM CHI), held April 30 through May 6. ACM CHI is the world's most recognized conference in the field of human-computer interaction (HCI) and ranks first among all HCI-related journals and conferences by Google Scholar's h5-index. Best Paper Awards go to the top one percent of papers accepted by the conference, and Honorable Mention Awards to the top five percent.

Professor Kim presented a total of seven papers at ACM CHI 2022, and tied for the largest number of papers. A total of 19 papers were affiliated with KAIST, placing it fifth among all participating institutions and demonstrating KAIST's competence in research.

One of Professor Kim's research teams, composed of Jeongyeon Kim (first author, MS graduate) from the School of Computing, MS candidate Yubin Choi from the School of Electrical Engineering, and Dr. Meng Xia (formerly a post-doctoral associate in the School of Computing, currently a post-doctoral associate at Carnegie Mellon University), received a Best Paper Award for their paper "Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities". The study analyzed the difficulties experienced by learners watching video-based educational content in mobile environments and suggested guidelines for solutions.

Analyzing 134 survey responses and 21 interviews, the team found that text that is too small or overcrowded is the main factor reducing the legibility of video content. Lighting, noise, and frequently changing surroundings are also important factors that can disturb the learning experience. Based on these findings, the team assessed the suitability of 41,722 frames from 101 video lectures for mobile environments and confirmed that suitability is generally low; in the case of text size, for instance, only 24.5% of the frames were adequate for learning on mobile devices. To overcome this, the team proposed guidelines that improve the legibility of video content and mitigate the difficulties of mobile learning environments.

The importance of, and dependency on, video-based learning continues to rise, especially in the wake of the pandemic, and this research is meaningful in that it suggested a means to analyze and tackle the difficulties of users who learn from the small screens of mobile devices. The paper also suggested technology that can solve problems of video-based learning through human-AI collaboration, enhancing existing video lectures and improving learning experiences. This technology can be applied to various video-based platforms and to content creation.

Meanwhile, a research team composed of PhD candidate Tae Soo Kim (first author), MS candidate DaEun Choi, and PhD candidate Yoonseo Choi from the School of Computing received an Honorable Mention Award for their paper "Stylette: Styling the Web with Natural Language". The team developed a novel interface technology that allows nonexperts who are unfamiliar with technical jargon to edit website features through speech.
People often find it difficult to use websites or to find the information they need due to accessibility issues, device-related constraints, inconvenient design, or style preferences. Yet it is not easy for laypeople to edit website features without expertise in programming or design, and most end up simply putting up with the inconvenience. But what if the system could read its users' intentions from everyday language like "emphasize this part a little more" or "I want a more modern design", and edit the features automatically?

Based on this question, Professor Kim's research team developed Stylette, a system in which AI analyzes a user's request expressed in natural language and automatically recommends a new style that best fits the intention. The team built the system by combining language AI, visual AI, and user interface technologies. On the linguistic side, a large-scale language model converts the intention expressed in the user's everyday language into adequate style elements. On the visual side, computer vision AI compares 1.7 million existing web design features and recommends a style suited to the current website.

In an experiment in which 40 nonexperts were asked to edit a website design, the participants using the system achieved double the success rate in 35% less time than the control group. The research is meaningful in that it proposed a practical case of AI technology enabling intuitive interaction with users. The developed technology can be applied to existing design applications and web browsers in a plug-in format, and could be used to improve websites or advertisements by collecting users' natural-intention data on a large scale.
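As a rough illustration of the pipeline just described, a language model maps the user's utterance to candidate CSS properties, which would then be filtered against what fits the current site. The keyword table below is a deliberately crude stand-in for both the large language model and the 1.7-million-feature visual statistics; all names and rules are invented for this sketch and are not Stylette's implementation.

```python
# Toy sketch of the Stylette idea: natural-language request -> CSS changes.
REQUEST_RULES = {
    "emphasize": {"font-weight": "bold", "font-size": "1.25em"},
    "modern":    {"font-family": "Helvetica, Arial, sans-serif",
                  "border-radius": "8px"},
    "calmer":    {"color": "#445", "background-color": "#f7f7fa"},
}

def suggest_css(utterance: str) -> dict:
    """Return CSS property suggestions for a natural-language request.
    A real system would consult an LLM plus corpus statistics here."""
    suggestion = {}
    for keyword, props in REQUEST_RULES.items():
        if keyword in utterance.lower():
            suggestion.update(props)
    return suggestion

print(suggest_css("I want a more modern design"))
# {'font-family': 'Helvetica, Arial, sans-serif', 'border-radius': '8px'}
```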
2022.06.13
Neuromorphic Memory Device Simulates Neurons and Synapses
Simultaneous emulation of neuronal and synaptic properties promotes the development of brain-like artificial intelligence

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a single unit cell, another step toward the goal of neuromorphic computing: rigorously mimicking the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses remains a challenge.

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms by introducing neuron-synapse interactions into a single memory cell, rather than following the conventional approach of electrically connecting separate artificial neuronal and synaptic devices. The artificial synaptic devices studied previously were often used to accelerate parallel computations, much like commercial graphics cards, which differs clearly from the operating mechanisms of the human brain. The research team implemented synergistic interactions between neurons and synapses in their neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed device can replace complex CMOS neuron circuits with a single element, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses, and the functions and structures of neurons and synapses can flexibly change in response to external stimuli as the brain adapts to its surroundings. The research team developed a neuromorphic device in which short-term and long-term memories coexist, using a volatile and a non-volatile memory device to mimic the characteristics of neurons and synapses, respectively. A threshold switch serves as the volatile memory and phase-change memory as the non-volatile element. The two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses within the neuromorphic memory.

Professor Keon Jae Lee explained, "Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of forgotten information by implementing a positive feedback effect between neurons and synapses."

This result, entitled "Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse", was published in the May 19, 2022 issue of Nature Communications.
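Although the result is a hardware device, the interplay the article describes, a volatile element that integrates and fires like a neuron while a co-located non-volatile element retains and updates a weight like a synapse, can be illustrated with a toy software model. The sketch below is purely conceptual; the class, parameters, and update rule are illustrative assumptions, not the device physics reported in the paper.

```python
# Conceptual sketch (not device physics): one "unit cell" pairs a volatile
# threshold switch (integrates input, fires, resets: short-term) with a
# non-volatile phase-change element (persistent conductance: long-term).
class UnitCell:
    def __init__(self, threshold=1.0, leak=0.9, weight=0.5, lr=0.05):
        self.v = 0.0              # volatile internal state (decays away)
        self.threshold = threshold
        self.leak = leak
        self.w = weight           # non-volatile "synaptic" conductance
        self.lr = lr

    def step(self, stimulus: float) -> bool:
        """Integrate a stimulus; on firing, potentiate the synapse."""
        self.v = self.v * self.leak + stimulus * self.w
        if self.v >= self.threshold:
            self.v = 0.0                          # threshold switch resets
            self.w = min(1.0, self.w + self.lr)   # activity strengthens weight
            return True
        return False

cell = UnitCell()
spikes = [cell.step(0.6) for _ in range(20)]
# Firing rate rises as the weight grows: a crude analogue of the
# positive feedback between neuron and synapse described above.
print(sum(spikes), round(cell.w, 2))
```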
- Publication: Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im, and Keon Jae Lee (2022) "Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse," Nature Communications, May 19, 2022 (DOI: 10.1038/s41467-022-30432-2)

- Profile: Professor Keon Jae Lee
http://fand.kaist.ac.kr
Department of Materials Science and Engineering
KAIST
2022.05.20
Professor Sung-Ju Lee’s Team Wins the Best Paper and the Methods Recognition Awards at ACM CSCW 2021
A research team led by Professor Sung-Ju Lee at the School of Electrical Engineering won the Best Paper Award and the Methods Recognition Award at ACM CSCW (the International Conference on Computer-Supported Cooperative Work and Social Computing) 2021 for their paper "Reflect, not Regret: Understanding Regretful Smartphone Use with App Feature-Level Analysis".

Founded in 1986, CSCW has been a premier conference on HCI (human-computer interaction) and social computing. This year, 340 full papers were presented, and Best Paper Awards are given to the top 1% of submitted papers. The Methods Recognition, a new award, is given "for strong examples of work that includes well developed, explained, or implemented methods, and methodological innovation."

Hyunsung Cho (a KAIST graduate, currently a PhD candidate at Carnegie Mellon University), Daeun Choi (KAIST undergraduate researcher), Donghwi Kim (KAIST PhD candidate), Wan Ju Kang (KAIST PhD candidate), and Professor Eun Kyoung Choe (University of Maryland, a KAIST alumna) collaborated on this research.

The authors developed a tool that tracks and analyzes which features of a mobile app (e.g., Instagram's following posts, following stories, recommended posts, post upload, direct messaging, etc.) are in use, based on the smartphone's user interface (UI) layout. Using this novel method, the authors revealed which feature-usage patterns result in regretful smartphone use.

Professor Lee said, "Although many people enjoy the benefits of smartphones, issues have emerged from their overuse. With this feature-level analysis, users can reflect on their smartphone usage based on finer-grained data, and this could contribute to digital wellbeing."
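The article does not include the team's code, but the core mechanism, recognizing the active app feature from the UI elements currently on screen and attributing usage time to it, can be sketched roughly as follows. The resource IDs, feature names, and trace format below are hypothetical illustrations, not the paper's actual implementation.

```python
# Sketch: infer the in-use app feature from the UI layout, then tally
# usage time per feature from a sequence of screen snapshots.
FEATURE_MAP = {  # hypothetical UI element id -> app feature
    "com.instagram:id/feed_timeline": "following_posts",
    "com.instagram:id/reel_viewer":   "stories",
    "com.instagram:id/explore_grid":  "recommended_posts",
    "com.instagram:id/direct_inbox":  "direct_messaging",
}

def feature_of(visible_ids: set) -> str:
    """Classify the current screen by the first known UI element on it."""
    for uid, feature in FEATURE_MAP.items():
        if uid in visible_ids:
            return feature
    return "other"

# One (timestamp in seconds, visible element ids) snapshot per UI change:
trace = [
    (0,  {"com.instagram:id/feed_timeline"}),
    (30, {"com.instagram:id/explore_grid"}),
    (95, {"com.instagram:id/direct_inbox"}),
]
usage = {}
for (t0, ids), (t1, _) in zip(trace, trace[1:] + [(120, set())]):
    feat = feature_of(ids)
    usage[feat] = usage.get(feat, 0) + (t1 - t0)
print(usage)  # seconds per feature, e.g. {'following_posts': 30, ...}
```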
2021.11.22
How Stingrays Became the Most Efficient Swimmers in Nature
Study shows the hydrodynamic benefits of protruding eyes and mouth in a self-propelled flexible stingray

With their compressed bodies and flexible pectoral fins, stingrays have evolved to become one of nature's most efficient swimmers. Scientists have long wondered about the role played by their protruding eyes and mouths, which one might expect to be hydrodynamic disadvantages. Professor Hyung Jin Sung and his colleagues have discovered how such features on simulated stingrays affect a range of quantities involved in propulsion, such as pressure and vorticity, and found that, despite what one might expect, these protruding features actually help streamline the stingray.

"The influence of the 3D protruding eyes and mouth on a self-propelled flexible stingray and its underlying hydrodynamic mechanism are not yet fully understood," said Professor Sung. "In the present study, the hydrodynamic benefit of protruding eyes and mouth was explored for the first time, revealing their hydrodynamic role."

To illustrate the complex interplay of hydrodynamic forces, the researchers created a computer model of a self-propelled flexible plate, clamped its front end, and forced it to mimic the up-and-down harmonic oscillation stingrays use to propel themselves. To re-create the effect of the eyes and mouth on the surrounding water, the team simulated multiple rigid plates on the model and compared it to a model without eyes and a mouth, using a technique called the penalty immersed boundary method (a schematic form of its forcing is given after the publication details below). "Managing random fish swimming and isolating the desired purpose of the measurements from numerous factors was difficult," Sung said. "To overcome these limitations, the penalty immersed boundary method was adopted to find the hydrodynamic benefits of the protruding eyes and mouth."

The team discovered that the eyes and mouth generated a vortex of flow in the forward-backward (streamwise) direction, which increased negative pressure at the front of the simulated animal, and a side-to-side vortex that increased the pressure difference above and below the stingray. The result was increased thrust and accelerated cruising. Further analysis showed that the eyes and mouth increased overall propulsion efficiency by more than 20.5% and 10.6%, respectively.

The researchers hope their curiosity-driven work further stokes interest in exploring fluid phenomena in nature, and they hope to find ways to adapt the findings to next-generation water vehicle designs based more closely on marine animals. This study was supported by the National Research Foundation of Korea and the State Scholar Fund of the China Scholarship Council.

- Profile: Professor Hyung Jin Sung
Department of Mechanical Engineering
KAIST

- Publication: Hyung Jin Sung, Qian Mao, Jiazhen Zhao, and Yingzheng Liu, "Hydrodynamic benefits of protruding eyes and mouth in a self-propelled flexible stingray," Physics of Fluids, Aug. 31, 2021 (https://doi.org/10.1063/5.0061287)

- News release from the American Institute of Physics, Aug. 31, 2021
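As referenced above, here is background on the numerical method the article names but does not define. In feedback- or penalty-type immersed boundary formulations of the kind used for flexible bodies, the fluid is driven to follow the immersed surface by a momentum forcing built from the velocity mismatch between body and fluid. This is stated as a general schematic, not as this paper's exact discretization.

```latex
% Feedback (penalty) momentum forcing at an immersed-boundary point:
% U_ib is the prescribed body velocity, u the fluid velocity interpolated
% to the boundary, and alpha, beta are large feedback gains (negative in
% the usual convention) chosen so the fluid tracks the moving body.
\mathbf{f} \;=\; \alpha \int_{0}^{t} \left( \mathbf{U}_{ib} - \mathbf{u} \right) dt'
\;+\; \beta \left( \mathbf{U}_{ib} - \mathbf{u} \right)
```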
2021.09.06
‘Game&Art: Auguries of Fantasy’ Features Future of the Metaverse
'Game & Art: Auguries of Fantasy', a special exhibition combining art and technology, features a new future of metaverse fantasy. The show is hosted at the Daejeon Creative Center of the Daejeon Museum of Art through September 5. It combines science and technology with culture and the arts, and introduces young artists whose creativity is opening new opportunities in games and art.

The Graduate School of Culture Technology was designated a leading culture content academy in 2020 by the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency for fostering an R&D workforce in creative culture technology. NCsoft sponsored the show and also participated as an artist, combining its game-composing elements and technologies with other genres, including data for game construction, scenarios for forming a worldview, and game art and sound. All of the content can be experienced online in a virtual space as well as offline, and can be easily accessed through personal devices.

The exhibition is characterized by the themes 'timeless' and 'spaceless', connecting the past, present, and future through spaces created in the digital world, and it gives audience members an opportunity to experience freedom beyond the constraints of time and space under the theme of a fantasy reality created by games and art.

"Computer games, which began in the 1980s, have become cultural content that spans generations, and games are now the fusion field for leading-edge technologies including computer graphics, sound, human-computer interaction, big data, and AI. They are also the best platform for artistic creativity, adding human imagination to technology," said Professor Joo-Han Nam of the Graduate School of Culture Technology, who led the project. "Our artists wanted to convey various messages to our society through works that connect the past, present, and future through games."

Ju-young Oh's "Unexpected Scenery V2" and "Hope for Rats V2" are game-style media works that raise issues surrounding technology, such as the lack of understanding behind various scientific achievements, the history of accidental achievements, and the side effects of new conveniences. Tae-Wan Kim, in a work themed around healing, combined the real-time movement of particles with the movements of people recorded as digital data: metadata is collected by sensors in the exhibition space, and floating particle forms evolve into abstract graphic designs according to audio-visual responses.

Meanwhile, 'SOS' is a collaborative work by six KAIST researchers (In-Hwa Yeom, Seung-Eon Lee, Seong-Jin Jeon, Jin-Seok Hong, Hyung-Seok Yoon, and Sang-Min Lee), based on diverse perspectives embracing the phenomena surrounding contemporary natural resources. Audience members follow a gamified path between the various media elements composing the work's environment, and through this process they can experience emotions such as curiosity, suspicion, and recovery. 'Diversity' by Sung-Hyun Kim uses devices that recognize the movements of hands and fingers to provide experiences exploring the latent space of game-play images learned by deep neural networks; the image volumes generated by the networks are visualized through physics-based, three-dimensional volume-rendering algorithms implemented in custom-written code.
2021.06.21
Defining the Hund Physics Landscape of Two-Orbital Systems
Researchers identify exotic metals in unexpected quantum systems

Electrons are ubiquitous among atoms, subatomic tokens of energy that can independently change how a system behaves, but they can also change each other. An international research collaboration found that measuring electrons collectively revealed unique and unanticipated findings. The researchers published their results on May 17 in Physical Review Letters.

"It is not feasible to obtain the solution just by tracing the behavior of each individual electron," said paper author Myung Joon Han, professor of physics at KAIST. "Instead, one should describe or track all the entangled electrons at once. This requires a clever way of treating this entanglement."

Professor Han and the researchers used a recently developed "many-particle" theory to account for the entangled nature of electrons in solids, which approximates how electrons interact locally with one another in order to predict their global behavior. With this approach, the researchers examined systems with two orbitals, the spaces that electrons can inhabit. They found that the electrons locked into parallel spin arrangements within the atomic sites of the solid. This alignment, driven by Hund's coupling, results in a Hund's metal. This metallic phase, which can give rise to properties such as superconductivity, was previously thought to exist only in three-orbital systems.

"Our finding overturns the conventional viewpoint that at least three orbitals are needed for Hund's metallicity to emerge," Professor Han said, noting that two-orbital systems have not been a focus of attention for many physicists. "In addition to this finding of a Hund's metal, we identified various metallic regimes that can naturally occur in generic, correlated electron materials."

The researchers found four different correlated metals. One stems from the proximity to a Mott insulator, a state of a solid material that should be conductive but actually prevents conduction due to the way its electrons interact. The other three form as electrons align their magnetic moments, the phases producing a magnetic field, at various distances from the Mott insulator. Beyond identifying the metal phases, the researchers also suggested classification criteria for defining each metallic phase in other systems.

"This research will help scientists better characterize and understand the deeper nature of so-called 'strongly correlated materials,' in which the standard theory of solids breaks down due to the presence of strong Coulomb interactions between electrons," Professor Han said, referring to the force with which electrons attract or repel each other; such strong interactions are not typical of ordinary solids but appear in the materials hosting these metallic phases. The revelation of metals in two-orbital systems, and the ability to determine electron behavior across a whole system, could lead to even more discoveries. "This will ultimately enable us to manipulate and control a variety of electron correlation phenomena," Professor Han said.

Co-authors include Siheon Ryee from KAIST and Sangkook Choi from the Condensed Matter Physics and Materials Science Department of Brookhaven National Laboratory in the United States. Korea's National Research Foundation and the U.S. Department of Energy's (DOE) Office of Science, Basic Energy Sciences, supported this work.
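For readers who want the textbook notation behind "two orbitals" and "Hund's coupling": the minimal setting for such studies is a two-orbital Hubbard model whose local Kanamori-type interaction contains the Hund's coupling J explicitly. The density-density part is sketched below as general background; the paper's precise Hamiltonian and parameter choices may differ.

```latex
% Density-density part of the Kanamori interaction for orbitals m < m'
% and spin sigma (bar-sigma is the opposite spin). U is the intra-orbital
% and U' the inter-orbital repulsion (conventionally U' = U - 2J); the
% Hund's coupling J lowers the energy of parallel spins across orbitals.
H_{\mathrm{int}} = U \sum_{m} n_{m\uparrow} n_{m\downarrow}
  + \sum_{m<m',\,\sigma} \left[\, U'\, n_{m\sigma} n_{m'\bar{\sigma}}
  + \left( U' - J \right) n_{m\sigma} n_{m'\sigma} \,\right]
```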
- Publication: Siheon Ryee, Myung Joon Han, and Sangkook Choi, 2021. Hund Physics Landscape of Two-Orbital Systems, Physical Review Letters (DOI: 10.1103/PhysRevLett.126.206401)

- Profile: Professor Myung Joon Han
Department of Physics
College of Natural Science
KAIST
2021.06.17
Deep Learning-Based Cough Recognition Model Helps Detect the Location of Coughing Sounds in Real Time
The Center for Noise and Vibration Control at KAIST announced that its coughing detection camera recognizes where coughing happens and visualizes the locations. The resulting cough recognition camera can track and record information about the person who coughed, their location, and the number of coughs, all in real time.

Professor Yong-Hwa Park from the Department of Mechanical Engineering developed a deep learning-based cough recognition model that classifies coughing sounds in real time. The cough event classification model is combined with a sound camera that visualizes cough events and indicates their locations in public places. The research team achieved a best test accuracy of 87.4%.

Professor Park said the system will be useful medical equipment during epidemics in public places such as schools, offices, and restaurants, and for constantly monitoring patients' conditions in hospital rooms. Fever and coughing are the most relevant respiratory disease symptoms; among them, fever can already be recognized remotely using thermal cameras. The new technology is expected to be very helpful for detecting epidemic transmission in a non-contact way.

To develop the cough recognition model, supervised learning was conducted with a convolutional neural network (CNN). The model performs binary classification on a one-second sound feature, outputting whether the input is a cough event or something else. For training and evaluation, datasets were collected from AudioSet, DEMAND, ETSI, and TIMIT. Coughing and other sounds were extracted from AudioSet, and the remaining datasets were used as background noise for data augmentation, so that the model would generalize to the various background noises of public places. The dataset was augmented by mixing the coughing and other sounds from AudioSet with the background noises at ratios of 0.15 to 0.75, after which the overall volume was scaled to 0.25 to 1.0 times the original to generalize the model over various distances. The training and evaluation datasets were constructed by dividing the augmented dataset 9:1, and the test dataset was recorded separately in a real office environment.

To optimize the network, training was conducted with various combinations of five acoustic features, including the spectrogram, Mel-scaled spectrogram, and Mel-frequency cepstral coefficients, and seven optimizers, and the performance of each combination was compared on the test dataset. The best test accuracy of 87.4% was achieved with the Mel-scaled spectrogram as the acoustic feature and ASGD as the optimizer.

The trained cough recognition model was combined with a sound camera composed of a microphone array and a camera module. A beamforming process applied to the collected acoustic data finds the direction of the incoming sound source, and the integrated cough recognition model determines whether the sound is a cough. If it is, the location is visualized as a contour image with a 'cough' label at the position of the sound source in the video image. A pilot test of the cough recognition camera in an office environment showed that it successfully distinguishes cough events from other events even in a noisy environment, and that it can track the location of the person who coughed and count the number of coughs in real time.
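The pipeline above is concrete enough to sketch in code. The following is a minimal illustration, not the team's implementation: the layer sizes, sampling rate, and learning rate are assumptions, and only the overall shape (one-second clip, Mel-scaled spectrogram input, binary CNN, ASGD optimizer) follows the article.

```python
# Minimal sketch: 1 s audio clip -> Mel-scaled spectrogram -> small CNN
# -> cough / not-cough, trained with the ASGD optimizer as in the article.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000  # assumed; a one-second input is 16,000 samples

mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class CoughNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # cough vs. everything else

    def forward(self, waveform):             # (batch, samples)
        x = mel(waveform).unsqueeze(1)       # (batch, 1, n_mels, frames)
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = CoughNet()
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

clip = torch.randn(8, SAMPLE_RATE)           # a batch of 1 s training clips
labels = torch.randint(0, 2, (8,))           # 1 = cough, 0 = other
loss = loss_fn(model(clip), labels)
loss.backward()
optimizer.step()
```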
The performance will be improved further with additional training data obtained from other real environments such as hospitals and classrooms. Professor Park said, "In a pandemic situation like the one we are experiencing with COVID-19, a cough detection camera can contribute to the prevention and early detection of epidemics in public places. Applied to a hospital room in particular, it could track a patient's condition 24 hours a day and support more accurate diagnoses while reducing the workload of the medical staff." This study was conducted in collaboration with SM Instruments Inc.

Profile: Yong-Hwa Park, Ph.D.
Associate Professor
yhpark@kaist.ac.kr
http://human.kaist.ac.kr/
Human-Machine Interaction Laboratory (HuMaN Lab.)
Department of Mechanical Engineering (ME)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr/en/
Daejeon 34141, Korea

Profile: Gyeong Tae Lee, PhD Candidate
hansaram@kaist.ac.kr
HuMaN Lab., ME, KAIST

Profile: Seong Hu Kim, PhD Candidate
tjdgnkim@kaist.ac.kr
HuMaN Lab., ME, KAIST

Profile: Hyeonuk Nam, PhD Candidate
frednam@kaist.ac.kr
HuMaN Lab., ME, KAIST

Profile: Young-Key Kim, CEO
sales@smins.co.kr
http://en.smins.co.kr/
SM Instruments Inc.
Daejeon 34109, Korea

(END)
2020.08.13
Image Analysis to Automatically Quantify Gender Bias in Movies
Many commercial films worldwide continue to express womanhood in a stereotypical manner, a recent study using image analysis showed. A KAIST research team developed a novel image analysis method for automatically quantifying the degree of gender bias in films.

The 'Bechdel Test' has been the most representative and widely used method of evaluating gender bias in films. It indicates the degree of gender bias by measuring how active the presence of women in a film is: a film passes the Bechdel Test if it (1) has at least two female characters, (2) who talk to each other, and (3) whose conversation is not about the male characters. The Bechdel Test, however, has fundamental limitations in accuracy and practicality. First, it requires considerable human resources, as it is performed subjectively by a person. More importantly, it analyzes only a single aspect of a film, the dialogue between characters in the script, and yields only a dichotomous pass-or-fail result, even though a film is a visual art form reflecting multi-layered and complicated gender bias phenomena. It is also difficult for the test to represent today's discourse on gender bias, which is far more diverse than in 1985 when the Bechdel Test was first presented.

Prompted by these limitations, a KAIST research team led by Professor Byungjoo Lee from the Graduate School of Culture Technology proposed an advanced system that uses computer vision technology to automatically analyze the visual information in each frame of a film. This allows the system to evaluate more accurately and practically, in quantitative terms, the degree to which female and male characters are depicted differently, and it can reveal gender bias that conventional analysis methods could not detect.

Professor Lee and his researchers Ji Yoon Jang and Sangyoon Lee analyzed 40 films from Hollywood and South Korea released between 2017 and 2018. They downsampled the films from 24 to 3 frames per second and used Microsoft's Face API facial recognition technology and the object detection technology YOLO9000 to identify the characters and the objects surrounding them in each scene. Using the new system, the team computed eight quantitative indices that describe how a particular gender is represented in a film: emotional diversity, spatial staticity, spatial occupancy, temporal occupancy, mean age, intellectual image, emphasis on appearance, and the type and frequency of surrounding objects.

< Figure 1. System diagram >
< Figure 2. The 40 Hollywood and Korean films analyzed in the study >

According to the emotional diversity index, the women depicted were more prone to expressing passive emotions such as sadness, fear, and surprise, while male characters in the same films were more likely to demonstrate active emotions such as anger and hatred.

< Figure 3. Difference in emotional diversity between female and male characters >

The surrounding-objects index revealed that female characters appeared together with automobiles only 55.7% as often as male characters did, but appeared with furniture or in a household setting 123.9% as often. In terms of temporal occupancy and mean age, female characters appeared on screen only 56% as often as male characters, and were younger than their male counterparts in 79.1% of the cases.
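The measurement pipeline just described, sampling frames at 3 fps, running face and object detection on each frame, then aggregating per-gender indices, can be sketched as follows. The `detect_faces` and `detect_objects` functions are hypothetical stand-ins for Microsoft's Face API and YOLO9000, and the frame format is invented for the sketch.

```python
# Sketch of the analysis pipeline: per-frame detections -> gender indices.
from collections import Counter

def detect_faces(frame):    # stand-in for the Face API
    return frame["faces"]   # toy: annotations pre-attached to the frame

def detect_objects(frame):  # stand-in for YOLO9000
    return frame["objects"]

def gender_indices(frames):
    """Tally emotions, surrounding objects, and mean age per gender."""
    emotions = {"F": Counter(), "M": Counter()}
    objects  = {"F": Counter(), "M": Counter()}
    ages     = {"F": [], "M": []}
    for frame in frames:    # frames pre-sampled at 3 fps
        for face in detect_faces(frame):
            g = face["gender"]
            emotions[g][face["emotion"]] += 1
            ages[g].append(face["age"])
            objects[g].update(detect_objects(frame))
    mean_age = {g: sum(a) / len(a) for g, a in ages.items() if a}
    return emotions, objects, mean_age

frames = [
    {"faces": [{"gender": "F", "age": 28, "emotion": "sad"}],
     "objects": ["sofa"]},
    {"faces": [{"gender": "M", "age": 41, "emotion": "angry"}],
     "objects": ["car"]},
]
print(gender_indices(frames))
```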
The temporal occupancy and mean age indices were especially conspicuous in Korean films. Professor Lee said, "Our research confirmed that many commercial films depict women from a stereotypical perspective. I hope this result raises public awareness of the need for prudence when filmmakers create characters in films."

This study was supported by the KAIST College of Liberal Arts and Convergence Science as part of the Venture Research Program for Master's and PhD Students, and will be presented at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on November 11 in Austin, Texas.

Publication: Ji Yoon Jang, Sangyoon Lee, and Byungjoo Lee. 2019. Quantification of Gender Representation Bias in Commercial Films based on Image Analysis. In Proceedings of the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW). ACM, New York, NY, USA, Article 198, 29 pages. https://doi.org/10.1145/3359300

Link to download the full-text paper: https://files.cargocollective.com/611692/cscw198-jangA--1-.pdf

Profile: Prof. Byungjoo Lee, MD, PhD
byungjoo.lee@kaist.ac.kr
http://kiml.org/
Assistant Professor
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr
Daejeon 34141, Korea

Profile: Ji Yoon Jang, M.S.
yoone3422@kaist.ac.kr
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr
Daejeon 34141, Korea

Profile: Sangyoon Lee, M.S. Candidate
sl2820@kaist.ac.kr
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr
Daejeon 34141, Korea

(END)
2019.10.17
Flexible User Interface Distribution for Ubiquitous Multi-Device Interaction
< Research Group of Professor Insik Shin (center) >

KAIST researchers have developed mobile software platform technology that allows a mobile application (app) to be executed simultaneously and more dynamically on multiple smart devices. Its high flexibility and broad applicability can help accelerate the shift from the current single-device paradigm to a multi-device one, enabling users to utilize mobile apps in ways previously unthinkable.

Recent trends in mobile and IoT technologies in this era of 5G high-speed wireless communication have been hallmarked by the emergence of new display hardware and smart devices such as dual screens, foldable screens, smartwatches, smart TVs, and smart cars. However, the current mobile app ecosystem is still confined to the conventional single-device paradigm, in which users can employ only one screen on one device at a time. Because of this limitation, the real potential of multi-device environments has not been fully explored.

A KAIST research team led by Professor Insik Shin from the School of Computing, in collaboration with Professor Steve Ko's group from the State University of New York at Buffalo, has developed a mobile software platform named FLUID that can flexibly distribute the user interfaces (UIs) of an app to a number of other devices in real time, without requiring any modifications to the app. The technology provides single-device virtualization and ensures that the interactions between UI elements distributed across multiple devices remain intact.

This flexible multimodal interaction enables diverse, ubiquitous user experiences (UX). With live video streaming and chatting apps such as YouTube, LiveMe, and AfreecaTV, FLUID can display the video and the chat window separately on different devices, so the chat no longer obscures the video and users can chat while watching. The destination-input UI of a navigation app can be migrated to a passenger's device, so the destination can be entered easily and safely while the driver stays at the wheel. FLUID can also support 5G multi-view apps, the latest service that allows sports or games to be viewed from various angles; with FLUID, a user can watch an event from different viewpoints on multiple devices simultaneously, without switching between viewpoints on a single screen.

PhD candidate Sangeun Oh, the first author, and his team implemented a prototype of FLUID on Android, the leading open-source mobile operating system, and confirmed that it successfully delivers the new UX to 20 existing legacy apps. "This new technology can be applied to next-generation products from South Korean companies such as LG's dual-screen phone and Samsung's foldable phone, and is expected to embolden their competitiveness by giving them a head start in the global market," said Professor Shin.

This study will be presented at the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019), October 21 through 25 in Los Cabos, Mexico. The research was supported by the National Science Foundation (NSF) (CNS-1350883 (CAREER) and CNS-1618531).

< Figure 1. Live video streaming and chatting app scenario >
< Figure 2. Navigation app scenario >
< Figure 3. 5G multi-view app scenario >
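At the level the article describes, the idea can be sketched as follows: UI elements of one app instance are placed on different devices, and input events on any device are routed back to the same app logic, so the distributed elements keep working together. Every name below is a hypothetical illustration; FLUID itself is implemented inside the Android platform, not as a Python library.

```python
# Conceptual sketch (not FLUID's real API): single-device virtualization,
# where one app's UI elements render on many devices but all events
# return to the one app instance.
class UIElement:
    def __init__(self, name):
        self.name = name
        self.handlers = {}            # event name -> app-side callback

    def on(self, event, callback):
        self.handlers[event] = callback

class Device:
    def __init__(self, label):
        self.label = label
        self.elements = []

    def render(self):
        print(f"[{self.label}] showing: {[e.name for e in self.elements]}")

class Distributor:
    """Tracks where each UI element lives and routes events back."""
    def __init__(self):
        self.placement = {}           # UIElement -> Device

    def migrate(self, element, device):
        old = self.placement.get(element)
        if old:
            old.elements.remove(element)
        device.elements.append(element)
        self.placement[element] = device

    def dispatch(self, element, event, *args):
        element.handlers[event](*args)    # event returns to the app logic

video, chat = UIElement("video"), UIElement("chat")
chat.on("submit", lambda text: print("app received chat:", text))

phone, tablet = Device("phone"), Device("tablet")
fluid = Distributor()
fluid.migrate(video, phone)
fluid.migrate(chat, tablet)               # chat no longer obscures the video
phone.render(); tablet.render()
fluid.dispatch(chat, "submit", "hello")   # typed on tablet, handled by app
```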
Publication: Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, Steven Y. Ko, and Insik Shin. 2019. FLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction. To be published in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019). ACM, New York, NY, USA. Article Number and DOI Name TBD.

Video material: https://youtu.be/lGO4GwH4enA

Profile: Prof. Insik Shin, MS, PhD
ishin@kaist.ac.kr
https://cps.kaist.ac.kr/~ishin
Professor
Cyber-Physical Systems (CPS) Lab
School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr
Daejeon 34141, Korea

Profile: Sangeun Oh, PhD Candidate
ohsang1213@kaist.ac.kr
https://cps.kaist.ac.kr/
Cyber-Physical Systems (CPS) Lab
School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr
Daejeon 34141, Korea

Profile: Prof. Steve Ko, PhD
stevko@buffalo.edu
https://nsr.cse.buffalo.edu/?page_id=272
Associate Professor
Networked Systems Research Group
Department of Computer Science and Engineering
State University of New York at Buffalo
http://www.buffalo.edu/
Buffalo 14260, USA

(END)
2019.07.20
Play Games With No Latency
One of the most challenging issues for game players may soon be resolved with the introduction of a zero-latency gaming environment. A KAIST team developed a technology that helps game players maintain their performance as if there were no latency: it transforms the shapes of game design elements according to the amount of latency.

Latency in human-computer interaction is caused by various factors related to the environment and performance of devices, networks, and data processing. The term 'lag' refers to any latency during gaming that impacts the user's performance.

Professor Byungjoo Lee at the Graduate School of Culture Technology, in collaboration with Aalto University in Finland, presented a mathematical model that predicts player behavior by capturing the effects of latency on players. This cognitive model predicts a user's success rate under latency in a 'moving target selection' task, which requires button input in a time-constrained situation. Using the predicted success rates, the design elements of the game are geometrically modified to help players maintain success rates similar to those they would achieve in a zero-latency environment. The researchers demonstrated this by modifying the pillar heights of the game Flappy Bird, allowing players to maintain their gaming performance regardless of the added latency. (A conceptual sketch of this compensation appears after the contact profiles below.)

Professor Lee said, "This technique is unique in the sense that it does not interfere with a player's gaming flow, unlike traditional methods that manipulate the game clock by the amount of latency. This study can be extended to various games, for example by reducing the size of obstacles in high-latency computing environments."

This research, conducted in collaboration with Dr. Sunjun Kim from Aalto University and led by PhD candidate Injung Lee, was presented at the 2019 CHI Conference on Human Factors in Computing Systems last month in Glasgow, UK. It was supported by the National Research Foundation of Korea (NRF) (2017R1C1B2002101, 2018R1A5A7025409) and by Aalto University Seed Funding granted to the GamerLab.

< Figure 1. Overview of geometric compensation >

Publication: Injung Lee, Sunjun Kim, and Byungjoo Lee. 2019. Geometrically Compensating Effect of End-to-End Latency in Moving-Target Selection Games. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, Article 560, 12 pages. https://doi.org/10.1145/3290605.3300790

Video material: https://youtu.be/TTi7dipAKJs

Profile: Prof. Byungjoo Lee, MD, PhD
byungjoo.lee@kaist.ac.kr
http://kiml.org/
Assistant Professor
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr
Daejeon 34141, Korea

Profile: Injung Lee, PhD Candidate
edndn@kaist.ac.kr
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr
Daejeon 34141, Korea

Profile: Sunjun Kim, MD, PhD
kuaa.net@gmail.com
Postdoctoral Researcher
User Interfaces Group
Aalto University
https://www.aalto.fi
Espoo 02150, Finland

(END)
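As referenced above, here is a conceptual sketch of geometric latency compensation: given a model of success probability as a function of geometry and latency, solve for the geometry that restores the zero-latency success rate. The logistic model below is a toy stand-in for the paper's fitted cognitive model of moving-target selection; only the inversion idea follows the article.

```python
# Geometric compensation sketch: widen an obstacle gap until performance
# under latency matches the zero-latency baseline.
import math

def success_rate(gap: float, latency: float) -> float:
    # Toy model: bigger gaps help, latency hurts (parameters illustrative).
    return 1.0 / (1.0 + math.exp(-(2.0 * gap - 4.0 * latency - 1.0)))

def compensated_gap(gap: float, latency: float) -> float:
    """Find gap' so that success under latency equals the zero-latency rate."""
    target = success_rate(gap, 0.0)
    lo, hi = gap, gap * 4.0          # search window (assumes monotonicity)
    for _ in range(60):              # bisection
        mid = (lo + hi) / 2.0
        if success_rate(mid, latency) < target:
            lo = mid
        else:
            hi = mid
    return hi

print(round(compensated_gap(1.0, 0.15), 3))  # gap widened to offset 150 ms
```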
2019.06.11