T-GPS Processes a Graph with a Trillion Edges on a Single Computer
Trillion-scale graph processing simulation on a single computer presents a new concept of graph processing. A KAIST research team has developed a new technology that makes it possible to run a large-scale graph algorithm without storing the graph in main memory or on disk. Named T-GPS (Trillion-scale Graph Processing Simulation) by its developer, Professor Min-Soo Kim from the School of Computing at KAIST, it can process a graph with one trillion edges on a single computer.

Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever. Many industrial applications now require a graph algorithm to process a large-scale graph (e.g., one trillion edges). When developing and testing graph algorithms for such a large-scale graph, a synthetic graph is usually used instead of a real graph, because sharing and utilizing large-scale real graphs is very limited: they are either proprietary or practically impossible to collect.

Conventionally, developing and testing graph algorithms follows a two-step approach: generating and storing a graph, then executing an algorithm on it using a graph processing engine. The first step generates a synthetic graph and stores it on disk. The synthetic graph is usually generated by either parameter-based generation methods or graph upscaling methods. The former extracts a small number of parameters that capture some properties of a given real graph and generates the synthetic graph from those parameters. The latter upscales a given real graph to a larger one, preserving the properties of the original graph as much as possible.
The second step loads the stored graph into the main memory of a graph processing engine such as Apache GraphX and executes a given graph algorithm on the engine. Since the graph is too large to fit in the main memory of a single computer, the engine typically runs on a cluster of tens or hundreds of computers. The cost of the conventional two-step approach is therefore very high.

The research team solved this problem. T-GPS does not generate and store a large-scale synthetic graph. Instead, it loads only the initial small real graph into main memory. T-GPS then processes a graph algorithm on the small real graph as if the large-scale synthetic graph that would be generated from the real graph existed in main memory. After the algorithm finishes, T-GPS returns exactly the same result as the conventional two-step approach. The key idea of T-GPS is to generate, on the fly, only the part of the synthetic graph that the algorithm needs to access, and to modify the graph processing engine so that it treats the part generated on the fly as if it were part of an actually generated synthetic graph.

The research team showed that T-GPS can process a graph of one trillion edges on a single computer, while the conventional two-step approach can only process a graph of one billion edges on a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by a factor of 10,000 in terms of computing resources. The team also showed that T-GPS processes an algorithm up to 43 times faster than the conventional approach, since T-GPS has no network communication overhead, while the conventional approach incurs heavy communication overhead among computers.
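The on-the-fly generation idea can be illustrated with a toy sketch. The copy-and-hash upscaling scheme below is an assumption chosen for demonstration only (the published T-GPS uses top-down graph upscaling and a modified engine): a small real graph stays in memory, and the neighbor list of any node in the much larger virtual graph is regenerated deterministically whenever the algorithm asks for it, so the full synthetic graph is never materialized.

```python
import hashlib

class VirtualUpscaledGraph:
    """Illustrative sketch, not the actual T-GPS implementation: each
    virtual node is a (real node, copy index) pair, and edges of the
    virtual graph are generated on demand from the small real graph."""

    def __init__(self, real_adj, scale):
        self.real_adj = real_adj      # adjacency lists of the small real graph
        self.n_real = len(real_adj)
        self.scale = scale            # virtual graph has n_real * scale nodes

    def neighbors(self, v):
        """Generate virtual node v's neighbor list on the fly.
        Hash-based, so repeated calls yield the same edges without
        ever storing them."""
        real_v = v % self.n_real
        for real_u in self.real_adj[real_v]:
            h = int(hashlib.md5(f"{v}-{real_u}".encode()).hexdigest(), 16)
            yield real_u + (h % self.scale) * self.n_real

def bfs(graph, start, limit=1000):
    """An unmodified graph algorithm runs on the virtual graph as if
    the upscaled graph existed in memory."""
    seen, frontier = {start}, [start]
    while frontier and len(seen) < limit:
        nxt = []
        for v in frontier:
            for u in graph.neighbors(v):
                if u not in seen:
                    seen.add(u)
                    nxt.append(u)
        frontier = nxt
    return seen
```

The memory footprint is that of the small real graph only, which is the essence of why a single machine suffices.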
Professor Kim believes this work will have a large impact on the IT industry, where almost every area utilizes graph data, adding, “T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm.” This work was supported by the National Research Foundation (NRF) of Korea and the Institute of Information & Communications Technology Planning & Evaluation (IITP).

Publication: Park, H., et al. (2021) “Trillion-scale Graph Processing Simulation based on Top-Down Graph Upscaling,” IEEE ICDE 2021, Chania, Greece, Apr. 19-22, 2021. Available online at https://conferences.computer.org/icdepub

Profile: Min-Soo Kim, Associate Professor, School of Computing, KAIST, http://infolab.kaist.ac.kr
To Talk or Not to Talk: Smart Speaker Determines Optimal Timing to Talk
A KAIST research team has developed a new context-awareness technology that enables AI assistants to determine when to talk to their users based on user circumstances. This technology can contribute to developing advanced AI assistants that offer pre-emptive services, such as reminding users to take medication on time or modifying schedules based on the actual progress of planned tasks. Unlike conventional AI assistants that act passively upon users’ commands, today’s AI assistants are evolving to provide more proactive services through self-reasoning about user circumstances. This opens up new opportunities for AI assistants to better support users in their daily lives. However, if AI assistants do not talk at the right time, they can end up interrupting their users rather than helping them. The right time to talk is more difficult for AI assistants to determine than it might appear, because the context differs depending on the state of the user and the surrounding environment.

A group of researchers led by Professor Uichin Lee from the KAIST School of Computing identified key contextual factors in user circumstances that determine when an AI assistant should start, stop, or resume engaging in voice services in smart home environments. Their findings were published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) in September. The group conducted this study in collaboration with Professor Jae-Gil Lee’s group in the KAIST School of Computing, Professor Sangsu Lee’s group in the KAIST Department of Industrial Design, and Professor Auk Kim’s group at Kangwon National University. After developing smart speakers equipped with an AI assistant function for experimental use, the researchers installed them in the rooms of 40 students living in double-occupancy campus dormitories and collected a total of 3,500 in-situ user response records over a period of a week.
The smart speakers repeatedly asked the students a question, “Is now a good time to talk?”, at random intervals or whenever a student’s movement was detected. Students answered with either “yes” or “no” and then explained why, describing what they had been doing before being questioned by the smart speakers. Data analysis revealed that 47% of user responses were “no,” indicating that the students did not want to be interrupted. The research team then created 19 home activity categories to cross-analyze the key contextual factors that determine opportune moments for AI assistants to talk, and classified these factors as ‘personal,’ ‘movement,’ and ‘social’ factors.

Personal factors include: (1) the degree of concentration on or engagement in activities, (2) the degree of urgency and busyness, (3) the user’s mental or physical condition, and (4) whether the user can talk or listen while multitasking. While users were busy concentrating on studying, tired, or drying their hair, they found it difficult to engage in conversational interactions with the smart speakers.

Representative movement factors include departures, entrances, and physical activity transitions. Interestingly, in movement scenarios, the team found that the communication range was an important factor. A departure is an outbound movement away from the smart speaker, and an entrance is an inbound movement toward it. Users were much more available during inbound movements than during outbound movements.

In general, smart speakers are located in a shared place at home, such as a living room, where multiple family members gather at the same time. In Professor Lee’s group’s experiment, almost half of the in-situ user responses were collected when both roommates were present. The group found that social presence also influenced interruptibility: roommates often wanted to minimize possible interpersonal conflicts, such as disturbing their roommate’s sleep or work.
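The three factor categories can be read as a gating policy for proactive prompts. The toy sketch below encodes a few of the reported findings as rules; the dictionary keys and the rules themselves are illustrative assumptions for demonstration, not the study’s actual model:

```python
def is_opportune(context):
    """Toy interruptibility check combining the three factor categories
    (personal, movement, social). The `context` keys are hypothetical;
    the study reports contextual factors, not this rule set."""
    # Personal factors: concentration, urgency/busyness, fatigue
    # were all reported to make conversation difficult.
    if context.get("concentrating") or context.get("busy") or context.get("tired"):
        return False
    # Movement factors: inbound movement (toward the speaker) was found
    # far more opportune than outbound movement (away from it).
    if context.get("movement") == "outbound":
        return False
    # Social factors: avoid disturbing a co-present roommate.
    if context.get("roommate_state") in ("sleeping", "working"):
        return False
    return True
```

A deployed assistant would learn such a policy from multi-modal sensor data rather than hand-coded rules, but the gating structure is the same.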
Narae Cha, the lead author of this study, explained, “By considering personal, movement, and social factors, we can envision a smart speaker that can intelligently manage the timing of conversations with users.” She believes that this work lays the foundation for the future of AI assistants, adding, “Multi-modal sensory data can be used for context sensing, and this context information will help smart speakers proactively determine when it is a good time to start, stop, or resume conversations with their users.” This work was supported by the National Research Foundation (NRF) of Korea.

Publication: Cha, N., et al. (2020) “Hello There! Is Now a Good Time to Talk?”: Opportune Moments for Proactive Interactions with Smart Speakers. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Vol. 4, No. 3, Article No. 74, pp. 1-28. Available online at https://doi.org/10.1145/3411810

Link to Introductory Video: https://youtu.be/AA8CTi2hEf0

Profile: Uichin Lee, Associate Professor, Interactive Computing Lab (http://ic.kaist.ac.kr), School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea (END)
Professor Kyu-Young Whang Donates Toward the 50th Anniversary Memorial Building
Distinguished Professor Kyu-Young Whang from the School of Computing made a gift of 100 million KRW toward the construction of the 50th Anniversary Memorial Building during a ceremony on November 3 at the Daejeon campus. "As a member of the first class of KAIST, I feel very delighted to play a part in the fundraising campaign for the 50th anniversary celebration. This is also a token of appreciation to my alma mater and I look forward to alumni and the KAIST community joining this campaign," said Professor Emeritus Whang. KAIST will name the Kyu-Young Whang and Jonghae Song Christian Seminar Room at the 50th Anniversary Memorial Building. The ground will be broken in 2022 for construction of the building.
Taesik Gong Named Google PhD Fellow
PhD candidate Taesik Gong from the School of Computing was named a 2020 Google PhD Fellow in the field of machine learning. The Google PhD Fellowship Program has recognized and supported outstanding graduate students in computer science and related fields since 2009. Gong is one of two Korean students chosen as the recipients of Google Fellowships this year. A total of 53 students across the world in 12 fields were awarded this fellowship. Gong’s research on condition-independent mobile sensing powered by machine learning earned him this year’s fellowship. He has published and presented his work through many conferences including ACM SenSys and ACM UbiComp, and has worked at Microsoft Research Asia and Nokia Bell Labs as a research intern. Gong was also the winner of the NAVER PhD Fellowship Award in 2018. (END)
Professor Alice Haeyun Oh to Join GPAI Expert Group
Professor Alice Haeyun Oh will participate in the Global Partnership on Artificial Intelligence (GPAI), an international and multi-stakeholder initiative hosted by the OECD to guide the responsible development and use of AI. In collaboration with partners and international organizations, GPAI will bring together leading experts from industry, civil society, government, and academia. The Korean Ministry of Science and ICT (MSIT) officially announced that South Korea will take part in GPAI as one of the 15 founding members that include Canada, France, Japan, and the United States. Professor Oh has been appointed as a new member of the Responsible AI Committee, one of the four committees that GPAI established along with the Data Governance Committee, Future of Work Committee, and Innovation and Commercialization Committee. (END)
Research on the Million Follower Fallacy Receives the Test of Time Award
Professor Meeyoung Cha’s research investigating the correlation between the number of followers a social media user has and that user’s actual influence was re-highlighted 10 years after the paper’s publication. Noting that her research is still as relevant today as the day it was published, the Association for the Advancement of Artificial Intelligence (AAAI) presented Professor Cha from the School of Computing with the Test of Time Award during the 14th International Conference on Web and Social Media (ICWSM), held online June 8 through 11. In her 2010 paper titled ‘Measuring User Influence in Twitter: The Million Follower Fallacy,’ Professor Cha showed that the number of followers does not correspond to actual influence. She analyzed a dataset of 54,981,152 user accounts, 1,963,263,821 social links, and 1,755,925,520 tweets, collected with 50 servers. The research compares and illustrates the limitations of various methods used to measure the influence a user has on a social networking platform. These results provided new insights into, and interpretations of, the influencer selection algorithms used to maximize advertising impact on large social networking platforms. The research also looked at how long an influential user remained active, and whether a user could freely cross the borders between fields and be influential on different topics as well. By analyzing who becomes an influencer when new events occur, it was shown that a person can quickly become an influencer using several key tactics, contrary to what was previously claimed by the ‘accidental influentials’ theory. Professor Cha explained, “At the time, data from social networking platforms did not receive much attention in computer science, but I remember those all-nighters I pulled to work on this project, fascinated by the fact that internet data could be used to solve difficult social science problems.
I feel so grateful that my research has remained valued for such a long time.” Professor Cha received both her undergraduate and graduate degrees from KAIST, and conducted this research during her postdoctoral work at the Max Planck Institute in Germany. She now also serves as a chief investigator of a data science group at the Institute for Basic Science (IBS). (END)
A Deep-Learned E-Skin Decodes Complex Human Motion
A deep-learning powered electronic skin sensor with a single strain gauge can capture human motion from a distance. Placed on the wrist, the single strain sensor decodes complex five-finger motions in real time with a virtual 3D hand that mirrors the original motions. The deep neural network, boosted by rapid situation learning (RSL), ensures stable operation regardless of the sensor’s position on the surface of the skin. Conventional approaches require sensor networks that cover the entire curvilinear surface of the target area. Unlike conventional wafer-based fabrication, this laser fabrication provides a new sensing paradigm for motion tracking.

The research team, led by Professor Sungho Jo from the School of Computing, collaborated with Professor Seunghwan Ko from Seoul National University to design this new measuring system, which extracts signals corresponding to multiple finger motions by generating cracks in metal nanoparticle films using laser technology. The sensor patch was then attached to a user’s wrist to detect the movement of the fingers.

The concept of this research started from the idea that pinpointing a single area would be more efficient for identifying movements than affixing sensors to every joint and muscle. To make this targeting strategy work, the system needs to accurately capture the signals from different areas at the point where they all converge, and then decouple the information entangled in the converged signals. To maximize usability and mobility, the research team used a single-channel sensor to generate the signals corresponding to complex hand motions. The rapid situation learning (RSL) system collects data from arbitrary parts of the wrist and automatically trains the model in a real-time demonstration with a virtual 3D hand that mirrors the original motions. To enhance the sensitivity of the sensor, the researchers used laser-induced nanoscale cracking.
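The signal-to-motion decoding described above can be pictured as a two-stage pipeline: encode a window of the single raw strain signal into a latent vector, then map that vector to finger joint angles driving the virtual hand. In the minimal sketch below, single linear layers stand in for the paper’s deep network and RSL system; all dimensions, weights, and the 64-sample window are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: encode a 64-sample window of raw strain-sensor readings into
# a 16-dim latent vector capturing temporal behavior (a single tanh
# layer stands in for the deep recurrent model).
W_enc = rng.normal(size=(16, 64)) * 0.1

def encode(window):
    return np.tanh(W_enc @ window)

# Stage 2: map the latent vector to a finger-motion metric space,
# here five joint angles, one per finger.
W_dec = rng.normal(size=(5, 16))

def decode(latent):
    return W_dec @ latent

window = rng.normal(size=64)      # one window of raw sensor readings
angles = decode(encode(window))   # five estimated joint angles
```

In the actual system both stages are trained jointly on wrist-sensor recordings, and RSL retrains them on the fly for whatever spot on the wrist the patch happens to occupy.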
This sensory system can track the motion of the entire body with a small sensory network and facilitates the indirect remote measurement of human motions, which is applicable to wearable VR/AR systems. The research team said they focused on two tasks while developing the sensor: first, encoding the sensor signal patterns into a latent space capturing temporal sensor behavior, and second, mapping the latent vectors to finger motion metric spaces. Professor Jo said, “Our system is expandable to other body parts. We already confirmed that the sensor is also capable of extracting gait motions from a pelvis. This technology is expected to provide a turning point in health-monitoring, motion tracking, and soft robotics.” This study was featured in Nature Communications.

Publication: Kim, K. K., et al. (2020) “A deep-learned skin sensor decoding the epicentral human motions.” Nature Communications, 11, 2149. https://doi.org/10.1038/s41467-020-16040-y

Link to download the full-text paper: https://www.nature.com/articles/s41467-020-16040-y.pdf

Profile: Professor Sungho Jo, Neuro-Machine Augmented Intelligence Lab (http://nmail.kaist.ac.kr), School of Computing, College of Engineering, KAIST
A Global Campaign of ‘Facts before Rumors’ on COVID-19 Launched
- A KAIST data scientist group responds to facts and rumors about COVID-19 to raise global awareness of the pandemic. -

Like the novel coronavirus, rumors have no borders. The world is fighting to contain the pandemic, but we also have to deal with the appalling spread of an infodemic that is as contagious as the virus. This infodemic, a pandemic of false information, is bringing chaos and extreme fear to the general public. Professor Meeyoung Cha’s group at the School of Computing started a global campaign called ‘Facts before Rumors’ to prevent false information from crossing borders. She explained, “We saw many rumors that had already been fact-checked long before in China and South Korea begin to circulate in other countries, sometimes leading to detrimental results. We launched an official campaign, Facts before Rumors, to deliver COVID-19-related facts to countries where the number of cases is now increasing.” She released the first set of facts on March 26 via her Twitter account @nekozzang.

Professor Cha, a data scientist who has focused on detecting global fake news, is now part of the COVID-19 AI Task Force at the Global Strategy Institute at KAIST. She is also leading the Data Science Group at the Institute for Basic Science (IBS) as Chief Investigator. Her research group worked in collaboration with the College of Nursing at Ewha Womans University to identify 15 claims about COVID-19 circulating on social networking services (SNS) and among the general public. The team fact-checked these claims based on information from the WHO and the CDCs of Korea and the US. The research group is now translating the list of claims into Portuguese, Spanish, Persian, Chinese, Amharic, Hindi, and Vietnamese. Delivering facts before rumors, the team says, will help contain the disease and prevent harm caused by misinformation.
The pandemic, which spread in China and South Korea before arriving in Europe and the US, is now moving into South America, Africa, and Southeast Asia. “We would like to play a part in preventing the further spread of the disease by providing only scientifically vetted, truthful facts,” said the team. For this campaign, Professor Cha’s team investigated more than 200 rumored claims about COVID-19 in China during the early days of the pandemic. These claims spread at different scales: some were only relevant locally or within larger regions of China, while others propagated across Asia and are now spreading to the countries currently most affected by the disease. For example, the false claim that ‘Fireworks can help tame the virus in the air’ spread only in China. Other claims, such as ‘Eating garlic helps people overcome the disease’ or ‘Gargling with salt water prevents the contraction of the disease,’ spread around the world even after being proven groundless. The team noted, however, that the times at which these claims propagate differ from one country to another. “This opens up an opportunity to debunk rumors in some countries even before they start to emerge,” said Professor Cha. Kun-Woo Kim, a master’s candidate in the Department of Industrial Design who joined this campaign and designed the Facts before Rumors chart, also expressed his hope that the campaign will help reduce the number of victims. He added, “I am very grateful to our scientists who quickly responded to the Fact Check in these challenging times.”
COVID-19 Map Shows How the Global Pandemic Moves
- A School of Computing team analyzed COVID-19 data to show the global spread of the virus. -

The COVID-19 map made by KAIST data scientists shows where and how the virus has been spreading from China, reportedly the epicenter of the disease. Professor Meeyoung Cha from the School of Computing and her group used data on the number of confirmed cases from January 22 to March 22 to analyze the trends of this global epidemic. The statistics include the number of confirmed cases, recoveries, and deaths across major continents during that period. The moving dot on the map strikingly shows how the confirmed cases move across the globe. According to their statistics, the centroid of the disease starts near Wuhan in China, moves to Korea, and then passes through the European region via Italy and Iran.

The data was collected by Geng Sun, a graduate student from the School of Computing, who started the process while quarantined after returning from his home in China. An undergraduate colleague of Geng’s, Gabriel Camilo Lima, who made the map, is now working remotely from his home in Brazil, since all undergraduate students were required to move out of the dormitory last week. The university closed all undergraduate housing and advised undergraduate students to go home as a preventive measure to stop the virus from spreading across campus. Gabriel said he calculated the centroid of all confirmed cases up to a given day. He explained, “I weighted each coordinate by the number of cases in that region and country and calculated an approximate center of gravity.” “The Earth is round, so the shortest path from Asia to Europe is often through Russia. In early March, the center of gravity of new cases was moving from Asia to Europe. Therefore, the centroid moves west and passes through Russia, even though Russia has not reported many cases,” he added.
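Gabriel’s description, weighting each coordinate by its case count and taking an approximate center of gravity, can be sketched as follows. This is an illustrative reconstruction, not the team’s actual code; averaging in 3D Cartesian space and projecting back onto the sphere is what makes the centroid’s path cut through Russia rather than follow lines of latitude.

```python
import math

def case_centroid(records):
    """Weighted spherical center of gravity of confirmed cases.
    `records` is a list of (latitude, longitude, case_count) tuples."""
    x = y = z = total = 0.0
    for lat, lon, cases in records:
        la, lo = math.radians(lat), math.radians(lon)
        # weight each location's unit vector by its case count
        x += cases * math.cos(la) * math.cos(lo)
        y += cases * math.cos(la) * math.sin(lo)
        z += cases * math.sin(la)
        total += cases
    x, y, z = x / total, y / total, z / total
    # project the mean vector back onto the sphere
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

# with cases concentrated near Wuhan, the centroid sits near Wuhan
centroid = case_centroid([(30.6, 114.3, 10000), (37.5, 127.0, 100)])
```

Recomputing this centroid for each day of confirmed-case counts traces the moving dot shown on the map.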
Professor Cha, who also leads the Data Science Group at the Institute for Basic Science (IBS) as Chief Investigator, said the group will continue to update the map using public data at https://ds.ibs.re.kr/index.php/covid-19/. (END)
KAIST Alumnus NYU Professor Supports Female AI Researchers
A KAIST alumnus and an associate professor at New York University (NYU), Dr. Kyunghyun Cho donated 3,000 USD to the KAIST Graduate School of AI to support female AI researchers. Professor Cho spoke as a guest lecturer at the 2019 Samsung AI Forum on November 4 and received 3,000 USD as an honorarium. He donated this honorarium to the KAIST Graduate School of AI with a special request to support the school’s female PhD students attending the 2020 International Conference on Learning Representations (ICLR), where he serves as a program co-chair. Professor Cho received his BS degree from KAIST’s School of Computing in 2009 and is now serving as an associate professor at NYU’s Computer Science Department and Center for Data Science. His research mainly covers machine learning and natural language processing. Professor Cho said that he decided to make this donation because “In Korea and even in the US, women in science, technology, engineering, and mathematics (STEM) lack opportunities and environments that allow them to excel.” Professor Song Chong, the Head of the KAIST Graduate School of AI, responded, “We are so grateful for Professor Kyunghyun Cho’s contribution and we will also use funds from the school in addition to the donation to support our female PhD students who will attend the ICLR.” (END)
Object Identification and Interaction with a Smartphone Knock
(Professor Lee (far right) demonstrates 'Knocker' with his students.)

A KAIST team has developed a new technology, “Knocker,” which identifies objects and executes actions when a user simply knocks on them with a smartphone. Software powered by machine learning of sounds, vibrations, and other responses then carries out the user’s commands. What separates Knocker from existing technology is its sensor fusion of sound and motion. Previously, object identification relied either on computer vision with cameras or on hardware such as RFID (Radio Frequency Identification) tags. These solutions all have limitations: computer vision requires users to take pictures of every item and works poorly in low light, while dedicated hardware adds cost and labor. Knocker, on the other hand, can identify objects even in dark environments with only a smartphone, requiring no specialized hardware and no camera.

Knocker utilizes the smartphone’s built-in sensors, such as the microphone, accelerometer, and gyroscope, to capture the unique set of responses generated when the smartphone is knocked against an object. Machine learning is then used to analyze these responses and classify and identify the object. The research team under Professor Sung-Ju Lee from the School of Computing confirmed the applicability of Knocker using 23 everyday objects such as books, laptop computers, water bottles, and bicycles. In noisy environments such as a busy café or the side of a road, it achieved 83% identification accuracy; in a quiet indoor environment, the accuracy rose to 98%. The team believes Knocker will open a new paradigm of object interaction. For instance, knocking on an empty water bottle could prompt a smartphone to automatically order new water bottles from a merchant app. When integrated with IoT devices, knocking on a bed’s headboard before going to sleep could turn off the lights and set an alarm.
The team suggested and implemented 15 application cases in the paper, presented during the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2019) held in London last month. Professor Sung-Ju Lee said, “This new technology does not require any specialized sensor or hardware. It simply uses the built-in sensors on smartphones and takes advantage of the power of machine learning. It’s a software solution that everyday smartphone users could immediately benefit from.” He continued, “This technology enables users to conveniently interact with their favorite objects.” The research was supported in part by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea funded by the Ministry of Science and ICT and an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Ministry of Science and ICT. Figure: An example knock on a bottle. Knocker identifies the object by analyzing a unique set of responses from the knock, and automatically launches a proper application or service.
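Knocker’s sensor-fusion idea, extracting features from the microphone, accelerometer, and gyroscope responses to a knock and classifying the concatenated result, can be sketched minimally as below. The feature set and the nearest-centroid classifier are illustrative stand-ins, not the actual Knocker pipeline, which uses a richer feature set and model:

```python
import numpy as np

def knock_features(mic, accel, gyro):
    """Fuse the three sensor streams into one feature vector.
    The features (mean, std, low-frequency spectral energy) are
    illustrative, not Knocker's actual feature set."""
    feats = []
    for sig in (mic, accel, gyro):
        sig = np.asarray(sig, dtype=float)
        feats += [sig.mean(), sig.std(), np.abs(np.fft.rfft(sig))[:4].sum()]
    return np.array(feats)

class NearestCentroid:
    """Minimal classifier standing in for the machine learning model."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

Training amounts to collecting a few labeled knocks per object and fitting the classifier; at runtime, one knock yields one feature vector and one predicted object, which the phone maps to an action.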
Sungjoon Park Named Google PhD Fellow
PhD candidate Sungjoon Park from the School of Computing was named a 2019 Google PhD Fellow in the field of natural language processing. The Google PhD Fellowship Program has recognized and supported outstanding graduate students in computer science and related fields since 2009. Park is one of three Korean students chosen as recipients of Google Fellowships this year. A total of 54 students across the world in 12 fields were awarded this fellowship. Park’s research on computational psychotherapy using natural language processing (NLP) powered by machine learning earned him this year’s fellowship. He presented work on learning distributed representations in Korean and their interpretations at the 2017 Annual Conference of the Association for Computational Linguistics and the 2018 Conference on Empirical Methods in Natural Language Processing. He also applied machine learning-based natural language processing to computational psychotherapy so that a trained model could categorize clients’ verbal responses in a counseling dialogue; this work was presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. More recently, he has been developing a neural response generation model, methods for predicting and extracting complex emotions in text, and computational psychotherapy applications.
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.