KAIST
NEWS
AI to Determine When to Intervene with Your Driving
(Professor Uichin Lee (left) and PhD candidate Auk Kim) Can your AI agent judge when to talk to you while you are driving? According to a KAIST research team, their in-vehicle conversation service technology can judge when it is appropriate to contact you to ensure your safety. Professor Uichin Lee from the Department of Industrial and Systems Engineering at KAIST and his research team have developed AI technology that automatically detects safe moments for AI agents to provide conversation services to drivers. Their research focuses on solving the potential distraction problems created by in-vehicle conversation services. If an AI agent talks to a driver at an inopportune moment, such as while making a turn, a car accident becomes more likely. In-vehicle conversation services need to be convenient as well as safe, but the cognitive burden of multitasking negatively influences the quality of the service, and users tend to be more distracted under certain traffic conditions. To address this long-standing challenge of in-vehicle conversation services, the team introduced a composite cognitive model that considers both safe driving and auditory-verbal service performance, and trained a machine-learning model on all of the collected data. Combining these individual measures makes it possible to determine both the opportune moments for conversation and the most appropriate types of conversational services. For instance, when delivering simple-context information, such as a weather forecast, driver safety alone would be the appropriate consideration. Meanwhile, when delivering information that requires a driver response, such as a “Yes” or “No,” the combination of driver safety and auditory-verbal performance should be considered. The research team developed a prototype of an in-vehicle conversation service based on a navigation app that can be used in real driving environments.
The app was also connected to the vehicle to collect in-vehicle OBD-II/CAN data, such as the steering wheel angle and brake pedal position, and mobility and environmental data such as the distance between successive cars and traffic flow. Using pseudo-conversation services, the research team collected a real-world driving dataset consisting of 1,388 interactions and sensor data from 29 drivers who interacted with AI conversational agents. Machine learning analysis based on the dataset demonstrated that the opportune moments for driver interruption could be correctly inferred with 87% accuracy. The safety enhancement technology developed by the team is expected to minimize driver distractions caused by in-vehicle conversation services. This technology can be directly applied to current in-vehicle systems that provide conversation services. It can also be extended and applied to the real-time detection of driver distraction problems caused by the use of a smartphone while driving. Professor Lee said, “In the near future, cars will proactively deliver various in-vehicle conversation services. This technology will certainly help vehicles interact with their drivers safely as it can fairly accurately determine when to provide conversation services using only basic sensor data generated by cars.” The researchers presented their findings at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (Ubicomp’19) in London, UK. This research was supported in part by Hyundai NGV and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT. (Figure: Visual description of safe enhancement technology for in-vehicle conversation services)
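The composite decision described above, combining a driver-safety measure with an auditory-verbal workload measure, and requiring both only for services that demand a spoken response, can be sketched roughly as follows. All feature names, score formulas, and thresholds here are illustrative assumptions for a rule-based stand-in; the actual system learns this decision from the collected sensor data.

```python
# Hypothetical sketch: decide whether an AI agent may speak to the driver now.
# Sensor keys (steering angle, brake position, headway, traffic flow) mirror the
# OBD-II/CAN and mobility data mentioned in the article, but the scoring rules
# are invented for illustration.

def safety_score(sensors):
    """Lower steering activity and gentler braking suggest a safer moment (0..1)."""
    steering_penalty = min(abs(sensors["steering_angle_deg"]) / 90.0, 1.0)
    braking_penalty = min(sensors["brake_pedal_pos"], 1.0)
    return 1.0 - 0.5 * (steering_penalty + braking_penalty)

def verbal_capacity_score(sensors):
    """Dense traffic leaves less spare capacity for a spoken reply (0..1)."""
    gap_term = min(sensors["headway_m"] / 50.0, 1.0)
    flow_term = min(sensors["traffic_flow_kmh"] / 60.0, 1.0)
    return 0.5 * (gap_term + flow_term)

def is_opportune(sensors, needs_response, threshold=0.6):
    """Simple-context delivery (e.g. weather) considers safety alone;
    response-seeking prompts also require spare auditory-verbal capacity."""
    if safety_score(sensors) < threshold:
        return False
    if needs_response and verbal_capacity_score(sensors) < threshold:
        return False
    return True
```

For example, on a straight road with light braking and a large following distance, both score checks pass; during a sharp turn, the safety score drops below the threshold and the agent stays silent.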
2019.11.13
Object Identification and Interaction with a Smartphone Knock
(Professor Lee (far right) demonstrates 'Knocker' with his students.) A KAIST team has developed a new technology, “Knocker,” which identifies objects and executes actions when a user simply knocks on them with a smartphone. Software that applies machine learning to the sounds, vibrations, and other responses generated by the knock then carries out the user’s directions. What separates Knocker from existing technology is its sensor fusion of sound and motion. Previously, object identification relied either on computer vision technology with cameras or on hardware such as RFID (Radio Frequency Identification) tags. These solutions all have their limitations. With computer vision technology, users need to take pictures of every item, and the technology does not work well in poor lighting. Using extra hardware leads to additional costs and labor burdens. Knocker, on the other hand, can identify objects even in dark environments with only a smartphone, without requiring any specialized hardware or a camera. Knocker utilizes the smartphone’s built-in sensors, such as the microphone, accelerometer, and gyroscope, to capture the unique set of responses generated when the phone is knocked against an object. Machine learning is used to analyze these responses and classify and identify objects. The research team under Professor Sung-Ju Lee from the School of Computing confirmed the applicability of Knocker technology using 23 everyday objects such as books, laptop computers, water bottles, and bicycles. In noisy environments such as a busy café or the side of a road, it achieved 83% identification accuracy; in a quiet indoor environment, the accuracy rose to 98%. The team believes Knocker will open a new paradigm of object interaction. For instance, by knocking on an empty water bottle, a smartphone can automatically order new water bottles from a merchant app. When integrated with IoT devices, knocking on a bed’s headboard before going to sleep could turn off the lights and set an alarm.
The team suggested and implemented 15 application cases in the paper, presented during the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2019) held in London last month. Professor Sung-Ju Lee said, “This new technology does not require any specialized sensor or hardware. It simply uses the built-in sensors on smartphones and takes advantage of the power of machine learning. It’s a software solution that everyday smartphone users could immediately benefit from.” He continued, “This technology enables users to conveniently interact with their favorite objects.” The research was supported in part by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea funded by the Ministry of Science and ICT and an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Ministry of Science and ICT. Figure: An example knock on a bottle. Knocker identifies the object by analyzing a unique set of responses from the knock, and automatically launches a proper application or service.
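The pipeline described above, fusing features from the microphone and accelerometer responses of a knock and matching them against per-object signatures, can be sketched as follows. The feature choices and the nearest-fingerprint classifier are simplifying assumptions for illustration; the actual system trains a machine-learning model on richer sensor features.

```python
# Hypothetical sketch of Knocker-style object identification: fuse crude
# features from two sensors, then pick the object whose stored fingerprint
# is closest in feature space.
import math

def extract_features(audio, accel):
    """Fused feature vector: signal energy (RMS) and peak amplitude per sensor."""
    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))
    return [rms(audio), max(abs(x) for x in audio),
            rms(accel), max(abs(x) for x in accel)]

def classify(features, fingerprints):
    """Return the object whose stored fingerprint is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fingerprints, key=lambda obj: dist(features, fingerprints[obj]))
```

Given a small table of previously recorded fingerprints (e.g. for "bottle" and "book"), a new knock's feature vector is assigned to the nearest entry, which the app then maps to an action such as reordering water.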
2019.10.02
Flexible User Interface Distribution for Ubiquitous Multi-Device Interaction
< Research Group of Professor Insik Shin (center) > KAIST researchers have developed mobile software platform technology that allows a mobile application (app) to be executed simultaneously and more dynamically on multiple smart devices. Its high flexibility and broad applicability can help accelerate a shift from the current single-device paradigm to a multi-device one, enabling users to utilize mobile apps in ways previously unthinkable. Recent trends in mobile and IoT technologies in this era of 5G high-speed wireless communication have been hallmarked by the emergence of new display hardware and smart devices such as dual screens, foldable screens, smart watches, smart TVs, and smart cars. However, the current mobile app ecosystem is still confined to the conventional single-device paradigm, in which users can employ only one screen on one device at a time. Due to this limitation, the real potential of multi-device environments has not been fully explored. A KAIST research team led by Professor Insik Shin from the School of Computing, in collaboration with Professor Steve Ko’s group from the State University of New York at Buffalo, has developed mobile software platform technology named FLUID that can flexibly distribute the user interfaces (UIs) of an app to a number of other devices in real time without needing any modifications. The proposed technology provides single-device virtualization and ensures that the interactions between the distributed UI elements across multiple devices remain intact. This flexible multimodal interaction can be realized in diverse ubiquitous user experiences (UX), such as live video streaming and chatting apps including YouTube, LiveMe, and AfreecaTV. FLUID can ensure that the video is not obscured by the chat window by distributing and displaying them separately on different devices, which lets users enjoy the chat function while watching the video at the same time.
In addition, the UI for the destination input on a navigation app can be migrated into the passenger’s device with the help of FLUID, so that the destination can be easily and safely entered by the passenger while the driver is at the wheel. FLUID can also support 5G multi-view apps – the latest service that allows sports or games to be viewed from various angles on a single device. With FLUID, the user can watch the event simultaneously from different viewpoints on multiple devices without switching between viewpoints on a single screen. PhD candidate Sangeun Oh, who is the first author, and his team implemented the prototype of FLUID on the leading open-source mobile operating system, Android, and confirmed that it can successfully deliver the new UX to 20 existing legacy apps. “This new technology can be applied to next-generation products from South Korean companies such as LG’s dual screen phone and Samsung’s foldable phone and is expected to embolden their competitiveness by giving them a head-start in the global market.” said Professor Shin. This study will be presented at the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019) October 21 through 25 in Los Cabos, Mexico. The research was supported by the National Science Foundation (NSF) (CNS-1350883 (CAREER) and CNS-1618531). Figure 1. Live video streaming and chatting app scenario Figure 2. Navigation app scenario Figure 3. 5G multi-view app scenario Publication: Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, Steven Y. Ko, and Insik Shin. 2019. FLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction. To be published in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019). ACM, New York, NY, USA. Article Number and DOI Name TBD. Video Material: https://youtu.be/lGO4GwH4enA Profile: Prof. 
Insik Shin, MS, PhD
ishin@kaist.ac.kr
https://cps.kaist.ac.kr/~ishin
Professor, Cyber-Physical Systems (CPS) Lab
School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr
Daejeon 34141, Korea

Profile: Sangeun Oh, PhD Candidate
ohsang1213@kaist.ac.kr
https://cps.kaist.ac.kr/
Cyber-Physical Systems (CPS) Lab
School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr
Daejeon 34141, Korea

Profile: Prof. Steve Ko, PhD
stevko@buffalo.edu
https://nsr.cse.buffalo.edu/?page_id=272
Associate Professor, Networked Systems Research Group
Department of Computer Science and Engineering
State University of New York at Buffalo
http://www.buffalo.edu/
Buffalo 14260, USA
(END)
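The UI-distribution idea behind FLUID, moving individual UI elements of one app onto other devices while their event wiring stays with the app, can be illustrated with a toy model. Class and method names here are invented for illustration and are not FLUID's actual APIs.

```python
# Toy sketch of distributing an app's UI elements across devices: rendering is
# split per device, while events from any device route back to the original
# handlers, so the interactions between elements remain intact.

class UIElement:
    def __init__(self, name, on_event=None):
        self.name, self.on_event = name, on_event

class DistributedApp:
    """App whose UI elements can be assigned to different devices."""
    def __init__(self, elements):
        self.elements = elements
        self.placement = {e.name: "primary" for e in elements}  # all local at first
        self.log = []

    def migrate(self, element_name, device):
        # Move only this element's rendering; the app logic stays put.
        self.placement[element_name] = device

    def view(self, device):
        """The UI elements that a given device should currently render."""
        return [e.name for e in self.elements if self.placement[e.name] == device]

    def dispatch(self, element_name, event):
        # An event raised on any device reaches the element's original handler.
        for e in self.elements:
            if e.name == element_name and e.on_event:
                self.log.append(e.on_event(event))
```

In the video-plus-chat scenario, migrating the "chat" element to a tablet leaves the "video" element on the phone, yet typing in the chat still triggers the app's own chat handler.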
2019.07.20
Kimchi Toolkit by Costa Rican Summa Cum Laude Helps Make the Best Flavor
(Maria Jose Reyes Castro with her kimchi toolkit application) Every graduate feels a special attachment to their school, but for Maria Jose Reyes Castro, who graduated summa cum laude from the Department of Industrial Design this year, KAIST will be remembered for more than just academics. She appreciates KAIST for not only giving her great professional opportunities, but also helping her find the love of her life. During her master’s course, she completed an electronic kimchi toolkit that optimizes kimchi’s flavor. Her kit uses a mobile application and a smart sensor to find the fermentation level of kimchi by measuring its pH level, which is closely related to fermentation. A user can set a desired fermentation level or salinity in the mobile application, and it provides the best date to serve the kimchi. Under the guidance of Professor Daniel Saakes, she conducted research on developing a kimchi toolkit for beginners (Qualified Kimchi: Improving the experience of inexperienced kimchi makers by developing a monitoring toolkit for kimchi). “I’ve seen many foreigners saying it’s quite difficult to make kimchi. So I chose to study kimchi to help people, especially those making it for the first time,” she said. She got recipes from YouTube, studied fermentation through academic journals, and consulted kimchi experts to gain a deeper understanding. Extending her studies, she joined a startup specializing in smart farms last month, where she conducts research on biology and applies it to designs that can be used practically in daily life. Her ties with KAIST go back to 2011, when she attended an international science camp in Germany. There she met Sunghan Ro (’19 PhD in Nanoscience and Technology), a student from KAIST and now her husband. He recommended that she enroll at KAIST because the school offers an outstanding education and research infrastructure along with support for foreign students.
At that time, Castro had just begun her first semester in electrical engineering at the University of Costa Rica, but she decided to apply to KAIST and seek a better opportunity in a new environment. One year later, she made a fresh start at KAIST in the fall semester of 2012. Instead of keeping her original major, electrical engineering, she decided to pursue her studies in the Department of Industrial Design, because it is an interdisciplinary field where students study design while learning business models and making prototypes. She said, “I felt encouraged by my professors and colleagues in my department to be creative and follow my passion. I never regret entering this major.” While pursuing her master’s program in the same department, she became interested in food-interaction and biological design through Professor Saakes, her advisor, who specializes in these areas. After years of following her passion in design, she now graduates with academic honors from her department. It is a bittersweet moment to close her journey at KAIST, but “I want to thank KAIST for the opportunity to change my life for the better. I also thank my parents for being supportive and encouraging me. I really appreciate the professors from the Department of Industrial Design who guided and shaped who I am,” she said. Figure 1. The concept of the kimchi toolkit Figure 2. The scenario of the kimchi toolkit
2019.02.19
Professor Dong Ho Cho Awarded at the Haedong Conference 2017
Professor Dong Ho Cho of the School of Electrical Engineering at KAIST received an award at the 13th Haedong Conference 2017 in Seoul on the first of December. The Korean Institute of Communications and Information Sciences recognized Professor Cho for his significant contributions in the field of mobile communication networks. He has carried out groundbreaking research on mobile systems, including architecture, protocols, algorithms, optimization, and efficiency analysis. As a result, he has produced 73 papers in renowned international journals, 138 papers at international conferences, and filed 52 international patents and 121 domestic patents. In addition, he transferred 14 of the patents he filed to Korean and international companies.
2017.12.07
KAIST Opens Its Campus to the Public
KAIST hosted OPEN KAIST 2017 on the main campus from November 2 to 3, 2017. OPEN KAIST is a science and cultural event designed for students and the general public to experience and get a glimpse inside research labs. More than 10,000 visitors came to KAIST this year, with groups of families and students coming to experience various programs related to science. Twenty departments, including Mechanical Engineering, Aerospace Engineering, the Graduate School of Culture Technology, and Materials Science and Engineering, participated in the event, along with three research centers and the Public Relations Office. The event comprised a total of 70 programs in four sections: lab tours, research performance exhibitions, department introductions, and special lectures. The kick-off activity for the event was a trial game of the AI World Cup 2017, which will be hosted by KAIST in December 2017. Many people also visited the mobile health care showroom, where they could experience what a future smart home and hospital would look like. Also popular was a futuristic living space for one-person households that provides virtual reality services. KAIST hopes the event offers an opportunity for children and students to become better acquainted with science. Professor Jong-Hwan Kim, the Dean of the College of Engineering at KAIST, said, “OPEN KAIST is the one and only opportunity to visit and experience our research labs. KAIST will make every effort to take a step closer to the public by focusing on research that contributes to human society.”
2017.11.06
Sangeun Oh Recognized as a 2017 Google Fellow
Sangeun Oh, a PhD candidate in the School of Computing, was selected as a Google PhD Fellow in 2017, one of only 47 awardees of the Google PhD Fellowship worldwide. The Google PhD Fellowship recognizes students showing outstanding performance in computer science and related research fields. Since being established in 2009, the program has provided various benefits, including a $10,000 scholarship and one-to-one research discussions with mentors from Google. Oh’s work on a mobile system that allows interactions among various kinds of smart devices was recognized in the field of mobile computing. He developed a mobile platform that allows smart devices to share diverse functions, including logins, payments, and sensors. This technology provides numerous user experiences that existing mobile platforms could not offer. Through cross-device functionality sharing, users can utilize multiple smart devices in a more convenient manner. The research was presented at the Annual International Conference on Mobile Systems, Applications, and Services (MobiSys) of the Association for Computing Machinery in July 2017. Oh said, “I would like to express my gratitude to my advisor, the professors in the School of Computing, and my lab colleagues. I will devote myself to carrying out more research in order to contribute to society.” His advisor, Insik Shin, a professor in the School of Computing, said, “Being recognized as a Google PhD Fellow is an honor to both the student and KAIST. I strongly anticipate and believe that Oh will take the next step by carrying out high-quality research.”
2017.09.27
Multi-Device Mobile Platform for App Functionality Sharing
Case 1. Mr. Kim, an employee, logged on to his SNS account using a tablet PC at the airport while traveling overseas. However, a malicious virus was installed on the tablet PC, and some photos posted on his SNS were deleted by someone else. Case 2. Mr. and Mrs. Brown are busy contacting credit card and game companies because their son, who likes games, purchased a million dollars’ worth of game items using his smartphone. Case 3. Mr. Park, who enjoys games, bought a sensor-based racing game through his tablet PC. However, he could not enjoy the racing game on his tablet because it was not comfortable to tilt the device for game control. The above cases illustrate some of the various problems that can arise in a modern society filled with diverse smart devices, including smartphones. Recently, new technology has been developed to easily solve these problems. Professor Insik Shin from the School of Computing has developed ‘Mobile Plus,’ a mobile platform that can share the functionalities of applications between smart devices. This novel technology allows applications to share their functionalities easily, without needing any modifications. Smartphone users often use Facebook to log in to another SNS account like Instagram, or use a gallery app to post photos on their SNS. These examples are possible because the applications share their login and photo management functionalities. Functionality sharing enables users to utilize smartphones in various, convenient ways and allows app developers to easily create applications. However, current mobile platforms such as Android and iOS only support functionality sharing within a single mobile device. It is burdensome for both developers and users to share functionalities across devices, because developers would need to create more complex applications and users would need to install the applications on each device.
To address this problem, Professor Shin’s research team developed platform technology to support functionality sharing between devices. The main concept is using virtualization to give the illusion that applications running on separate devices are on a single device. They achieved this virtualization by extending an RPC (Remote Procedure Call) scheme to multi-device environments. This virtualization technology enables existing applications to share their functionalities without needing any modifications, regardless of the type of application, so users can employ them without additional purchases or updates. Mobile Plus can support hardware functionalities like cameras, microphones, and GPS, as well as application functionalities such as logins, payments, and photo sharing. Its greatest advantage is its wide range of possible applications. Professor Shin said, "Mobile Plus is expected to have great synergy with smart home and smart car technologies. It can provide novel user experiences (UXs) so that users can easily utilize various applications of smart home/vehicle infotainment systems by using a smartphone as their hub." This research was presented at ACM MobiSys, an international conference on mobile computing hosted in the United States on June 21. Figure 1. Users can securely log on to SNS accounts by using their personal devices. Figure 2. Parents can control impulse shopping by their children. Figure 3. Users can more fully enjoy games by using a smartphone as a controller.
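The RPC-style virtualization idea described above, where a proxy on one device forwards calls to functionality that actually runs on another device, can be sketched with a toy model. The in-process "transport" and all class names are illustrative assumptions, not Mobile Plus APIs; in the real platform the boundary between devices is a network channel.

```python
# Toy sketch of cross-device functionality sharing via RPC-like forwarding:
# the caller uses a local proxy exactly as if the service were on its own
# device, which is the "single device" illusion virtualization provides.

class RemoteDevice:
    """Stands in for another device hosting shared functionalities."""
    def __init__(self):
        self._services = {}

    def register(self, name, obj):
        self._services[name] = obj

    def call(self, service, method, *args, **kwargs):
        # In a real system, this boundary would be a network transport.
        return getattr(self._services[service], method)(*args, **kwargs)

class Proxy:
    """Local stand-in that transparently forwards calls to the remote device."""
    def __init__(self, device, service):
        self._device, self._service = device, service

    def __getattr__(self, method):
        def stub(*args, **kwargs):
            return self._device.call(self._service, method, *args, **kwargs)
        return stub

class PhotoService:
    """Example functionality (a photo library) living on the remote device."""
    def list_photos(self):
        return ["trip.jpg", "family.jpg"]
```

For example, a gallery app on a tablet could hold a `Proxy` to the phone's photo service and call `list_photos()` as if the photos were stored locally, which is the kind of unmodified cross-device sharing the article describes.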
2017.08.09
Crowdsourcing-Based Global Indoor Positioning System
A research team led by Professor Dong-Soo Han of the Intelligent Service Lab in the School of Computing at KAIST has developed a system for global indoor localization using Wi-Fi signals. The technology uses numerous smartphones to collect fingerprints of location data and label them automatically, significantly reducing the cost of constructing an indoor localization system while maintaining high accuracy. The method can be used in any building in the world, provided the floor plan is available and there are Wi-Fi fingerprints to collect. To accurately collect and label the location information of the Wi-Fi fingerprints, the research team analyzed indoor space utilization. This led to technology that classifies indoor spaces into places used for stationary tasks (resting spaces) and spaces used to reach those places (transient spaces), and applies separate algorithms to optimally and automatically collect location-labeling data. Years ago, the team implemented a way to automatically label resting-space locations from signals collected in various contexts such as homes, shops, and offices, using the users’ home or office address information. The latest method allows for the automatic labeling of transient-space locations such as hallways, lobbies, and stairs using unsupervised learning, without any additional location information. Testing in KAIST’s N5 building and on the 7th floor of the N1 building demonstrated that the technology achieves an accuracy of three to four meters given enough training data, a level comparable to technology using manually labeled location information. Google, Microsoft, and other multinational corporations have collected tens of thousands of floor plans for their indoor localization projects. The firms also attempted indoor radio map construction, but this proved more difficult. As a result, existing indoor localization services have often been plagued by inaccuracies.
In Korea, COEX, Lotte World Tower, and other landmarks provide comparatively accurate indoor localization, but most buildings lack radio maps, preventing indoor localization services. Professor Han said, “This technology allows the easy deployment of highly accurate indoor localization systems in any building in the world. In the near future, most indoor spaces will be able to provide localization services, just like outdoor spaces.” He further noted that smartphone-collected Wi-Fi fingerprints have gone unutilized and are often discarded, but should now be treated as invaluable resources, creating a new big-data field of Wi-Fi fingerprints. This new indoor navigation technology is likely to be valuable to Google, Apple, and other global firms providing indoor positioning services worldwide, as well as to domestic firms providing positioning services. Professor Han added that “the new global indoor localization system deployment technology will be added to KAILOS, KAIST’s indoor localization system.” KAILOS was released in 2014 as KAIST’s open platform for indoor localization services, allowing anyone in the world to add floor plans to KAILOS and collect a building’s Wi-Fi fingerprints for a universal indoor localization service. As localization accuracy improves in indoor environments, despite the absence of GPS signals, applications such as location-based SNS, location-based IoT, and location-based O2O are expected to take off, leading to various improvements in convenience and safety. Integrated indoor-outdoor navigation services, fusing vehicular navigation technology with indoor navigation, are also visible on the horizon. Professor Han’s research was published in IEEE Transactions on Mobile Computing (TMC) in November 2016. For more, please visit http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7349230 or http://ieeexplore.ieee.org/document/7805133/
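The core of Wi-Fi fingerprint positioning described above can be sketched in a few lines: each labeled fingerprint maps access-point signal strengths (RSSI, in dBm) to an indoor location, and a new scan is located at its nearest stored fingerprint. The tiny radio map and the average-difference metric here are illustrative assumptions; the actual system builds its radio map from automatically labeled crowdsourced fingerprints.

```python
# Illustrative sketch of fingerprint-based indoor localization:
# match a fresh Wi-Fi scan against a radio map of labeled fingerprints.

def fingerprint_distance(scan, fingerprint):
    """Average absolute RSSI difference over the access points both share."""
    shared = scan.keys() & fingerprint.keys()
    if not shared:
        return float("inf")  # no APs in common: incomparable
    return sum(abs(scan[ap] - fingerprint[ap]) for ap in shared) / len(shared)

def locate(scan, radio_map):
    """Return the location label of the closest fingerprint in the radio map."""
    return min(radio_map, key=lambda loc: fingerprint_distance(scan, radio_map[loc]))
```

With a radio map containing fingerprints for, say, a lobby and a hallway, a scan whose RSSI values sit close to the lobby's fingerprint is resolved to the lobby; denser radio maps are what push accuracy toward the three-to-four-meter level reported above.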
2017.04.06
GSIS Graduates Its First Doctor
The Graduate School of Information Security (GSIS) at KAIST granted its first doctoral degree to Il-Goo Lee at the university’s 2016 commencement on February 19, 2016. Lee received the degree for his dissertation entitled “Interference-Aware Secure Communications for Wireless LANs.” He explained the background of his research: “As we use wireless technology more and more in areas such as the Internet of Things (IoT), unmanned vehicles, and drones, information security will become an issue of major concern. I would like to contribute to the advancement of communications technology to help minimize wireless interference between devices while ensuring their optimal performance.” Based on his research, he developed a communications technique to increase wireless devices’ energy efficiency and level of security, and created a prototype to showcase that technique. He plans to continue his research in the development of next-generation WiFi chipsets to protect the information security of IoT and wireless devices. Since its establishment in March 2011, KAIST’s GSIS has conferred 50 master’s degrees and one doctoral degree.
2016.02.18
KAIST and Four Science and Technology Universities Host a Start-up Competition
KAIST and four other science and technology universities, namely the Gwangju Institute of Science and Technology (GIST), the Ulsan National Institute of Science and Technology (UNIST), the Daegu Gyeongbuk Institute of Science and Technology (DGIST), and Pohang University of Science and Technology (POSTECH), hosted a startup competition on November 27, 2015 at the Dongdaemun Design Plaza in Seoul. Approximately 150 participants, including students from the five universities, "angel" investors, and entrepreneurs, attended the competition. The competition was held to promote startups based on research achievements in science and technology and to foster entrepreneurs with great potential. Two hundred and sixty applicants from 81 teams competed this year, and only ten teams made it to the finals. KAIST students presented two business plans: an experience-centered education platform and a mobile taxi-pooling service. Students from other universities presented brain-stimulation simulation software (GIST), a handy smart health trainer (GIST), a real-time reporting system for luggage (DGIST), a flower delivery system (UNIST), a surveillance and alarm system for stock-related events via machine learning (UNIST), emotion toys using augmented reality (POSTECH), and a nasal spray for fine dust prevention (POSTECH). KAIST also displayed an exhibition of a “wearable haptic device for multimedia contents” and a “next-generation recommendation service platform based on a one-on-one matching system with high expandability and an improved user experience system.” The winning team received an award from the Minister of Science, ICT and Future Planning of Korea, as well as an opportunity to participate in overseas startup programs over the course of ten days. Joongmyeon Bae, Director of KAIST Industry and University Cooperation, who organized the contest, said, “The alumni of Stanford University (USA) have created over 5.4 million jobs through startup activities.
Likewise, we hope that our event will contribute to job creation by fostering innovative entrepreneurs.”
2015.11.26
Professor Junehwa Song Appointed as the General Chair of the Organizing Committee of ACM SenSys
Professor Junehwa Song from the School of Computing at KAIST has been appointed general chair of the organizing committee of ACM SenSys, the Association for Computing Machinery (ACM) Conference on Embedded Networked Sensor Systems. ACM SenSys held its first conference in 2003 to promote research on wireless sensor networks and embedded systems. Since then, it has grown into an influential international conference, especially with the increasing importance of sensor technologies. Recently the committee has expanded its fields of interest to mobile sensing, the Internet of Things, smart device systems, and security. Professor Song is considered a world-renowned researcher in mobile and ubiquitous computing systems. He has presented numerous research papers at various conferences organized by ACM, and is a member of the editorial board of the Institute of Electrical and Electronics Engineers (IEEE) Transactions on Mobile Computing journal. For his achievements in the field and his flair for coordinating and planning conferences, he is now the first Korean researcher to be appointed chair of ACM SenSys. Professor Song said that, as the chair, he would help discover new technology in, and applications of, networked wireless sensors to meet the demands of modern society. The 13th ACM SenSys will take place in Seoul, the first to be held in Asia. The event will begin on November 1, 2015 and last four days. More information about this year’s event can be found at http://sensys.acm.org/2015/.
2015.10.02