KAIST
NEWS
ACM
Sound-based Touch Input Technology for Smart Tables and Mirrors
(From left: MS candidate Anish Byanjankar, Research Assistant Professor Hyosu Kim, and Professor Insik Shin)

Time passes quickly, especially in the morning. Your hands are busy brushing your teeth and checking the weather on your smartphone. You might wish your mirror could turn into a touch screen and free up your hands. That wish may soon come true. A KAIST team has developed a smartphone-based touch sound localization technology that facilitates ubiquitous interaction, turning objects like furniture and mirrors into touch input tools. The technology analyzes the sound generated when a user touches a surface and identifies the location of the touch input. For instance, users can turn nearby tables or walls into virtual keyboards and write lengthy e-mails much more conveniently, using only the built-in microphone of their smartphones or tablets. Family members can also enjoy a virtual chessboard or other board games on their dining tables. Additionally, traditional smart devices such as smart TVs or mirrors, which provide only simple display functions, can play a smarter role with the addition of touch input support (see the image below).

Figure 1. Examples of the touch input technology: using only a smartphone, surrounding objects can serve as a touch screen anytime and anywhere.

The most important requirement for a sound-based touch input method is identifying the location of touch inputs precisely (within an error of about 1 cm). Meeting this requirement is challenging, however, mainly because the technology must work in diverse and dynamically changing environments. Users may employ objects like desks, walls, or mirrors as touch input tools, and the surrounding conditions (e.g., the location of nearby objects or the ambient noise level) can vary. These environmental changes can affect the characteristics of touch sounds.
To address this challenge, Professor Insik Shin of the School of Computing and his team focused on the fundamental properties of touch sounds, especially how they are transmitted through solid surfaces. On solid surfaces, sound undergoes dispersion: different frequency components travel at different speeds. Based on this phenomenon, the team observed that the time difference of arrival (TDoA) between frequency components increases in proportion to the sound transmission distance, and that this linear relationship is not affected by variations in the surrounding environment.

Building on these observations, Research Assistant Professor Hyosu Kim proposed a novel sound-based touch input technology that records touch sounds transmitted through solid surfaces and then runs a simple calibration process to identify the relationship between the TDoA and the sound transmission distance, finally achieving accurate touch input localization.

The accuracy of the proposed system was then measured. The average localization error was below about 0.4 cm on a 17-inch touch screen. In particular, the error remained under 1 cm across a variety of objects, such as wooden desks, glass mirrors, and acrylic boards, even when the positions of nearby objects and the noise level changed dynamically. User studies also showed positive responses on all measured factors, including user experience and accuracy.

Professor Shin said, “This is a novel touch interface technology that enables touch input simply by installing three to four microphones, so it can easily turn nearby objects into touch screens.”

The proposed system was presented at ACM SenSys, a top-tier conference in the field of mobile computing and sensing, and was selected as a best paper runner-up in November 2018.

(The demonstration video of the sound-based touch input technology)
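The pipeline described above, calibrating a linear TDoA-to-distance mapping and then intersecting the per-microphone distance estimates, can be sketched as follows. This is an illustrative reconstruction, not the team's code; the function names and the least-squares circle-intersection formulation are assumptions.

```python
import numpy as np

def fit_calibration(tdoas, distances):
    """Least-squares fit of the linear TDoA-distance relationship
    the team observed: distance = a * tdoa + b."""
    a, b = np.polyfit(tdoas, distances, 1)
    return a, b

def localize(mic_positions, tdoas, a, b):
    """Estimate a 2D touch position from per-microphone TDoA readings.

    Each TDoA gives an estimated distance r_i to microphone i; the
    touch point is the least-squares intersection of those circles,
    linearized by subtracting the first circle equation from the rest.
    """
    mics = np.asarray(mic_positions, dtype=float)
    r = a * np.asarray(tdoas, dtype=float) + b
    x0, y0 = mics[0]
    # (x - xi)^2 + (y - yi)^2 = ri^2; subtracting the i=0 equation gives
    # 2(x0 - xi)x + 2(y0 - yi)y = ri^2 - r0^2 + x0^2 - xi^2 + y0^2 - yi^2
    A = 2 * (mics[0] - mics[1:])
    c = (r[1:] ** 2 - r[0] ** 2
         + x0 ** 2 - mics[1:, 0] ** 2
         + y0 ** 2 - mics[1:, 1] ** 2)
    pos, *_ = np.linalg.lstsq(A, c, rcond=None)
    return pos
```

With three or more microphones at known positions and a calibrated (a, b) pair, the over-determined system absorbs small per-microphone errors, which is consistent with the sub-centimeter accuracy reported above.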
2018.12.26
It's Time to 3D Sketch with Air Scaffolding
People often use their hands when describing an object, while pens are great tools for describing objects in detail. Combining these ideas, a KAIST team introduced a new 3D sketching workflow that brings together the strengths of hand and pen input. The technique eases ideation in three dimensions, leading to product design that is more efficient in terms of time and cost.

For a designer's drawing to become a real product, the 2D drawing must be transformed into a 3D shape; however, it is difficult to infer an accurate 3D shape that matches the original intention from an imprecise hand-made 2D drawing. Creating a 3D shape from a planar 2D drawing requires information the drawing does not contain; conversely, depth information is lost when a 3D shape is expressed as a 2D drawing using perspective techniques. To fill in these "missing links" during the conversion, "3D sketching" techniques have been actively studied. Their main purpose is to help designers naturally provide the missing 3D shape information in a 2D drawing. For example, if a designer draws two symmetric curves from a single point of view, or draws the same curve from different points of view, the geometric clues left in the process are collected and mathematically interpreted to define the proper 3D curve. As a result, designers can use 3D sketching to directly draw a 3D shape as if using pen and paper.

Among 3D sketching tools, sketching with hand motions, particularly in VR environments, has drawn attention because it is easy and quick. Its biggest limitation, however, is that designs cannot be articulated with rough hand motions alone, making such tools difficult to apply to product design. Moreover, users may tire from holding their hands in the air throughout the drawing process.
To use hand motions while still producing elaborate designs, Professor Seok-Hyung Bae and his team from the Department of Industrial Design integrated hand motions and pen-based sketching, allocating roles according to their strengths. The new technique is called Agile 3D Sketching with Air Scaffolding: designers use hand motions in the air to create rough 3D shapes that serve as scaffolds, and then add details with pen-based 3D sketching on a tablet (Figure 1).

Figure 1. In the agile 3D sketching workflow with air scaffolding, the user (a) makes unconstrained hand movements in the air to quickly generate rough shapes to be used as scaffolds, (b) uses the scaffolds as references and draws finer details on them, and (c) produces a high-fidelity 3D concept sketch of a steering wheel in an iterative and progressive manner.

The team devised an algorithm that identifies descriptive hand motions among transitory ones, extracting only the intended shapes from unconstrained hand motion and building air scaffolds from the identified motions. User tests showed that the technique is easy to learn and use and has good applicability. Most importantly, users can save time while improving the accuracy with which the proportion and scale of products are defined. The tool could eventually be applied in various fields, including the automobile industry, home appliances, animation and filmmaking, and robotics. It can also be linked to smart production technologies, such as 3D printing, to make manufacturing faster and more flexible.

PhD candidate Yongkwan Kim, who led the research project, said, “I believe the system will enhance product quality and work efficiency because designers can express their 3D ideas quickly yet accurately without using complex 3D CAD modeling software.
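The article does not describe how the algorithm separates descriptive motions from transitory ones. One common heuristic for this kind of segmentation, shown purely as a hypothetical sketch and not the authors' method, is to treat slow, deliberate samples of the hand trace as shape-describing and fast samples as transitions:

```python
import numpy as np

def split_descriptive_segments(points, timestamps, speed_thresh=0.3):
    """Toy segmentation of a hand-motion trace (hypothetical, not the
    paper's algorithm): samples moving slower than speed_thresh (m/s)
    are treated as deliberate, shape-describing motion; faster samples
    are treated as transitions between shapes.

    Returns a list of index arrays, one per descriptive segment.
    """
    pts = np.asarray(points, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # Per-sample speed from finite differences; sample 0 has no speed.
    speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) / np.diff(t)
    slow = np.concatenate([[False], speeds < speed_thresh])
    segments, current = [], []
    for i, is_slow in enumerate(slow):
        if is_slow:
            current.append(i)
        elif current:
            segments.append(np.array(current))
            current = []
    if current:
        segments.append(np.array(current))
    return segments
```

Each returned segment could then be fit with a rough surface to act as a scaffold for subsequent pen strokes.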
I will make it into a product that every designer wants to use in various fields.” “There have been many attempts to encourage creative activities in various fields using advanced computer technology. Based on an in-depth understanding of designers, we will take the lead in innovating the design process by applying cutting-edge technology,” Professor Bae added.

Professor Bae and his team from the Department of Industrial Design have long been developing better 3D sketching tools. They started with ILoveSketch, a 3D curve sketching system for professional designers, and moved on to SketchingWithHands, for designing handheld products using first-person hand postures captured by a hand-tracking sensor. They then took the project to the next level with Agile 3D Sketching with Air Scaffolding, a new 3D sketching workflow combining hand motion and pen drawing, which was chosen as one of the CHI (Conference on Human Factors in Computing Systems) 2018 Best Papers by the Association for Computing Machinery.

- Click the link to watch a video clip of SketchingWithHands
2018.07.25
A New Theory Improves Button Designs
Pressing a button appears effortless, and people easily dismiss how challenging it actually is. Researchers at KAIST and Aalto University in Finland created detailed simulations of button pressing with the goal of producing human-like presses. The researchers argue that a key capability of the brain is a probabilistic model: the brain learns a model that allows it to predict a suitable motor command for a button, and if a press fails, it can pick a good alternative and try it out.

"Without this ability, we would have to learn to use every button as if it were new," says Professor Byungjoo Lee from the Graduate School of Culture Technology at KAIST. After successfully activating the button, the brain can tune the motor command to be more precise, use less energy, and avoid stress or pain. "These factors together, with practice, produce the fast, minimum-effort, elegant touch people are able to perform."

The brain also uses probabilistic models to extract information optimally from the sensations that arise when the finger moves and its tip touches the button. It "enriches" these ephemeral sensations based on prior experience to estimate the moment the button was impacted. For example, the tactile sensation from the tip of the finger is a better predictor of button activation than proprioception (joint angle) or visual feedback; the best performance is achieved when all sensations are considered together. To adapt, the brain must fuse their information using prior experience. Professor Lee explains, "We believe that the brain picks up these skills over repeated button pressings that start already as a child. What appears easy for us now has been acquired over years."

The research was triggered by admiration of our remarkable ability to adapt button pressing. Professor Antti Oulasvirta of Aalto University said, "We push a button on a remote controller differently than a piano key.
The press of a skilled user is surprisingly elegant when looked at in terms of timing, reliability, and energy use. We successfully press buttons without ever knowing their inner workings; a button is essentially a black box to our motor system. On the other hand, we also fail to activate buttons, and some buttons are known to be worse than others."

Previous research has shown that touch buttons are worse than push-buttons, but there has been no adequate theoretical explanation. "In the past, very little attention has been paid to buttons, although we use them all the time," says Dr. Sunjun Kim from Aalto University.

The new theory and simulations can be used to design better buttons. "One exciting implication of the theory is that activating the button at the moment when the sensation is strongest will help users better pace their keypresses." To test this hypothesis, the researchers created a new method for activating buttons, called Impact Activation. Instead of activating the button at first contact, it activates the button when the button cap or finger hits the floor with maximum impact. The technique was 94% better in rapid tapping than the regular activation method for a push-button (a Cherry MX switch) and 37% better than a regular touchscreen button using a capacitive touch sensor. The technique can easily be deployed on touchscreens. Regular physical keyboards, however, do not offer the required sensing capability, although special products exist (e.g., the Wooting keyboard) on which it can be implemented.

The simulations shed new light on what happens during a button press. One problem the brain must overcome is that muscles do not activate exactly as willed; every press is slightly different. Moreover, a button press is very fast, occurring within 100 milliseconds, which is too fast to correct the movement mid-press.
The key to understanding button pressing is therefore to understand how the brain adapts based on the limited sensations that are the residue of the brief press event. The researchers also used the simulation to explain differences between physical and touchscreen-based buttons. Both provide a clear tactile signal from the impact of the fingertip with the button floor, but with a physical button this signal is more pronounced and longer. "Where the two button types also differ is the starting height of the finger, and this makes a difference," explains Professor Lee. "When we lift the finger from a touchscreen, it ends up at a different height every time. Its down-press cannot be controlled in time as accurately as with a push-button, where the finger can rest on top of the key cap."

Three scientific articles, "Neuromechanics of a Button Press," "Impact Activation Improves Rapid Button Pressing," and "Moving Target Selection: A Cue Integration Model," will be presented at the CHI Conference on Human Factors in Computing Systems in Montréal, Canada, in April 2018.
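The Impact Activation idea described above, firing at the peak of the impact rather than at first contact, can be illustrated with a short sketch. The signal processing below is a minimal assumed model (a sampled force or acceleration trace and a simple peak scan), not the researchers' implementation:

```python
def first_contact(signal, thresh=0.1):
    """Conventional activation: fire at the first sample whose
    impact magnitude exceeds the contact threshold."""
    for i, v in enumerate(signal):
        if v > thresh:
            return i
    return None

def impact_activation(signal, thresh=0.1):
    """Impact Activation (sketch of the idea in the article): fire at
    the moment of maximum impact, i.e. the peak of the press transient,
    when the finger or key cap hits the floor hardest."""
    i = first_contact(signal, thresh)
    if i is None:
        return None
    # Scan forward to the local peak of the impact transient.
    while i + 1 < len(signal) and signal[i + 1] >= signal[i]:
        i += 1
    return i
```

Firing at the peak aligns the activation with the strongest tactile sensation, which is exactly the moment the theory predicts users perceive most reliably.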
2018.03.22
Multi-Device Mobile Platform for App Functionality Sharing
Case 1. Mr. Kim, an employee, logged in to his SNS account using a tablet PC at the airport while traveling overseas. However, a malicious virus was installed on the tablet, and some of the photos posted on his SNS were deleted by someone else.

Case 2. Mr. and Mrs. Brown are busy contacting credit card and game companies because their son, who loves games, purchased a million dollars' worth of game items on his smartphone.

Case 3. Mr. Park, who enjoys games, bought a sensor-based racing game on his tablet PC. However, he could not enjoy the racing game on the tablet because tilting the device for game control was uncomfortable.

These cases illustrate some of the problems that can arise in a society filled with diverse smart devices, including smartphones. Recently, new technology has been developed to solve these problems easily. Professor Insik Shin from the School of Computing has developed 'Mobile Plus,' a mobile platform that can share the functionalities of applications between smart devices. This novel technology allows applications to share their functionalities easily, without requiring any modifications.

Smartphone users often use Facebook to log in to another SNS account such as Instagram, or use a gallery app to post photos on their SNS. These examples are possible because the applications share their login and photo management functionalities. Functionality sharing lets users make use of smartphones in varied and convenient ways, and allows app developers to create applications more easily. However, current mobile platforms such as Android and iOS only support functionality sharing within a single device. Sharing functionalities across devices is burdensome for both developers and users: developers would need to build more complex applications, and users would need to install the applications on each device.
To address this problem, Professor Shin's research team developed platform technology that supports functionality sharing between devices. The main idea is to use virtualization to give the illusion that applications running on separate devices are on a single device. The team achieved this virtualization by extending an RPC (Remote Procedure Call) scheme to multi-device environments. The virtualization enables existing applications to share their functionalities without any modifications, regardless of the type of application, so users can use them without additional purchases or updates. Mobile Plus can support hardware functionalities such as cameras, microphones, and GPS, as well as application functionalities such as logins, payments, and photo sharing. Its greatest advantage is this wide range of possible applications.

Professor Shin said, "Mobile Plus is expected to have great synergy with smart home and smart car technologies. It can provide novel user experiences (UXs) so that users can easily utilize the various applications of smart home and vehicle infotainment systems by using a smartphone as their hub."

This research was presented at ACM MobiSys, an international conference on mobile computing hosted in the United States, on June 21.

Figure 1. Users can securely log in to SNS accounts by using their personal devices. Figure 2. Parents can control their children's impulse purchases. Figure 3. Users can enjoy games even more by using the smartphone as a controller.
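The core idea, making a function on one device callable from another through an RPC layer so the caller sees a single device, can be sketched in a few lines. This is a toy illustration of cross-device RPC using Python's standard xmlrpc module, not the Mobile Plus implementation; the service name, port, and take_photo function are invented for the example:

```python
# One process stands in for both devices: a "camera" service on the
# remote device is invoked transparently from the local device.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def take_photo():
    # Stand-in for a hardware functionality on the remote device.
    return "photo-bytes"

# "Remote device": expose the camera function over RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 8901), logRequests=False)
server.register_function(take_photo)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Local device": the proxy makes the remote function look local.
remote = ServerProxy("http://127.0.0.1:8901")
print(remote.take_photo())  # prints "photo-bytes"
```

In Mobile Plus the equivalent forwarding happens inside the platform, which is why unmodified applications can share functionalities: the app simply calls what looks like a local service.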
2017.08.09
Students from Science Academies Shed a Light on KAIST
Recent KAIST statistics show that graduates of science academies distinguish themselves not only through their academic performance at KAIST but also in various professional careers after graduation. Every year, approximately 20% of newly enrolled KAIST students come from science academies. In the class of 2017, 170 students from science academies accounted for 22% of newly enrolled students, and they form a top-tier student group on campus. As the table below shows, the ratio of students graduating early, either to enroll in graduate programs or to take a job, indicates their excellent performance at KAIST.

There are eight science academies in Korea, including the Korea Science Academy of KAIST in Busan, Seoul Science High School, Gyeonggi Science High School, Gwangju Science High School, Daejeon Science High School, Sejong Academy of Science and Arts, and Incheon Arts and Sciences Academy.

Recently, KAIST analyzed the 532 university graduates of the class of 2012. Of the 63 graduates who had attended science academies, 23 finished their degrees early, an early-graduation ratio of 36.5%. This percentage was significantly higher than that of students from other high schools.

Among the notable graduates is a student who made headlines with a donation of 30 million KRW to KAIST, the largest donation from an enrolled student on record. His story goes back to when Android smartphones were first becoming widespread. Seung-Gyu Oh, then a student in the School of Electrical Engineering, felt that existing subway apps were inconvenient, so in 2015 he built his own app for finding the nearest subway lines. His app hit the market and ranked second in the subway app category, reaching approximately five million users and generating advertising revenue. After the successful launch of the app, Oh accepted an acquisition offer from Daum Kakao.
He then donated 30 million KRW to his alma mater. “Since high school, I have always felt that I received many benefits from my country and bore a heavy responsibility for it,” the alumnus of the Korea Science Academy and KAIST said. “I decided to make a donation to my alma mater, KAIST, because I wanted to return what I had received from my country.” After graduation, Oh joined the web firm Daum Kakao.

On May 24, 2017, the 41st International Collegiate Programming Contest, hosted by the Association for Computing Machinery (ACM) and sponsored by IBM, was held in Rapid City, South Dakota in the US. It is a prestigious contest that has been held annually since 1977. College students from around the world participate; in 2017, a total of 50,000 students from 2,900 universities in 104 countries took part in the regional competitions, and approximately 400 students made it to the final round, entering a fierce competition. A KAIST team also participated, comprising Ji-Hoon Ko, Jong-Won Lee, and Han-Pil Kang from the School of Computing, all alumni of Gyeonggi Science High School. They received the ‘First Problem Solver’ award and a bronze medal, which came with a 3,000 USD cash prize.

Sung-Jin Oh, who also graduated from the Korea Science Academy of KAIST, is a research professor at the Korea Institute for Advanced Study (KIAS). He is the youngest recipient of the ‘Young Scientist Award,’ which he received at the age of 27 for mathematically proving a hypothesis arising from Einstein’s theory of general relativity. After graduating from KAIST, Oh earned his master’s and doctoral degrees from Princeton University, completed a postdoctoral fellowship at UC Berkeley, and is now immersed in research at KIAS.
Heui-Kwang Noh from the Department of Chemistry and Kang-Min Ahn from the School of Computing, both selected for the presidential scholarship for science in 2014, graduated from Gyeonggi Science High School. Noh was recognized for his outstanding academic ability and was also chosen for the ‘GE Foundation Scholar-Leaders Program’ in 2015. The program, established in 1992 by the GE Foundation, aims to foster talented students. It targets post-secondary students who have both creativity and leadership, selecting five outstanding students and providing 3 million KRW per year for up to three years. Its grantees have become influential people in various fields, including professors, executives and staff members of national and international firms, and researchers, and they are making a major contribution to the development of science and engineering. Noh continues to pursue various activities, including an internship at Harvard-MIT Biomedical Optics and the publication of a paper (as third author) in ACS Omega, a journal of the American Chemical Society (ACS).

Ahn, a member of the Young Engineers Honor Society (YEHS) of the National Academy of Engineering of Korea, had an interest in startups. In 2015 he founded DataStorm, a firm specializing in data solutions, which merged with the cloud back-office company Jobis & Villains in 2016. Ahn is continuing his business activities, and this year he founded, and is successfully running, cocKorea.

“KAIST students who come from science academies form a top-tier group on campus and perform excellently,” said Associate Vice President for Admissions Hayong Shin. “KAIST is making every effort to assist these students so that they can perform to the best of their ability.”

(Clockwise from top left: Seung-Gyu Oh, Sung-Jin Oh, Heui-Kwang Noh and Kang-Min Ahn)
2017.08.09
KAIST Team Wins Bronze Medal at Int'l Programming Contest
A KAIST team consisting of undergraduate students from the School of Computing and the Department of Mathematical Sciences received a bronze medal and the First Problem Solver award at an international undergraduate programming competition, the Association for Computing Machinery International Collegiate Programming Contest (ACM-ICPC) World Finals. The 41st ACM-ICPC, hosted by the ACM and sponsored by IBM, was held in South Dakota in the US on May 25. The competition, first held in 1977, is aimed at undergraduate students from around the world. A total of 50,000 students from 2,900 universities and 103 countries participated in the regional competitions, and 400 students competed in the finals. The competition required teams of three to solve 12 problems. The KAIST team was coached by Emeritus Professor Sung-Yong Shin and Professor Taisook Han. The student contestants were Jihoon Ko and Hanpil Kang from the School of Computing and Jongwoon Lee from the Department of Mathematical Sciences. The team finished 9th, receiving a bronze medal and a $3,000 prize. Additionally, the team was the first to solve all the problems and received the First Problem Solver award. Detailed scores can be found at https://icpc.baylor.edu/scoreboard/. (Photo caption: Professor Taisook Han and his students)
2017.06.12
Professor Otfried Cheong Named as Distinguished Scientist by ACM
Professor Otfried Cheong (Schwarzkopf) of the School of Computing was named a 2016 Distinguished Scientist by the Association for Computing Machinery (ACM). The ACM recognized 45 Distinguished Members in the categories of Distinguished Scientist, Distinguished Educator, and Distinguished Engineer for their individual contributions to the field of computing. Professor Cheong is the sole recipient from a Korean institution. The recipients were selected from among the top 10 percent of ACM members with at least 15 years of professional experience and five years of continuous professional membership. He is known as one of the authors of the widely used textbook Computational Geometry: Algorithms and Applications and as the developer of Ipe, a vector graphics editor. Professor Cheong joined KAIST in 2005, after earning his doctorate from the Free University of Berlin in 1992. He previously taught at Utrecht University, Pohang University of Science and Technology, Hong Kong University of Science and Technology, and the Eindhoven University of Technology.
2017.04.17
Improving Traffic Safety with a Crowdsourced Traffic Violation Reporting App
KAIST researchers have shown that crowdsourced traffic violation reporting with smartphone-based continuous video capture can dramatically change current policing practices on the road and significantly improve traffic safety. Professor Uichin Lee of the Department of Industrial and Systems Engineering and the Graduate School of Knowledge Service Engineering at KAIST and his research team designed and evaluated Mobile Roadwatch, a mobile app that helps citizens record traffic violations with their smartphones and report the recorded videos to the police. The app supports continuous video recording, just like an onboard vehicle dashboard camera.

Mobile Roadwatch allows drivers to safely capture traffic violations by simply touching the smartphone screen while driving. The captured videos are automatically tagged with contextual information such as location and time, which serves as important evidence for the police when ticketing violators. All captured videos can be conveniently reviewed, allowing users to decide which events to report to the police.

The team conducted a two-week field study to understand how drivers use Mobile Roadwatch. They found that drivers tended to capture all traffic risks, regardless of their level of involvement or the seriousness of the risk. When it came to actual reporting, however, they tended to report only serious violations that could have led to accidents, such as traffic signal violations and illegal U-turns. After receiving feedback from the police about their reports, drivers typically felt very good about their contribution to traffic safety. Some drivers were also pleased to learn that offenders had been ticketed, feeling the tickets were deserved. While participating in the Mobile Roadwatch campaign, drivers reported that they tried to drive as safely as possible and abide by traffic laws.
This was because they wanted to be as fair as possible, so that they could capture others' violations without feeling guilty; they were also aware that other drivers might capture their own violations. Professor Lee said, “Our study participants answered that Mobile Roadwatch served as a very useful tool for reporting traffic violations, and they were highly satisfied with its features. Beyond simple reporting, our tool can be extended to support online communities, which help people actively discuss various local safety issues and work with the police and local authorities to solve them.”

Korea and India were early adopters of video-based reporting of traffic violations to the police, and in recent years the number of reports has increased dramatically. For example, Korea’s ‘Looking for a Witness’ (released in April 2015) had received more than half a million reported violations as of November 2016. In the US, authorities have started tapping into smartphone recordings by releasing video-based reporting apps such as ICE Blackbox and Mobile Justice. Professor Lee noted that these existing services cannot be used while driving, because none of them support continuous video recording and safe event capture behind the wheel.

Professor Lee’s team has been incorporating advanced computer vision techniques into Mobile Roadwatch to automatically capture traffic violations and safety risks, including potholes and obstacles. The researchers will present their results in May at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO, USA. The research was supported by the KAIST-KUSTAR fund.

(Caption: A driver captures an event by touching the screen. Mobile Roadwatch supports continuous video recording and safe event capturing behind the wheel.)
2017.04.10
Furniture That Learns to Move by Itself
A novel strategy for displacing large objects by attaching relatively small vibration sources: after learning how several random bursts of vibration affect an object's pose, an optimization algorithm discovers the sequence of vibration patterns required to (slowly but surely) move the object to a specified position.

Displacements of large objects induced by vibration are a common occurrence but generally result in unpredictable motion; think, for instance, of an unbalanced front-loading washing machine. For controlled movement, wheels or legs are usually preferred. Professor Daniel Saakes of the Department of Industrial Design and his team explored a strategy for moving everyday objects by harnessing external vibration rather than using a mechanical system with wheels. The principle may be useful for displacing large objects in situations where attaching wheels or lifting the object is impossible, assuming the speed of the process is not a concern. The team designed vibration modules that can be easily attached to furniture and objects, which could be a welcome creation for people with limited mobility, including the elderly. Embedding these vibration modules in mass-produced objects may provide a low-cost way to make almost any object mobile.

Vibration as a principle of directed locomotion has previously been applied in micro-robots. For instance, the three-legged Kilobots move thanks to centrifugal forces alternately generated by a pair of vibration motors on two of their legs: the unbalanced weight turns the robot into a ratchet, and the resulting motion is deterministic with respect to the input vibration. To the best of the team's knowledge, they are the first to add vibratory actuators that deterministically steer large objects regardless of their structural properties. The perturbation resulting from a particular pattern of vibration depends on a myriad of parameters, including but not limited to the microscopic properties of the contact surfaces.
The key challenge is to empirically discover and select the sequence of vibration patterns that brings the object to the target pose. The approach is as follows. In the first step, the system systematically explores the object's response by varying the amplitudes of the motors. This generates a pool of available moves (translations and rotations). From this pool it then calculates the most efficient way, in terms of either path length or number of moves, to go from pose A to pose B, using optimization strategies such as genetic algorithms. The learning process may be repeated from time to time to account for changes in the mechanical response, at least for the vibration patterns that contribute most to the change.

The prototype modules are made with eccentric rotating-mass motors (Precision Microdrives type 345-002) with a nominal force of 115 g, which proved sufficient to shake (and eventually locomote) four-legged IKEA chairs and small furniture such as tables and stools. The motors are powered by NiMH batteries and communicate wirelessly through a low-cost ESP8266 WiFi module. The team designed both modules that attach externally using straps and motors embedded in furniture. To study the general method, the team used an overhead camera to track the chair and generate the pool of available moves.

The team demonstrated that the system discovered pivot-like gaits, among others. However, as one can imagine, using a pre-computed sequence to move to a target pose does not produce perfect matches, because the contact properties vary with location. Although this can be considered a secondary disturbance, in certain cases it may be necessary to recompute the matrix of moves every now and then; the chair could, for instance, move into a wet area, over a plastic carpet, and so on. The principle, and its application to furniture, is called "Ratchair," a portmanteau of "ratchet" and "chair."
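The planning step, choosing a sequence of learned moves that carries the object from pose A to pose B, can be sketched with a simple greedy planner. The team used optimization strategies such as genetic algorithms; the greedy variant below is an assumed simplification that illustrates the same idea of composing moves from the learned pool:

```python
import numpy as np

def plan_moves(move_pool, start, target, max_steps=200):
    """Greedy planning sketch: from a pool of learned moves, repeatedly
    apply whichever move brings the pose closest to the target.

    Poses and moves are (x, y, theta) triples; a move's translation is
    expressed in the object's local frame and rotated into the world
    frame before being applied.
    """
    def apply(pose, move):
        x, y, th = pose
        dx, dy, dth = move
        nx = x + dx * np.cos(th) - dy * np.sin(th)
        ny = y + dx * np.sin(th) + dy * np.cos(th)
        return (nx, ny, th + dth)

    def cost(pose):
        # Distance to target plus (unwrapped) heading error.
        return (np.hypot(pose[0] - target[0], pose[1] - target[1])
                + abs(pose[2] - target[2]))

    pose, plan = start, []
    for _ in range(max_steps):
        best = min(move_pool, key=lambda m: cost(apply(pose, m)))
        if cost(apply(pose, best)) >= cost(pose):
            break  # no move in the pool improves the pose further
        pose = apply(pose, best)
        plan.append(best)
    return plan, pose
```

A genetic algorithm would instead search over whole move sequences, which can escape the local minima a greedy planner gets stuck in; periodically re-learning the pool, as described above, compensates for changing contact properties.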
Ratchair was demonstrated at the ACM SIGGRAPH 2016 Emerging Technologies program and won the DCEXPO award, jointly organized by the Japanese Ministry of Economy, Trade and Industry (METI) and the Digital Content Association of Japan (DCAJ). At the DCEXPO Exhibition in fall 2016, the work was selected as one of 20 Innovative Technologies and was the only non-Japanese contribution. *This article is from the KAIST Breakthroughs, a research newsletter from the College of Engineering. For more stories from the KAIST Breakthroughs, please visit http://breakthroughs.kaist.ac.kr
http://mid.kaist.ac.kr/projects/ratchair/
http://s2016.siggraph.org/content/emerging-technologies
https://www.dcexpo.jp/ko/15184
Figure 1. The vibration modules embedded in and attached to furniture.
Figure 2. A close-up of the vibration module.
Figure 3. A close-up of the embedded modules.
Figure 4. A close-up of the vibration motor.
2017.03.23
Professor Naehyuck Chang Appointed a 2015 Fellow by the ACM
The Association for Computing Machinery (ACM), the world's largest educational and scientific computing society, announced its 2015 class of ACM Fellows on December 8, 2015. Professor Naehyuck Chang of the School of Electrical Engineering at KAIST was among the 42 new members recognized for their contributions to the development and application of computing in areas ranging from data management and spoken-language processing to robotics and cryptography. Professor Chang is known for his leading research in power and energy optimization, from embedded systems to large-scale energy systems, including device- and system-level power and energy measurement and estimation, liquid crystal display power reduction, dynamic voltage scaling, hybrid electrical energy storage systems, and photovoltaic cell arrays. He is the fourth Korean to be named an ACM Fellow. Professor Chang is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Editor-in-Chief of the journal ACM Transactions on Design Automation of Electronic Systems (TODAES). He served as the President of the ACM Special Interest Group on Design Automation in 2012. For additional information about the 2015 ACM Fellows, go to http://www.acm.org/press-room/news-releases/2015/fellows-2015
2015.12.11
Professor Junehwa Song Appointed as the General Chair of the Organizing Committee of ACM SenSys
Professor Junehwa Song of the School of Computing at KAIST has been appointed the general chair of the organizing committee of ACM SenSys, the Association for Computing Machinery (ACM) Conference on Embedded Networked Sensor Systems. ACM SenSys held its first conference in 2003 to promote research on wireless sensor networks and embedded systems. Since then, it has grown into an influential international conference, especially with the increasing importance of sensor technologies. Recently the committee has expanded its fields of interest to mobile sensing, the Internet of Things, smart device systems, and security. Professor Song is considered a world-renowned researcher in mobile and ubiquitous computing systems. He has presented numerous research papers at conferences organized by the ACM, and he is a member of the editorial board of the Institute of Electrical and Electronics Engineers (IEEE) Transactions on Mobile Computing. For his achievements in the field and his flair for coordinating and planning conferences, he is now the first Korean researcher to be appointed chair of ACM SenSys. Professor Song said that, as chair, he would help discover new technologies in, and applications of, networked wireless sensors that meet the demands of modern society. The 13th ACM SenSys will take place in Seoul, the first to be held in Asia. The event will begin on November 1, 2015 and last four days. More information about this year's event can be found at http://sensys.acm.org/2015/.
2015.10.02
KAIST Undergraduate Students Volunteer in Ethiopia
World Friends (WF), one of the undergraduate student clubs at KAIST, offers students opportunities to volunteer in underdeveloped regions and countries. This year the World Friends team will travel to Ethiopia from July 9 to August 17, 2015. The aim of the trip is to help Ethiopian students fill gaps in their knowledge of information technology and to encourage KAIST students to build leadership skills through volunteer activities. Twenty-eight students will make the trip. KAIST students will visit the Addis Ababa Institute of Technology and the Adama Science and Technology University, as well as some local high schools and elementary schools in Addis Ababa, where they will run computer classes covering the basics of information technology, such as the C language, Java programming, Photoshop, MS Office, and Windows. The volunteers will also offer Adama Science and Technology University students an advanced computer course to prepare them for the ACM-ICPC, an international computer programming competition for university students. In addition, KAIST students will introduce Korean culture to Ethiopian students, including K-pop, Korean cuisine and fashion, Korean language lessons, and traditional Korean art. The Dean of Student Affairs and Policy at KAIST, Professor Young-Hee Kim, said, "I hope the students from two very different cultures will cherish this opportunity to interact with each other and contribute to narrowing the regional disparities in the IT field."
2015.07.10