KAIST
NEWS
Military Combatants Usher in an Era of Personalized Training with New Materials
< Photo 1. (From left) Professor Steve Park of Materials Science and Engineering, Kyusoon Pak, Ph.D. Candidate (Army Major) >

Traditional military training often relies on standardized methods, which has limited the provision of training optimized for individual combatants' characteristics or specific combat situations. To address this, our research team developed an e-textile platform, securing core technology that can reflect the unique traits of individual combatants and various combat scenarios. The technology has proven robust enough for battlefield use and economical enough for widespread distribution to a large number of troops.

On June 25th, Professor Steve Park's research team at KAIST's Department of Materials Science and Engineering announced the development of a flexible, wearable electronic textile (e-textile) platform using an innovative technology that 'draws' electronic circuits directly onto fabric. The platform combines 3D printing with new materials engineering design to print flexible, highly durable sensors and electrodes directly onto textile substrates. This enables the collection of precise movement and body data from individual combatants, which can then be used to propose customized training models.

Existing e-textile fabrication methods were often complex or limited in their ability to provide personalized customization. To overcome these challenges, the research team adopted an additive manufacturing technique called Direct Ink Writing (DIW) 3D printing.

< Figure 1. Schematic diagram of e-textile manufactured with Direct Ink Writing (DIW) printing technology on various textiles, including combat uniforms >

This technique dispenses special ink, which functions as sensors and electrodes, directly onto textile substrates in desired patterns.
This allows various designs to be implemented flexibly, without the complex process of mask fabrication, and is expected to be an effective technology that can be supplied easily to hundreds of thousands of military personnel.

The core of this technology lies in high-performance functional inks based on advanced materials engineering design. The research team combined styrene-butadiene-styrene (SBS) polymer, which provides flexibility, with multi-walled carbon nanotubes (MWCNT) for electrical conductivity, developing a tensile/bending sensor ink that can stretch up to 102% and maintain stable performance even after 10,000 repetitive tests. This means accurate data can be obtained consistently even during the strenuous movements of combatants.

< Figure 2. Measurement of human movement and breathing patterns using e-textile >

Furthermore, new material technology was applied to implement 'interconnect electrodes' that electrically connect the upper and lower layers of the fabric. The team developed an electrode ink combining silver (Ag) flakes with rigid polystyrene (PS) polymer, precisely controlling the impregnation level (how far the ink penetrates the fabric) to effectively connect both sides or multiple layers of the fabric. This secures the technology for producing multi-layered wearable electronic systems integrating sensors and electrodes.

< Figure 3. Experimental results of recognizing unknown objects after machine learning on six objects using a smart glove >

The research team proved the platform's performance through human movement monitoring experiments. They printed the developed e-textile on major joint areas of clothing (shoulders, elbows, knees) and measured movements and posture changes in real time during various exercises such as running, jumping jacks, and push-ups.
Additionally, they demonstrated applications such as monitoring breathing patterns with a smart mask and, by printing multiple sensors and electrodes on gloves, recognizing objects through machine learning and perceiving complex tactile information. These results show that the developed e-textile platform is effective in precisely understanding the movement dynamics of combatants. This research is an important example of how cutting-edge new material technology can contribute to the advancement of the defense sector.

Major Kyusoon Pak of the Army, who participated in this research, factored practical requirements such as military applicability and economic feasibility into the work from the research design stage.

< Figure 4. Experimental results showing that a multi-layered e-textile glove connected with interconnect electrodes can measure tensile/bending signals and pressure signals at a single point >

Major Pak stated, "Our military is currently facing both a crisis and an opportunity due to the decrease in military personnel resources caused by the demographic cliff and the advancement of science and technology. Respect for life on the battlefield is also emerging as a significant issue. This research aims to secure original technology that can provide customized training according to military branch/duty and type of combat, thereby enhancing the combat power and ensuring the survivability of our soldiers." He added, "I hope this research will be evaluated as a case that achieved both scientific contribution and military applicability."

This research, in which Kyusoon Pak, Ph.D. Candidate (Army Major) from KAIST's Department of Materials Science and Engineering, participated as the first author and Professor Steve Park supervised, was published on May 27, 2025, in 'npj Flexible Electronics' (top 1.8% in JCR field), an international academic journal in the electrical, electronic, and materials engineering fields.
* Paper Title: Fabrication of Multifunctional Wearable Interconnect E-textile Platform Using Direct Ink Writing (DIW) 3D Printing
* DOI: https://doi.org/10.1038/s41528-025-00414-7

This research was supported by the Ministry of Trade, Industry and Energy and the National Research Foundation of Korea.
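As an illustration of the smart-glove object-recognition demonstration above, here is a minimal, stdlib-only Python sketch: a nearest-centroid classifier over five-channel strain-sensor vectors (one value per finger). The sensor values, object names, and classifier choice are all invented for illustration; the study's actual machine-learning pipeline is not described here.

```python
# Hypothetical sketch: classifying a grasped object from multi-channel
# strain-sensor readings of a smart glove. All numbers are invented.
import math

# Invented training centroids: mean normalized sensor vectors per object.
TRAINING = {
    "ball":   [0.82, 0.78, 0.80, 0.76, 0.70],
    "bottle": [0.40, 0.95, 0.92, 0.90, 0.45],
    "card":   [0.10, 0.15, 0.12, 0.10, 0.08],
}

def classify(reading):
    """Return the object whose stored centroid is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINING, key=lambda name: dist(TRAINING[name], reading))

print(classify([0.80, 0.77, 0.79, 0.75, 0.69]))  # closest to the "ball" centroid
```

A real deployment would train on many repeated grasps per object rather than a single stored centroid, but the distance-to-prototype idea is the same.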
2025.06.25
KAIST to Develop a Korean-style ChatGPT Platform Specifically Geared Toward Medical Diagnosis and Drug Discovery
On May 23rd, KAIST (President Kwang-Hyung Lee) announced that its Digital Bio-Health AI Research Center (Director: Professor JongChul Ye of the KAIST Kim Jaechul Graduate School of AI) has been selected for the Ministry of Science and ICT's 'AI Top-Tier Young Researcher Support Program (AI Star Fellowship Project).' With a total investment of ₩11.5 billion from May 2025 to December 2030, the center will embark on the full-scale development of AI technology and a platform capable of independently inferring and diagnosing diseases and discovering new drugs.

< Photo. On May 20th, a kick-off meeting for the AI Star Fellowship Project was held at KAIST Kim Jaechul Graduate School of AI’s Yangjae Research Center with the KAIST research team and participating organizations of Samsung Medical Center, NAVER Cloud, and HITS. [From left to right in the front row] Professor Jaegul Joo (KAIST), Professor Yoonjae Choi (KAIST), Professor Woo Youn Kim (KAIST/HITS), Professor JongChul Ye (KAIST), Professor Sungsoo Ahn (KAIST), Dr. Haanju Yoo (NAVER Cloud), Yoonho Lee (KAIST), HyeYoon Moon (Samsung Medical Center), Dr. Su Min Kim (Samsung Medical Center) >

This project aims to foster an innovative AI research ecosystem centered on young researchers and to develop an inferential AI agent that can utilize and automatically expand specialized knowledge systems in the bio and medical fields. Professor JongChul Ye of the Kim Jaechul Graduate School of AI will serve as the lead researcher, with young KAIST researchers including Professors Yoonjae Choi, Kimin Lee, Sungsoo Ahn, and Chanyoung Park, along with mid-career researchers such as Professors Jaegul Joo and Woo Youn Kim, jointly undertaking the project. They will collaborate with various laboratories within KAIST to conduct comprehensive research covering the entire cycle from the theoretical foundations of AI inference to its practical application.
Specifically, the main goals include:
- Building high-performance inference models that integrate diverse medical knowledge systems to enhance the precision and reliability of diagnosis and treatment.
- Developing a convergence inference platform that efficiently combines symbol-based inference with neural network models.
- Securing AI technology for new drug development and biomarker discovery based on 'cell ontology.'

Furthermore, through close collaboration with industry and medical institutions such as Samsung Medical Center, NAVER Cloud, and HITS Co., Ltd., the project aims to achieve:
- Clinical diagnostic AI utilizing medical knowledge systems.
- AI-based molecular target exploration for new drug development.
- Commercialization of an extensible AI inference platform.

Professor JongChul Ye, Director of KAIST's Digital Bio-Health AI Research Center, stated, "At a time when competition in AI inference model development is intensifying, it is a great honor for KAIST to lead the development of AI technology specialized in the bio and medical fields with world-class young researchers." He added, "We will do our best to ensure that the participating young researchers reach a world-leading level in terms of research achievements after the completion of this seven-year project starting in 2025."

The AI Star Fellowship is a newly established program in which post-doctoral researchers and faculty members within seven years of appointment participate as project leaders (PLs) to independently lead research. Multiple laboratories within a university and demand-side companies form a consortium to operate the program. Through this initiative, KAIST plans to nurture bio-medical convergence AI talent while promoting the commercialization of core technologies in collaboration with Samsung Medical Center, NAVER Cloud, and HITS.
2025.05.26
KAIST Develops Neuromorphic Semiconductor Chip that Learns and Corrects Itself
< Photo. The research team of the School of Electrical Engineering posing with the newly developed processor. (From center to the right) Professor Young-Gyu Yoon, Integrated Master's and Doctoral Program Students Seungjae Han and Hakcheon Jeong, and Professor Shinhyun Choi >

- Professor Shinhyun Choi and Professor Young-Gyu Yoon's joint research team from the School of Electrical Engineering developed a computing chip that can learn, correct errors, and process AI tasks
- The chip is equipped with high-reliability memristor devices with self-error-correction functions for real-time learning and image processing

Existing computer systems have separate data processing and storage devices, making them inefficient for processing complex data such as AI workloads. A KAIST research team has developed a memristor-based integrated system that processes information similarly to the way our brain does. It is now ready for application in various devices, including smart security cameras, allowing them to recognize suspicious activity immediately without relying on remote cloud servers, and medical devices that can help analyze health data in real time.

KAIST (President Kwang Hyung Lee) announced on January 17th that the joint research team of Professor Shinhyun Choi and Professor Young-Gyu Yoon of the School of Electrical Engineering has developed a next-generation neuromorphic semiconductor-based ultra-small computing chip that can learn and correct errors on its own.

< Figure 1. Scanning electron microscope (SEM) image of a computing chip equipped with a highly reliable selector-less 32×32 memristor crossbar array (left). Hardware system developed for real-time artificial intelligence implementation (right). >

What is special about this computing chip is that it can learn and correct errors arising from the non-ideal characteristics that were difficult to address in existing neuromorphic devices.
For example, when processing a video stream, the chip learns to automatically separate a moving object from the background, and it becomes better at this task over time. This self-learning ability has been proven by achieving accuracy comparable to ideal computer simulations in real-time image processing.

The research team's main achievement is a system that is both reliable and practical, going beyond the development of brain-like components. The team has developed the world's first memristor-based integrated system that can adapt to immediate environmental changes, presenting an innovative solution that overcomes the limitations of existing technology.

< Figure 2. Background and foreground separation results of an image containing non-ideal characteristics of memristor devices (left). Real-time image separation results through on-device learning using the memristor computing chip developed by our research team (right). >

At the heart of this innovation is a next-generation semiconductor device called a memristor*. The variable resistance characteristics of this device can replace the role of synapses in neural networks, allowing data storage and computation to be performed simultaneously, just like our brain cells.

*Memristor: A portmanteau of 'memory' and 'resistor'; a next-generation electrical device whose resistance value is determined by the amount and direction of charge that has flowed between its two terminals in the past.

The research team designed a highly reliable memristor that can precisely control resistance changes and developed an efficient system that eliminates complex compensation processes through self-learning. This study is significant in that it experimentally verified the commercial viability of a next-generation neuromorphic semiconductor-based integrated system that supports real-time learning and inference.
This technology will revolutionize the way artificial intelligence is used in everyday devices, allowing AI tasks to be processed locally without relying on remote cloud servers, making them faster, more privacy-protective, and more energy-efficient.

"This system is like a smart workspace where everything is within arm's reach instead of having to go back and forth between desks and file cabinets," explained KAIST researchers Hakcheon Jeong and Seungjae Han, who led the development of this technology. "This is similar to the way our brain processes information, where everything is processed efficiently at once in one spot."

The research was conducted with Hakcheon Jeong and Seungjae Han, students of the Integrated Master's and Doctoral Program at the KAIST School of Electrical Engineering, as co-first authors. The results were published online in the international academic journal Nature Electronics on January 8, 2025.

*Paper title: Self-supervised video processing with self-calibration on an analogue computing platform based on a selector-less memristor array (https://doi.org/10.1038/s41928-024-01318-6)

This research was supported by the Next-Generation Intelligent Semiconductor Technology Development Project, the Excellent New Researcher Project, and the PIM AI Semiconductor Core Technology Development Project of the National Research Foundation of Korea, and by the Electronics and Telecommunications Research Institute Research and Development Support Project of the Institute of Information & Communications Technology Planning & Evaluation.
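The memristor crossbar at the heart of such chips performs vector-matrix multiplication in place: each column current is the sum of the input voltages weighted by the programmed conductances (Ohm's and Kirchhoff's laws). The following is a tiny, idealized Python sketch of that principle with illustrative values; it models no device noise and is not the paper's 32×32 implementation.

```python
# Conceptual sketch of in-memory vector-matrix multiplication on a memristor
# crossbar. Row inputs are voltages, stored weights are conductances, and
# each column's output current is their weighted sum. Values are invented.

def crossbar_vmm(voltages, conductances):
    """Ideal column currents: I_j = sum_i V_i * G[i][j] (no noise)."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

V = [1.0, 0.5]          # input voltages applied to the rows
G = [[0.2, 0.4],        # programmed conductances (siemens), row-major
     [0.6, 0.1]]
print(crossbar_vmm(V, G))   # column 0: 1.0*0.2 + 0.5*0.6; column 1: 1.0*0.4 + 0.5*0.1
```

The "self-calibration" reported in the paper addresses exactly what this ideal model leaves out: real devices drift and vary, so the learned weights must compensate for non-ideal conductances.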
2025.01.17
Research Finds Digital Music Streaming Consumption Dropped as a Result of Covid-19 and Lockdowns
Decline in human mobility has stunning consequences for content streaming

The Covid-19 pandemic and lockdowns significantly reduced the consumption of audio music streaming in many countries as people turned to video platforms. On average, audio music consumption decreased by 12.5% after the World Health Organization's (WHO) pandemic declaration in March 2020.

Music streaming services were an unlikely area hit hard by the Covid-19 pandemic. New research in Marketing Science found that the drop in people's mobility during the pandemic significantly reduced the consumption of audio music streaming. Instead, people turned more to video platforms.

"On average, audio music consumption decreased by more than 12% after the World Health Organization's (WHO) pandemic declaration on March 11, 2020. As a result, during the pandemic, Spotify lost 838 million dollars of revenue in the first three quarters of 2020," said Jaeung Sim, a PhD candidate in management engineering at KAIST and one of the authors of the study. "Our results showed that human mobility plays a much larger role in the audio consumption of music than previously thought."

The study, "Frontiers: Virus Shook the Streaming Star: Estimating the Covid-19 Impact on Music Consumption," conducted by Sim and Professor Daegon Cho of KAIST, Youngdeok Hwang of the City University of New York, and Rahul Telang of Carnegie Mellon University, examined online music streaming data for top songs over two years in 60 countries, along with Covid-19 cases, lockdown statistics, and daily mobility data, to determine the nature of the changes. The study showed how the pandemic adversely impacted music streaming services despite the common expectation that the pandemic would universally benefit online media platforms.
This implies that the substantially changing media consumption environment can place streaming music in fiercer competition with other media forms that offer more dynamic and vivid experiences to consumers. The researchers found that music consumption through video platforms was positively associated with the severity of Covid-19, lockdown policies, and time spent at home.

- Publication: Jaeung Sim, Daegon Cho, Youngdeok Hwang, and Rahul Telang, "Frontiers: Virus Shook the Streaming Star: Estimating the Covid-19 Impact on Music Consumption," November 30 in Marketing Science online (doi.org/10.1287/mksc.2021.1321)
- Profile: Professor Daegon Cho, Graduate School of Information and Media Management, College of Business, KAIST
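The ~12.5% figure above is a regression estimate that controls for trends across 60 countries; still, a naive pre/post comparison shows how such a percentage change is computed in the simplest case. The daily stream counts below are invented for illustration and are not the study's data.

```python
# Illustrative sketch (invented numbers): percentage change in mean daily
# streams before vs. after a cutoff date such as the WHO declaration.

def pct_change(pre_counts, post_counts):
    """Percent change of the post-period mean relative to the pre-period mean."""
    pre = sum(pre_counts) / len(pre_counts)
    post = sum(post_counts) / len(post_counts)
    return 100.0 * (post - pre) / pre

pre  = [200, 210, 190, 205]   # hypothetical daily streams before March 11, 2020
post = [175, 180, 170, 178]   # hypothetical daily streams after
print(round(pct_change(pre, post), 1))  # a negative value indicates a decline
```

A credible causal estimate additionally needs controls for seasonality and country fixed effects, which is what the panel-data design in the paper provides.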
2022.02.15
Startup Elice Donates 300 Million KRW to School of Computing
Elice hopes to create a virtuous circle that bridges the educational gap

Elice, a student startup from the School of Computing, has committed to donating 300 million KRW to KAIST. Jae-Won Kim, CEO of the coding education company, established the startup with his colleagues in 2015. Since then, more than 100 companies, including 17 of Korea's top 20 companies such as SK and LG, have used Elice's digital coding platform to educate employees. More than 200,000 employees have completed the online training, with completion rates over 80%.

Kim said during the donation ceremony that he hopes to fund the renovation of the School of Computing building and that he will continue to work on expanding platforms that make communication between educators and students more interactive. He explained, "We are making this contribution to create a virtuous circle that bridges the educational gap and improves the quality of education."

President Kwang Hyung Lee was pleased to welcome the student startup's donation, saying, "Software talent is one of the most precious resources we should foster for the nation's future. I am thrilled to see that a startup that was founded here on the KAIST campus has grown into a great company that provides excellent coding education for our society."

Professor Alice Oh, who was the advisor for Kim and his colleagues when they launched the startup, joined the ceremony along with the founding members from KAIST, including CPO Su-In Kim, CTO Chong-Kuk Park, and team leader Chang-Hyun Lee.
2021.12.13
Metaverse Factory Center to Improve SME’s Competitiveness
The center is expected to enhance the manufacturing competitiveness of SMEs and root industries

KAIST opened the 'Metaverse Factory Experience Center for Manufacturing AI' on November 1 at the KAIST Bigdata Center for Manufacturing AI. The AI-powered manufacturing metaverse factory will provide real-life experiences for the analysis and application of manufacturing data. Funded by the Ministry of SMEs and Startups, the center is a collaboration with Digiforet, which donated the software system to KAIST.

The center allows users to experience the collection, analysis, and utilization of manufacturing data equivalent to that of real manufacturing sites. Users can connect to the service from anywhere in the world using AR/VR/XR equipment and a metaverse solution, which allows small and medium-sized domestic manufacturing companies to overcome the challenges of entering and selling their production lines overseas in the post-COVID-19 era. The platform is an opportunity for such companies to introduce and export their excellent manufacturing techniques.

With the same manufacturing and AI processes as real production sites, the injection molding metaverse factory for plastic screw production runs simulations of the products to be made. Based on the data collection parameters (temperature, pressure, speed, location, time, etc.) built into the Korea AI Manufacturing Platform, an AI-powered SME manufacturing platform, the metaverse factory can detect causes of defects, provide analysis, and guide improvements in productivity and product quality.

Starting with the injection molding metaverse factory, the platform aims to expand into plating, welding, molding, casting, forging, and annealing, contributing greatly to the manufacturing competitiveness of Korea's small and medium-sized root-industry companies.
Il-Joong Kim, head of the KAIST Manufacturing AI Bigdata Center where the metaverse factory is located, said, "To successfully incorporate manufacturing AI into production sites, it is indispensable that various AI algorithms are tested to optimize decisions. The platform allows users to collect manufacturing data and to experience and test AI analysis simultaneously without interrupting the production process, making it highly effective."

KAIST President Kwang Hyung Lee said, "We will support close academic-industrial cooperation with Digiforet, such as this collaboration to improve SMEs' competitiveness."

Digiforet CEO Sunghoon Park, who donated the entire HW/SW interface for the construction of the Metaverse Factory Experience Center for Manufacturing AI, said, "I will do my best to realize the best 'Metaverse Factory for Manufacturing AI' in the world by combining the AI and bigdata accumulated at KAIST with Digiforet's XR metaverse technology."
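To illustrate the kind of parameter-based defect screening the platform guides, here is a hedged Python sketch that flags injection-molding cycles whose collected parameters fall outside acceptable windows. The parameter names, limits, and readings are all invented for illustration; the actual Korea AI Manufacturing Platform analysis is far more sophisticated than a range check.

```python
# Hypothetical sketch: flag out-of-range injection-molding parameters of the
# kind the platform collects (temperature, pressure, speed). Limits invented.

LIMITS = {  # parameter -> (min, max) acceptable values
    "temperature_c": (190.0, 230.0),
    "pressure_bar":  (800.0, 1200.0),
    "speed_mm_s":    (30.0, 90.0),
}

def defect_causes(cycle):
    """Return the parameters of one molding cycle outside their window."""
    return [p for p, (lo, hi) in LIMITS.items()
            if not lo <= cycle.get(p, lo) <= hi]

cycle = {"temperature_c": 245.0, "pressure_bar": 1000.0, "speed_mm_s": 55.0}
print(defect_causes(cycle))  # only the temperature reading is out of range
```

In practice such rules would be one input among many; learned models over historical cycle data are what let the platform point to likely causes rather than just violations.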
2021.11.03
KAIST and Google Partner to Develop AI Curriculum
Two KAIST professors, Hyun Wook Ka from the School of Transdisciplinary Studies and Young Jae Jang from the Department of Industrial and Systems Engineering, were recipients of Google Education Grants that will support the development of new AI courses integrating the latest industrial technology. This collaboration is part of the KAIST-Google Partnership, which was established in July 2019 with the goal of nurturing AI talent at KAIST. The two proposals -- Professor Ka’s ‘Cloud AI-Empowered Multimodal Data Analysis for Human Affect Detection and Recognition’ and Professor Jang’s ‘Learning Smart Factory with AI’-- were selected by the KAIST Graduate School of AI through a school-wide competition held in July. The proposals then went through a final review by Google and were accepted. The two professors will receive $7,500 each for developing AI courses using Google technology for one year. Professor Ka’s curriculum aims to provide a rich learning experience for students by providing basic knowledge on data science and AI and helping them obtain better problem solving and application skills using practical and interdisciplinary data science and AI technology. Professor Jang’s curriculum is designed to solve real-world manufacturing problems using AI and it will be field-oriented. Professor Jang has been managing three industry-academic collaboration centers in manufacturing and smart factories within KAIST and plans to develop his courses to go beyond theory and be centered on case studies for solving real-world manufacturing problems using AI. Professor Jang said, “Data is at the core of smart factories and AI education, but there is often not enough of it for the education to be effective. 
The KAIST Advanced Manufacturing Laboratory has a testbed for directly acquiring data generated from real semiconductor automation equipment, analyzing it, and applying algorithms, which enables truly effective smart factory and AI education.” KAIST signed a partnership with Google in July 2019 to foster global AI talent and is operating various programs to train AI experts and support excellent AI research for two years. The Google AI Focused Research Award supports world-class faculty performing cutting-edge research and was previously awarded to professors Sung Ju Hwang from the Graduate School of AI and Steven Whang from the School of Electrical Engineering along with Google Cloud Platform (GCP) credits. These two professors have been collaborating with Google teams since October 2018 and recently extended their projects to continue through 2021. In addition, a Google Ph.D. Fellowship was awarded to Taesik Gong from the School of Computing in October this year, and three Student Travel Grants were awarded to Sejun Park from the School of Electrical Engineering, Chulhyung Lee from the Department of Mathematical Sciences, and Sangyun Lee from the School of Computing earlier in March. Five students were also recommended for the Google Internship program in March. (END)
2020.12.11
Flexible User Interface Distribution for Ubiquitous Multi-Device Interaction
< Research Group of Professor Insik Shin (center) >

KAIST researchers have developed mobile software platform technology that allows a mobile application (app) to be executed simultaneously and more dynamically on multiple smart devices. Its high flexibility and broad applicability can help accelerate the shift from the current single-device paradigm to a multi-device one, enabling users to utilize mobile apps in ways previously unthinkable.

Recent trends in mobile and IoT technologies in this era of 5G high-speed wireless communication have been hallmarked by the emergence of new display hardware and smart devices such as dual screens, foldable screens, smart watches, smart TVs, and smart cars. However, the current mobile app ecosystem is still confined to the conventional single-device paradigm, in which users can employ only one screen on one device at a time. Due to this limitation, the real potential of multi-device environments has not been fully explored.

A KAIST research team led by Professor Insik Shin from the School of Computing, in collaboration with Professor Steve Ko's group from the State University of New York at Buffalo, has developed a mobile software platform named FLUID that can flexibly distribute the user interfaces (UIs) of an app to a number of other devices in real time without requiring any app modifications. The technology provides single-device virtualization and ensures that interactions between UI elements distributed across multiple devices remain intact.

This flexible multimodal interaction can be realized in diverse ubiquitous user experiences (UX), such as live video streaming and chatting apps including YouTube, LiveMe, and AfreecaTV. FLUID can ensure that the video is not obscured by the chat window by distributing and displaying them separately on different devices, which lets users enjoy the chat function while watching the video at the same time.
In addition, the UI for destination input on a navigation app can be migrated to the passenger's device with the help of FLUID, so that the destination can be entered easily and safely by the passenger while the driver is at the wheel. FLUID can also support 5G multi-view apps – the latest service that allows sports or games to be viewed from various angles on a single device. With FLUID, the user can watch the event simultaneously from different viewpoints on multiple devices without switching between viewpoints on a single screen.

PhD candidate Sangeun Oh, the first author, and his team implemented a prototype of FLUID on Android, the leading open-source mobile operating system, and confirmed that it can successfully deliver the new UX to 20 existing legacy apps.

"This new technology can be applied to next-generation products from South Korean companies such as LG's dual-screen phone and Samsung's foldable phone, and is expected to embolden their competitiveness by giving them a head start in the global market," said Professor Shin.

This study will be presented at the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019), October 21 through 25 in Los Cabos, Mexico. The research was supported by the National Science Foundation (NSF) (CNS-1350883 (CAREER) and CNS-1618531).

Figure 1. Live video streaming and chatting app scenario
Figure 2. Navigation app scenario
Figure 3. 5G multi-view app scenario

Publication: Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, Steven Y. Ko, and Insik Shin. 2019. FLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction. To be published in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019). ACM, New York, NY, USA. Article Number and DOI Name TBD.

Video Material: https://youtu.be/lGO4GwH4enA

Profile: Prof. Insik Shin, MS, PhD (ishin@kaist.ac.kr, https://cps.kaist.ac.kr/~ishin), Cyber-Physical Systems (CPS) Lab, School of Computing, KAIST, Daejeon 34141, Korea
Profile: Sangeun Oh, PhD Candidate (ohsang1213@kaist.ac.kr, https://cps.kaist.ac.kr/), Cyber-Physical Systems (CPS) Lab, School of Computing, KAIST, Daejeon 34141, Korea
Profile: Prof. Steve Ko, PhD (stevko@buffalo.edu, https://nsr.cse.buffalo.edu/?page_id=272), Associate Professor, Networked Systems Research Group, Department of Computer Science and Engineering, State University of New York at Buffalo, Buffalo 14260, USA

(END)
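FLUID itself operates inside the Android platform, but the core idea of assigning serialized UI elements to devices can be sketched in a few lines. Everything below (the element fields, device names, and JSON payload format) is a hypothetical illustration of the concept, not FLUID's actual API or wire format.

```python
# Conceptual sketch only: group an app's UI elements by target device and
# serialize each group, mimicking the idea of distributing UIs across
# devices. Element fields and device names are invented.
import json

def distribute_ui(ui_elements, placement):
    """Group serialized UI elements by the device each is assigned to."""
    per_device = {}
    for elem in ui_elements:
        device = placement.get(elem["id"], "local")  # default: stay local
        per_device.setdefault(device, []).append(elem)
    # Each payload could then be sent to its device over the network.
    return {dev: json.dumps(elems) for dev, elems in per_device.items()}

ui = [{"id": "video", "type": "SurfaceView"},
      {"id": "chat",  "type": "RecyclerView"}]
payloads = distribute_ui(ui, {"chat": "tablet"})
print(sorted(payloads))  # the video stays local, the chat goes to the tablet
```

The hard parts FLUID actually solves, which this sketch omits, are rendering the migrated UI remotely and keeping cross-device interactions (touches, callbacks) consistent without modifying the app.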
2019.07.20
KAIST Develops IoT Platform for Food Safety
A research team led by the KAIST Auto-ID Labs developed a GS1 international standard-based IoT infrastructure platform dubbed Oliot (Open Language of Internet of Things). The platform will be applied to Wanju Local Food, the nation's largest cooperative, and will be in operation from April 5. A total of eleven organizations, with KAIST at the center, participated in the development of Oliot.

The Oliot platform, based on GS1 international standards, allows data to be collected and shared along the entire agrifood chain, from production to processing, distribution, and consumption. It aims to increase farm incomes and establish a global ecosystem of domestic agriculture and stockbreeding that provides safe food. Wanju Local Food is now the world's first local food co-op with a traceability system based on GS1 international standards spanning from the initial stage of production planning to end sales, which will help ensure food safety.

KAIST has been sharing Oliot in order to apply it to industries around the world. As of April 2018, approximately 900 enterprises and developers from more than 100 countries have downloaded it.

Professor Daeyoung Kim from the School of Computing, who is also Research Director of the Auto-ID Labs, said, "We are planning to disseminate Oliot to local food cooperatives throughout the nation. We will also cooperate with other countries, such as China, the Netherlands, and Hong Kong, to create a better ecosystem for the global food industry." He added, "We are currently collaborating with related businesses to converge Oliot with AI and blockchain technology so it can be applied to various services, such as healthcare and smart factories. Tangible outcomes will be revealed soon."
Auto-ID Labs is a global research consortium of six academic institutions that researches and develops new technologies for advancing global commerce, in partnership with GS1 (Global Standard 1), the non-profit organization that establishes standards for global commerce, such as the barcodes used in the retail industry. Its members are MIT, the University of Cambridge, Keio University, Fudan University, ETH Zurich/the University of St. Gallen, and KAIST. The consortium was supported by the Ministry of Science and ICT as well as the Institute for Information and Communications Technology Promotion for three years from 2015. The launch of Oliot at Wanju Local Food will be held on April 5.
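The article describes agrifood data being collected and shared under GS1 identifiers from production through consumption. As an illustrative sketch only (not the Oliot API), the record below shows the kind of EPCIS-style traceability event such a platform might collect at each supply-chain step; the class names, field names, and identifier values are all hypothetical.

```python
# Illustrative sketch, NOT the Oliot API: a GS1 EPCIS-style
# traceability event of the kind a farm-to-table platform records.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    epc: str        # GS1 Electronic Product Code identifying the item
    biz_step: str   # e.g. "harvesting", "shipping", "retail_selling"
    location: str   # GS1 GLN-style location identifier (hypothetical)
    event_time: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def trace_history(events, epc):
    """Return the ordered chain of business steps recorded for one item."""
    matching = (e for e in events if e.epc == epc)
    return [e.biz_step for e in sorted(matching, key=lambda e: e.event_time)]
```

Querying `trace_history` for one item's EPC reconstructs its journey from farm to shelf, which is the basic capability behind a production-to-sales traceability system.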
2018.04.03
Strengthening Industry-Academia Cooperation with LG CNS
On November 20, KAIST signed an MoU with LG CNS for an industry-academia partnership in education, research, and business in the fields of AI and Big Data. Rather than simply developing education programs or supporting industry-academia scholarships, the two organizations agreed to carry out joint research on AI and Big Data that can be applied to practical business. KAIST will collaborate with LG CNS in the fields of smart factories, customer analysis, and supply chain management analysis. LG CNS will not only offer internships to KAIST students but will also support professors and students who propose innovative startup ideas in AI and Big Data; an industry-academia scholarship for graduate students is also under discussion. Together with LG CNS, KAIST will put its efforts into proposing AI and Big Data projects in the public sector. Furthermore, KAIST and LG CNS will jointly explore and carry out industry-academia projects with practical business applications, cooperating closely; for instance, LG CNS employees can be assigned to KAIST if necessary. LG CNS’s AI and Big Data platform, DAP (Data Analytics & AI Platform), will serve as the data analysis tool during the projects, and the joint outcomes will be installed in DAP. KAIST professors with expertise in AI and deep learning have trained LG CNS employees since the Department of Industrial & Systems Engineering established the ‘KAIST AI Academy’ at LG CNS last August. “With KAIST, the best research-centered university in Korea, we will continue to lead the development of AI and Big Data and provide innovative services that create value by connecting them to customer business,” highlighted Yong Shub Kim, the CEO of LG CNS.
2017.11.22
Sangeun Oh Recognized as a 2017 Google Fellow
Sangeun Oh, a Ph.D. candidate in the School of Computing, was selected as a 2017 Google PhD Fellow, one of 47 awardees worldwide. The Google PhD Fellowship recognizes students showing outstanding performance in computer science and related research fields. Since its establishment in 2009, the program has provided various benefits, including a $10,000 scholarship and one-to-one research discussions with mentors from Google. Oh was recognized in the field of mobile computing for his work on a mobile system that allows interactions among various kinds of smart devices: a platform that lets smart devices share diverse functions, including logins, payments, and sensors. This technology provides user experiences that existing mobile platforms could not offer; through cross-device functionality sharing, users can utilize multiple smart devices in a more convenient manner. The research was presented at the Annual International Conference on Mobile Systems, Applications, and Services (MobiSys) of the Association for Computing Machinery in July 2017. Oh said, “I would like to express my gratitude to my advisor, the professors in the School of Computing, and my lab colleagues. I will devote myself to carrying out more research in order to contribute to society.” His advisor, Insik Shin, a professor in the School of Computing, said, “Being recognized as a Google PhD Fellow is an honor to both the student and KAIST. I strongly anticipate and believe that Oh will take the next step by carrying out high-quality research.”
2017.09.27
Multi-Device Mobile Platform for App Functionality Sharing
Case 1. Mr. Kim, an employee, logged on to his SNS account using a tablet PC at an airport while traveling overseas. However, a malicious virus installed on the tablet PC allowed someone else to delete some of the photos posted on his SNS.
Case 2. Mr. and Mrs. Brown are busy contacting credit card and game companies because their son, who likes games, purchased a million dollars’ worth of game items using his smartphone.
Case 3. Mr. Park, who enjoys games, bought a sensor-based racing game for his tablet PC. However, he could not enjoy the racing game on his tablet because tilting the device for game control was uncomfortable.
The cases above illustrate some of the problems that can arise in a modern society filled with diverse smart devices, including smartphones. Recently, a new technology has been developed to solve them. Professor Insik Shin from the School of Computing has developed ‘Mobile Plus,’ a mobile platform that can share the functionalities of applications between smart devices. It is a novel technology that allows applications to share their functionalities without needing any modifications. Smartphone users often use Facebook to log in to another SNS account like Instagram, or use a gallery app to post photos on their SNS. These examples are possible because the applications share their login and photo-management functionalities. Such functionality sharing enables users to utilize smartphones in varied and convenient ways and allows app developers to create applications more easily. However, current mobile platforms such as Android and iOS only support functionality sharing within a single mobile device. Sharing functionalities across devices is burdensome for both developers and users, because developers would need to create more complex applications and users would need to install the applications on each device.
To address this problem, Professor Shin’s research team developed a platform technology that supports functionality sharing between devices. The main concept is to use virtualization to give the illusion that applications running on separate devices are on a single device. The team achieved this virtualization by extending an RPC (Remote Procedure Call) scheme to multi-device environments. This technology enables existing applications to share their functionalities without needing any modifications, regardless of the type of application, so users can adopt it without additional purchases or updates. Mobile Plus can share hardware functionalities such as cameras, microphones, and GPS, as well as application functionalities such as logins, payments, and photo sharing; its greatest advantage is this wide range of possible applications. Professor Shin said, "Mobile Plus is expected to have great synergy with smart home and smart car technologies. It can provide novel user experiences (UXs) so that users can easily utilize various applications of smart home/vehicle infotainment systems by using a smartphone as their hub." This research was presented at ACM MobiSys, an international conference on mobile computing held in the United States on June 21.
Figure 1. Users can securely log on to SNS accounts by using their personal devices.
Figure 2. Parents can control their children’s impulse purchases.
Figure 3. Users can enjoy games more by using a smartphone as a controller.
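The single-device illusion described above can be hinted at with a toy sketch. The code below is not Mobile Plus itself; it is a minimal, hypothetical illustration of how an RPC-style proxy can make a function registered on one device look like a local call on another. The names `RemoteDevice` and `DeviceProxy` are invented for this sketch, and the network transport is omitted.

```python
# Toy sketch of cross-device functionality sharing via an RPC-style
# proxy. NOT the Mobile Plus implementation; names are hypothetical.

class RemoteDevice:
    """Stands in for the remote end of an RPC channel."""
    def __init__(self):
        self._functions = {}

    def register(self, name, fn):
        # A device exposes one of its functionalities under a name.
        self._functions[name] = fn

    def call(self, name, *args, **kwargs):
        # A real multi-device platform would serialize the call,
        # send it over the network, and return the reply.
        return self._functions[name](*args, **kwargs)

class DeviceProxy:
    """Makes remote functions look local: the 'single device' illusion."""
    def __init__(self, remote):
        self._remote = remote

    def __getattr__(self, name):
        # Any attribute access becomes a forwarded remote call.
        return lambda *a, **kw: self._remote.call(name, *a, **kw)

# A phone exposes its camera and login; a tablet app calls them as if local.
phone = RemoteDevice()
phone.register("take_photo", lambda: "photo_bytes")
phone.register("login", lambda user: f"token-for-{user}")
tablet_view_of_phone = DeviceProxy(phone)
```

Because the proxy intercepts calls generically, the "application" code on the tablet needs no modification to use the phone's functionality, which mirrors the no-modification property the article attributes to the virtualization approach.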
2017.08.09