A Deep-Learned E-Skin Decodes Complex Human Motion
A deep-learning-powered electronic skin sensor with a single strain gauge can capture human motion from a distance. Placed on the wrist, the single sensor decodes complex five-finger motions in real time with a virtual 3D hand that mirrors the original movements. A deep neural network boosted by rapid situation learning (RSL) ensures stable operation regardless of the sensor's position on the skin. Conventional approaches require sensor networks that cover the entire curvilinear surface of the target area, and unlike conventional wafer-based fabrication, this laser-based fabrication offers a new sensing paradigm for motion tracking.

The research team, led by Professor Sungho Jo from the School of Computing, collaborated with Professor Seunghwan Ko from Seoul National University to design this new measuring system, which extracts signals corresponding to multiple finger motions by generating cracks in metal nanoparticle films using laser technology. The sensor patch was then attached to a user's wrist to detect the movement of the fingers.

The research started from the idea that pinpointing a single area would be more efficient for identifying movements than affixing sensors to every joint and muscle. For this targeting strategy to work, the system needs to accurately capture the signals from different areas at the point where they all converge, and then decouple the information entangled in the converged signals. To maximize usability and mobility, the team used a single-channel sensor to generate the signals corresponding to complex hand motions.

The rapid situation learning (RSL) system collects data from arbitrary parts of the wrist and automatically trains the model in a real-time demonstration with a virtual 3D hand that mirrors the original motions. To enhance the sensitivity of the sensor, the researchers used laser-induced nanoscale cracking.
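The decoding idea — one converged signal, decoupled into five finger motions — can be sketched as a two-stage map from a temporal signal window to a latent vector and then to finger-angle estimates. This is only an illustrative sketch, not the authors' RSL network: the window size, latent dimensionality, and all weights are invented placeholders standing in for a trained model.

```python
# Illustrative sketch of single-channel motion decoding (NOT the authors'
# RSL model): encode a temporal window of strain-gauge samples into a
# latent vector, then linearly map it to five finger-angle estimates.
import numpy as np

rng = np.random.default_rng(0)

WINDOW = 64    # temporal samples per inference (assumed)
LATENT = 8     # latent-space dimensionality (assumed)
FINGERS = 5    # one angle estimate per finger

W_enc = rng.standard_normal((LATENT, WINDOW))   # stand-in encoder weights
W_dec = rng.standard_normal((FINGERS, LATENT))  # stand-in decoder weights

def decode_motion(signal_window: np.ndarray) -> np.ndarray:
    """Map one window of sensor readings to finger-angle estimates."""
    z = np.tanh(W_enc @ signal_window)  # latent vector of temporal behavior
    return W_dec @ z                    # finger-motion metric space

angles = decode_motion(rng.standard_normal(WINDOW))  # shape (5,)
```

In the actual system both maps are learned by a deep network and retrained on the fly by the RSL procedure whenever the patch is repositioned on the wrist.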
This sensory system can track the motion of the entire body with a small sensor network and facilitates indirect remote measurement of human motions, which is applicable to wearable VR/AR systems. The research team focused on two tasks while developing the sensor: first, encoding the sensor signal patterns into a latent space that encapsulates temporal sensor behavior, and then mapping the latent vectors to finger motion metric spaces.

Professor Jo said, "Our system is expandable to other body parts. We already confirmed that the sensor is also capable of extracting gait motions from the pelvis. This technology is expected to provide a turning point in health monitoring, motion tracking, and soft robotics."

This study was featured in Nature Communications.

Publication: Kim, K. K., et al. (2020) "A deep-learned skin sensor decoding the epicentral human motions." Nature Communications 11, 2149. https://doi.org/10.1038/s41467-020-16040-y

Link to the full-text paper: https://www.nature.com/articles/s41467-020-16040-y.pdf

Profile: Professor Sungho Jo (firstname.lastname@example.org, http://nmail.kaist.ac.kr), Neuro-Machine Augmented Intelligence Lab, School of Computing, College of Engineering, KAIST
Professor YongKeun Park Wins the 2018 Fumio Okano Award
Professor YongKeun Park from the Department of Physics won the 2018 Fumio Okano Award in recognition of his contributions to 3D display technology during the annual conference of the International Society for Optics and Photonics (SPIE), held last month in Orlando, Florida, USA. The Fumio Okano Best 3D Paper Prize is presented annually in memory of Dr. Fumio Okano, a pioneer and innovator of 3D displays who made lasting contributions to the field of 3D TVs and displays before passing away in 2013. The award is sponsored by NHK-ES.

Professor Park and his team are developing novel technology for measuring and visualizing 3D images by applying random light scattering. He has published numerous papers on 3D holographic camera technology and on 3,000-fold performance enhancement of 3D holographic displays in renowned international journals such as Nature Photonics, Nature Communications, and Science Advances. His technology has drawn international attention from media outlets including Newsweek and Forbes.

He has established two startups to commercialize his technology. Tomocube specializes in 3D imaging microscopes based on holotomographic technology and exports its products to several countries including the US and Japan. The.Wave.Talk is developing technology for detecting bacteria anywhere and anytime.

Professor Park's innovations have already been recognized in and out of KAIST. In February, he was selected as the KAISTian of the Year for his outstanding research, commercialization, and startups. He was also decorated with the National Science Award in April by the Ministry of Science and ICT, and with the Hong Jin-Ki Innovation Award in May by the Yumin Cultural Foundation.

Professor Park said, "3D holography is emerging as a significant technology with growing potential and positive impacts on our daily lives. However, the current technology lags far behind the levels depicted in science fiction movies.
We will do our utmost to reach this level with more commercialization."
Humicotta Wins the Silver Prize at the 2017 IDEA
The 3D-printed ceramic humidifier made by the research team led by Professor Sang-Min Bae won the silver prize at the 2017 International Design Excellence Awards (IDEA). Professor Bae's ID+IM team was also listed as a winner for three more appropriate-technology designs at the IDEA. The awards, sponsored by the Industrial Designers Society of America, are considered one of the world's three most prestigious design awards, along with the Red Dot Design Award and the iF Design Award in Germany.

The silver prize winner in the home and bath category, Humicotta is an energy-efficient, bacteria-free, and easy-to-clean humidifier. It consists of a base module and a filter. The base is a cylindrical pedestal with a built-in fan, on which the filter is placed. The filter is a 3D-printed honeycomb structure made of diatomite. When water is added, the honeycomb structure and the porous terracotta maximize natural humidification. The team also offers an open platform service that customizes the filters or provides design files that users can print on their own 3D printers.

Professor Bae's team has worked for years on philanthropic design using appropriate technology as its main topic. Their designs have been recognized at prestigious global design award events, winning more than 50 prizes for innovative designs that address various global and social problems.

The Light Funnel is a novel lighting device designed for off-grid areas of Africa. It maximizes the natural lighting effect in the daytime without any drastic home renovations. It consists of a transparent acrylic sphere and a reflective pathway. After the acrylic sphere is filled with water and placed on a rooftop, sunlight passes into the house through the water inside the sphere, providing an environment nine times brighter than without it. Once installed, it can be used almost permanently.

The Maasai Smart Cane is made from wooden sticks purchased through fair trade with the Maasai tribe.
A GPS unit is installed in the grip of the birch cane so that users can send a signal in an emergency. All of the proceeds from this product go to the tribe.

S.Cone is a first aid kit made in collaboration with Samsung Fire and Marine Insurance. The traffic-cone-shaped kit is designed to help users handle an emergency safely. The S.Cone comes in versions for fires, car accidents, and marine accidents. For example, the S.Cone for fires is equipped with a small fire extinguisher, a smoke mask, and a fire blanket. The cap of the S.Cone also functions as an IoT station connecting the fire and gas detectors with smartphones.

Professor Bae said of his team's winning designs, "By making the data public, anyone with access to a 3D printer can design their own humidifier. We want it to be a very accessible product for the public. The Light Funnel and the Maasai Smart Cane are designed for economically marginalized populations and the elderly. We will continue to make the best-designed products serving the marginalized 90% of the population around the world."
Controlling 3D Behavior of Biological Cells Using Laser Holographic Techniques
A research team led by Professor YongKeun Park of the Department of Physics at KAIST has developed an optical manipulation technique that can freely control the position, orientation, and shape of microscopic samples with complex shapes. The study was published online in Nature Communications on May 22.

Conventional optical manipulation techniques, called "optical tweezers," have served as an invaluable tool for exerting micro-scale forces on microscopic particles and manipulating their three-dimensional (3D) positions. Optical tweezers employ a tightly focused laser whose beam diameter is smaller than one micrometer (about 1/100 the thickness of a human hair), which generates an attractive force that pulls neighboring microscopic particles toward the beam focus. By controlling the position of the beam focus, researchers can hold particles and move them freely to other locations, hence the name "optical tweezers." The technique is widely used in various fields of physical and biological research.

So far, most experiments using optical tweezers have trapped spherical particles, because physical principles can easily predict the optical forces on, and responding motion of, microspheres. For objects with complicated shapes, however, conventional optical tweezers induce unstable motion and offer only limited control of orientation, which hinders 3D manipulation of complex-shaped microscopic objects such as living cells.

The research team has developed a new optical manipulation technique that can trap complex objects of arbitrary shape. The technique first measures the 3D structure of an object in real time using a 3D holographic microscope, which shares the same physical principle as X-ray CT imaging. Based on the measured 3D shape, the researchers then precisely calculate the shape of light that can stably trap the object.
When the shape of the light matches the shape of the object, the energy of the object is minimized, which provides stable trapping of the complicated shape. Moreover, by varying the position, direction, and shape of the sculpted light, it is possible to freely control the 3D motion of the object and even drive it toward a desired shape. Because this process resembles making a mold for casting a statue of a desired shape, the researchers named the technique "tomographic mold for optical trapping" (TOMOTRAP).

The team succeeded in stably trapping individual human red blood cells, rotating them to desired orientations, folding them into an L-shape, and assembling two red blood cells into a new structure. In addition, colon cancer cells with complex structures could be stably trapped and rotated to desired orientations. None of these feats had been possible with conventional optical techniques.

Professor Park said, "Our technique has the advantage of controlling the 3D motion of complex-shaped objects without prior knowledge of their shape and optical characteristics, and it can be applied in various fields including physics, optics, nanotechnology, and medical science."

Dr. Kyoohyun Kim, the lead author of the paper, noted that the technique can induce controlled deformation of biological cells into desired shapes. "This approach can also be applied to real-time monitoring of the surgical prognosis of cellular-level surgeries that capture and deform cells as well as subcellular organelles," Kim added.

Figure 1. Concept of optical manipulation techniques
Figure 2. Experimental setup
Figure 3. Research results
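The energy argument can be demonstrated with a toy 1D calculation. This is a hedged illustration of the stated principle only, not TOMOTRAP's actual optical computation: the Gaussian profiles and the simple overlap-energy model are invented for the demonstration.

```python
# Toy 1D model of the trapping-energy principle: potential energy is taken
# as the negative overlap of light intensity with the object's shape, so a
# light field sculpted to match the object yields the lowest energy.
import numpy as np

x = np.linspace(-5.0, 5.0, 501)

def gaussian(center: float, width: float) -> np.ndarray:
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

obj_shape = gaussian(0.0, 1.0)         # object profile (e.g., a cell)
light_matched = gaussian(0.0, 1.0)     # light shaped like the object
light_mismatched = gaussian(2.0, 0.5)  # light of a different shape/position

def trap_energy(intensity: np.ndarray, shape: np.ndarray) -> float:
    # Larger light-object overlap -> lower (more negative) energy
    return float(-np.sum(intensity * shape))

e_matched = trap_energy(light_matched, obj_shape)
e_mismatched = trap_energy(light_mismatched, obj_shape)  # higher energy
```

The matched light field gives the lower energy, which is why shaping the beam like the object — as measured by the holographic microscope — produces a stable trap.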
Professor Jinah Park Received the Prime Minister's Award
Professor Jinah Park of the School of Computing received the Prime Minister's Citation Ribbon on April 21 at a ceremony celebrating the Day of Science and ICT. The awardee was selected by the Ministry of Science, ICT and Future Planning and the Korea Communications Commission. Professor Park was recognized for her convergence R&D of a VR simulator for dental treatment with haptic feedback, in addition to her research on understanding 3D interaction behavior in VR environments. Her major academic contributions are in the field of medical imaging, where she developed a computational technique to analyze cardiac motion from tagging data. Professor Park said she was very pleased to see her twenty-plus years of research on converging computing with medical fields finally bear fruit. She also thanked her colleagues and students in her Computer Graphics and Visualization (CGV) Research Lab for working together to make this achievement possible.
Next-Generation Holographic Microscope for 3D Live Cell Imaging
KAIST researchers have developed a revolutionary, commercially available biomedical imaging tool, the HT-1, for viewing and analyzing cells. Professor YongKeun Park of the Department of Physics and his research team have developed a powerful method for 3D imaging of live cells without staining, and announced the launch of their new microscope, the holotomography (HT)-1, to the global marketplace through TomoCube (www.tomocube.com), a Korean startup that Professor Park co-founded.

Professor Park is a leading researcher in the field of biophotonics and has dedicated much of his career to digital holographic microscopy. He collaborated with TomoCube's R&D team to develop a state-of-the-art 2D/3D/4D holographic microscope that allows real-time, label-free visualization of biological cells and tissues.

HT is an optical analogue of X-ray computed tomography (CT). Both share the same physical principle, the inversion of wave scattering; the difference is that HT uses laser illumination whereas X-ray CT uses X-ray beams. From measurements of multiple 2D holograms of a cell, acquired at various angles of laser illumination, the 3D refractive index (RI) distribution of the cell can be reconstructed. The reconstructed 3D RI map provides structural and chemical information about the cell, including mass, morphology, protein concentration, and the dynamics of the cellular membrane. HT thus enables users to quantitatively and non-invasively investigate intrinsic properties of biological cells, such as dry mass and protein concentration.

Some of the research team's breakthroughs leveraging HT's unique capabilities can be found in several recent publications, including a lead article on the simultaneous 3D visualization and position tracking of optically trapped particles, published in Optica on April 20, 2015.
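The CT analogy can be illustrated with a minimal 2D back-projection toy. This is not TomoCube's inverse-scattering reconstruction; it only shows the shared idea that projections acquired from different directions, smeared back across the volume, reinforce each other at the object's position. The object, grid size, and two-angle setup are invented for the demonstration.

```python
# Toy 2D back-projection: project a small "cell" along two directions and
# back-project the sums; the back-projections overlap at the cell's location.
import numpy as np

obj = np.zeros((32, 32))
obj[10:14, 20:24] = 1.0      # a small square standing in for a cell

proj_rows = obj.sum(axis=1)  # projection along x (one illumination angle)
proj_cols = obj.sum(axis=0)  # projection along y (a second angle)

# Unfiltered back-projection: smear each projection back and add them up
backproj = proj_rows[:, None] + proj_cols[None, :]
peak = np.unravel_index(np.argmax(backproj), backproj.shape)
```

Real holotomography solves the inverse scattering problem using many illumination angles plus the phase information in the holograms, yielding a quantitative 3D refractive index map rather than this crude intensity overlap.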
Current fluorescence confocal microscopy techniques require exogenous labeling agents to render high-contrast molecular information, with drawbacks that include photobleaching, phototoxicity, and interference with normal molecular activities. Immune cells and stem cells that must be reinjected into the body are considered particularly unsuited to fluorescence microscopy.

"As one of only two high-resolution tomographic microscopes currently available in the world, I believe the HT-1 is best in class in specifications and functionality. Users can see 3D/4D live images of cells without fixing, coating, or staining them. Sample preparation time is reduced from days or hours to just a few minutes," said Professor Park.

Two Korean hospitals, Seoul National University Hospital in Bundang and Boramae Hospital in Seoul, are currently using the microscope. The research team also introduced the HT-1 at the Photonics West 2016 exhibition, held February 16-18 in San Francisco, USA.

Professor Park added, "Our technology has set a new paradigm for cell observation under the microscope. I expect this tomographic microscopy to be used more widely in various areas of pharmaceuticals, neuroscience, immunology, hematology, and cell biology."

Figure 1. HT-1 and its specifications
Figure 2. 3D images of representative biological cells taken with the HT-1
Yong-Joon Park, doctoral student, receives the Korea Dow Chemical Award 2014
Yong-Joon Park, a Ph.D. candidate in Materials Science and Engineering at KAIST, received the Korea Dow Chemical Award 2014, a prestigious recognition of the year's best papers produced by students in the fields of chemistry and materials science. The award ceremony took place on April 18, 2014 at KINTEX in Ilsan, Republic of Korea. The Korea Dow Chemical Award is given annually by Korea Dow Chemical and the Korean Chemical Society for outstanding papers by graduate students and postdoctoral researchers. This year, a total of nine papers were selected out of 148 submissions. Park's paper is titled "The Development of 3D Nano-structure-based New Concept Super-elastic Materials." The material could be used in flexible electronic devices such as displays and wearable computers.
Book Announcement: Sound Visualization and Manipulation
The movie Gravity won seven Oscars this year, one of which was for its outstanding sound mixing, immersing viewers in the full experience of the troubled space expedition. 3D audio effects are generated by manipulating the sound produced by speakers, speaker arrays, or headphones to place a virtual sound source at a desired location in 3D space, such as behind, above, or below the listener's head.

Two professors from the Department of Mechanical Engineering at KAIST have recently published a book that explains two important technologies behind 3D sound effects: sound visualization and manipulation. Professor Yang-Hann Kim, an eminent scholar in sound engineering, and Professor Jung-Woo Choi collaborated to write Sound Visualization and Manipulation (Wiley, 2013), which uniquely addresses the two most important problems in the field in a unified way. The book introduces general concepts and theories and describes a number of techniques in sound visualization and manipulation, offering an interrelated approach to two very different topics: sound-field visualization based on microphone arrays and controlled sound-field generation using loudspeaker arrays. The authors apply the associated physical and mathematical concepts to solve visualization and manipulation problems, and provide extensive examples demonstrating the benefits and drawbacks of various applications, including beamforming and acoustic holography. The book will be an excellent reference for graduate students, researchers, and professionals in acoustic engineering, as well as in audio and noise-control system development.

For a detailed description of the book: http://as.wiley.com/WileyCDA/WileyTitle/productCd-1118368479.html
Ultra-High Strength Metamaterial Developed Using Graphene
A new metamaterial exhibiting hundreds of times the strength of pure metals has been developed. Professor Seung Min Han and Yoo Sung Jeong of the Graduate School of Energy, Environment, Water, and Sustainability (EEWS), together with Professor Seok Woo Jeon of the Department of Materials Science and Engineering, have developed a composite nanomaterial in which graphene is inserted into copper and nickel; it exhibits strengths 500 times and 180 times greater, respectively, than those of the pure metals. The results were published in the July 2 online edition of Nature Communications.

Graphene is roughly 200 times stronger than steel, and it is both stretchable and flexible. The U.S. Army Armament Research, Development and Engineering Center previously developed a graphene-metal nanomaterial but failed to drastically improve the material's strength. To maximize the strengthening effect of the added graphene, the KAIST research team created a layered structure of metal and graphene. Using chemical vapor deposition (CVD), the team grew a single layer of graphene on a metal-deposited substrate and then deposited another metal layer on top, repeating this process to produce a metal-graphene multilayer composite built from single graphene layers.

Micro-compression tests inside a transmission electron microscope (TEM), together with molecular dynamics simulations, demonstrated the strengthening effect and revealed the dislocation behavior at the graphene boundaries on the atomic level. The graphene layers within the composite successfully blocked dislocations and cracks caused by external damage from traveling inward, so the composite displayed strength beyond that of conventional metal-metal multilayer materials. The copper-graphene multilayer with an interplanar distance of 70 nm exhibited a strength of 1.5 GPa, 500 times that of pure copper.
The nickel-graphene multilayer with an interplanar distance of 100 nm showed a strength of 4.0 GPa, 180 times that of pure nickel. A clear relationship was found between the interplanar distance and the strength of the multilayer: a smaller interplanar distance makes dislocation movement more difficult and therefore increases the strength of the material.

Professor Han, who led the research, commented, "The result is astounding, as 0.00004% by weight of graphene increased the strength of the materials by hundreds of times," adding, "Improvements based on this success, especially mass production with roll-to-roll or metal sintering processes, may make it possible to produce ultra-high-strength, lightweight parts for automobiles and spacecraft." Professor Han also noted that "the new material can be applied to coating materials for nuclear reactor construction or other structural materials requiring high reliability."

The project was supported by the National Research Foundation, the Global Frontier Program, the KAIST EEWS-KINC Program, and the KISTI supercomputer, and was a collaborative effort with KISTI (Korea Institute of Science and Technology Information), KBSI (Korea Basic Science Institute), Stanford University, and Columbia University.

A schematic diagram shows the structure of the metal-graphene multilayers: the composite, containing single-layered graphene, blocks dislocation movement across the graphene layers, resulting in greater strength.
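The reported spacing-strength trend can be sketched with a Hall-Petch-type inverse-square-root law, a form commonly used for layered metals. The exact functional form and the fitted constant below are assumptions for illustration only; the article reports the measured data points, not this law.

```python
# Assumed Hall-Petch-type scaling: strength ~ d^(-1/2) in the layer
# spacing d. The constant is anchored to the reported copper-graphene
# value (1.5 GPa at 70 nm spacing); the law itself is illustrative,
# not a result stated in the paper.
K_CU = 1.5 * 70 ** 0.5  # GPa * nm^0.5, fitted to one data point

def copper_composite_strength_gpa(spacing_nm: float) -> float:
    return K_CU / spacing_nm ** 0.5

s_70 = copper_composite_strength_gpa(70.0)  # recovers ~1.5 GPa (anchor)
s_35 = copper_composite_strength_gpa(35.0)  # smaller spacing -> stronger
```

Under this scaling, halving the interlayer spacing raises the predicted strength by a factor of sqrt(2), consistent with the qualitative trend the researchers describe.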
Creating 3D Content with Homegrown Technology
A research team led by Professor Noh Jun Yong of the KAIST Graduate School of Culture Technology has developed a software program that triples the efficiency of semi-automatic 2D-to-3D stereoscopic conversion. The software, named NAKiD, was first presented at SIGGRAPH 2012, the renowned computer graphics conference and exhibition, in August, where it drew intense interest from participants. The NAKiD technology is expected to replace the expensive imported equipment and technology currently used in 3D filming.

Stereoscopic 3D normally requires two cameras to film an image, but NAKiD can easily convert footage from a single camera into 3D, greatly reducing both the complications and the cost of film production. Two methods are commonly used to produce stereoscopic 3D images: filming with two cameras, and converting 2D footage with computer software. Filming with two cameras requires expensive equipment, and the footage needs further processing after production. Conversion, on the other hand, requires no extra devices during filming and can also turn existing 2D content into 3D, which is a main reason many countries are focusing on the development of conversion technology.

Stereoscopic conversion is largely divided into three steps: object separation, depth-information generation, and stereo rendering. Professor Noh's team focused on optimizing each step to increase the efficiency of the conversion pipeline. The team first increased the accuracy of object separation down to a single hair and created an algorithm that automatically fills in the background originally covered by the separated object. The team also succeeded in automatically generating depth information using geometric and architectural characteristics and vanishing points.
For the stereo rendering step, the team reduced rendering time by reusing the rendered information from one view, rather than rendering the left and right images separately as in the traditional method. Professor Noh said, "Although 3D TVs are becoming more and more common, there is not enough content that can be watched in 3D," adding, "Stereoscopic conversion technology is receiving high praise in the graphics field because it enables easy production of 3D content at a small cost."
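The rendering-reuse step is essentially depth-image-based rendering (DIBR): the second view is synthesized by shifting pixels of the already-rendered view by a depth-dependent disparity, instead of rendering the scene twice. The sketch below is a generic DIBR illustration, not the NAKiD implementation; the toy image, depth map, and disparity constant are invented.

```python
# Generic depth-image-based rendering sketch: derive the right view from
# the left view plus a depth map. Nearer pixels shift more; uncovered
# pixels are left as holes (0) to be filled by inpainting afterwards.
import numpy as np

def synthesize_right(left: np.ndarray, depth: np.ndarray, k: float = 8.0) -> np.ndarray:
    """left: (H, W) grayscale view; depth: (H, W) positive depth values."""
    h, w = left.shape
    right = np.zeros_like(left)
    disparity = np.round(k / depth).astype(int)  # disparity ~ 1/depth
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right

left = np.tile(np.arange(16, dtype=float), (4, 1))  # toy 4x16 image
depth = np.full((4, 16), 4.0)                       # constant depth plane
right = synthesize_right(left, depth)               # uniform 2-pixel shift
```

The hole-filling needed where the shifted view uncovers background is exactly the problem addressed by the team's automatic background-completion algorithm in the object-separation step.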
Professor Yoon Dong Ki becomes first Korean to Receive the Michi Nakata Prize
Professor Yoon Dong Ki of the Graduate School of Nano Science and Technology became the first Korean to receive the Michi Nakata Prize from the International Liquid Crystal Society. The award ceremony was held on August 23 in Mainz, Germany, during the 24th International Liquid Crystal Conference. The Michi Nakata Prize was established in 2008 and is awarded every two years to a young scientist who has made a groundbreaking discovery or experimental result in the field of liquid crystals. Professor Yoon is the first Korean recipient of the prize.

Professor Yoon pioneered a patterning approach that utilizes the defect structures formed by smectic liquid crystals, and he succeeded in large-scale patterning of the complex chiral nanostructures formed by bent-core molecules. His experimental accomplishments have been published in Advanced Materials and the Proceedings of the National Academy of Sciences (PNAS), and as a cover article of the journal Liquid Crystals. Professor Yoon is currently working on three-dimensional nanopatterning of supramolecular liquid crystals and is part of the World Class University program.
KAIST offers a new course on three-dimensional movies.
Registration for the class ends on February 18, 2010. The Graduate School of Culture Technology (GSCT) at KAIST has created a special class entitled "Master Class for Three-Dimensional (3D) Film Production." Applications for the class will be accepted through Thursday, February 18, 2010.

The 3D movie AVATAR has been hugely popular since its release in late 2009: the overwhelming visual and sensory experience provided by 3D technology gives viewers a real-life feeling of the virtual world built in the movie. People can almost reach out and touch the explosions, machine components, and aliens that appear on the screen.

"In response to growing interest in 3D movies, KAIST GSCT established a special session to teach students the overall process of 3D film production," said Kwang-Yeon Won, Dean of GSCT. He also stressed that 3D technology would serve as a catalyst in developing the next generation of the visual industry in the 21st century: "We have actively engaged in the development of 3D core technology and application content. This class will be the first of our initiatives to launch a series of educational programs on 3D technology."

The class offers a complete overview of 3D film production, covering stereography for 3D movies from planning and shooting to post-production. Film professionals currently working in the field, including Director Yang-Hyun Choi and Director of Photography Byung-Il Kim, will join the class so that students can learn all aspects of the 3D film industry, in terms of both theoretical knowledge and practical work experience. The class is open to undergraduate and graduate students as well as to the public. For details, please visit http://ct.kaist.ac.kr/stereoclass2010 or call 02-380-3698 (Industry-University Research Collaboration Center at the KAIST Graduate School of Culture Technology).
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.