KAIST’s Robo-Dog “RaiBo” Runs on a Sandy Beach
KAIST (President Kwang Hyung Lee) announced on the 25th that a research team led by Professor Jemin Hwangbo of the Department of Mechanical Engineering has developed a quadrupedal robot control technology that can walk robustly and with agility even on deformable terrain such as a sandy beach.

< Photo. RAI Lab team with Professor Hwangbo in the middle of the back row. >

Professor Hwangbo's research team developed a technology to model the force a walking robot receives from ground made of granular materials such as sand and to simulate it with a quadrupedal robot. The team also designed an artificial neural network structure suited to making the real-time decisions needed to adapt to various types of ground, without prior information, while walking, and applied it to reinforcement learning. The trained neural network controller is expected to expand the scope of application of quadrupedal walking robots by proving its robustness on changing terrain, such as the ability to move at high speed even on a sandy beach and to walk and turn on soft ground like an air mattress without losing balance.

This research, with Ph.D. student Soo-Young Choi of the KAIST Department of Mechanical Engineering as the first author, was published in January in Science Robotics. (Paper title: Learning quadrupedal locomotion on deformable terrain)

Reinforcement learning is an AI training method in which a machine collects data on the outcomes of various actions in arbitrary situations and uses that data to learn to perform a task. Because the amount of data required for reinforcement learning is so vast, it is common to collect the data through simulations that approximate the physical phenomena of the real environment. In particular, learning-based controllers for walking robots are typically trained on data collected in simulation and then transferred to real environments, where they have successfully performed walking control over various terrains.
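The trial-and-error data collection that reinforcement learning relies on can be illustrated with a deliberately tiny sketch. Everything below (the three-action toy environment, the reward shape, the epsilon-greedy rule) is an illustrative assumption for exposition, not the team's actual training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: the agent picks one of 3 actions; the reward depends on a
# hidden "ground parameter" it must discover purely by trial and error.
N_ACTIONS = 3
q_values = np.zeros(N_ACTIONS)   # running value estimate per action
counts = np.zeros(N_ACTIONS)
BEST_ACTION = 1                  # hidden from the agent

def step(action):
    """Simulated environment: noisy reward, highest for BEST_ACTION."""
    return 1.0 - abs(action - BEST_ACTION) + rng.normal(0, 0.1)

# Collect data in simulation and update the estimates (epsilon-greedy).
for t in range(2000):
    if rng.random() < 0.1:                 # occasionally explore
        a = int(rng.integers(N_ACTIONS))
    else:                                  # otherwise exploit current knowledge
        a = int(np.argmax(q_values))
    r = step(a)
    counts[a] += 1
    q_values[a] += (r - q_values[a]) / counts[a]   # incremental mean
```

After enough simulated interactions the value estimates single out the rewarding action; real locomotion learning applies the same collect-and-update loop at vastly larger scale, with a neural network in place of the value table.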
However, since the performance of a learning-based controller drops rapidly whenever the real environment deviates from the simulated one it was trained in, it is important that the data-collection environment closely resemble reality. To create a learning-based controller that can maintain balance on deforming terrain, the simulator must therefore provide a similar contact experience.

The research team defined a contact model that predicts the force generated upon contact from the motion dynamics of a walking body, based on a ground reaction force model from previous studies that accounts for the added mass effect of granular media. By computing the force generated from one or several contacts at each time step, the deforming terrain could be simulated efficiently. The team also introduced an artificial neural network structure that implicitly predicts ground characteristics, using a recurrent neural network to analyze time-series data from the robot's sensors.

The trained controller was mounted on 'RaiBo', a robot built in-house by the research team, and achieved high-speed walking of up to 3.03 m/s on a sandy beach where the robot's feet sank completely into the sand. Even when applied to harder ground, such as grassy fields and a running track, it ran stably by adapting to the characteristics of the ground, without any additional programming or revision of the control algorithm. It also rotated stably at 1.54 rad/s (approximately 90° per second) on an air mattress, demonstrating quick adaptability even when the terrain suddenly turned soft.
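The two ingredients of such a contact model, a vertical ground-reaction force that includes an added-mass term for the granular medium and Coulomb friction in the horizontal direction, can be sketched roughly as follows. All coefficients and the functional form are illustrative placeholders; the paper defines the actual model:

```python
import numpy as np

def granular_contact_force(depth, vz, vxy, m_add_coeff=2.0,
                           k=5000.0, c=50.0, mu=0.6):
    """Toy point-contact model for a foot in granular media.

    depth : penetration depth into the sand (m, >= 0)
    vz    : vertical foot velocity (m/s, negative = downward)
    vxy   : horizontal foot velocity, shape (2,)
    Returns the contact force [fx, fy, fz].
    """
    if depth <= 0.0:
        return np.zeros(3)
    # Vertical: stiffness + damping, plus an "added mass" term growing
    # with depth, mimicking sand dragged along with the penetrating foot.
    added_mass = m_add_coeff * depth
    fz = k * depth - c * vz + added_mass * max(-vz, 0.0) * 100.0
    fz = max(fz, 0.0)                      # the ground can only push
    # Horizontal: Coulomb friction opposing the sliding direction.
    speed = np.linalg.norm(vxy)
    f_xy = -mu * fz * vxy / speed if speed > 1e-9 else np.zeros(2)
    return np.array([f_xy[0], f_xy[1], fz])

f = granular_contact_force(depth=0.02, vz=-0.5, vxy=np.array([0.3, 0.0]))
```

Evaluating a closed-form expression like this once per contact per time step is what makes simulating deformable ground cheap enough for large-scale reinforcement learning.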
The research team demonstrated the importance of providing a suitable contact experience during training by comparing the controller against one trained under the assumption of rigid ground, and showed that the proposed recurrent neural network adjusts the controller's gait according to the ground properties. The simulation and learning methodology developed by the team is expected to help robots perform practical tasks by expanding the range of terrains on which various walking robots can operate.

The first author, Soo-Young Choi, said, “It has been shown that providing a learning-based controller with a contact experience close to that of real deforming ground is essential for application to deforming terrain.” He added, “The proposed controller can be used without prior information about the terrain, so it can be applied to various robot walking studies.”

This research was carried out with the support of the Samsung Research Funding & Incubation Center of Samsung Electronics.

< Figure 1. Adaptability of the proposed controller to various ground environments. The controller, trained on a wide range of randomized granular media simulations, adapted to various natural and artificial terrains and demonstrated high-speed walking ability and energy efficiency. >

< Figure 2. Contact model definition for the simulation of granular substrates. The research team approximated contact with the granular medium as occurring at a point, using a model that accounts for the added mass effect for the vertical force and a Coulomb friction model for the horizontal direction. A model simulating the ground resistance that can occur on the side of the foot was also introduced and used in the simulation. >
AI-based Digital Watermarking to Beat Fake News
(from left: PhD candidates Ji-Hyeon Kang, Seungmin Mun, Sangkeun Ji and Professor Heung-Kyu Lee)

The illegal use of images has become a prevalent issue alongside the spread of fake news, creating social and economic problems. A KAIST team has now succeeded in embedding and detecting digital watermarks based on deep-learning artificial intelligence that adaptively responds to a variety of attacks, such as watermark removal and hacking. Their research shows that the technology has reached a level of reliability suitable for commercialization.

Conventional watermarking technologies are limited in practicality, scalability, and usefulness because they are designed and implemented for a predetermined set of conditions, such as the attack type and intensity. They are also vulnerable to security issues, as upgraded hacking techniques such as watermark removal, copying, and substitution are constantly emerging.

Professor Heung-Kyu Lee from the School of Computing and his team built a web service that responds to new attacks through deep-learning artificial intelligence. It offers a two-dimensional image watermarking technique based on neural networks, whose high security derives from the nonlinear characteristics of artificial neural networks. To protect images rendered from varying viewpoints, the service offers a depth-image-based rendering (DIBR) three-dimensional image watermarking technique. Lastly, it provides a stereoscopic three-dimensional (S3D) image watermarking technique that minimizes the visual fatigue caused by embedded watermarks.

Their two-dimensional image watermarking technology is the first of its kind to be based on artificial neural networks. It acquires robustness by training the network on various attack scenarios.
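To make "embedding" and "detecting" concrete, here is a minimal classical spread-spectrum baseline: a key-derived pseudo-random pattern is added to the image, and detection correlates the image with the same pattern. The team's method replaces these hand-designed steps with trained neural networks; this sketch, with its made-up keys and parameters, only illustrates the embed/detect workflow:

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(image, key, strength=5.0):
    """Add a key-derived pseudo-random pattern to the image."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect(image, key):
    """Correlate with the key's pattern; a high score means 'watermarked'."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    centered = image - image.mean()          # remove the image's DC offset
    return float(np.mean(centered * pattern))

original = rng.uniform(0, 255, size=(128, 128))
marked = embed(original, key=1234)
attacked = marked + rng.normal(0, 5, size=marked.shape)   # noise "attack"

score_marked = detect(attacked, key=1234)   # correct key -> large score
score_wrong = detect(attacked, key=9999)    # wrong key  -> near zero
```

The weakness of such fixed schemes, correlation scores degrade under attacks they were not designed for, is precisely what motivates training a network against many attack scenarios instead.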
At the same time, the team greatly improved on existing security vulnerabilities, achieving high security against watermark hacking through the deep structure of the artificial neural networks. They also developed a watermarking technique that can be embedded whenever needed to provide proof in possible disputes. Users can upload their images to the web service and insert watermarks; when necessary, they can detect the watermarks as proof in a dispute. The service also provides simulation tools, watermark adjustment, and image-quality comparisons before and after the watermark is embedded.

This study maximized the usefulness of watermarking technology by facilitating additional editing and demonstrating robustness against hacking. The technology can therefore be applied to a variety of content for certification, authentication, distribution tracking, and copyright. It can help spur the content industry and promote a digital society by reducing the socio-economic losses caused by the illegal use of image materials.

Professor Lee said, “Disputes related to images now go beyond the conventional realm of copyright. Interest has recently expanded rapidly to authentication, certification, integrity inspection, and distribution tracking because of the fake video problem. We will lead digital watermarking research that can overcome the technical limitations of conventional watermarking techniques.”

Until now this technology had only been tested in the lab, but after years of study it is open to the public, and the team has been conducting a test run on its webpage. Moving beyond specific lab conditions, it will next be applied to real environments where conditions constantly change.

Figure 1. 2D image using the watermarking technique: a) original image b) watermark-embedded image c) signal from the embedded watermark
Figure 2. Result of watermark detection according to the password
Figure 3. Example of a center image using the DIBR 3D image watermarking technique: a) original image b) depth image c) watermark-embedded image d) signal from the embedded watermark
Figure 4. Example of using the S3D image watermarking technique: a) original left image b) original right image c) watermark-embedded left image d) watermark-embedded right image e) signal from the embedded watermark (left) f) signal from the embedded watermark (right)
Mathematical Principle behind AI's 'Black Box'
(from left: Professor Jong Chul Ye, PhD candidates Yoseob Han and Eunju Cha)

A KAIST research team identified the geometric structure of artificial intelligence (AI) and discovered mathematical principles behind high-performing artificial neural networks, which can be applied in fields such as medical imaging.

Deep neural networks are an exemplary method of implementing deep learning, which is at the core of AI technology, and have shown explosive growth in recent years. The technique has been used in various fields, such as image and speech recognition as well as image processing. Despite its excellent performance and usefulness, the exact working principles of deep neural networks have not been well understood, and they often produce unexpected results or errors. Hence, there is a growing social and technical demand for interpretable deep neural network models.

To address these issues, Professor Jong Chul Ye from the Department of Bio & Brain Engineering and his team searched for a geometric structure in a higher-dimensional space where the structure of a deep neural network can be easily understood. They proposed a general deep learning framework, called deep convolutional framelets, to explain the mathematical principles of deep neural networks using tools from harmonic analysis. They found that the structure of a deep neural network emerges in the process of decomposing a high-dimensionally lifted signal via a Hankel matrix, a high-dimensional structure studied intensively in the field of signal processing. Decomposing the lifted signal yields two categories of bases, local and non-local. The researchers found that the non-local and local basis functions play the roles of the pooling and filtering operations in a convolutional neural network, respectively.
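The Hankel lifting at the heart of this analysis is easy to show in miniature. The sketch below lifts a toy two-sinusoid signal into a Hankel matrix and decomposes it with a plain SVD (not the paper's framelet construction); the window length and signal are arbitrary choices for illustration:

```python
import numpy as np

def hankel(signal, window):
    """Lift a 1-D signal into a Hankel matrix of sliding windows."""
    n = len(signal) - window + 1
    return np.array([signal[i:i + window] for i in range(n)])

# Toy signal: two sinusoids, i.e. four complex exponentials.
t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

H = hankel(x, window=16)          # shape (113, 16)
U, s, Vt = np.linalg.svd(H, full_matrices=False)

# A sum of r complex exponentials gives a Hankel matrix of rank r, so
# here exactly 4 singular values are (numerically) nonzero. In the
# paper's language, U acts across windows like a non-local (pooling)
# basis, while the rows of Vt act within a window like local
# (convolution filter) bases.
rank = int(np.sum(s > 1e-6 * s[0]))
```

The low numerical rank of the lifted signal is what a deep network's pooling-plus-filtering structure implicitly exploits, which is why the decomposition mirrors the network architecture.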
Previously, deep neural networks were usually constructed through empirical trial and error. The significance of this research lies in providing a mathematical understanding of the neural network structure in high-dimensional space, which guides users in designing an optimized neural network. The team demonstrated the improved performance of deep convolutional framelet neural networks in image denoising, image pixel inpainting, and medical image restoration.

Professor Ye said, “Unlike conventional neural networks designed through trial and error, our theory shows that the neural network structure can be optimized for each desired application, and its effects can be easily predicted by exploiting the high-dimensional geometry. This technology can be applied to a variety of fields requiring interpretation of the architecture, such as medical imaging.”

This research, led by PhD candidates Yoseob Han and Eunju Cha, was published in the April 26th issue of the SIAM Journal on Imaging Sciences.

Figure 1. The design of a deep neural network using mathematical principles
Figure 2. The results of image noise cancelling
Figure 3. The artificial neural network restoration results in the case where 80% of the pixels are lost
Dr. Demis Hassabis, the Developer of AlphaGo, Lectures at KAIST
AlphaGo, a computer program developed by Google DeepMind in London to play the traditional Chinese board game Go, played five matches against Se-Dol Lee, a professional Go player from Korea, from March 8 to 15, 2016. AlphaGo won four of the five games, a significant result showcasing the advances achieved in the field of general-purpose artificial intelligence (GAI), according to the company. Dr. Demis Hassabis, Chief Executive Officer of Google DeepMind, visited KAIST on March 11, 2016 and gave an hour-long talk to students and faculty. In the lecture, entitled “Artificial Intelligence and the Future,” he gave an overview of GAI and some of its applications in Atari video games and Go. He said that the ultimate goal of GAI is to become a useful tool that helps society solve some of the biggest and most pressing problems facing humanity, from climate change to disease diagnosis.
Discovery of New Therapeutic Targets for Alzheimer's Disease
A Korean research team headed by Professor Dae-Soo Kim of Biological Sciences at KAIST and Dr. Chang-Jun Lee of the Korea Institute of Science and Technology (KIST) identified that reactive astrocytes, commonly observed in brains affected by Alzheimer’s disease, produce abnormal amounts of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA) via the enzyme monoamine oxidase B (MAO-B) and release it through the Bestrophin-1 channel, suppressing normal signal transmission between brain nerve cells. By suppressing GABA production or release from reactive astrocytes, the research team was able to restore the memory and learning impairments of mouse models of Alzheimer’s disease. This discovery opens the way for the development of new drugs to treat Alzheimer’s and related diseases. The results were published in the June 29, 2014 edition of Nature Medicine (Title: GABA from Reactive Astrocytes Impairs Memory in Mouse Models of Alzheimer’s Disease). For details, please read the article below: Technology News, July 10, 2014, "Discovery of New Drug Targets for Memory Impairment in Alzheimer’s Disease" http://technews.tmcnet.com/news/2014/07/10/7917811.htm
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.