Cyber MOU Signing with Zhejiang University
KAIST signed an MOU with Zhejiang University (ZJU) in China on March 25. This MOU signing ceremony took place via video conference due to the outbreak of COVID-19. The collaboration with ZJU had already started with the signing of an MOU for cooperation in technology commercialization last December. Possible cooperation initiatives included facilitating joint start-up businesses, patent portfolios, and technology marketing. With this general agreement signing, it is expected that the two institutes will expand mutual exchanges and collaborations at the institutional level for education and research. President Sung-Chul Shin said, “We will work together to devise measures for the systematic advancement of cooperation in various directions, including education, research, and the commercialization of technologies.” ZJU, a member of the C9 League known as China’s Ivy League, was established in 1897 and is located in the city of Hangzhou. Its population across 37 colleges and schools comprises 54,641 students and 3,741 faculty members. The university was ranked 6th in Asia and 54th in the world in the 2020 QS Rankings. (END)
Rise of the mimic-bots that act like we do: Human-machine teamwork.
An online magazine, Technology Marketing Corporation, based in the UK, published an article dated January 8, 2011 on a robot research project led by Professor Jong-Hwan Kim of the Electrical Engineering Department. The article follows below:

Technology Marketing Corporation [January 08, 2011]
Rise of the mimic-bots that act like we do: Human-machine teamwork
(New Scientist via Acquire Media NewsEdge)

A robot inspired by human mirror neurons can interpret human gestures to learn how it should act.

A HUMAN and a robot face each other across the room. The human picks up a ball, tosses it towards the robot, and then pushes a toy car in the same direction. Confused by two objects coming towards it at the same time, the robot flashes a question mark on a screen. Without speaking, the human makes a throwing gesture. The robot turns its attention to the ball and decides to throw it back. In this case the robot's actions were represented by software commands, but it will be only a small step to adapt the system to enable a real robot to infer a human's wishes from their gestures. Developed by Ji-Hyeong Han and Jong-Hwan Kim at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, the system is designed to respond to the actions of the person confronting it in the same way that our own brains do. The human brain contains specialised cells, called mirror neurons, that appear to fire in the same way when we watch an action being performed by others as they do when we perform the action ourselves. It is thought that this helps us to recognise or predict others' intentions. To perform the same feat, the robot observes what the person is doing, breaks the action down into a simple verbal description, and stores it in its memory.
It compares the action it observes with a database of its own actions, and generates a simulation based on the closest match. The robot also builds up a set of intentions or goals associated with an action. For example, a throwing gesture indicates that the human wants the robot to throw something back. The robot then connects the action "throw" with the object "ball" and adds this to its store of knowledge. When the memory bank contains two possible intentions that fit the available information, the robot considers them both and determines which results in the most positive feedback from the human - a smile or a nod, for example. If the robot is confused by conflicting information, it can request another gesture from the human. It also remembers details of each interaction, allowing it to respond more quickly when it finds itself in a situation it has encountered before. The system should allow robots to interact more effectively with humans, using the same visual cues we use. "Of course, robots can recognise human intentions by understanding speech, but humans would have to make constant, explicit commands to the robot," says Han. "That would be pretty uncomfortable." Socially intelligent robots that can communicate with us through gesture and expression will need to develop a mental model of the person they are dealing with in order to understand their needs, says Chris Melhuish, director of the Bristol Robotics Laboratory in the UK. Using mirror neurons and humans' unique mimicking ability as an inspiration for building such robots could be quite interesting, he says. Han now plans to test the system on a robot equipped with visual and other sensors to detect people's gestures. He presented his work at the Robio conference in Tianjin, China, in December.

As the population of many countries ages, elderly people may share more of their workload with robotic helpers or colleagues.
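The loop the article describes - match an observed gesture against a database of known actions, pick the intention with the most positive human feedback, and ask for another gesture when candidates conflict - can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' actual system: every name here (ActionMemory, infer_intention, the feedback scores) is an assumption, and the closest-match step is stood in for by simple string similarity.

```python
# Illustrative sketch of the intention-inference loop described above.
# All class and method names are hypothetical; the real system matches
# observed actions against learned motor representations, not strings.
from difflib import SequenceMatcher

class ActionMemory:
    """Maps verbal action descriptions to intentions and feedback scores."""

    def __init__(self):
        # action description -> {candidate intention: accumulated feedback}
        self.knowledge = {}

    def learn(self, action, intention, feedback):
        # Positive feedback (a smile or a nod) reinforces an intention.
        scores = self.knowledge.setdefault(action, {})
        scores[intention] = scores.get(intention, 0.0) + feedback

    def closest_action(self, observed):
        # Compare the observed description with stored actions and return
        # the closest match (the robot's "simulation" step).
        if not self.knowledge:
            return None
        return max(self.knowledge,
                   key=lambda a: SequenceMatcher(None, a, observed).ratio())

    def infer_intention(self, observed):
        action = self.closest_action(observed)
        if action is None:
            return "request another gesture"
        candidates = self.knowledge[action]
        best = max(candidates.values())
        top = [i for i, s in candidates.items() if s == best]
        if len(top) > 1:
            # Conflicting intentions fit equally well: ask the human.
            return "request another gesture"
        return top[0]

memory = ActionMemory()
memory.learn("throw ball", "throw it back", feedback=1.0)
memory.learn("push car", "push it back", feedback=1.0)
print(memory.infer_intention("throw the ball"))  # -> throw it back
```

Remembering the accumulated feedback per (action, intention) pair is what lets the sketch, like the robot in the article, respond faster to situations it has encountered before.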
In an effort to make such interactions as easy as possible, Chris Melhuish and colleagues at the Bristol Robotics Laboratory in the UK are leading a Europe-wide collaboration, Cooperative Human Robotic Interaction Systems, that is equipping robots with software that recognises the object a robot is picking up before it hands it to a person. They also use eye-tracking technology to monitor what humans are paying attention to. The goal is to develop robots that can learn to safely perform shared tasks with people, such as stirring a cake mixture as a human adds milk. (c) 2011 Reed Business Information - UK. All Rights Reserved.
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.