
Taming AI: Engineering, Ethics, and Policy
Date : 2018-06-26 Writer : ed_camnews


(Professor Lee, Professor Koene, Professor Walsh, and Professor Ema (from left))

Can AI-powered robots be adequate companions for humans? Will the good faith of users and developers help AI-powered robots become the new tribe of the digital future?

AI’s efficiency is creating new socio-economic opportunities in the global market. Despite these opportunities, challenges remain. Some argue that efficiency-driven deep learning algorithms will eventually take a toll on human dignity and safety, bringing about the kinds of disastrous scenarios featured in the Terminator movies.

A research group of the Korean Flagship AI Project for Emotional Digital Companionship at the KAIST Institute for AI (KI4AI) and the Fourth Industrial Intelligence Center at the KAIST Institute co-hosted a seminar, “Taming AI: Engineering, Ethics, and Policy,” last week to discuss how to employ AI technologies in ways that uphold human values.

KI4AI has been conducting this flagship project since the end of 2016 with the support of the Ministry of Science and ICT.

The seminar brought together three speakers from Australia, Japan, and the UK to examine the implications of this emerging technology from the ethical perspective of engineering and to discuss policymaking for the responsible use of the technology.

Professor Toby Walsh, an anti-autonomous-weapons activist from the University of New South Wales in Australia, warned of the risks that AI poses when it malfunctions. He noted that an independent ethics committee or group usually monitors academic institutions’ research activities in order to avoid possible mishaps.

However, he said there is no independent group or committee monitoring how corporations engage with such technologies, even as their potential threats to humanity are said to be growing. He noted that the data collection practices of Google and Amazon also pose a potent threat. He argued that ethical standards similar to those of academic research integrity should be established to prevent the erosion of human dignity and the risk of mass destruction, and he expressed hope that KAIST and Google would play a leading role in establishing an international norm on this compelling issue.

Professor Arisa Ema from the University of Tokyo presented compelling arguments about the dual nature of technology and how technology should serve the public interest without bias regarding gender, race, or social stratum. She pointed out that information is dominated by a handful of Western corporations such as Google, and said that deep learning algorithms trained on data supplied by those corporations will produce highly biased information, applicable only to limited races and classes.

Meanwhile, Professor Ansgar Koene from the University of Nottingham presented the IEEE’s global initiative on the ethics of autonomous and intelligent systems. He shared cases of industry standards and ethically aligned designs developed by the IEEE Standards Association. More than 250 cross-disciplinary thought leaders from around the world joined to develop ethical guidelines called Ethically Aligned Design (EAD) V2, which includes methodologies to guide ethical research and design and to embed values into autonomous intelligent systems, among other topics. As the next step beyond EAD V2, the association is now working on the IEEE P70xx Standards Projects, which detail more technical approaches.

Professor Soo Young Lee of KAIST argued that the ultimate goal of complete AI is to have human-like emotions, calling this a new paradigm for the relationship between humans and AI-robots. According to Professor Lee, AI-powered robots will serve as good companions for humans. “Especially in the aging societies emerging around the globe, this will be a very viable and practical option,” he said.

He pointed out, “Kids learn morality and social behavior from their parents. Users should have AI-robots learn morality as well. Their relationship should be based on good faith and trust, no longer that of master and slave.” He said that liability issues for any mishaps will need to be discussed further, but that, fundamentally, users and developers should each bear their own responsibility when dealing with these issues.
