
There Won't Be a Singularity: Professor Jerry Kaplan​
View : 3990 Date : 2018-09-10 Writer : ed_camnews

(Professor Jerry Kaplan gave a lecture titled, Artificial Intelligence: Think Again at KAIST)
 
“People are so concerned about superintelligence, but the singularity will not happen,” said Professor Jerry Kaplan of Stanford University, an AI expert and Silicon Valley entrepreneur, during a lecture at KAIST. He visited KAIST on September 6 to give a lecture titled Artificial Intelligence: Think Again.

Professor Kaplan noted that some people argue Korea’s AI research lags behind that of the US and China, but he disagrees. “Korea is one of the most digitally connected countries and has the world’s best engineers in the field. Korean companies are building products that consumers really like at reasonable prices. Those are attracting global consumers,” he added.

Rather than simply pouring money into AI research, he suggested three tasks that would put Korea in a better position in the field of AI: collecting and storing large amounts of data; training AI engineers, not just research talent; and having the government invest in AI infrastructure and ease regulations.

Referring to AI hype, Professor Kaplan argued that machines are intelligent but do not think the way humans do, and assured the audience that the singularity some futurists predict is not coming. He said, “Machine learning is a tool for extracting useful information, but that does not mean machines are so smart that they will take over the world.”


But what has made us believe these AI myths? He pointed to three major drivers that have mythicized AI: the entertainment industry, the popular media, and the AI community itself, all of which want to attract public attention and prestige. These drivers falsely portray robots as more human than they are, attributing human characteristics to machines.

Instead of being captivated by those AI myths and worrying about how to save the world from robots, he strongly argued, “We need to develop standards for the unintended side effects of AI.” For machines to coexist socially and ethically with the human world, he believes the following principles should be established: defining the Safe Operating Envelope (SOE); providing “safe modes” for when machines go out of bounds; studying human behavior programmatically; setting certification and licensing standards; placing limitations on machine “agency”; and developing basic computational ethics, such as determining when it is acceptable to break the law.

Professor Kaplan offered a positive view of what AI means for humans. “The future will be bright, thanks to AI. Machines will do difficult work and help us, and that will drive wealth and quality of life. The rich might get richer, but the benefits will spread to everyone. It is time to think of innovative ways to use AI to build a better world,” he concluded.