NYU-KAIST Global AI & Digital Governance Conference Held
< Photo 1. Opening of NYU-KAIST Global AI & Digital Governance Conference > With the Minister of Science and ICT Jong-ho Lee, NYU President Linda G. Mills, and KAIST President Kwang Hyung Lee in attendance, KAIST co-hosted the NYU-KAIST Global AI & Digital Governance Conference at New York University's (NYU) Paulson Center in New York City, USA, at 9:30 p.m. on September 21st. At the conference, KAIST and NYU discussed directions and policies for ‘global AI and digital governance’ with up to 300 participants, including scholars, professors, and students working in AI and digitalization from Korea, the United States, and other international backgrounds. The conference was an international forum that sought new directions for AI and digital technology to take in the future and gathered consensus on regulation. Following a welcoming address by KAIST President Kwang Hyung Lee and a congratulatory message from the Minister of Science and ICT Jong-ho Lee, a panel discussion was held, moderated by Professor Matthew Liao, a graduate of Princeton and Oxford currently serving as a professor at NYU and the director of the Center for Bioethics at the NYU School of Global Public Health. Six prominent scholars took part in the panel discussion. On the panel were Professor Kyung-hyun Cho of the NYU Applied Mathematics and Data Science Center, a KAIST graduate who has joined the ranks of the world's leading researchers in AI language models, and Professor Jong Chul Ye, the Director of the Promotion Council for Digital Health at KAIST, who is leading innovative research in medical AI in collaboration with major hospitals at home and abroad. 
Additionally, Professor Luciano Floridi, a founding member of the Yale University Center for Digital Ethics; Professor Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence at the University of Edinburgh in the UK; Professor Stefaan Verhulst, a Co-Founder and the Director of GovLab's Data Program at NYU's Tandon School of Engineering; and Professor Urs Gasser, who is in charge of public policy, governance, and innovative technology at the Technical University of Munich, also participated. Professor Matthew Liao of NYU led the discussion on various topics: ways to regulate AI and digital technologies; concerns that deep learning technology developed for medical purposes could be used in warfare; the scope of responsibility AI scientists should carry in ensuring that AI is used only for benign purposes; the effects of external regulation on AI model developers and the research they pursue; and the lessons that can be learned from regulation in other fields. During the panel discussion, there was an exchange of ideas about a system of standards that could harmonize digital development with regulation and social ethics at a time when digital transformation is accelerating technological development globally, and when there is a looming concern that, while such advancements bring economic vitality, they may also create digital divides and problems such as the manipulation of public opinion. Professor Jong Chul Ye of KAIST (Director of the Promotion Council for Digital Health), in particular, emphasized the importance of finding a point of balance that does not hinder advancement, rather than opting to enforce strict regulations. < Photo 2. 
Panel Discussion in Session at NYU-KAIST Global AI & Digital Governance Conference > KAIST President Kwang Hyung Lee explained, “At the Digital Governance Forum we held last October, we focused on exploring new governance to solve digital challenges in the time of global digital transition, and this year's main focus was on regulations.” He continued, “This conference served as an opportunity of immense value, as we came to understand that appropriate regulations can spur further development rather than being a hurdle to technological advancement, and that it is important for us to clearly understand artificial intelligence and consider what should and can be regulated when we set regulations on it.” Earlier, KAIST signed a cooperation agreement with NYU in June last year to build a joint campus, and held a plaque presentation ceremony for the KAIST NYU Joint Campus last September to promote joint research between the two universities. KAIST is currently conducting joint research with NYU in nine fields, including AI and digital research. The KAIST-NYU Joint Campus was conceived with the goal of building an innovative sandbox campus centered around science, technology, engineering, and mathematics (STEM), combining NYU's excellence in the humanities and arts as well as its basic science and convergence research capabilities with KAIST's science and technology. KAIST has contributed to the development of Korea's industry and economy through technological innovation, aiding the nation's transformation into an innovative nation with scientific and technological prowess. KAIST will now pursue an anchor/base strategy to raise awareness of KAIST in New York through the NYU Joint Campus by establishing a KAIST campus within that of NYU, in the heart of New York.
KAIST debuts “DreamWaQer” - a quadrupedal robot that can walk in the dark
- The team led by Professor Hyun Myung of the School of Electrical Engineering developed “DreamWaQ”, a deep reinforcement learning-based walking robot control technology that can walk in atypical environments without visual or tactile information - Utilization of “DreamWaQ” technology can enable mass production of various types of “DreamWaQers” - Expected to be used in the exploration of atypical environments in unique circumstances such as fire disasters. A team of Korean engineering researchers has developed a quadrupedal robot technology that can climb up and down steps and move without falling over in uneven environments such as terrain riddled with tree roots, without the help of visual or tactile sensors, even in disaster situations in which visual confirmation is impeded by darkness or thick smoke from flames. KAIST (President Kwang Hyung Lee) announced on March 29th that Professor Hyun Myung's research team at the Urban Robotics Lab in the School of Electrical Engineering had developed a walking robot control technology that enables robust 'blind locomotion' in various atypical environments. < (From left) Prof. Hyun Myung, Doctoral Candidates I Made Aswin Nahrendra, Byeongho Yu, and Minho Oh. In the foreground is the DreamWaQer, a quadrupedal robot equipped with DreamWaQ technology. > The KAIST research team developed “DreamWaQ” technology, named for its ability to let walking robots move about even in the dark, just as a person can walk to the bathroom without visual help after getting out of bed at night. With this technology installed on any legged robot, it will be possible to create various types of “DreamWaQers”. Existing walking robot controllers are based on kinematics and/or dynamics models, an approach known as model-based control. 
In atypical environments such as open, uneven fields in particular, the feature information of the terrain must be obtained quickly for the robot to maintain stability as it walks, so model-based control depends heavily on the ability to perceive the surrounding environment. In contrast, the controller developed by Professor Hyun Myung's research team, based on deep reinforcement learning (RL), can quickly compute appropriate control commands for each motor of the walking robot from data on various environments obtained in simulation. Whereas existing controllers learned in simulation require separate re-tuning to work on an actual robot, the controller developed by the research team is expected to be easily applied to various walking robots because it needs no additional tuning process. DreamWaQ, the controller developed by the research team, consists largely of a context estimation network that estimates ground and robot information, and a policy network that computes control commands. The context-aided estimator network estimates the ground information implicitly and the robot's state explicitly from inertial and joint information. This information is fed into the policy network to generate optimal control commands. Both networks are trained together in simulation. The context-aided estimator network is trained through supervised learning, while the policy network is trained through an actor-critic architecture, a deep RL methodology. The actor network can only implicitly infer the surrounding terrain information. In simulation, the surrounding terrain is known, and the critic, or value network, which has the exact terrain information, evaluates the policy of the actor network. The whole training process takes only about an hour on a GPU-enabled PC, and the actual robot carries only the trained actor network. 
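The two-network split described above can be illustrated with a minimal sketch. This is not the authors' code: the layer sizes, history length, and context dimension below are invented assumptions, and the weights are random, but the structure mirrors the article's description of an estimator that compresses proprioceptive history into a latent context, followed by a policy that maps the current observation plus that context to per-motor commands.

```python
import numpy as np

def mlp_forward(x, layers):
    """Apply a stack of (W, b) layers with tanh activations."""
    for W, b in layers:
        x = np.tanh(W @ x + b)
    return x

def make_mlp(sizes, rng):
    """Random (W, b) pairs for each consecutive pair of layer sizes."""
    return [(rng.standard_normal((o, i)) * 0.1, np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(0)

OBS_DIM = 45      # assumed: IMU + joint positions/velocities for one step
HISTORY = 5       # assumed: steps of proprioceptive history fed to estimator
CONTEXT_DIM = 16  # assumed latent context size
NUM_MOTORS = 12   # a quadruped with 3 actuated joints per leg

# Estimator: proprioceptive history -> implicit terrain/robot context.
estimator = make_mlp([OBS_DIM * HISTORY, 128, CONTEXT_DIM], rng)
# Policy (trained actor): current observation + context -> motor commands.
policy = make_mlp([OBS_DIM + CONTEXT_DIM, 128, NUM_MOTORS], rng)

def control_step(obs_history):
    """One control tick: estimate context implicitly, then act on it."""
    context = mlp_forward(obs_history.reshape(-1), estimator)
    obs_now = obs_history[-1]
    return mlp_forward(np.concatenate([obs_now, context]), policy)

commands = control_step(rng.standard_normal((HISTORY, OBS_DIM)))
print(commands.shape)  # one command per motor
```

In deployment only these two forward passes run on the robot; the critic exists only during simulated training, where privileged terrain information is available to evaluate the actor.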
Without looking at the surrounding terrain, the robot imagines which of the various environments learned in simulation the current one resembles, using only its internal inertial measurement unit (IMU) and joint angle measurements. If it suddenly encounters an offset such as a staircase, it does not know until its foot touches the step, but it rapidly draws up the terrain information the moment contact is made. The control command suited to the estimated terrain is then transmitted to each motor, enabling rapidly adapted walking. The DreamWaQer robot walked not only in the laboratory environment, but also in an outdoor environment around campus with many curbs and speed bumps, and over fields with many tree roots and gravel, demonstrating its abilities by overcoming staircases with height differences of up to two-thirds of its body height. Regardless of the environment, the research team confirmed that it was capable of stable walking at speeds ranging from a slow 0.3 m/s to a rather fast 1.0 m/s. The results of this study, with doctoral student I Made Aswin Nahrendra as the first author and his colleague Byeongho Yu as a co-author, have been accepted for presentation at the upcoming IEEE International Conference on Robotics and Automation (ICRA), to be held in London at the end of May. (Paper title: DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain Imagination via Deep Reinforcement Learning) Videos of the walking robot DreamWaQer equipped with DreamWaQ can be found at the addresses below. Main Introduction: https://youtu.be/JC1_bnTxPiQ Experiment Sketches: https://youtu.be/mhUUZVbeDA0 Meanwhile, this research was carried out with support from the Robot Industry Core Technology Development Program of the Ministry of Trade, Industry and Energy (MOTIE). 
(Task title: Development of Mobile Intelligence SW for Autonomous Navigation of Legged Robots in Dynamic and Atypical Environments for Real Application) < Figure 1. Overview of DreamWaQ, the controller developed by this research team. It consists of an estimator network that learns implicit and explicit estimates together, a policy network that acts as the controller, and a value network that guides the policy during training. When deployed on a real robot, only the estimator and policy networks are used, and both run in less than 1 ms on the robot's on-board computer. > < Figure 2. Since the estimator can implicitly estimate the ground information as the foot touches the surface, the robot can adapt quickly to rapidly changing ground conditions. > < Figure 3. Results showing that even a small walking robot was able to overcome steps with height differences of about 20 cm. >
Professor Hyunjoo Jenny Lee to Co-Chair IEEE MEMS 2025
Professor Hyunjoo Jenny Lee from the School of Electrical Engineering has been appointed General Chair of the 38th IEEE MEMS 2025 (International Conference on Micro Electro Mechanical Systems). Professor Lee, who is 40, is the conference's youngest General Chair to date and will serve jointly with Professor Sheng-Shian Li of Taiwan's National Tsing Hua University as co-chairs in 2025. IEEE MEMS is a top-tier international conference on microelectromechanical systems and serves as a core academic showcase for MEMS research and technology in areas such as microsensors and actuators. Of the more than 800 MEMS papers submitted each year, the conference accepts and publishes only about 250 after a rigorous review process, a selectivity that reflects its world-class prestige. Of all the submissions, fewer than 10% are chosen for oral presentations.
Professor Sung-Ju Lee’s Team Wins the Best Paper and the Methods Recognition Awards at the ACM CSCW
A research team led by Professor Sung-Ju Lee at the School of Electrical Engineering won the Best Paper Award and the Methods Recognition Award at ACM CSCW (International Conference on Computer-Supported Cooperative Work and Social Computing) 2021 for their paper “Reflect, not Regret: Understanding Regretful Smartphone Use with App Feature-Level Analysis”. Founded in 1986, CSCW is a premier conference on HCI (Human-Computer Interaction) and social computing. This year, 340 full papers were presented, and best paper awards are given to the top 1% of submitted papers. The Methods Recognition award, a new award, is given “for strong examples of work that includes well developed, explained, or implemented methods, and methodological innovation.” Hyunsung Cho (KAIST alumnus and currently a PhD candidate at Carnegie Mellon University), Daeun Choi (KAIST undergraduate researcher), Donghwi Kim (KAIST PhD candidate), Wan Ju Kang (KAIST PhD candidate), and Professor Eun Kyoung Choe (University of Maryland and KAIST alumna) collaborated on this research. The authors developed a tool that tracks and analyzes which features of a mobile app (e.g., Instagram's following posts, following stories, recommended posts, post uploads, direct messaging, etc.) are in use, based on the smartphone's User Interface (UI) layout. Using this novel method, the authors revealed which feature usage patterns result in regretful smartphone use. Professor Lee said, “Although many people enjoy the benefits of smartphones, issues have emerged from their overuse. With this feature-level analysis, users can reflect on their smartphone usage based on finer-grained analysis, and this could contribute to digital wellbeing.”
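The core idea of feature-level tracking from the UI layout can be sketched as follows. This is a hypothetical illustration, not the authors' tool: the UI element identifiers and feature names below are invented, and real apps would need per-app rules derived from their actual layout trees. The sketch classifies each captured UI snapshot into an app feature by matching distinctive elements, then aggregates usage time per feature.

```python
from collections import defaultdict

# Assumed mapping: a distinctive UI element ID -> the feature it signals.
FEATURE_RULES = {
    "feed_tab": "following_post",
    "story_tray": "following_story",
    "explore_grid": "recommended_post",
    "dm_inbox": "direct_messaging",
}

def classify_snapshot(ui_element_ids):
    """Return the first feature whose distinctive element appears."""
    for element, feature in FEATURE_RULES.items():
        if element in ui_element_ids:
            return feature
    return "unknown"

def aggregate_usage(snapshots):
    """snapshots: list of (duration_sec, set_of_element_ids).

    Sums time spent per classified feature across all snapshots.
    """
    totals = defaultdict(float)
    for duration, elements in snapshots:
        totals[classify_snapshot(elements)] += duration
    return dict(totals)

# Invented usage log for illustration.
log = [
    (30.0, {"feed_tab", "like_button"}),
    (12.5, {"story_tray"}),
    (45.0, {"explore_grid"}),
    (30.0, {"feed_tab"}),
]
print(aggregate_usage(log))
```

Per-feature totals like these, rather than whole-app screen time, are what let users see which specific usage patterns they later regret.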
Experts to Help Asia Navigate the Post-COVID-19 and 4IR Eras
Risk Quotient 2020, an international conference co-hosted by KAIST and the National University of Singapore (NUS), will bring together world-leading experts from academia and industry to help Asia navigate the post-COVID-19 and Fourth Industrial Revolution (4IR) eras. The online conference will be held on October 29 from 10 a.m. Korean time under the theme “COVID-19 Pandemic and A Brave New World”. It will be streamed live on YouTube at https://www.youtube.com/c/KAISTofficial and https://www.youtube.com/user/NUScast. The Korea Policy Center for the Fourth Industrial Revolution (KPC4IR) at KAIST organized this conference in collaboration with the Lloyd's Register Foundation Institute for the Public Understanding of Risk (IPUR) at NUS. During the conference, global leaders will examine the socioeconomic impacts of the COVID-19 pandemic on areas including digital innovation, education, the workforce, and the economy. They will then highlight digital and 4IR technologies that could be utilized to effectively mitigate the risks and challenges associated with the pandemic, while harnessing the opportunities that these socioeconomic effects may present. Their discussions will mainly focus on the Asian region. In his opening remarks, KAIST President Sung-Chul Shin will express his appreciation for the Asian populations’ greater trust in and compliance with their governments, which have given the continent a leg up against the coronavirus. He will then emphasize that by working together through the exchange of ideas and global collaboration, we will be able to shape ‘a brave new world’ to better humanity. Welcoming remarks by Prof. Sang Yup Lee (Dean, KAIST Institutes) and Prof. Tze Yun Leong (Director, AI Technology at AI Singapore) will follow. For the keynote speech, Prof. Lan Xue (Dean, Schwarzman College, Tsinghua University) will share China’s response to COVID-19 and lessons for crisis management. Prof. 
Danny Quah (Dean, Lee Kuan Yew School of Public Policy, NUS) will present possible ways to overcome these difficult times. Dr. Kak-Soo Shin (Senior Advisor, Shin & Kim LLC, Former Ambassador to the State of Israel and Japan, and Former First and Second Vice Minister of the Ministry of Foreign Affairs of the Republic of Korea) will stress the importance of the international community's solidarity to ensure peace, prosperity, and safety in this new era. Panel Session I will address the impact of COVID-19 on digital innovation. Dr. Carol Soon (Senior Research Fellow, Institute of Policy Studies, NUS) will present her interpretation of recent technological developments as both opportunities for our society as a whole and challenges for vulnerable groups such as low-income families. Dr. Christopher SungWook Chang (Managing Director, Kakao Mobility) will show how changes in mobility usage patterns can be captured by Kakao Mobility's big data analysis. He will illustrate how the data can be used to interpret citizens' behaviors and how risks can be transformed into opportunities by utilizing technology. Mr. Steve Ledzian (Vice President and Chief Technology Officer, FireEye) will discuss the dangers posed by threat actors and other cyber risk implications of COVID-19. Dr. June Sung Park (Chairman, Korea Software Technology Association (KOSTA)) will share how COVID-19 has accelerated digital transformations across all industries and why software education should be reformed to improve Korea's competitiveness. Panel Session II will examine the impact on education and the workforce. Dr. Sang-Jin Ban (President, Korean Educational Development Institute (KEDI)) will explain Korea's educational response to the pandemic and the concept of “blended learning” as a new paradigm, and present both positive and negative impacts of online education on students' learning experiences. Prof. 
Reuben Ng (Professor, Lee Kuan Yew School of Public Policy, NUS) will present on graduate underemployment, which seems to have worsened during COVID-19. Dr. Michael Fung (Deputy Chief Executive (Industry), SkillsFuture SG) will introduce the promotion of lifelong learning in Singapore through a new national initiative known as the ‘SkillsFuture Movement’. This movement serves as an example of a national response to disruptions in the job market and the pace of skills obsolescence triggered by AI and COVID-19. Panel Session III will touch on technology leadership and Asia's digital economy and society. Prof. Naubahar Sharif (Professor, Division of Social Science and Division of Public Policy, Hong Kong University of Science and Technology (HKUST)) will share his views on the potential of China taking over global technological leadership based on its massive domestic market, its government support, and the globalization process. Prof. Yee Kuang Heng (Professor, Graduate School of Public Policy, University of Tokyo) will illustrate how different legal and political needs in China and Japan have shaped the ways technologies have been deployed in responding to COVID-19. Dr. Hayun Kang (Head, International Cooperation Research Division, Korea Information Society Development Institute (KISDI)) will explain Korea's relative success in containing the pandemic compared to other countries, and how policy leaders and institutions that embrace digital technologies in the pursuit of public welfare objectives can produce positive outcomes while minimizing the side effects. Prof. Kyung Ryul Park (Graduate School of Science and Technology Policy, KAIST) will host the entire conference, while Prof. Alice Hae Yun Oh (Director, MARS Artificial Intelligence Research Center, KAIST), Prof. Wonjoon Kim (Dean, Graduate School of Innovation and Technology Management, College of Business, KAIST), Prof. Youngsun Kwon (Dean, KAIST Academy), and Prof. 
Taejun Lee (Korea Development Institute (KDI) School of Public Policy and Management) are to chair discussions with the keynote speakers and panelists. Closing remarks will be delivered by Prof. Chan Ghee Koh (Director, NUS IPUR), Prof. So Young Kim (Director, KAIST KPC4IR), and Prof. Joungho Kim (Director, KAIST Global Strategy Institute (GSI)). “This conference is expected to serve as a springboard to help Asian countries recover from global crises such as the COVID-19 pandemic through active cooperation and joint engagement among scholars, experts, and policymakers,” according to Director So Young Kim. (END)
Professor Jee-Hwan Ryu Receives IEEE ICRA 2020 Outstanding Reviewer Award
Professor Jee-Hwan Ryu from the Department of Civil and Environmental Engineering was selected as this year's winner of the Outstanding Reviewer Award presented by the Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation (IEEE ICRA). The award ceremony took place on June 5 during the conference, which is being held online for three months, from May 31 through August 31. The IEEE ICRA Outstanding Reviewer Award is given every year to the top reviewers who have provided constructive, high-quality paper reviews and contributed to improving the quality of the papers published in the proceedings of the conference. Professor Ryu was one of the four winners of this year's award. He was selected from 9,425 candidates, a pool approximately three times larger than in previous years, and was strongly recommended by the editorial committee of the conference. (END)
Professor Dongsu Han Named Program Chair for ACM CoNEXT 2020
Professor Dongsu Han from the School of Electrical Engineering has been appointed as the program chair for the 16th Association for Computing Machinery’s International Conference on emerging Networking EXperiments and Technologies (ACM CoNEXT 2020). Professor Han is the first program chair to be appointed from an Asian institution. ACM CoNEXT is hosted by ACM SIGCOMM, ACM's Special Interest Group on Data Communications, which specializes in the field of communication and computer networks. Professor Han will serve as program co-chair along with Professor Anja Feldmann from the Max Planck Institute for Informatics. Together, they have appointed 40 world-leading researchers as program committee members for this conference, including Professor Song Min Kim from KAIST School of Electrical Engineering. Paper submissions for the conference can be made by the end of June, and the event itself is to take place from the 1st to 4th of December. Conference Website: https://conferences2.sigcomm.org/co-next/2020/#!/home (END)
Professor Junil Choi Receives Stephen O. Rice Prize
< Professor Junil Choi (second from the left) > Professor Junil Choi from the School of Electrical Engineering received the Stephen O. Rice Prize at the Global Communications Conference (GLOBECOM) hosted by the Institute of Electrical and Electronics Engineers (IEEE) in Hawaii on December 10, 2019. The Stephen O. Rice Prize is awarded every year to only one paper of exceptional merit. The IEEE Communications Society evaluates all papers published in the IEEE Transactions on Communications journal within the last three years, scoring each paper on originality, number of citations, impact, and peer evaluation. Professor Choi won the prize for his research on one-bit analog-to-digital converters (ADCs) for multiuser massive multiple-input multiple-output (MIMO) antenna systems, published in 2016. In the paper, Professor Choi proposed a technology that can drastically reduce the power consumption of multiuser massive MIMO antenna systems, a core technology for 5G and future wireless communication. Professor Choi's paper has been cited more than 230 times in academic journals and conference papers since its publication, and multiple follow-up studies are actively ongoing. In 2015, Professor Choi received the IEEE Signal Processing Society Best Paper Award, an award comparable in prestige to the Stephen O. Rice Prize. He was also selected as the winner of the 15th Haedong Young Engineering Researcher Award, presented by the Korean Institute of Communications and Information Sciences (KICS) on December 6, 2019, for his outstanding academic achievements, including 34 international journal publications and 26 US patent registrations. (END)
KAIST Alumnus NYU Professor Supports Female AI Researchers
A KAIST alumnus and an associate professor at New York University (NYU), Dr. Kyunghyun Cho donated 3,000 USD to the KAIST Graduate School of AI to support female AI researchers. Professor Cho spoke as a guest lecturer at the 2019 Samsung AI Forum on November 4 and received 3,000 USD as an honorarium. He donated this honorarium to the KAIST Graduate School of AI with a special request to support the school’s female PhD students attending the 2020 International Conference on Learning Representations (ICLR), where he serves as a program co-chair. Professor Cho received his BS degree from KAIST’s School of Computing in 2009 and is now serving as an associate professor at NYU’s Computer Science Department and Center for Data Science. His research mainly covers machine learning and natural language processing. Professor Cho said that he decided to make this donation because “In Korea and even in the US, women in science, technology, engineering, and mathematics (STEM) lack opportunities and environments that allow them to excel.” Professor Song Chong, the Head of the KAIST Graduate School of AI, responded, “We are so grateful for Professor Kyunghyun Cho’s contribution and we will also use funds from the school in addition to the donation to support our female PhD students who will attend the ICLR.” (END)
AI to Determine When to Intervene with Your Driving
(Professor Uichin Lee (left) and PhD candidate Auk Kim) Can your AI agent judge when to talk to you while you are driving? According to a KAIST research team, their in-vehicle conversation service technology can judge when it is appropriate to contact you to ensure your safety. Professor Uichin Lee from the Department of Industrial and Systems Engineering at KAIST and his research team have developed AI technology that automatically detects safe moments for AI agents to provide conversation services to drivers. Their research focuses on solving the potential problem of distraction created by in-vehicle conversation services. If an AI agent talks to a driver at an inopportune moment, such as while making a turn, a car accident becomes more likely. In-vehicle conversation services need to be convenient as well as safe, but the cognitive burden of multitasking negatively influences the quality of the service, and users tend to be more distracted under certain traffic conditions. To address this long-standing challenge of in-vehicle conversation services, the team introduced a composite cognitive model that considers both safe driving and auditory-verbal service performance, and trained a machine-learning model on all of the collected data. The combination of these individual measures can determine the appropriate moments for conversation and the most appropriate types of conversational services. For instance, when delivering simple-context information, such as a weather forecast, driver safety alone would be the most appropriate consideration. Meanwhile, when delivering information that requires a driver response, such as a “Yes” or “No,” the combination of driver safety and auditory-verbal performance should be considered. The research team developed a prototype of an in-vehicle conversation service based on a navigation app that can be used in real driving environments. 
The app was also connected to the vehicle to collect in-vehicle OBD-II/CAN data, such as the steering wheel angle and brake pedal position, as well as mobility and environmental data, such as the distance between successive cars and the traffic flow. Using pseudo-conversation services, the research team collected a real-world driving dataset consisting of 1,388 interactions and sensor data from 29 drivers who interacted with AI conversational agents. Machine learning analysis based on this dataset demonstrated that the opportune moments for driver interruption could be correctly inferred with 87% accuracy. The safety enhancement technology developed by the team is expected to minimize driver distraction caused by in-vehicle conversation services. This technology can be directly applied to current in-vehicle systems that provide conversation services. It can also be extended to the real-time detection of driver distraction caused by smartphone use while driving. Professor Lee said, “In the near future, cars will proactively deliver various in-vehicle conversation services. This technology will certainly help vehicles interact with their drivers safely, as it can fairly accurately determine when to provide conversation services using only the basic sensor data generated by cars.” The researchers presented their findings at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’19) in London, UK. This research was supported in part by Hyundai NGV and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT. (Figure: Visual description of the safety enhancement technology for in-vehicle conversation services)
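The learning setup described above can be illustrated with a toy sketch. This is not the study's model or dataset: the data below is synthetic, the feature set is reduced to two illustrative vehicle signals (steering-wheel angle and brake-pedal position), and the assumed labeling rule is invented. It shows only the general pattern of classifying "opportune moment to interrupt" from basic car sensor features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic samples: [absolute steering angle (deg), brake position (0..1)].
X = rng.uniform([0, 0], [90, 1], size=(500, 2))
# Invented ground truth: a moment is "opportune" when the driver is
# neither steering hard nor braking.
y = ((X[:, 0] < 30) & (X[:, 1] < 0.3)).astype(float)

# Standardize features and append a bias term.
Xs = (X - X.mean(0)) / X.std(0)
Xb = np.hstack([Xs, np.ones((len(Xs), 1))])

# Logistic regression via batch gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))        # predicted probability
    w -= 0.1 * Xb.T @ (p - y) / len(y)   # logistic-loss gradient step

pred = (1 / (1 + np.exp(-Xb @ w))) > 0.5
accuracy = (pred == y.astype(bool)).mean()
print(round(accuracy, 2))
```

The study's actual model used far richer OBD-II/CAN, mobility, and environmental features, and reached 87% accuracy on real driving data; this sketch only shows the shape of the classification task.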
Image Analysis to Automatically Quantify Gender Bias in Movies
Many commercial films worldwide continue to depict womanhood in a stereotypical manner, a recent study using image analysis showed. A KAIST research team developed a novel image analysis method for automatically quantifying the degree of gender bias in films. The ‘Bechdel Test’ has been the most representative and widely used method of evaluating gender bias in films. It indicates the degree of gender bias in a film by measuring how active the presence of women is. A film passes the Bechdel Test if it (1) has at least two female characters, (2) who talk to each other, and (3) whose conversation is not about the male characters. However, the Bechdel Test has fundamental limitations in the accuracy and practicality of its evaluation. Firstly, it requires considerable human resources, as it is performed subjectively by a person. More importantly, it analyzes only a single aspect of the film, the dialogue between characters in the script, and provides only a dichotomous pass-or-fail result, neglecting the fact that a film is a visual art form reflecting multi-layered and complicated gender bias phenomena. It is also difficult for the test to fully represent today's discourses on gender bias, which are much more diverse than in 1985, when the Bechdel Test was first presented. Prompted by these limitations, a KAIST research team led by Professor Byungjoo Lee from the Graduate School of Culture Technology proposed an advanced system that uses computer vision technology to automatically analyze the visual information in each frame of a film. This allows the system to evaluate more accurately and practically, and in quantitative terms, the degree to which female and male characters are depicted in a discriminatory way, and further enables the detection of gender bias that conventional analysis methods could not yet reveal. 
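Two of the frame-level indices the system computes (described below) can be sketched from per-frame face detections. This is a toy illustration with invented data, not the team's pipeline: it assumes each sampled frame yields a list of (gender, estimated age) detections, as a face-analysis API might return, and computes temporal occupancy (the share of frames in which a gender appears) and mean age per gender.

```python
from collections import defaultdict

def compute_indices(frames):
    """frames: list of per-frame detection lists of (gender, age) pairs."""
    appear = defaultdict(int)   # frames in which each gender appears
    ages = defaultdict(list)    # all age estimates per gender
    for detections in frames:
        genders_in_frame = set()
        for gender, age in detections:
            genders_in_frame.add(gender)
            ages[gender].append(age)
        for g in genders_in_frame:
            appear[g] += 1
    n = len(frames)
    return {
        g: {
            "temporal_occupancy": appear[g] / n,
            "mean_age": sum(ages[g]) / len(ages[g]),
        }
        for g in appear
    }

# Invented detections for four sampled frames.
frames = [
    [("female", 28), ("male", 45)],
    [("male", 45)],
    [("male", 50), ("male", 33)],
    [("female", 30)],
]
result = compute_indices(frames)
print(result["female"]["temporal_occupancy"])  # 0.5
```

Because the indices are computed per frame rather than per script line, they capture visual patterns, such as who occupies screen time and at what depicted age, that a dialogue-only test cannot.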
Professor Lee and his researchers Ji Yoon Jang and Sangyoon Lee analyzed 40 films from Hollywood and South Korea released between 2017 and 2018. They downsampled the films from 24 to 3 frames per second, and used Microsoft’s Face API facial recognition technology and the object detection technology YOLO9000 to identify the characters and their surrounding objects in each scene. Using the new system, the team computed eight quantitative indices that describe the representation of a particular gender in the films: emotional diversity, spatial staticity, spatial occupancy, temporal occupancy, mean age, intellectual image, emphasis on appearance, and the type and frequency of surrounding objects.

Figure 1. System Diagram
Figure 2. 40 Hollywood and Korean Films Analyzed in the Study

According to the emotional diversity index, the depicted women were found to be more prone to expressing passive emotions, such as sadness, fear, and surprise. In contrast, male characters in the same films were more likely to demonstrate active emotions, such as anger and hatred.

Figure 3. Difference in Emotional Diversity between Female and Male Characters

The type and frequency of surrounding objects index revealed that female characters were tracked together with automobiles only 55.7% as often as male characters were, while they appeared with furniture and in household settings 123.9% as often. As for temporal occupancy and mean age, female characters appeared in the films only 56% as frequently as male characters, and were on average younger in 79.1% of the cases. These two indices were especially conspicuous in Korean films. Professor Lee said, “Our research confirmed that many commercial films depict women from a stereotypical perspective.
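One of the eight indices, temporal occupancy, lends itself to a short sketch: the fraction of sampled frames in which a character of a given gender appears on screen. In the study those per-frame gender labels came from face detection; here they are hard-coded, and the data values are hypothetical.

```python
# Illustrative sketch of the temporal occupancy index: the fraction of
# sampled frames (the study sampled films at 3 fps) in which a character
# of a given gender is detected on screen.

def temporal_occupancy(frames, gender):
    """frames: list of sets of detected genders, one set per frame."""
    hits = sum(1 for detected in frames if gender in detected)
    return hits / len(frames)

frames = [{"F"}, {"M"}, {"F", "M"}, set(), {"M"}]
f = temporal_occupancy(frames, "F")   # 2/5 = 0.4
m = temporal_occupancy(frames, "M")   # 3/5 = 0.6
print(f / m)  # female-to-male ratio, analogous to the study's 56% figure
```

Computing each index per film and averaging across the 40 films yields the aggregate figures reported above.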
I hope this result promotes public awareness of the importance of prudence when filmmakers create characters in films.”

This study was supported by the KAIST College of Liberal Arts and Convergence Science as part of the Venture Research Program for Master’s and PhD Students, and will be presented at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), to be held on November 11 in Austin, Texas.

Publication: Ji Yoon Jang, Sangyoon Lee, and Byungjoo Lee. 2019. Quantification of Gender Representation Bias in Commercial Films based on Image Analysis. In Proceedings of the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW). ACM, New York, NY, USA, Article 198, 29 pages. https://doi.org/10.1145/3359300

Link to download the full-text paper: https://files.cargocollective.com/611692/cscw198-jangA--1-.pdf

Profile: Prof. Byungjoo Lee, MD, PhD
firstname.lastname@example.org
http://kiml.org/
Assistant Professor
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr
Daejeon 34141, Korea

Profile: Ji Yoon Jang, M.S.
email@example.com
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr
Daejeon 34141, Korea

Profile: Sangyoon Lee, M.S. Candidate
firstname.lastname@example.org
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr
Daejeon 34141, Korea

(END)
Object Identification and Interaction with a Smartphone Knock
(Professor Lee (far right) demonstrates 'Knocker' with his students.) A KAIST team has developed a new technology, “Knocker”, which identifies objects and executes actions when a user simply knocks on them with a smartphone. Software powered by machine learning analyzes the sounds, vibrations, and other responses to the knock and carries out the user’s commands.

What separates Knocker from existing technology is its fusion of sound and motion sensing. Previously, object identification relied either on computer vision with cameras or on hardware such as RFID (Radio Frequency Identification) tags. These solutions all have limitations: computer vision requires users to take pictures of every item and works poorly in bad lighting, while dedicated hardware adds cost and labor. Knocker, on the other hand, can identify objects even in dark environments with only a smartphone, without requiring any specialized hardware or a camera.

Knocker utilizes the smartphone’s built-in sensors, such as the microphone, accelerometer, and gyroscope, to capture the unique set of responses generated when the smartphone is knocked against an object. Machine learning is then used to analyze these responses and classify and identify the objects. The research team under Professor Sung-Ju Lee from the School of Computing confirmed the applicability of the Knocker technology using 23 everyday objects such as books, laptop computers, water bottles, and bicycles. In noisy environments such as a busy café or a roadside, it achieved 83% identification accuracy; in a quiet indoor environment, the accuracy rose to 98%.

The team believes Knocker will open a new paradigm of object interaction. For instance, knocking on an empty water bottle could prompt a smartphone to automatically order new water bottles from a merchant app. When integrated with IoT devices, knocking on a bed’s headboard before going to sleep could turn off the lights and set an alarm.
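The capture-then-classify pipeline can be sketched in miniature. This is an illustrative stand-in, not the team's actual system: the real Knocker extracts many acoustic and motion features and trains a full machine-learning classifier, whereas the feature values and nearest-centroid rule here are hypothetical.

```python
import math

# Illustrative sketch: classify a knock by its response "signature", a
# feature vector such as (sound energy, accelerometer peak, gyroscope peak)
# derived from the smartphone's built-in sensors.

def classify(sample, centroids):
    """Nearest-centroid classification of a knock-response feature vector."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Per-object centroids, as if averaged from prior training knocks.
centroids = {
    "book":         (0.2, 0.8, 0.1),
    "water_bottle": (0.7, 0.3, 0.4),
    "laptop":       (0.5, 0.5, 0.9),
}
print(classify((0.68, 0.31, 0.42), centroids))  # water_bottle
```

Once the object is identified, the phone can launch the matching action, such as opening the merchant app when the water bottle is recognized.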
The team suggested and implemented 15 application cases in the paper, presented during the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2019) held in London last month. Professor Sung-Ju Lee said, “This new technology does not require any specialized sensor or hardware. It simply uses the built-in sensors on smartphones and takes advantage of the power of machine learning. It’s a software solution that everyday smartphone users could immediately benefit from.” He continued, “This technology enables users to conveniently interact with their favorite objects.” The research was supported in part by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea funded by the Ministry of Science and ICT and an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Ministry of Science and ICT. Figure: An example knock on a bottle. Knocker identifies the object by analyzing a unique set of responses from the knock, and automatically launches a proper application or service.
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.