Professor Kuk-Jin Yoon’s Research Team at the Department of Mechanical Engineering Achieves Landmark Success with 10 Papers Accepted at CVPR 2026
<Professor Kuk-Jin Yoon from Department of Mechanical Engineering>
Professor Kuk-Jin Yoon’s research team from our university’s Department of Mechanical Engineering has once again demonstrated its academic prowess by having a total of 10 papers accepted as lead or corresponding authors at the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2026 (CVPR 2026).
CVPR is the most influential international conference in the fields of artificial intelligence and visual intelligence. Since its inception in 1983, it has selected outstanding research through a rigorous peer-review process every year. For CVPR 2026, a total of 16,092 papers were submitted worldwide, with 4,090 accepted, resulting in a competitive acceptance rate of approximately 25.4%. Achieving 10 accepted papers as lead or corresponding authors from a single laboratory is regarded as an exceptionally rare and world-class feat.
Professor Kuk-Jin Yoon’s team conducts extensive research with the ultimate goal of achieving human-level visual intelligence. The papers accepted this year cover cutting-edge topics in computer vision, including:
Event camera-based technologies
Perception technologies for autonomous driving
AI optimization and adaptation techniques
This achievement follows the team's remarkable success at ICCV 2025 last year, where they published 12 papers as lead/corresponding authors. The results at CVPR 2026 further solidify the laboratory's position as a global hub for pioneering computer vision research. The research team plans to continue contributing to the advancement of future AI technologies by tackling challenging research that transcends the limitations of existing methods.
Meanwhile, CVPR 2026 is scheduled to be held in Denver, Colorado, USA, from June 3 to June 7.
<CVPR 2026 (Denver, USA)>
Image Analysis to Automatically Quantify Gender Bias in Movies
Many commercial films worldwide continue to express womanhood in a stereotypical manner, a recent study using image analysis showed. A KAIST research team developed a novel image analysis method for automatically quantifying the degree of gender bias in films.
The ‘Bechdel Test’ has been the most representative and general method of evaluating gender bias in films. This test indicates the degree of gender bias in a film by measuring how active the presence of women is in a film. A film passes the Bechdel Test if the film (1) has at least two female characters, (2) who talk to each other, and (3) their conversation is not related to the male characters.
However, the Bechdel Test has fundamental limitations regarding the accuracy and practicality of the evaluation. Firstly, the Bechdel Test requires considerable human resources, as it is performed subjectively by a person. More importantly, the Bechdel Test analyzes only a single aspect of the film, the dialogue between characters in the script, and provides only a dichotomous pass/fail result, neglecting the fact that a film is a visual art form reflecting multi-layered and complicated gender bias phenomena. It also cannot fully represent today’s discourse on gender bias, which is far more diverse than it was in 1985, when the Bechdel Test was first presented.
Motivated by these limitations, a KAIST research team led by Professor Byungjoo Lee from the Graduate School of Culture Technology proposed an advanced system that uses computer vision technology to automatically analyze the visual information in each frame of a film. This allows the system to evaluate more accurately and practically, in quantitative terms, the degree to which female and male characters are depicted in a discriminatory way, and further makes it possible to reveal gender bias that conventional analysis methods could not detect.
Professor Lee and his researchers Ji Yoon Jang and Sangyoon Lee analyzed 40 films from Hollywood and South Korea released between 2017 and 2018. They downsampled the films from 24 to 3 frames per second, and used Microsoft’s Face API facial recognition technology and the object detection technology YOLO9000 to identify the characters and the objects around them in each scene.
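The downsampling step described above can be sketched in a few lines. This is a hypothetical illustration only; the function name and the sampling scheme (keeping every eighth frame to go from 24 to 3 frames per second) are assumptions, not the team’s actual code.

```python
# Hypothetical sketch of the frame-sampling step: keeping 3 frames per
# second from a 24 fps film. Illustrative only, not the study's code.

def sampled_frame_indices(total_frames: int, src_fps: float = 24.0,
                          target_fps: float = 3.0) -> list[int]:
    """Return the indices of frames to keep when downsampling."""
    step = src_fps / target_fps  # keep every 8th frame for 24 -> 3 fps
    indices = []
    t = 0.0
    while round(t) < total_frames:
        indices.append(round(t))
        t += step
    return indices

# A 2-second clip at 24 fps (48 frames) yields 6 sampled frames.
print(sampled_frame_indices(48))  # [0, 8, 16, 24, 32, 40]
```

Each retained frame would then be passed to the face-recognition and object-detection models mentioned above.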
Using the new system, the team computed eight quantitative indices that describe the representation of a particular gender in the films. They are: emotional diversity, spatial staticity, spatial occupancy, temporal occupancy, mean age, intellectual image, emphasis on appearance, and type and frequency of surrounding objects.
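To make two of these indices concrete, here is a toy sketch on made-up per-frame detections. The article does not reproduce the paper’s exact formulas, so the definitions below are assumptions for illustration: temporal occupancy as the fraction of sampled frames containing a face of a given gender, and emotional diversity as the Shannon entropy of the detected emotion labels.

```python
import math
from collections import Counter

# Toy computation of two of the eight indices. The exact definitions in
# the paper are not given in this article; these are illustrative
# assumptions, computed on made-up per-frame detection results.

def temporal_occupancy(frames: list[set[str]], gender: str) -> float:
    """Fraction of sampled frames containing at least one face of `gender`."""
    return sum(gender in faces for faces in frames) / len(frames)

def emotional_diversity(emotions: list[str]) -> float:
    """Shannon entropy (in bits) of the detected emotion-label distribution."""
    counts = Counter(emotions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Made-up detections for four sampled frames.
frames = [{"female"}, {"male"}, {"female", "male"}, {"male"}]
print(temporal_occupancy(frames, "female"))  # 0.5

# Made-up emotion labels for one character across frames.
print(emotional_diversity(["sadness", "fear", "sadness", "surprise"]))  # 1.5
```

A higher entropy would indicate that a character’s depicted emotions are spread over more categories rather than concentrated in a few.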
Figure 1. System Diagram
Figure 2. 40 Hollywood and Korean Films Analyzed in the Study
According to the emotional diversity index, the depicted women were found to be more prone to expressing passive emotions, such as sadness, fear, and surprise. In contrast, male characters in the same films were more likely to demonstrate active emotions, such as anger and hatred.
Figure 3. Difference in Emotional Diversity between Female and Male Characters
The type and frequency of surrounding objects index revealed that female characters appeared together with automobiles only 55.7% as often as male characters did, while they were 123.9% as likely to appear with furniture and in household settings.
For temporal occupancy and mean age, female characters appeared on screen only 56% as often as male characters, and were on average younger than male characters in 79.1% of the cases. These two gaps were especially conspicuous in Korean films.
Professor Lee said, “Our research confirmed that many commercial films depict women from a stereotypical perspective. I hope this result promotes public awareness of the importance of exercising caution when filmmakers create characters in films.”
This study was supported by the KAIST College of Liberal Arts and Convergence Science as part of the Venture Research Program for Master’s and PhD Students, and will be presented at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), to be held in Austin, Texas, on November 11.
Publication:
Ji Yoon Jang, Sangyoon Lee, and Byungjoo Lee. 2019. Quantification of Gender Representation Bias in Commercial Films based on Image Analysis. In Proceedings of the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW). ACM, New York, NY, USA, Article 198, 29 pages. https://doi.org/10.1145/3359300
Link to download the full-text paper:
https://files.cargocollective.com/611692/cscw198-jangA--1-.pdf
Profile: Prof. Byungjoo Lee, MD, PhD
byungjoo.lee@kaist.ac.kr
http://kiml.org/
Assistant Professor
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr Daejeon 34141, Korea
Profile: Ji Yoon Jang, M.S.
yoone3422@kaist.ac.kr
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr Daejeon 34141, Korea
Profile: Sangyoon Lee, M.S. Candidate
sl2820@kaist.ac.kr
Interactive Media Lab
Graduate School of Culture Technology (CT)
Korea Advanced Institute of Science and Technology (KAIST)
https://www.kaist.ac.kr Daejeon 34141, Korea
(END)
Seong-Tae Kim Wins Robert Wagner All-Conference Best Student Paper Award
(Ph.D. candidate Seong-Tae Kim)
Ph.D. candidate Seong-Tae Kim from the School of Electrical Engineering won the Robert Wagner All-Conference Best Student Paper Award during the 2018 International Society for Optics and Photonics (SPIE) Medical Imaging Conference, which was held in Houston last month.
Kim, supervised by Professor Yong Man Ro, received the award for his paper in the category of computer-aided diagnosis. His paper, titled “ICADx: Interpretable Computer-Aided Diagnosis of Breast Masses”, was selected as the best paper out of 900 submissions. The conference selects the best paper in nine different categories. His research provides new insights into deep-learning-powered diagnostic technology for detecting breast cancer.