Yuji Roh Awarded 2022 Microsoft Research PhD Fellowship
KAIST PhD candidate Yuji Roh of the School of Electrical Engineering (advisor: Prof. Steven Euijong Whang) was selected as a recipient of the 2022 Microsoft Research PhD Fellowship.

< KAIST PhD candidate Yuji Roh (advisor: Prof. Steven Euijong Whang) >

The Microsoft Research PhD Fellowship is a scholarship program that recognizes outstanding graduate students for exceptional and innovative research in areas relevant to computer science and related fields. This year, 36 students from around the world received the fellowship, and Yuji Roh of KAIST EE is the only recipient from a university in Korea. Each selected fellow will receive a $10,000 scholarship and an opportunity to intern at Microsoft under the guidance of an experienced researcher.

Yuji Roh was named a fellow in the field of “Machine Learning” for her outstanding achievements in Trustworthy AI. Her research highlights include designing a state-of-the-art fair training framework using batch selection and developing novel algorithms for both fair and robust training. Her work has been presented at the top machine learning conferences ICML, ICLR, and NeurIPS, among others. She also co-presented a tutorial on Trustworthy AI at the top data mining conference ACM SIGKDD. She is currently interning with the NVIDIA Research AI Algorithms Group, developing large-scale, real-world fair AI frameworks.

The list of fellowship recipients and the interview videos are available on the Microsoft webpage and YouTube.

The list of recipients: https://www.microsoft.com/en-us/research/academic-program/phd-fellowship/2022-recipients/
Interview (Global): https://www.youtube.com/watch?v=T4Q-XwOOoJc
Interview (Asia): https://www.youtube.com/watch?v=qwq3R1XU8UE

[Highlighted research achievements by Yuji Roh: Fair batch selection framework]
[Highlighted research achievements by Yuji Roh: Fair and robust training framework]
T-GPS Processes a Graph with Trillion Edges on a Single Computer
Trillion-scale graph processing simulation on a single computer presents a new concept of graph processing.

A KAIST research team has developed a new technology that enables the processing of a large-scale graph algorithm without storing the graph in main memory or on disk. Named T-GPS (Trillion-scale Graph Processing Simulation) by its developer, Professor Min-Soo Kim from the School of Computing at KAIST, it can process a graph with one trillion edges using a single computer.

Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever. Many industrial applications now require a graph algorithm to process a large-scale graph (e.g., one trillion edges). When developing and testing graph algorithms for such a large-scale graph, a synthetic graph is usually used instead of a real graph, because sharing and utilizing large-scale real graphs is very limited: they are often proprietary or practically impossible to collect.

Conventionally, developing and testing graph algorithms follows a two-step approach: generating and storing a graph, and then executing an algorithm on the graph using a graph processing engine. The first step generates a synthetic graph and stores it on disk. The synthetic graph is usually generated by either parameter-based generation methods or graph upscaling methods. The former extracts a small number of parameters that capture some properties of a given real graph and generates the synthetic graph from those parameters. The latter upscales a given real graph to a larger one, preserving the properties of the original real graph as much as possible.
The second step loads the stored graph into the main memory of a graph processing engine such as Apache GraphX and executes a given graph algorithm on the engine. Since the graph is too large to fit in the main memory of a single computer, the engine typically runs on a cluster of tens or hundreds of computers, making the cost of the conventional two-step approach very high.

T-GPS eliminates the first step entirely. It does not generate and store a large-scale synthetic graph; instead, it loads only the small initial real graph into main memory. T-GPS then processes a graph algorithm on the small real graph as if the large-scale synthetic graph that would have been generated from it existed in main memory. After the algorithm finishes, T-GPS returns exactly the same result as the conventional two-step approach.

The key idea of T-GPS is to generate, on the fly, only the part of the synthetic graph that the algorithm needs to access, and to modify the graph processing engine to treat the part generated on the fly as if it were part of an actually generated synthetic graph.

The research team showed that T-GPS can process a graph of one trillion edges using a single computer, while the conventional two-step approach can only process a graph of one billion edges using a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by a factor of 10,000 in terms of computing resources. The team also showed that T-GPS processes an algorithm up to 43 times faster than the conventional approach, because T-GPS has no network communication overhead, while the conventional approach incurs heavy communication overhead among computers.
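The on-the-fly generation idea can be illustrated with a minimal Python sketch. This is not the T-GPS implementation: the hash-based upscaling rule, the function names, and the tiny example graph are all assumptions made for illustration. The small real graph stays in memory, and neighbors in the virtual upscaled graph are derived deterministically whenever the algorithm requests them:

```python
import hashlib

# Tiny "real" graph kept in memory (adjacency lists).
REAL_GRAPH = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
SCALE = 4            # the virtual graph has SCALE copies of the vertex set
N = len(REAL_GRAPH)  # virtual vertex ids run from 0 to SCALE * N - 1

def copy_of(u, v_copy, v):
    """Deterministically pick which copy a virtual edge points into."""
    h = hashlib.sha256(f"{u}-{v_copy}-{v}".encode()).digest()
    return h[0] % SCALE

def virtual_neighbors(virtual_v):
    """Generate the neighbors of a virtual vertex on the fly."""
    v_copy, v = divmod(virtual_v, N)
    for u in REAL_GRAPH[v]:
        yield copy_of(u, v_copy, v) * N + u

def bfs(start):
    """An unmodified graph algorithm runs against virtual_neighbors()
    exactly as it would against a materialized edge list."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = []
        for v in frontier:
            for u in virtual_neighbors(v):
                if u not in seen:
                    seen.add(u)
                    nxt.append(u)
        frontier = nxt
    return seen

reached = bfs(0)
```

Because the neighbor generation is deterministic, repeated queries for the same virtual vertex always return the same edges, so the algorithm sees a consistent upscaled graph that is never materialized.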
Professor Kim believes that this work will have a large impact on the IT industry, where almost every area utilizes graph data, adding, “T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm.”

This work was supported by the National Research Foundation (NRF) of Korea and the Institute of Information & Communications Technology Planning & Evaluation (IITP).

Publication: Park, H., et al. (2021) “Trillion-scale Graph Processing Simulation based on Top-Down Graph Upscaling,” presented at IEEE ICDE 2021 (April 19-22, 2021, Chania, Greece)

Profile:
Min-Soo Kim
Associate Professor
firstname.lastname@example.org
http://infolab.kaist.ac.kr
School of Computing
KAIST
Drawing the Line to Answer Art’s Big Questions
- KAIST scientists show how statistical physics can reveal art trends across time and culture. -

Algorithms have shown that the compositional structure of Western landscape paintings changed “suspiciously” smoothly between 1500 and 2000 AD, potentially indicating a selection bias by art curators or in the art historical literature, physicists from the Korea Advanced Institute of Science and Technology (KAIST) and colleagues report in the Proceedings of the National Academy of Sciences (PNAS).

KAIST statistical physicist Hawoong Jeong worked with statisticians, digital analysts, and art historians in Korea, Estonia, and the US to clarify whether computer algorithms could help resolve long-standing questions about the design principles used in landscape paintings, such as the placement of the horizon and other primary features. “A foundational question among art historians is whether artwork contains organizing principles that transcend culture and time and, if yes, how these principles evolved over time,” explains Jeong. “We developed an information-theoretic approach that can capture compositional proportion in landscape paintings and found that the preferred compositional proportion systematically evolved over time.”

Digital versions of almost 15,000 canonical landscape paintings, from the Western Renaissance in the 1500s to the more recent contemporary art period, were run through a computer algorithm. The algorithm progressively divides an artwork into horizontal and vertical partitions depending on the amount of information in each subsequent partition. It allows scientists to evaluate how artists and various art styles compose landscape artwork, in terms of the placement of a piece’s most important components, in addition to how high or low the landscape’s horizon is placed.
The scientists started by analysing the first two partitioning lines identified by the algorithm and found the paintings could be categorized into four groups: an initial horizontal line followed by a second horizontal line (H-H); an initial horizontal line followed by a second vertical line (H-V); a vertical followed by a horizontal line (V-H); or a vertical followed by a vertical line (V-V) (see images 1 and 2).

They then looked at the categorizations over time. Before the mid-nineteenth century, H-V was the dominant composition type, followed by H-H, V-H, and V-V. The mid-nineteenth century then brought change: the H-V composition style decreased in popularity while the H-H style rose. The other two styles remained relatively stable.

The scientists also looked at how the horizon line, which separates sky from land, changed over time. In the 16th century, the dominant horizon line was above the middle of the canvas, but it gradually descended to the lower middle of the canvas by the 17th century, where it remained until the mid-nineteenth century. After that, the horizon line began gradually rising again.

Interestingly, the algorithm showed that these findings were similar across cultures and artistic periods, even through periods dominated by a diversity of art styles. This similarity may well be a function of a bias in the dataset. “In recent decades, art historians have prioritized the argument that there is great diversity in the evolution of artistic expression rather than offering a relatively smoother consensus story in Western art,” Jeong says. “This study serves as a reminder that the available large-scale datasets might be perpetuating severe biases.”

The scientists next aim to broaden their analyses to include more diverse artwork, as this particular dataset was ultimately Western- and male-biased. Future analyses should also consider diagonal compositions in paintings, they say.
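A toy version of such a dissection can be sketched in Python. The paper's partitioning criterion is information-theoretic; the variance-reduction proxy, the grayscale input, and the rule of recursing into the larger part are simplifying assumptions made here for brevity:

```python
import numpy as np

def best_cut(img):
    """Return ('H' or 'V', index) for the cut that most reduces total
    within-part variance (a proxy for the information-based criterion)."""
    best = None
    for axis, name in ((0, "H"), (1, "V")):
        for i in range(1, img.shape[axis]):
            a, b = np.split(img, [i], axis=axis)
            score = a.size * a.var() + b.size * b.var()
            if best is None or score < best[0]:
                best = (score, name, i)
    return best[1], best[2]

def composition_type(img):
    """Classify a painting as H-H, H-V, V-H, or V-V from its first two
    cuts (second cut applied inside the larger part, a simplification)."""
    first, i = best_cut(img)
    parts = np.split(img, [i], axis=0 if first == "H" else 1)
    second, _ = best_cut(max(parts, key=lambda p: p.size))
    return f"{first}-{second}"

# Toy "landscape": uniform sky above land with a left/right contrast,
# so the first cut falls on the horizon (H) and the second is vertical (V).
sky = np.full((4, 8), 0.9)
land = np.hstack([np.full((6, 4), 0.0), np.full((6, 4), 0.3)])
painting = np.vstack([sky, land])
```

On this toy image the procedure yields the H-V type that the study found dominant before the mid-nineteenth century.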
This work was supported by the National Research Foundation (NRF) of Korea.

Publication: Lee, B., et al. (2020) “Dissecting landscape art history with information theory,” Proceedings of the National Academy of Sciences (PNAS), Vol. 117, No. 43, 26580-26590. Available online at https://doi.org/10.1073/pnas.2011927117

Profile:
Hawoong Jeong, Ph.D.
Professor
email@example.com
https://www.kaist.ac.kr
Department of Physics
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea

(END)
Quantum Classifiers with Tailored Quantum Kernel
Quantum information scientists have introduced a new method for machine learning classification in quantum computing. The non-linear quantum kernels in a quantum binary classifier provide new insights for improving the accuracy of quantum machine learning, which may eventually outperform current AI technology.

The research team, led by Professor June-Koo Kevin Rhee from the School of Electrical Engineering, proposed a quantum classifier based on quantum state fidelity, using a different initial state and replacing the Hadamard classification with a swap test. Unlike the conventional approach, this method is expected to significantly enhance classification tasks when the training dataset is small, by exploiting the quantum advantage in finding non-linear features in a large feature space.

Quantum machine learning holds promise as one of the imperative applications of quantum computing. In machine learning, one fundamental problem for a wide range of applications is classification: the task of recognizing patterns in labeled training data in order to assign a label to new, previously unseen data. The kernel method has been an invaluable classification tool for identifying non-linear relationships in complex data, and it has more recently been introduced into quantum machine learning with great success. The ability of quantum computers to efficiently access and manipulate data in the quantum feature space can open opportunities for quantum techniques to enhance various existing machine learning methods.

The idea of the classification algorithm with a nonlinear kernel is that, given a quantum test state, the protocol calculates the weighted power sum of the fidelities between the test state and the quantum training data in quantum parallel, via a swap-test circuit followed by two single-qubit measurements (see Figure 1). This requires only a small number of quantum data operations regardless of the size of the data.
The novelty of this approach lies in the fact that labeled training data can be densely packed into a quantum state and then compared to the test data. The KAIST team, in collaboration with researchers from the University of KwaZulu-Natal (UKZN) in South Africa and Data Cybernetics in Germany, has further advanced the rapidly evolving field of quantum machine learning by introducing quantum classifiers with tailored quantum kernels. This study was reported in npj Quantum Information in May.

The input data is either classical data represented via a quantum feature map or intrinsic quantum data, and the classification is based on a kernel function that measures the closeness of the test data to the training data. Dr. Daniel Park at KAIST, one of the lead authors of this research, said that the quantum kernel can be tailored systematically to an arbitrary power sum, which makes it an excellent candidate for real-world applications.

Professor Rhee said that quantum forking, a technique the team invented previously, makes it possible to start the protocol from scratch even when all the labeled training data and the test data are independently encoded in separate qubits. Professor Francesco Petruccione from UKZN explained, “The state fidelity of two quantum states includes the imaginary parts of the probability amplitudes, which enables use of the full quantum feature space.”

To demonstrate the usefulness of the classification protocol, Carsten Blank from Data Cybernetics implemented the classifier on the five-qubit IBM quantum computer, which is freely available to public users via a cloud service, and compared the results with classical simulations. “This is a promising sign that the field is progressing,” Blank noted.
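The fidelity-based decision rule can be illustrated classically with a small NumPy sketch. The uniform weights, the power-sum order of one, and the toy single-qubit states are illustrative assumptions; on hardware, the fidelities would be estimated with the swap test rather than computed directly:

```python
import numpy as np

def normalize(v):
    """Turn a plain vector into a valid (unit-norm) quantum state."""
    v = np.asarray(v, dtype=complex)
    return v / np.linalg.norm(v)

def fidelity(a, b):
    """State fidelity |<a|b>|^2 -- the quantity a swap test estimates."""
    return abs(np.vdot(a, b)) ** 2

def classify(test, train_states, labels, weights=None):
    """Sign of the weighted sum of fidelities between the test state
    and each labeled training state."""
    if weights is None:
        weights = [1.0 / len(labels)] * len(labels)
    score = sum(w * y * fidelity(x, test)
                for w, y, x in zip(weights, labels, train_states))
    return 1 if score >= 0 else -1

# Toy single-qubit data: one state near |0> (label +1), one near |1> (-1).
train = [normalize([1.0, 0.1]), normalize([0.1, 1.0])]
labels = [1, -1]
prediction = classify(normalize([1.0, 0.2]), train, labels)  # closer to |0>
```

Because `np.vdot` conjugates its first argument, complex amplitudes contribute their imaginary parts to the fidelity as well, in line with Petruccione's remark about using the full quantum feature space.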
Link to download the full-text paper: https://www.nature.com/articles/s41534-020-0272-6

Profile:
Professor June-Koo Kevin Rhee
firstname.lastname@example.org
Professor, School of Electrical Engineering
Director, ITRC of Quantum Computing for AI, KAIST

Daniel Kyungdeock Park
kpark10@kaist.ac.kr
Research Assistant Professor
School of Electrical Engineering
KAIST
Soul-Searching & Odds-Defying Determination: A Commencement Story of Dr. Tae-Hyun Oh
(Dr. Tae-Hyun Oh, one of the 2,736 graduates of 2018)

Each and every one of the 2,736 graduates has come a long way to the 2018 Commencement. Tae-Hyun Oh, who just started his new research career at MIT after completing his Ph.D. at KAIST, is no exception.

Unlike most KAIST freshmen straight out of Korea's elite science academies, he is among the many who endured very challenging and turbulent adolescent years. Buffeted by family instability and struggling at school, he saw himself trapped by seemingly impenetrable barriers. His mother, who hated to see him struggling, advised him to take a break to reflect on who he was and what he wanted to do.

After dropping out of high school in his first year, ways to make money and support his family occupied his thoughts. He took on odd jobs from a car body shop to a gas station, but the real world was very tough and sometimes even cruel to a high school dropout. The bias and prejudice stigmatizing dropouts hurt him deeply. He once overheard a parent who dropped by the body shop where he worked saying, “If you do not study hard, you will end up like this guy.” Hearing such things terrified him and awoke his sense of purpose. He decided to do something meaningful and become a better man. “I didn’t like the person I was growing up to become. I needed to find myself and get away from the place I was growing up. It was my adventure and it was the best decision I ever made,” says Oh.

After completing his national high school equivalency certificate, he planned to apply to an engineering college. On his second try, he gained admission to the Department of Electrical Engineering at Kwangwoon University with a full scholarship. He was thrilled about the opportunity and hoped he could do well at college. Signal processing and image processing became his research interests, and he finished his undergraduate degree summa cum laude.
Gaining confidence in his studies, he searched graduate school department websites in Korea to select the path he was interested in. Among others, the Robotics and Computer Vision Lab of Professor In-So Kweon in the Department of Electrical Engineering at KAIST attracted him. Professor Kweon’s lab is globally renowned for robot vision technology; its technologies were applied in HUBO, the KAIST-developed bimodal humanoid robot that won the 2015 DARPA Robotics Challenge. “I am so appreciative of Professor Kweon, who accepted and guided me,” he said.

Under Professor Kweon’s advising, he finished his Master’s and Ph.D. courses in seven years. Mathematical modeling of fundamental computer algorithms became his main research topic. While at KAIST, his academic research blossomed. He won a total of 13 research prizes sponsored by corporations at home and abroad, including Kolon, Samsung, Hyundai Motors, and Qualcomm. In 2015, he won the Microsoft Research Asia Fellowship as the sole Korean among the 13 Ph.D. candidates selected in the Asian region. With the MSRA fellowship, he interned at the Microsoft Research office in Beijing for half a year and then in Redmond, Washington in the US.

“Professor Kweon’s lab filled me up with knowledge. Whenever I presented our team’s paper at an international conference, I was amazed by the strong interest shown by foreign experts, researchers, and professors. Their strong support and interest encouraged me a lot. I was fully charged with the belief that I could go abroad and explore more opportunities,” he said.

Dr. Oh, who completed his dissertation last fall, now works at the Department of Electrical Engineering and Computer Science at MIT under Professor Wojciech Matusik. “I think the research environment at KAIST is on par with MIT. I have very rich resources for my studies and research at both schools, but at MIT the working culture is a little different and it remains a big challenge for me.
I am still not familiar with collaborative work with colleagues from very diverse backgrounds and countries, and persuading and communicating with them is very tough. But I think I am getting better and better,” he said.

Oh, who is also an avid computer gamer, said life seems to be a game: after you accomplish something, you move up to the next level. He feels great joy when he is moving up, and he believes such diverse experiences have helped him become a better person day by day. Once he identified what gave him a strong sense of purpose, he was no longer stressed out by his studies. He was excited to be able to follow his passion and is ready for the next challenge.
Professor Otfried Cheong Named as Distinguished Scientist by ACM
Professor Otfried Cheong (Schwarzkopf) of the School of Computing was named a Distinguished Scientist of 2016 by the Association for Computing Machinery (ACM). The ACM recognized 45 Distinguished Members in the categories of Distinguished Scientist, Educator, and Engineer for their individual contributions to the field of computing. Professor Cheong is the sole recipient from a Korean institution. The recipients were selected from among the top 10 percent of ACM members with at least 15 years of professional experience and five years of continuous professional membership. He is known as one of the authors of the widely used textbook Computational Geometry: Algorithms and Applications and as the developer of Ipe, a vector graphics editor. Professor Cheong joined KAIST in 2005, after earning his doctorate from the Free University of Berlin in 1992. He previously taught at Utrecht University, Pohang University of Science and Technology, Hong Kong University of Science and Technology, and the Eindhoven University of Technology.
Professor Jinwoo Shin of the Electrical Engineering Department Receives the 2015 ACM SIGMETRICS Rising Star Research Award
Professor Jinwoo Shin of the Electrical Engineering Department at KAIST was selected as the recipient of the 2015 ACM SIGMETRICS Rising Star Research Award. SIGMETRICS, the ACM special interest group on computer systems performance evaluation, presents the award annually to a junior researcher. He is the eighth annual recipient and the first from an Asian university. Professor Shin was recognized for his work on the theoretical analysis of stochastic queueing networks and machine learning. He said, “I would like to contribute to the expansion of computing and network theory in Korea, where those fields are under-recognized.” He has received numerous awards, including the Kenneth C. Sevcik (Best Student Paper) Award at SIGMETRICS 2009, the George M. Sprowls (Best MIT CS PhD Thesis) Award in 2010, the Best Paper Award at MobiHoc 2013, the Best Publication Award from the INFORMS Applied Probability Society in 2013, and the Bloomberg Scientific Research Award in 2015.
Professor Jinwoo Shin Receives the Bloomberg Scientific Research Award
Professor Jinwoo Shin (https://sites.google.com/site/mijirim/) of the Electrical Engineering Department at KAIST has been selected as one of three winners of the first Bloomberg Scientific Research Award this month. The newly created award is presented to researchers in computer science who conduct high-quality research in areas such as machine learning, natural language processing, machine translation, statistics, and theory. Professor Shin submitted a research proposal entitled “Scalable Probabilistic Deep Learning,” and the award will fund his research for one year. For details, please see the article released by Bloomberg News announcing the winners of the award: Bloomberg News, April 28, 2015, “Announcing the Winners of Bloomberg’s First Scientific Research Program” https://3blmedia.com/News/Announcing-Winners-Bloombergs-First-Scientific-Research-Program
KAIST Wins First Prize at Recon Challenge of Int’l Magnetic Resonance Society
Professor Jong-chul Ye of the Department of Bio and Brain Engineering and doctoral student Hong Jeong won first prize at the Recon Challenge, held as part of a workshop sponsored by the International Society for Magnetic Resonance in Medicine (ISMRM). The workshop took place under the theme of “data sampling and image reconstruction” on Jan. 25-28 in Sedona, Arizona, the United States. The KAIST team beat out major magnetic resonance imaging groups from the U.S. and Europe. The Recon Challenge is a biennial competition highlighting different reconstruction strategies and the metrics used to compare them. ISMRM is an international, nonprofit, scientific association that promotes communication, research, development, and applications in the field of magnetic resonance in medicine and biology and other related topics.

At the competition, the KAIST team presented a new dynamic MRI algorithm called k-t FOCUSS that is optimal from a compressed sensing perspective. The main contribution of the method is the extension of k-t FOCUSS to a more general framework with prediction and residual encoding: the prediction provides an initial estimate, while the residual encoding takes care of the remaining residual signals.
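The prediction-plus-residual idea can be illustrated with a toy one-dimensional sketch. The Gaussian measurement matrix, the problem sizes, and the basic FOCUSS reweighting used here are simplifying assumptions, not the actual k-t FOCUSS formulation for undersampled k-t space data:

```python
import numpy as np

rng = np.random.default_rng(0)

def focuss(A, b, iters=30, eps=1e-8):
    """Basic FOCUSS reweighting: x <- W (A W)^+ b with W = diag(sqrt|x|),
    which drives the solution of A x = b toward a sparse one."""
    x = A.T @ b                      # non-sparse initial estimate
    for _ in range(iters):
        W = np.diag(np.sqrt(np.abs(x)) + eps)
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

n, m = 32, 16
prediction = np.ones(n)              # baseline estimate (e.g., a temporal average)
residual = np.zeros(n)
residual[[3, 17]] = [2.0, -1.5]      # the true residual is sparse
truth = prediction + residual

A = rng.standard_normal((m, n)) / np.sqrt(m)   # undersampled measurement operator
b = A @ truth                                   # observed measurements

# Encode only the residual: subtract the predicted measurements,
# recover the sparse difference, then add the prediction back.
recon = prediction + focuss(A, b - A @ prediction)
```

The point of encoding only the residual is that, when the prediction is good, the remaining signal is much sparser than the full signal, so far fewer measurements suffice for its recovery.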
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.