KAIST
NEWS
Data
Yuji Roh Awarded 2022 Microsoft Research PhD Fellowship
KAIST PhD candidate Yuji Roh of the School of Electrical Engineering (advisor: Prof. Steven Euijong Whang) was selected as a recipient of the 2022 Microsoft Research PhD Fellowship.

< KAIST PhD candidate Yuji Roh (advisor: Prof. Steven Euijong Whang) >

The Microsoft Research PhD Fellowship is a scholarship program that recognizes outstanding graduate students for their exceptional and innovative research in areas relevant to computer science and related fields. This year, 36 fellows were selected from around the world, and Yuji Roh from KAIST EE is the only recipient from a university in Korea. Each selected fellow will receive a $10,000 scholarship and an opportunity to intern at Microsoft under the guidance of an experienced researcher.

Yuji Roh was named a fellow in the field of “Machine Learning” for her outstanding achievements in Trustworthy AI. Her research highlights include designing a state-of-the-art fair training framework using batch selection and developing novel algorithms for training that is both fair and robust. Her work has been presented at the top machine learning conferences ICML, ICLR, and NeurIPS, among others, and she co-presented a tutorial on Trustworthy AI at the top data mining conference ACM SIGKDD. She is currently interning at the NVIDIA Research AI Algorithms Group, developing large-scale, real-world fair AI frameworks.

The list of fellowship recipients and the interview videos are available on the Microsoft webpage and YouTube.
The list of recipients: https://www.microsoft.com/en-us/research/academic-program/phd-fellowship/2022-recipients/
Interview (Global): https://www.youtube.com/watch?v=T4Q-XwOOoJc
Interview (Asia): https://www.youtube.com/watch?v=qwq3R1XU8UE

[Highlighted research achievements by Yuji Roh: Fair batch selection framework]
[Highlighted research achievements by Yuji Roh: Fair and robust training framework]
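The fair batch selection idea mentioned above can be illustrated with a small sketch. The article does not describe the published algorithm in detail, so the following Python snippet is only a minimal, assumed illustration: fairness is pursued by re-weighting how many examples each demographic group contributes to a minibatch, shifting weight toward the group the model currently fits worst. The function name, parameters, and update rule are illustrative, not the actual framework.

```python
import numpy as np

def fair_batch_indices(groups, group_losses, batch_size, rng, step=0.1):
    """Sample a minibatch that over-represents the group with the highest
    current loss, nudging training toward more equal per-group error."""
    g_ids = np.unique(groups)
    # Start from group-proportional sampling weights...
    weights = np.array([np.mean(groups == g) for g in g_ids], dtype=float)
    # ...then shift weight toward the currently disadvantaged group.
    worst = int(np.argmax([group_losses[g] for g in g_ids]))
    weights[worst] += step
    weights /= weights.sum()
    # Draw per-group quotas and sample example indices within each group.
    quotas = rng.multinomial(batch_size, weights)
    batch = []
    for g, q in zip(g_ids, quotas):
        idx = np.flatnonzero(groups == g)
        batch.extend(rng.choice(idx, size=min(q, len(idx)), replace=False))
    return np.asarray(batch)

# Toy usage: two demographic groups, group 1 currently has the higher loss.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
batch = fair_batch_indices(groups, {0: 0.3, 1: 0.9}, batch_size=64, rng=rng)
print(len(batch), np.mean(groups[batch] == 1))  # group 1 is over-represented
```

In a real training loop the per-group losses, and therefore the sampling weights, would be re-estimated as training progresses rather than fixed as in this toy usage.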
2022.10.28
Machine Learning-Based Algorithm to Speed up DNA Sequencing
The algorithm presents the first full-fledged short-read alignment software that leverages learned indices to solve the exact match search problem for efficient seeding.

The human genome consists of a complete set of DNA, which is about 6.4 billion letters long. Because of its size, reading the whole genome sequence at once is challenging, so scientists use DNA sequencers to produce hundreds of millions of DNA sequence fragments, or short reads, up to 300 letters long. The short reads are then assembled like a giant jigsaw puzzle to reconstruct the entire genome sequence. Even with very fast computers, this job can take hours to complete.

A research team at KAIST has achieved up to 3.45x faster speeds by developing the first short-read alignment software that uses a recent advance in machine learning called a learned index. The research team reported their findings on March 7, 2022 in the journal Bioinformatics. The software has been released as open source and can be found on GitHub (https://github.com/kaist-ina/BWA-MEME).

Next-generation sequencing (NGS) is a state-of-the-art DNA sequencing method, and projects are underway with the goal of producing genome sequencing at population scale. Modern NGS hardware is capable of generating billions of short reads in a single run. The short reads then have to be aligned with the reference DNA sequence. With large-scale DNA sequencing operations running hundreds of next-generation sequencers, the need for an efficient short-read alignment tool has become even more critical. Accelerating DNA sequence alignment would be a step toward the goal of population-scale sequencing. However, existing algorithms are limited in their performance because of their frequent memory accesses.

BWA-MEM2 is a popular short-read alignment software package currently used to sequence DNA, but it has its limitations. State-of-the-art alignment has two phases, seeding and extending. During the seeding phase, searches find exact matches of short reads in the reference DNA sequence. During the extending phase, the short reads from the seeding phase are extended. In the current process, bottlenecks occur in the seeding phase because finding the exact matches slows the process.

The researchers set out to accelerate DNA sequence alignment by applying machine learning techniques as an algorithmic improvement. Their algorithm, BWA-MEME (BWA-MEM emulated), leverages learned indices to solve the exact match search problem. Whereas the original software compared one character at a time for an exact match search, the team’s new algorithm achieves up to 3.45x higher seeding throughput than BWA-MEM2 by reducing the number of instructions by 4.60x and memory accesses by 8.77x.

“Through this study, it has been shown that full genome big data analysis can be performed faster and at lower cost than conventional methods by applying machine learning technology,” said Professor Dongsu Han from the School of Electrical Engineering at KAIST.

The researchers’ ultimate goal is to develop efficient software that scientists from academia and industry can use on a daily basis for analyzing big data in genomics. “With the recent advances in artificial intelligence and machine learning, we see so many opportunities for designing better software for genomic data analysis. The potential is there for accelerating existing analysis as well as enabling new types of analysis, and our goal is to develop such software,” added Han.

Whole genome sequencing has traditionally been used for discovering genomic mutations and identifying the root causes of diseases, which leads to the discovery and development of new drugs and cures, and there could be many other potential applications. Whole genome sequencing is used not only for research, but also for clinical purposes. “The science and technology for analyzing genomic data is making rapid progress to make it more accessible for scientists and patients. This will enhance our understanding about diseases and develop a better cure for patients of various diseases.”

The research was funded by the National Research Foundation of the Korean government’s Ministry of Science and ICT.

-Publication
Youngmok Jung and Dongsu Han, “BWA-MEME: BWA-MEM emulated with a machine learning approach,” Bioinformatics, Volume 38, Issue 9, May 2022 (https://doi.org/10.1093/bioinformatics/btac137)

-Profile
Professor Dongsu Han
School of Electrical Engineering
KAIST
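BWA-MEME's internal design is not spelled out in the article beyond "leverages learned indices," so the following Python sketch only illustrates the general learned-index idea for exact-match search: a simple model predicts where a key should sit in a sorted array of reference k-mers, and a bounded local search around the prediction confirms the match. The k-mer encoding, the linear model, and the error-bound handling here are illustrative assumptions, not the BWA-MEME implementation.

```python
import bisect
import numpy as np

class LearnedExactMatchIndex:
    """Toy learned index over a sorted array of integer-encoded k-mers.

    A linear model predicts the position of a key; an exact match is then
    confirmed by a bounded binary search around the prediction."""

    def __init__(self, sorted_keys):
        self.keys = np.asarray(sorted_keys, dtype=np.int64)
        positions = np.arange(len(self.keys))
        # Fit position ~ a * key + b by least squares.
        self.a, self.b = np.polyfit(self.keys.astype(np.float64), positions, 1)
        # Record the worst-case prediction error so lookups stay exact.
        preds = np.clip((self.a * self.keys + self.b).round().astype(np.int64),
                        0, len(self.keys) - 1)
        self.max_err = int(np.max(np.abs(preds - positions)))

    def lookup(self, key):
        """Return the position of `key` in the sorted array, or -1 if absent."""
        guess = int(round(self.a * key + self.b))
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        i = bisect.bisect_left(self.keys, key, lo, max(lo, hi))
        return i if i < len(self.keys) and self.keys[i] == key else -1

# Usage: encode k-mers as integers (e.g. A=0, C=1, G=2, T=3), build the index
# once over the sorted reference k-mers, then query each read's seed k-mer.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference_kmers = np.unique(rng.integers(0, 4**10, size=100_000))
    index = LearnedExactMatchIndex(reference_kmers)
    q = int(reference_kmers[12_345])
    assert index.lookup(q) == 12_345
```

The speedup in such schemes comes from the prediction landing close to the true position, so each query needs only a handful of comparisons and memory accesses instead of a full traversal.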
2022.05.10
CXL-Based Memory Disaggregation Technology Opens Up a New Direction for Big Data Solution Frameworks
A KAIST team’s compute express link (CXL) solution provides new insights on memory disaggregation and ensures direct access and high-performance capabilities.

A team from the Computer Architecture and Memory Systems Laboratory (CAMEL) at KAIST presented a new compute express link (CXL) solution whose directly accessible, high-performance memory disaggregation opens new directions for big data memory processing. Professor Myoungsoo Jung said the team’s technology significantly improves performance compared to existing remote direct memory access (RDMA)-based memory disaggregation.

CXL is a new peripheral component interconnect express (PCIe)-based dynamic multi-protocol made for efficiently utilizing memory devices and accelerators. Many enterprise data centers and memory vendors are paying attention to it as the next-generation multi-protocol for the era of big data.

Emerging big data applications such as machine learning, graph analytics, and in-memory databases require large memory capacities. However, scaling out the memory capacity via a conventional memory interface like double data rate (DDR) is limited by the number of central processing units (CPUs) and memory controllers. Therefore, memory disaggregation, which allows connecting a host to another host’s memory or to memory nodes, has appeared.

RDMA is a way for a host to directly access another host’s memory via InfiniBand, the commonly used network protocol in data centers. Today, most existing memory disaggregation technologies employ RDMA to obtain a large memory capacity: a host shares another host’s memory by transferring the data between local and remote memory. Although RDMA-based memory disaggregation provides a large memory capacity to a host, two critical problems exist. First, scaling out the memory still requires an extra CPU to be added, because passive memory such as dynamic random-access memory (DRAM) cannot operate by itself and must be controlled by a CPU. Second, redundant data copies and software fabric interventions in RDMA-based memory disaggregation cause longer access latency; remote memory access latency in RDMA-based memory disaggregation is multiple orders of magnitude longer than local memory access.

To address these issues, Professor Jung’s team developed a CXL-based memory disaggregation framework, including CXL-enabled customized CPUs, CXL devices, CXL switches, and CXL-aware operating system modules. The team’s CXL device is a purely passive, directly accessible memory node that contains multiple DRAM dual inline memory modules (DIMMs) and a CXL memory controller. Since the CXL memory controller manages the memory in the CXL device, a host can utilize the memory node without processor or software intervention. The team’s CXL switch enables scaling out a host’s memory capacity by hierarchically connecting multiple CXL devices to the switch, allowing more than hundreds of devices. On top of the switches and devices, the team’s CXL-enabled operating system removes the redundant data copies and protocol conversion exhibited by conventional RDMA, which significantly decreases access latency to the memory nodes.

In a test loading 64B (cacheline) data from memory pooling devices, CXL-based memory disaggregation showed 8.2 times higher data load performance than RDMA-based memory disaggregation, and even similar performance to local DRAM memory. In the team’s evaluations on big data benchmarks such as a machine learning-based test, the CXL-based memory disaggregation technology also showed up to 3.7 times higher performance than prior RDMA-based memory disaggregation technologies.

“Escaping from the conventional RDMA-based memory disaggregation, our CXL-based memory disaggregation framework can provide high scalability and performance for diverse data centers and cloud service infrastructures,” said Professor Jung. He went on to stress, “Our CXL-based memory disaggregation research will bring about a new paradigm for memory solutions that will lead the era of big data.”

-Profile:
Professor Myoungsoo Jung
Computer Architecture and Memory Systems Laboratory (CAMEL)
http://camelab.org
School of Electrical Engineering
KAIST
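The latency argument above, extra data copies and software intervention on the RDMA path versus a direct load on the CXL path, can be made concrete with a toy model. The step lists and the 4 KiB RDMA fetch granularity below are illustrative assumptions about a typical RDMA-based swapping design, not measurements or details from the KAIST study.

```python
# Toy model of one 64-byte (cacheline) remote load under the two designs the
# article contrasts. Numbers and step lists are illustrative assumptions only.

CACHELINE = 64          # bytes the CPU actually needs
RDMA_PAGE = 4096        # assumed fetch granularity for RDMA-based swapping

def rdma_cacheline_load():
    """RDMA-based disaggregation: software intervention plus redundant copies."""
    return [
        ("page fault / software fabric handling", 0),          # kernel involvement
        ("RDMA read of the remote page into a local buffer", RDMA_PAGE),
        ("copy from the RDMA buffer into a local DRAM page", RDMA_PAGE),
        ("CPU load of the requested cacheline", CACHELINE),
    ]

def cxl_cacheline_load():
    """CXL-based disaggregation: the CPU issues a load directly; the CXL
    memory controller on the device serves the cacheline."""
    return [("CPU load via CXL.mem to the device's memory controller", CACHELINE)]

for name, steps in [("RDMA", rdma_cacheline_load()), ("CXL", cxl_cacheline_load())]:
    moved = sum(nbytes for _, nbytes in steps)
    print(f"{name}: {len(steps)} steps, {moved} bytes moved to serve {CACHELINE} B")
```

The point of the comparison is qualitative: the RDMA path touches the operating system and moves far more data than requested, which is exactly the overhead the CXL-based framework is designed to remove.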
2022.03.16
‘Game&Art: Auguries of Fantasy’ Features Future of the Metaverse
‘Game & Art: Auguries of Fantasy,’ a special exhibition combining art and technology, features a new future of metaverse fantasy. The show is hosted at the Daejeon Creative Center at the Daejeon Museum of Art through September 5. It combines science and technology with culture and the arts, and introduces young artists whose creativity will lead to new opportunities in games and art.

The Graduate School of Culture Technology was designated as a leading culture content academy in 2020 by the Ministry of Culture, Sports & Tourism and the Korea Creative Content Agency for fostering the R&D workforce in creative culture technology. NCsoft sponsored the show and also participated as an artist, combining its game-composing elements and technologies with other genres, including data for game construction, scenarios for forming a worldview, and game art and sound. All of the content can be experienced online in a virtual space as well as offline, and can be easily accessed through personal devices.

The exhibition is characterized by the themes ‘timeless’ and ‘spaceless,’ connecting the past, present, and future with spaces created in the digital world. It gives audience members an opportunity to experience freedom beyond the constraints of time and space under the theme of a fantasy reality created by games and art.

"Computer games, which began in the 1980s, have become cultural content that spans generations, and games are now the fusion field for leading-edge technologies including computer graphics, sound, human-computer interactions, big data, and AI. They are also the best platform for artistic creativity by adding human imagination to technology," said Professor Joo-Han Nam from the Graduate School of Culture Technology, who led the project. "Our artists wanted to convey various messages to our society through works that connect the past, present, and future through games."

Ju-young Oh's "Unexpected Scenery V2" and "Hope for Rats V2" are game-type media works that raise issues surrounding technology, such as the lack of understanding behind various scientific achievements, the history of accidental achievements, and the side effects of new conveniences. Tae-Wan Kim, in his work themed ‘healing,’ combined the real-time movement of particles with the movements of people recorded as digital data. Metadata is collected by sensors in the exhibition space, and floating particle forms evolve into abstract graphic designs according to audio-visual responses.

Meanwhile, ‘SOS’ is a collaborative work by six KAIST researchers (In-Hwa Yeom, Seung-Eon Lee, Seong-Jin Jeon, Jin-Seok Hong, Hyung-Seok Yoon, and Sang-Min Lee). SOS is based on diverse perspectives embracing phenomena surrounding contemporary natural resources. Audience members follow a gamified path between the various media elements composing the art’s environment, and through this process they can experience various emotions such as curiosity, suspicion, and recovery.

‘Diversity’ by Sung-Hyun Kim uses devices that recognize the movements of hands and fingers to provide experiences exploring the latent space of gameplay images learned by deep neural networks. Image volumes generated by the neural networks are visualized through physics-based, three-dimensional volume-rendering algorithms, and the series of processes was implemented with self-written code.
2021.06.21
‘Urban Green Space Affects Citizens’ Happiness’
Study finds a relationship between green space, the economy, and happiness

A recent study revealed that as a city becomes more economically developed, its citizens’ happiness becomes more directly related to the area of urban green space. A joint research project by Professor Meeyoung Cha of the School of Computing and her collaborators studied the relationship between green space and citizen happiness by analyzing big data from satellite images of 60 different countries.

Urban green space, including parks, gardens, and riversides, not only provides aesthetic pleasure, but also positively affects our health by promoting physical activity and social interactions. Most of the previous research attempting to verify the correlation between urban green space and citizen happiness was based on a few developed countries. Therefore, it was difficult to identify whether the positive effects of green space are global, or merely phenomena that depend on the economic state of the country. There have also been limitations in data collection, as it is difficult to visit each location or carry out large-scale investigations based on aerial photographs.

The research team used data collected by Sentinel-2, a high-resolution satellite operated by the European Space Agency (ESA), to investigate 90 green spaces in 60 different countries around the world. The subjects of analysis were cities with the highest population densities (cities that contain at least 10% of the national population), and the images were obtained during the summer of each region for clarity. Images from the northern hemisphere were obtained between June and September of 2018, and those from the southern hemisphere were obtained between December of 2017 and February of 2018. The areas of urban green space were then quantified and crossed with data from the World Happiness Report and GDP by country reported by the United Nations in 2018. Using these data, the relationships between green space, the economy, and citizen happiness were analyzed.

The results showed that in all cities, citizen happiness was positively correlated with the area of urban green space regardless of the country’s economic state. However, out of the 60 countries studied, the happiness index of the bottom 30 by GDP showed a stronger correlation with economic growth. In countries whose GDP per capita was higher than 38,000 USD, the area of green space acted as a more important factor affecting happiness than economic growth. Data from Seoul was analyzed to represent South Korea, and showed an increase in the happiness index as green areas grew compared to the past.

The authors point out that their work has several policy-level implications. First, public green space should be made accessible to urban dwellers to enhance social support. If public safety in urban parks is not guaranteed, their positive role in social support and happiness may diminish. Also, the meaning of public safety may change; for example, ensuring biological safety will be a priority in keeping urban parks accessible during the COVID-19 pandemic. Second, urban planning for public green space is needed in both developed and developing countries. As it is challenging or nearly impossible to secure land for green space after an area is developed, urban planning for parks and green space should be considered in developing economies where new cities and suburban areas are rapidly expanding. Third, recent climate changes present substantial difficulties in sustaining urban green space. Extreme events such as wildfires, floods, droughts, and cold waves could endanger urban forests, while global warming could conversely accelerate tree growth in cities due to the urban heat island effect. Thus, more attention must be paid to predicting climate change and discovering its impact on the maintenance of urban green space.

“There has recently been an increase in the number of studies using big data from satellite images to solve social conundrums,” said Professor Cha. “The tool developed for this investigation can also be used to quantify the area of aquatic environments like lakes and the seaside, and it will now be possible to analyze the relationship between citizen happiness and aquatic environments in future studies,” she added.

Professor Woo Sung Jung from POSTECH and Professor Donghee Wohn from the New Jersey Institute of Technology also joined this research. It was reported in the online issue of EPJ Data Science on May 30.

-Publication
Oh-Hyun Kwon, Inho Hong, Jeasurk Yang, Donghee Y. Wohn, Woo-Sung Jung, and Meeyoung Cha, 2021. Urban green space and happiness in developed countries. EPJ Data Science. DOI: https://doi.org/10.1140/epjds/s13688-021-00278-7

-Profile
Professor Meeyoung Cha
Data Science Lab
https://ds.ibs.re.kr/
School of Computing
KAIST
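The article does not detail how green space was quantified from the satellite imagery, so the snippet below is only a minimal sketch of one standard approach: classify vegetated pixels with an NDVI threshold computed from the red and near-infrared bands (for Sentinel-2, bands B4 and B8), then relate the resulting per-city green fraction to happiness scores with a rank correlation. The threshold value and the synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

def green_fraction(red, nir, ndvi_threshold=0.4):
    """Fraction of pixels classified as vegetation from red / near-infrared
    bands using an NDVI threshold (threshold value is an assumption)."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return float(np.mean(ndvi > ndvi_threshold))

def green_happiness_correlation(green_fractions, happiness_scores):
    """Rank correlation between per-city green fractions and happiness."""
    rho, p = stats.spearmanr(green_fractions, happiness_scores)
    return rho, p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy band arrays standing in for one city's Sentinel-2 tile.
    red = rng.uniform(0.05, 0.4, size=(100, 100))
    nir = rng.uniform(0.1, 0.8, size=(100, 100))
    print(f"example green fraction: {green_fraction(red, nir):.2f}")

    # Toy per-city green fractions and happiness scores for 60 cities.
    green = rng.uniform(0.05, 0.5, size=60)
    happiness = 4 + 4 * green + rng.normal(0, 0.3, 60)
    print(green_happiness_correlation(green, happiness))
```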
2021.06.21
Advanced NVMe Controller Technology for Next Generation Memory Devices
KAIST researchers have advanced non-volatile memory express (NVMe) controller technology for next-generation information storage devices, and made this new technology, named ‘OpenExpress’, freely available to all universities and research institutes around the world to help reduce research costs in related fields.

NVMe is a communication protocol made for high-performance storage devices based on the peripheral component interconnect express (PCIe) interface. NVMe was developed to take the place of the Serial AT Attachment (SATA) protocol, which was designed to process data on hard disk drives (HDDs) and did not perform well on solid state drives (SSDs). Unlike HDDs that use magnetic spinning disks, SSDs use semiconductor memory, allowing the rapid reading and writing of data. SSDs also generate less heat and noise, and are much more compact and lightweight. Since data processing in SSDs using NVMe is up to six times faster than when SATA is used, NVMe has become the standard protocol for ultra-high-speed, high-volume data processing, and is currently used in many flash-based information storage devices.

Studies on NVMe continue at both the academic and industrial levels; however, its poor accessibility is a drawback. Major information and communications technology (ICT) companies around the world spend astronomical sums to procure the intellectual property (IP) related to hardware NVMe controllers that is necessary for the use of NVMe. However, such IP is not publicly disclosed, making it difficult for universities and research institutes to use it for research purposes. Although a small number of U.S. Silicon Valley startups provide parts of their independently developed IP for research, the cost of usage is around 34,000 USD per month. The costs skyrocket even further because each copy of single-use source code purchased for IP modification costs approximately 84,000 USD.

To address these issues, a group of researchers led by Professor Myoungsoo Jung from the School of Electrical Engineering at KAIST developed a next-generation NVMe controller technology that achieves parallel data input/output processing for SSDs in a fully hardware-automated form. The researchers presented their work at the 2020 USENIX Annual Technical Conference (USENIX ATC ’20) in July, and released it as an open research framework named ‘OpenExpress’.

The NVMe controller technology developed by Professor Jung’s team comprises a wide range of basic hardware IP and key NVMe IP cores. To examine its actual performance, the team built an NVMe hardware controller prototype using OpenExpress, and designed all the logic provided by OpenExpress to operate at high frequency. The field-programmable gate array (FPGA) memory card prototype developed using OpenExpress demonstrated increased input/output data processing capacity per second, supporting up to 7 gigabytes per second (GB/s) of bandwidth. This makes it suitable for research on ultra-high-speed, high-volume next-generation memory devices. In a test comparing various storage server loads on devices, the team’s FPGA also showed 76% higher bandwidth and 68% lower input/output delay compared to Intel’s new high-performance SSD (Optane SSD), which is sufficient for many researchers studying systems employing future memory devices. Depending on user needs, silicon devices can be synthesized as well, which is expected to further enhance performance.

The NVMe controller technology of Professor Jung’s team can be freely used and modified under the OpenExpress open-source end-user agreement for non-commercial use by all universities and research institutes. This makes it extremely useful for research on next-generation memory-compatible NVMe controllers and software stacks.

“With the product of this study being disclosed to the world, universities and research institutes can now use, at no cost, controllers that used to be exclusive to only the world’s biggest companies,” said Professor Jung. He went on to stress, “This is a meaningful first step in the research of information storage device systems such as high-speed, high-volume next-generation memory.”

This work was supported by a grant from MemRay, a company specializing in next-generation memory development and distribution. More details about the study can be found at http://camelab.org.

Image credit: Professor Myoungsoo Jung, KAIST
Image usage restrictions: News organizations may use or redistribute these figures and images, with proper attribution, as part of news coverage of this paper only.

-Publication:
Myoungsoo Jung. (2020). OpenExpress: Fully Hardware Automated Open Research Framework for Future Fast NVMe Devices. In the Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC ’20). Available online at https://www.usenix.org/system/files/atc20-jung.pdf

-Profile:
Myoungsoo Jung
Associate Professor
m.jung@kaist.ac.kr
http://camelab.org
Computer Architecture and Memory Systems Laboratory
School of Electrical Engineering
http://kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
(END)
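For readers unfamiliar with what an NVMe controller has to service, the sketch below models the basic host-side handshake of the protocol: commands are posted to a submission queue, a doorbell signals the controller that new work is available, and results are reaped from a completion queue. This is a simplified Python toy; real NVMe defines fixed 64-byte command and 16-byte completion formats, per-queue doorbell registers, and many parallel queue pairs, none of which are modeled here, and this is not code from OpenExpress.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Command:
    cid: int        # command identifier
    opcode: str     # e.g. "READ" or "WRITE"
    lba: int        # starting logical block address
    nblocks: int    # number of blocks

class ToyNvmeQueuePair:
    """Toy submission/completion queue pair illustrating the NVMe control flow."""

    def __init__(self, depth=1024):
        self.sq = deque(maxlen=depth)   # submission queue
        self.cq = deque(maxlen=depth)   # completion queue
        self._next_cid = 0

    def submit(self, opcode, lba, nblocks):
        cid = self._next_cid
        self._next_cid += 1
        self.sq.append(Command(cid, opcode, lba, nblocks))
        return cid

    def ring_doorbell_and_process(self):
        """Stand-in for the controller: drain the SQ and post completions."""
        while self.sq:
            cmd = self.sq.popleft()
            self.cq.append((cmd.cid, "SUCCESS"))

    def reap_completions(self):
        done = list(self.cq)
        self.cq.clear()
        return done

qp = ToyNvmeQueuePair()
qp.submit("READ", lba=0, nblocks=8)
qp.ring_doorbell_and_process()
print(qp.reap_completions())   # [(0, 'SUCCESS')]
```

In a hardware-automated controller such as the one described above, the queue handling, command parsing, and data movement are carried out by dedicated logic rather than by host software as in this toy.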
2020.09.04
Research on the Million Follower Fallacy Receives the Test of Time Award
Professor Meeyoung Cha’s research investigating the correlation between the number of followers on social media and their actual influence was re-highlighted ten years after the publication of the paper. Saying that her research is still as relevant today as the day it was published 10 years ago, the Association for the Advancement of Artificial Intelligence (AAAI) presented Professor Cha from the School of Computing with the Test of Time Award during the 14th International Conference on Web and Social Media (ICWSM), held online June 8 through 11.

In her 2010 paper titled ‘Measuring User Influence in Twitter: The Million Follower Fallacy,’ Professor Cha showed that the number of followers does not correspond to actual influence. She analyzed data including 54,981,152 user accounts, 1,963,263,821 social links, and 1,755,925,520 Tweets, collected with 50 servers. The research compares and illustrates the limitations of various methods used to measure the influence a user has on a social networking platform. These results provided new insights and interpretations for the influencer selection algorithms used to maximize advertising impact on big social networking platforms.

The research also looked at how long an influential user stayed active, and whether the user could freely cross the borders between fields and be influential on different topics as well. By analyzing cases of who becomes an influencer when new events occur, it was shown that a person could quickly become an influencer using several key tactics, unlike what was previously claimed by the ‘accidental influentials’ theory.

Professor Cha explained, “At the time, data from social networking platforms did not receive much attention in computer science, but I remember those all-nighters I pulled to work on this project, fascinated by the fact that internet data could be used to solve difficult social science problems. I feel so grateful that my research has been endeared for such a long time.”

Professor Cha received both her undergraduate and graduate degrees from KAIST, and conducted this research during her postdoctoral course at the Max Planck Institute in Germany. She now also serves as a chief investigator of a data science group at the Institute for Basic Science (IBS). (END)
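The paper's core comparison, how users rank under different influence measures such as followers, retweets, and mentions, comes down to rank statistics over per-user counts. The snippet below shows that kind of computation on synthetic data; the distributions and numbers are purely illustrative assumptions and not the paper's dataset or results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_users = 10_000

# Synthetic per-user measures: follower counts, and a retweet-based influence
# score that only loosely tracks follower count (an assumption for the demo).
followers = rng.lognormal(mean=6.0, sigma=2.0, size=n_users)
retweets = followers ** 0.3 * rng.lognormal(mean=0.0, sigma=1.5, size=n_users)

# Rank correlation between the two influence measures.
rho, p = stats.spearmanr(followers, retweets)
print(f"Spearman rank correlation (followers vs. retweets): {rho:.2f} (p={p:.1e})")

# Overlap of the top-1% of users under each measure: a small overlap is the
# kind of mismatch the "million follower fallacy" refers to.
k = n_users // 100
top_follow = set(np.argsort(followers)[-k:])
top_rt = set(np.argsort(retweets)[-k:])
print(f"Top-1% overlap: {len(top_follow & top_rt) / k:.0%}")
```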
2020.06.22
Unravelling Complex Brain Networks with Automated 3-D Neural Mapping
- Automated 3-D brain imaging data analysis technology offers more reliable and standardized analysis of the spatial organization of complex neural circuits. -

KAIST researchers developed a new algorithm for brain imaging data analysis that enables the precise and quantitative mapping of complex neural circuits onto a standardized 3-D reference atlas.

Brain imaging data analysis is indispensable in the study of neuroscience. However, analysis of the obtained brain imaging data has been heavily dependent on manual processing, which cannot guarantee the accuracy, consistency, and reliability of the results. Conventional brain imaging data analysis typically begins with finding a 2-D brain atlas image that is visually similar to the experimentally obtained brain image. Then, the region of interest (ROI) of the atlas image is matched manually with the obtained image, and the number of labeled neurons in the ROI is counted. Such a visual matching process between experimentally obtained brain images and 2-D brain atlas images has been one of the major sources of error in brain imaging data analysis, as the process is highly subjective, sample-specific, and susceptible to human error. Manual analysis processes for brain images are also laborious, and thus studying the complete 3-D neuronal organization on a whole-brain scale is a formidable task.

To address these issues, a KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering developed new brain imaging data analysis software named ‘AMaSiNe (Automated 3-D Mapping of Single Neurons)’, and introduced the algorithm in the May 26 issue of Cell Reports.

AMaSiNe automatically detects the positions of single neurons from multiple brain images, and accurately maps all the data onto a common standard 3-D reference space. The algorithm allows the direct comparison of brain data from different animals by automatically matching similar features from the images and computing an image similarity score. This feature-based, quantitative image-to-image comparison technology improves the accuracy, consistency, and reliability of analysis results using only a small number of brain slice image samples, and helps standardize brain imaging data analyses. Unlike other existing brain imaging data analysis methods, AMaSiNe can also automatically find the alignment conditions from misaligned and distorted brain images, and draw an accurate ROI, without any cumbersome manual validation process. AMaSiNe has further been shown to produce consistent results with brain slice images stained using various methods, including DAPI, Nissl, and autofluorescence.

The two co-lead authors of this study, Jun Ho Song and Woochul Choi, exploited these benefits of AMaSiNe to investigate the topographic organization of neurons that project to the primary visual area (VISp) in various ROIs, such as the dorsal lateral geniculate nucleus (LGd), which could hardly be addressed without proper calibration and standardization of the brain slice image samples. In collaboration with Professor Seung-Hee Lee’s group in the Department of Biological Sciences, the researchers successfully observed the 3-D topographic neural projections to the VISp from the LGd, and also demonstrated that these projections could not be observed when the slicing angle was not properly corrected by AMaSiNe. The results suggest that the precise correction of the slicing angle is essential for the investigation of complex and important brain structures.

AMaSiNe is widely applicable to studies of various brain regions and other experimental conditions. For example, in the research team’s previous study jointly conducted with Professor Yang Dan’s group at UC Berkeley, the algorithm enabled the accurate analysis of the neuronal subsets in the substantia nigra and their projections to the whole brain. Those findings were published in Science on January 24. AMaSiNe is of great interest to many neuroscientists in Korea and abroad, and is being actively used by a number of other research groups at KAIST, MIT, Harvard, Caltech, and UC San Diego.

Professor Paik said, “Our new algorithm allows the spatial organization of complex neural circuits to be found in a standardized 3-D reference atlas on a whole-brain scale. This will bring brain imaging data analysis to a new level.” He continued, “More in-depth insights for understanding the function of brain circuits can be achieved by facilitating more reliable and standardized analysis of the spatial organization of neural circuits in various regions of the brain.”

This work was supported by KAIST and the National Research Foundation of Korea (NRF).

Figure and Image Credit: Professor Se-Bum Paik, KAIST
Figure and Image Usage Restrictions: News organizations may use or redistribute these figures and images, with proper attribution, as part of news coverage of this paper only.

Publication: Song, J. H., et al. (2020). Precise Mapping of Single Neurons by Calibrated 3D Reconstruction of Brain Slices Reveals Topographic Projection in Mouse Visual Cortex. Cell Reports, Volume 31, 107682. Available online at https://doi.org/10.1016/j.celrep.2020.107682

Profile:
Se-Bum Paik
Assistant Professor
sbpaik@kaist.ac.kr
http://vs.kaist.ac.kr/
VSNN Laboratory
Department of Bio and Brain Engineering
Program of Brain and Cognitive Engineering
http://kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
(END)
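The core operation described above, mapping neuron positions detected in individual slices onto a common 3-D reference atlas via matched image features, can be sketched in a few lines. The snippet below is not the AMaSiNe algorithm (which also handles slicing-angle search, feature detection, and ROI delineation); it only shows, under the assumption that landmark correspondences are already available, how a least-squares affine transform can carry detected neuron coordinates into atlas space.

```python
import numpy as np

def fit_affine_3d(src_pts, dst_pts):
    """Least-squares 3-D affine transform mapping src landmark points onto
    matched dst (atlas) points. src_pts, dst_pts: (N, 3) arrays, N >= 4."""
    n = src_pts.shape[0]
    src_h = np.hstack([src_pts, np.ones((n, 1))])        # homogeneous coordinates
    # Solve src_h @ A ~= dst_pts for the 4x3 affine matrix A.
    A, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)
    return A

def map_to_atlas(neuron_xyz, A):
    """Apply the fitted transform to detected neuron coordinates."""
    n = neuron_xyz.shape[0]
    return np.hstack([neuron_xyz, np.ones((n, 1))]) @ A

# Toy check: recover a known rotation + translation from matched landmarks.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 10, size=(20, 3))
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
atlas_landmarks = landmarks @ R.T + np.array([1.0, -2.0, 0.5])
A = fit_affine_3d(landmarks, atlas_landmarks)
neurons = rng.uniform(0, 10, size=(5, 3))
print(np.allclose(map_to_atlas(neurons, A), neurons @ R.T + [1.0, -2.0, 0.5]))
```

Once all samples are expressed in the same reference coordinates, per-region neuron counts and projections from different animals can be compared directly, which is the standardization benefit the article emphasizes.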
2020.06.08
Professor Dongsu Han Named Program Chair for ACM CoNEXT 2020
Professor Dongsu Han from the School of Electrical Engineering has been appointed as the program chair for the 16th Association for Computing Machinery’s International Conference on emerging Networking EXperiments and Technologies (ACM CoNEXT 2020). Professor Han is the first program chair to be appointed from an Asian institution. ACM CoNEXT is hosted by ACM SIGCOMM, ACM's Special Interest Group on Data Communications, which specializes in the field of communication and computer networks. Professor Han will serve as program co-chair along with Professor Anja Feldmann from the Max Planck Institute for Informatics. Together, they have appointed 40 world-leading researchers as program committee members for this conference, including Professor Song Min Kim from KAIST School of Electrical Engineering. Paper submissions for the conference can be made by the end of June, and the event itself is to take place from the 1st to 4th of December. Conference Website: https://conferences2.sigcomm.org/co-next/2020/#!/home (END)
2020.06.02
A Global Campaign of ‘Facts before Rumors’ on COVID-19 Launched
- A KAIST data scientist group responds to facts and rumors on COVID-19 for global awareness of the pandemic. -

Like the novel coronavirus, rumors have no borders. The world is fighting to contain the pandemic, but we also have to deal with the appalling spread of an infodemic that is as contagious as the virus. This infodemic, a pandemic of false information, is bringing chaos and extreme fear to the general public.

Professor Meeyoung Cha’s group at the School of Computing started a global campaign called ‘Facts before Rumors’ to prevent the spread of false information across borders. She explained, “We saw many rumors that had already been fact-checked long before in China and South Korea now begin to circulate in other countries, sometimes leading to detrimental results. We launched an official campaign, Facts before Rumors, to deliver COVID-19-related facts to countries where the number of cases is now increasing.” She released the first set of facts on March 26 via her Twitter account @nekozzang.

Professor Cha, a data scientist who has focused on detecting global fake news, is now part of the COVID-19 AI Task Force at the Global Strategy Institute at KAIST. She is also leading the Data Science Group at the Institute for Basic Science (IBS) as Chief Investigator. Her research group worked in collaboration with the College of Nursing at Ewha Womans University to identify 15 claims about COVID-19 that circulated on social networks (SNS) and among the general public. The team fact-checked these claims based on information from the WHO and the CDCs of Korea and the US. The research group is now working on translating the list of claims into Portuguese, Spanish, Persian, Chinese, Amharic, Hindi, and Vietnamese. Delivering facts before rumors, the team says, will help contain the disease and prevent harm caused by misinformation.

The pandemic, which spread in China and South Korea before arriving in Europe and the US, is now moving into South America, Africa, and Southeast Asia. “We would like to play a part in preventing the further spread of the disease with the provision of only scientifically vetted, truthful facts,” said the team.

For this campaign, Professor Cha’s team investigated more than 200 rumored claims about COVID-19 in China during the early days of the pandemic. These claims spread at different scales: while some were only relevant locally or in larger regions of China, others propagated across Asia and are now spreading to the countries currently most affected by the disease. For example, the false claim that ‘fireworks can help tame the virus in the air’ spread only in China. Other claims, such as ‘eating garlic helps people overcome the disease’ or ‘gargling with salt water prevents the contraction of the disease,’ spread around the world even after being proved groundless. The team noted, however, that the times at which these claims propagate differ from one country to another. “This opens up an opportunity to debunk rumors in some countries, even before they start to emerge,” said Professor Cha.

Kun-Woo Kim, a master’s candidate in the Department of Industrial Design who joined this campaign and designed the Facts before Rumors chart, also expressed his hope that this campaign will help reduce the number of victims. He added, “I am very grateful to our scientists who quickly responded to the Fact Check in these challenging times.”
2020.03.27
COVID-19 Map Shows How the Global Pandemic Moves
- A School of Computing team visualized COVID-19 case data to show the global spread of the virus. -

The COVID-19 map made by KAIST data scientists shows where and how the virus is spreading from China, reportedly the epicenter of the disease. Professor Meeyoung Cha from the School of Computing and her group compiled data on the number of confirmed cases from January 22 to March 22 to analyze the trends of this global epidemic. The statistics include the number of confirmed cases, recoveries, and deaths across major continents during that period. The moving dot on the map strikingly shows how the confirmed cases are moving across the globe. According to their statistics, the centroid of the disease starts near Wuhan in China, moves to Korea, and then passes through the European region via Italy and Iran.

The data was collected by Geng Sun, a graduate student from the School of Computing, who started the work while he was quarantined after coming back from his home in China. An undergraduate colleague of Geng’s, Gabriel Camilo Lima, who made the map, is now working remotely from his home in Brazil since all undergraduate students were required to move out of the dormitory last week. The university closed all undergraduate housing and advised the undergraduate students to go back home as a preventive measure to stop the virus from spreading across the campus.

Gabriel said he calculated the centroid of all confirmed cases up to a given day. He explained, “I weighed each coordinate by the number of cases in that region and country and calculated an approximate center of gravity.” “The Earth is round, so the shortest path from Asia to Europe is often through Russia. In early March, the center of gravity of new cases was moving from Asia to Europe. Therefore, the centroid is moving to the west and goes through Russia, even though Russia has not reported many cases,” he added.

Professor Cha, who also leads the Data Science Group at the Institute for Basic Science (IBS) as its Chief Investigator, said the group will continue to update the map using public data at https://ds.ibs.re.kr/index.php/covid-19/. (END)
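The quoted description, weighting each location by its case count and taking a center of gravity that respects the Earth's curvature, corresponds to a standard spherical centroid computation. The exact formulation used for the KAIST map is not given in the article, so the Python sketch below shows one reasonable version: convert latitude/longitude to 3-D unit vectors, take the case-weighted mean, and convert back. The coordinates and case counts in the example are illustrative only.

```python
import numpy as np

def case_weighted_centroid(lats_deg, lons_deg, cases):
    """Case-weighted 'center of gravity' of confirmed cases on the sphere."""
    lat = np.radians(np.asarray(lats_deg, dtype=float))
    lon = np.radians(np.asarray(lons_deg, dtype=float))
    w = np.asarray(cases, dtype=float)
    # Convert to 3-D unit vectors so the average respects the Earth's curvature
    # (which is why a centroid moving from Asia to Europe can pass over Russia).
    xyz = np.stack([np.cos(lat) * np.cos(lon),
                    np.cos(lat) * np.sin(lon),
                    np.sin(lat)], axis=1)
    mean = (w[:, None] * xyz).sum(axis=0) / w.sum()
    mean /= np.linalg.norm(mean)                  # project back onto the sphere
    return np.degrees(np.arcsin(mean[2])), np.degrees(np.arctan2(mean[1], mean[0]))

# Toy example: as cases grow near Milan relative to Wuhan, the centroid shifts west.
print(case_weighted_centroid([30.6, 45.5], [114.3, 9.2], [50_000, 10_000]))
print(case_weighted_centroid([30.6, 45.5], [114.3, 9.2], [50_000, 60_000]))
```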
2020.03.27
Participation in the 2018 Bio-Digital City Workshop in Paris
(A student makes a presentation during the Bio-Digital City Workshop in Paris last month.)

KAIST students explored ideas for developing future cities during the 2018 Bio-Digital City Workshop held in Paris last month. This international workshop, hosted by the Cité des Sciences et de l'Industrie, was held under the theme “Biomimicry, Digital City and Big Data.” During the workshop, from July 10 to July 20, students teamed up with French counterparts to develop innovative urban design ideas. The Cité des Sciences et de l'Industrie is the largest science museum in Europe and is operated by Universcience, a specialized institute of science and technology in France.

Professor Seongju Chang from the Department of Civil and Environmental Engineering and Professor Jihyun Lee of the Graduate School of Culture Technology led the student group. Participants presented their ideas and findings on new urban solutions that combine biomimetic systems and digital technology. Each student group analyzed a particular natural ecosystem, such as sand dunes, jellyfish communities, or mangrove forests, and conducted research to extract algorithms for constructing sustainable urban building complexes based on the results. The extracted algorithm was used to conceive a sustainable building complex forming part of the urban environment by applying it to the actual Parisian city segment given as the virtual site for the workshop.

Students from diverse backgrounds in both countries participated in this convergence workshop. KAIST students included Ph.D. candidate Hyung Min Cho, undergraduates Min-Woo Jeong, Seung-Hwan Cha, and Sang-Jun Park from the Department of Civil and Environmental Engineering, undergraduate Kyeong-Keun Seo from the Department of Materials Science and Engineering, JiWhan Jeong (master’s course) from the Department of Industrial and Systems Engineering, and Ph.D. candidate Bo-Yoon Zang from the Graduate School of Culture Technology. They teamed up with French students from diverse backgrounds, including Design/Science, Visual Design, Geography, Computer Science, and Humanities and Social Science.

As the first international cooperation activity between KAIST and the Paris La Villette Science Museum, this workshop will serve as another opportunity to expand academic and human exchanges with Europe in the domain of smart and sustainable cities. Professor Seongju Chang, who led the research group, said, "We will continue to establish a cooperative relationship between KAIST and the European scientific community. This workshop is a good opportunity to demonstrate the competence of KAIST students and their scientific and technological excellence on the international stage.”
2018.08.01