KAIST NEWS
Technology
Professor Sung Yong Kim Elected as the Chair of PICES MONITOR
< Professor Sung Yong Kim > Professor Sung Yong Kim from the Department of Mechanical Engineering was elected as the chair of the Technical Committee on Monitoring (MONITOR) of the North Pacific Marine Science Organization (PICES). PICES is an intergovernmental marine science organization established in 1992 by six North Pacific nations (South Korea, Russia, the United States, Japan, China, and Canada) to exchange and discuss research on Pacific waters. It is headquartered in Canada and comprises seven affiliated marine science and marine technology committees. Professor Kim was elected chair of the technical committee that focuses on monitoring, and will serve on the Science Board as an ex-officio member. His term will last three years from November 2019. Professor Kim was recognized for his academic excellence, expertise, and leadership among oceanographers both at home and abroad. He will also serve as a civilian academic member of the Maritime and Fisheries Science and Technology Committee under the Korean Ministry of Oceans and Fisheries for two years from December 18, 2019. He stated, “I will give my full efforts to broaden Korean oceanography research by taking on maritime leadership positions at home and abroad, and help South Korea become a maritime powerhouse.” (END)
2019.12.22
View 9300
Professor Shin-Hyun Kim Receives the Young Scientist Award
Professor Shin-Hyun Kim from the Department of Chemical and Biomolecular Engineering received the Young Scientist Award from the Korean Academy of Science and Technology. The Young Scientist Award is presented to a promising Korean scientist under the age of 40 who shows significant potential, passion, and remarkable achievement. Professor Kim was lauded for his research on intelligent soft materials. By applying this research, he developed a capsule sensor material that can be used not only for sensors, but also for displays, color aesthetics, anti-counterfeiting technology, residual drug detection, and more. The award ceremony took place on December 14 at the Gwacheon National Science Museum. The Korean Minister of Science and ICT delivered words of encouragement, reminding everyone that “the driving force behind the creative performance of scientists is the provision of continuous support.” He added, “Researchers of Korea deserve greater public attention and support.” (END)
2019.12.21
View 7730
New Members of KAST 2020
< Professor Zong-Tae Bae (Left) and Professor Sang Ouk Kim (Right) > Professor Zong-Tae Bae from the School of Management Engineering and Professor Sang Ouk Kim from the Department of Materials Science and Engineering became new fellows of the Korean Academy of Science and Technology (KAST), along with 22 other scientists in Korea. On November 22, KAST announced 24 new members for the year 2020. This includes seven scientists from the field of natural sciences, six from engineering, four from medical sciences, another four from policy research, and three from agriculture and fishery. The new fellows will begin their terms in January next year, and their fellowships will be conferred during KAST’s New Year Reception to be held on January 14 in Seoul. (END)
2019.12.09
View 12122
KAIST and Google Jointly Develop AI Curricula
KAIST selected the two professors who will develop AI curricula under the auspices of the KAIST-Google Partnership for AI Education and Research. The Graduate School of AI announced the two authors, chosen from among 20 applicants, who will develop the curricula next year. They will be provided 7,500 USD per subject. Professor Changho Suh from the School of Electrical Engineering and Professor Yong-Jin Yoon from the Department of Mechanical Engineering will use Google technologies such as TensorFlow, Google Cloud, and Android to create the curricula. Professor Suh’s “TensorFlow for Information Theory and Convex Optimization” will be used in graduate courses, and Professor Yoon’s “AI Convergence Project-Based Learning (PBL)” will be used for online courses. In Professor Yoon’s course, students will explore and define problems using AI and experience the process of developing AI-powered products through design thinking, which involves product design, production, and verification. Professor Suh’s course will discuss “information theory and convex optimization,” drawing on basic sciences and engineering as well as AI, machine learning, and deep learning.
2019.12.04
View 13200
AI to Determine When to Intervene with Your Driving
< Professor Uichin Lee (left) and PhD candidate Auk Kim > Can your AI agent judge when to talk to you while you are driving? According to a KAIST research team, their in-vehicle conversation service technology can judge when it is appropriate to contact you to ensure your safety. Professor Uichin Lee from the Department of Industrial and Systems Engineering at KAIST and his research team have developed AI technology that automatically detects safe moments for AI agents to provide conversation services to drivers. Their research focuses on solving the potential problems of distraction created by in-vehicle conversation services. If an AI agent talks to a driver at an inopportune moment, such as while making a turn, an accident becomes more likely. In-vehicle conversation services need to be convenient as well as safe, but the cognitive burden of multitasking negatively influences the quality of the service, and users tend to be more distracted during certain traffic conditions. To address this long-standing challenge of in-vehicle conversation services, the team introduced a composite cognitive model that considers both safe driving and auditory-verbal service performance, and applied a machine-learning model to all of the collected data. The combination of these individual measures makes it possible to determine the appropriate moments for conversation and the most appropriate types of conversational services. For instance, when delivering simple contextual information, such as a weather forecast, driver safety alone would be the most appropriate consideration. Meanwhile, when delivering information that requires a driver response, such as a “Yes” or “No,” the combination of driver safety and auditory-verbal performance should be considered. The research team developed a prototype of an in-vehicle conversation service based on a navigation app that can be used in real driving environments.
The app was also connected to the vehicle to collect in-vehicle OBD-II/CAN data, such as the steering wheel angle and brake pedal position, and mobility and environmental data such as the distance between successive cars and traffic flow. Using pseudo-conversation services, the research team collected a real-world driving dataset consisting of 1,388 interactions and sensor data from 29 drivers who interacted with AI conversational agents. Machine learning analysis based on the dataset demonstrated that the opportune moments for driver interruption could be correctly inferred with 87% accuracy. The safety enhancement technology developed by the team is expected to minimize driver distractions caused by in-vehicle conversation services. This technology can be directly applied to current in-vehicle systems that provide conversation services. It can also be extended and applied to the real-time detection of driver distraction problems caused by the use of a smartphone while driving. Professor Lee said, “In the near future, cars will proactively deliver various in-vehicle conversation services. This technology will certainly help vehicles interact with their drivers safely as it can fairly accurately determine when to provide conversation services using only basic sensor data generated by cars.” The researchers presented their findings at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp’19) in London, UK. This research was supported in part by Hyundai NGV and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT. (Figure: Visual description of the safety enhancement technology for in-vehicle conversation services)
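As an illustration of the shape of such a pipeline (a hypothetical sketch, not the team's actual code: the features, labeling rule, and data below are all invented for demonstration), a standard classifier can be trained on basic vehicle sensor readings to label each moment as opportune for interruption or not:

```python
# Hypothetical sketch of opportune-moment detection from vehicle sensors.
# All data here is synthetic; the real study used OBD-II/CAN and mobility
# data with ground truth collected from 29 drivers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Stand-ins for OBD-II/CAN signals: steering angle (deg), brake pedal
# position (0-1), speed (km/h), headway distance to the next car (m).
X = np.column_stack([
    rng.normal(0, 15, n),
    rng.uniform(0, 1, n),
    rng.uniform(0, 100, n),
    rng.uniform(5, 80, n),
])

# Toy ground truth: a moment counts as "opportune" when the wheel is
# near straight and the brake is mostly released.
y = ((np.abs(X[:, 0]) < 10) & (X[:, 1] < 0.3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The study's own analysis reached 87% accuracy on real driving data; the sketch only conveys the structure of the task, which is sensor features in, a binary "interruptible now" decision out.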
2019.11.13
View 16775
Ultrafast Quantum Motion in a Nanoscale Trap Detected
< Professor Heung-Sun Sim (left) and co-author Dr. Sungguen Ryu (right) > KAIST researchers have reported the detection of picosecond electron motion in a silicon transistor. The study presents a new protocol for measuring ultrafast electronic dynamics in a time-resolved fashion with picosecond resolution. The detection was made in collaboration with Nippon Telegraph and Telephone Corp. (NTT) in Japan and the National Physical Laboratory (NPL) in the UK, and is, to the best of the researchers’ knowledge, the first report of its kind. When an electron is captured in a nanoscale trap in a solid, its quantum mechanical wave function can exhibit spatial oscillation at sub-terahertz frequencies. Time-resolved detection of such picosecond dynamics of quantum waves is important, as it provides a way of understanding the quantum behavior of electrons in nano-electronics. It also applies to quantum information technologies such as ultrafast quantum-bit operations for quantum computing and high-sensitivity electromagnetic-field sensing. However, detecting picosecond dynamics has been a challenge, since the sub-terahertz scale is far beyond the bandwidth of the latest measurement tools. A KAIST team led by Professor Heung-Sun Sim developed a theory of ultrafast electron dynamics in a nanoscale trap and proposed a detection scheme that utilizes a quantum-mechanical resonant state formed beside the trap. The coupling between the electron dynamics and the resonant state is switched on and off within a picosecond, so that information on the dynamics is read out from the electric current generated while the coupling is on. NTT, together with NPL, realized the detection scheme and applied it to electron motion in a nanoscale trap formed in a silicon transistor. A single electron was captured in the trap by controlling electrostatic gates, and a resonant state was formed in the potential barrier of the trap.
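To get a feel for the scales involved (a back-of-the-envelope calculation, not part of the paper's analysis): the 250 GHz oscillation observed in this work has a period of about 4 ps, and by E = hf it corresponds to an energy splitting of roughly 1 meV, which is why picosecond-scale switching of the coupling is needed to resolve it.

```python
# Period and energy scale of a 250 GHz coherent oscillation (illustrative).
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C (i.e., J per eV)

f = 250e9                      # oscillation frequency, Hz
period_ps = 1e12 / f           # oscillation period in picoseconds
delta_E_meV = h * f / e * 1e3  # energy splitting E = h*f, in meV

print(f"period: {period_ps:.1f} ps")               # 4.0 ps
print(f"energy splitting: {delta_E_meV:.2f} meV")  # 1.03 meV
```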
The switching on and off of the coupling between the electron and the resonant state was achieved by aligning the resonance energy with the energy of the electron within a picosecond. An electric current flowing from the trap through the resonant state to an electrode was measured at a temperature of only a few kelvin, unveiling the spatial quantum-coherent oscillation of the electron inside the trap at a frequency of 250 GHz. Professor Sim said, “This work suggests a scheme for detecting picosecond electron motion on submicron scales by utilizing quantum resonance. It will be useful in the dynamical control of quantum mechanical electron waves for various purposes in nano-electronics, quantum sensing, and quantum information.” This work was published online in Nature Nanotechnology on November 4. It was partly supported by the National Research Foundation of Korea through the SRC Center for Quantum Coherence in Condensed Matter. For the NTT news release on this work, please visit https://www.ntt.co.jp/news2019/1911e/191105a.html
-Profile
Professor Heung-Sun Sim
Department of Physics, KAIST
Director, SRC Center for Quantum Coherence in Condensed Matter
https://qet.kaist.ac.kr
-Publication:
Gento Yamahata, Sungguen Ryu, Nathan Johnson, H.-S. Sim, Akira Fujiwara, and Masaya Kataoka. 2019. Picosecond coherent electron motion in a silicon single-electron source. Nature Nanotechnology (Online Publication). 6 pages. https://doi.org/10.1038/s41565-019-0563-2
2019.11.05
View 17437
Object Identification and Interaction with a Smartphone Knock
< Professor Sung-Ju Lee (far right) demonstrates 'Knocker' with his students > A KAIST team has introduced a new technology, “Knocker,” which identifies objects and executes actions just by knocking on them with a smartphone. Software powered by machine learning analyzes the sounds, vibrations, and other responses to a knock and carries out the user’s directions. What separates Knocker from existing technology is its sensor fusion of sound and motion. Previously, object identification relied either on computer vision technology with cameras or on hardware such as RFID (Radio Frequency Identification) tags. These solutions all have limitations. Computer vision requires users to take pictures of every item, and it does not work well in poor lighting. Using extra hardware leads to additional costs and labor burdens. Knocker, on the other hand, can identify objects even in dark environments with only a smartphone, without requiring any specialized hardware or a camera. Knocker utilizes the smartphone’s built-in sensors, such as the microphone, accelerometer, and gyroscope, to capture the unique set of responses generated when the smartphone is knocked against an object. Machine learning is used to analyze these responses and classify and identify objects. The research team under Professor Sung-Ju Lee from the School of Computing confirmed the applicability of Knocker using 23 everyday objects such as books, laptop computers, water bottles, and bicycles. In noisy environments such as a busy café or the side of a road, it achieved 83% identification accuracy. In a quiet indoor environment, the accuracy rose to 98%. The team believes Knocker will open a new paradigm of object interaction. For instance, by knocking on an empty water bottle, a smartphone can automatically order new water bottles from a merchant app. When integrated with IoT devices, knocking on a bed’s headboard before going to sleep could turn off the lights and set an alarm.
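The sensor-fusion idea can be sketched as follows (a toy illustration with synthetic signals, not the published pipeline or its feature set): extract a few features from each sensor's response to a knock, concatenate them into one vector, and feed it to an off-the-shelf classifier.

```python
# Toy Knocker-style sensor fusion: each "object" rings at its own
# characteristic frequency; audio and accelerometer features are
# concatenated and classified. Everything here is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def knock_features(audio, accel):
    """Concatenate simple statistics from both sensor signals."""
    feats = []
    for sig in (audio, accel):
        spectrum = np.abs(np.fft.rfft(sig))
        feats += [sig.std(), float(np.argmax(spectrum)), spectrum.max()]
    return np.array(feats)

def synth_knock(freq, n=256):
    """A decaying ring at a characteristic frequency plus noise."""
    t = np.arange(n) / n
    return np.sin(2 * np.pi * freq * t) * np.exp(-5 * t) + rng.normal(0, 0.05, n)

objects = {"bottle": 40, "book": 12, "laptop": 25}
X, y = [], []
for label, freq in objects.items():
    for _ in range(30):
        # audio and accelerometer "hear" related but different frequencies
        X.append(knock_features(synth_knock(freq), synth_knock(freq * 0.5)))
        y.append(label)

# train on half the knocks, test on the other half
clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
acc = float(np.mean(clf.predict(X[1::2]) == np.array(y[1::2])))
print(f"identification accuracy: {acc:.2f}")
```

The real system classifies responses from actual microphone, accelerometer, and gyroscope recordings; the sketch only shows why fusing independent sensor channels makes the classes easier to separate than any one channel alone.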
The team suggested and implemented 15 application cases in the paper, presented during the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2019) held in London last month. Professor Sung-Ju Lee said, “This new technology does not require any specialized sensor or hardware. It simply uses the built-in sensors on smartphones and takes advantage of the power of machine learning. It’s a software solution that everyday smartphone users could immediately benefit from.” He continued, “This technology enables users to conveniently interact with their favorite objects.” The research was supported in part by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea funded by the Ministry of Science and ICT and an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Ministry of Science and ICT. Figure: An example knock on a bottle. Knocker identifies the object by analyzing a unique set of responses from the knock, and automatically launches a proper application or service.
2019.10.02
View 27371
Chair Professor Jo-Won Lee Named Sixth President of National Nano Fab Center
< President Jo-Won Lee > Chair Professor Jo-Won Lee from Hanyang University was appointed as the sixth president of the National Nano Fab Center (NNFC). President Lee will serve a three-year term starting September 16. The NNFC is an affiliated institution of KAIST, established in 2002 to foster qualified manpower in the field of nanotechnology (NT) in Korea. The NNFC features cutting-edge NT-related research equipment and fabrication services, and provides students and researchers with quality education and training. The NNFC seeks to become a world-leading institute through extensive operations including the commercialization of NT research results and various multidisciplinary projects. President Lee received his BS degree from Hanyang University and his MS and PhD degrees in metals science from the Pennsylvania State University. He taught nano-convergence science at his alma mater, Hanyang University, while serving as the director of the National Program for Tera-level Nanodevices. President Lee also guided the governmental planning committee for the 10-year Korea Nanotechnology Initiative as its secretary general. President Lee said, “The NNFC has been striving to develop Korea into the world’s strongest nation in nanotechnology. All of the members of the NNFC will continue giving our best effort for the improvement of our nation’s nanotechnology.” (END)
2019.09.17
View 3852
Accurate Detection of Low-Level Somatic Mutation in Intractable Epilepsy
KAIST medical scientists have developed an advanced method for reliably detecting low-level somatic mutations in patients with intractable epilepsy. Their study showed that deep sequencing replicates of major focal epilepsy genes accurately and efficiently identified low-level somatic mutations in intractable epilepsy. According to the study, their diagnostic method could increase the detection accuracy up to 100%, compared with about 30% for conventional sequencing analysis. This work was published in Acta Neuropathologica. Epilepsy is a neurological disorder common in children. Approximately one third of child patients are diagnosed with intractable epilepsy despite adequate anti-epileptic medication. Somatic mutations in mTOR pathway genes, SLC35A2, and BRAF are the major genetic causes of intractable epilepsies. A clinical trial of an mTOR inhibitor targeting Focal Cortical Dysplasia type II (FCDII) is underway at Severance Hospital, a collaborating institution in Seoul, Korea. However, it is difficult to detect the somatic mutations causing intractable epilepsy because their mutational burden is less than 5%, which is similar to the level of sequencing artifacts. In the clinical field, this has remained a long-standing challenge for the genetic diagnosis of somatic mutations in intractable epilepsy. Professor Jeong Ho Lee’s team at the Graduate School of Medical Science and Engineering analyzed paired brain and peripheral tissues from 232 intractable epilepsy patients with various brain pathologies at Severance Hospital, using deep sequencing of the major focal epilepsy genes. They narrowed the targets down to eight major focal epilepsy genes, eliminating almost all of the false positive calls using deep targeted sequencing.
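The core numerical difficulty can be illustrated with a toy calculation (the thresholds and read counts below are invented, not the study's actual pipeline): at a variant allele frequency (VAF) below 5%, a single sequencing run cannot distinguish a true mosaic variant from an artifact, but requiring the variant to recur across independent deep-sequencing replicates filters most artifacts out.

```python
# Conceptual sketch: replicate concordance separates a reproducible
# low-level somatic variant from replicate-specific sequencing noise.
def vaf(alt_reads, total_reads):
    """Variant allele frequency at a site."""
    return alt_reads / total_reads

def call_somatic(replicate_counts, min_vaf=0.01, min_replicates=2):
    """Call a variant only if its VAF clears the threshold in enough
    independent sequencing replicates."""
    hits = sum(1 for alt, tot in replicate_counts if vaf(alt, tot) >= min_vaf)
    return hits >= min_replicates

# A true ~2% mosaic variant seen consistently in 3 replicates of ~10,000x:
true_variant = [(210, 10000), (195, 9800), (188, 10100)]
# An artifact appearing in only one replicate:
artifact = [(150, 10000), (3, 9800), (5, 10100)]

print(call_somatic(true_variant))  # True
print(call_somatic(artifact))      # False
```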
As a result, the advanced method robustly increased the accuracy and enabled them to detect low-level somatic mutations in unmatched Formalin Fixed Paraffin Embedded (FFPE) brain samples, the most clinically relevant samples. Professor Lee conducted this study in collaboration with Professor Dong Suk Kim and Hoon-Chul Kang at Severance Hospital of Yonsei University. He said, “This advanced method of genetic analysis will improve overall patient care by providing more comprehensive genetic counseling and informing decisions on alternative treatments.” Professor Lee has investigated low-level somatic mutations arising in the brain for a decade. He is developing innovative diagnostics and therapeutics for untreatable brain disorders including intractable epilepsy and glioblastoma at a tech-startup called SoVarGen. “All of the technologies we used during the research were transferred to the company. This research gave us very good momentum to reach the next phase of our startup,” he remarked. The work was supported by grants from the Suh Kyungbae Foundation, a National Research Foundation of Korea grant funded by the Ministry of Science and ICT, the Korean Health Technology R&D Project from the Ministry of Health & Welfare, and the Netherlands Organization for Health Research and Development. (Figure: Landscape of somatic and germline mutations identified in intractable epilepsy patients. a Signaling pathways for all of the mutated genes identified in this study. Bold: somatic mutation, Regular: germline mutation. b The distribution of variant allelic frequencies (VAFs) of identified somatic mutations. c The detecting rate and types of identified mutations according to histopathology. Yellow: somatic mutations, green: two-hit mutations, grey: germline mutations.)
2019.08.14
View 28414
FIRIC-EU JRC Joint Workshop on Smart Specialization
The Fourth Industrial Revolution Intelligence Center (FIRIC) at KAIST discussed ‘Smart Specialization’ for regional innovation and economic growth in the wake of the Fourth Industrial Revolution during a workshop with the EU Joint Research Center (EU-JRC) in Seville, Spain last week. The two sides also agreed to sign an MOU to expand mutual collaboration. KAIST’s FIRIC was founded in cooperation with the World Economic Forum in July 2017 to carry out policy research promoting science and technology-based inclusive growth and innovation and to lead related global efforts. The EU-JRC is committed to developing cohesive policies that aim to narrow regional gaps within the European Union. Founded in 1958 in Brussels, the EU-JRC has long been in charge of EU strategies for regional innovation based on emerging technologies. The workshop also covered issues related to public-private partnerships and innovation clusters from the perspectives of the EU and Asia, such as the global value chain and the implementation of industrial cluster policies amid the changes in the industrial ecosystem driven by digitalization, automation, and robotics during the Fourth Industrial Revolution. In addition, the session included discussions on inclusive growth and job market changes in the era of the Fourth Industrial Revolution, addressing how Smart Specialization and the outcomes of the 4IR will shift the paradigm of current job and technology capabilities, as well as employment issues in many relevant industries. In particular, case studies and the related policies and regulatory trends regarding the potential risks and ethical issues of artificial intelligence were introduced.
Regarding financial services that utilize blockchain technologies and the establishment of public sector governance for such technologies, the participating experts noted difficulties in the diffusion of blockchain-based local currencies and public services, which call for a sophisticated analytical and practical framework for innovative and transparent governance. Dr. Mark Boden, the Team Leader of the EU-JRC, introduced the EU’s initiatives to promote Smart Specialization, such as its policy process, governance design, vision sharing, and priority setting, with particular emphasis on targeted support for Smart Specialization in lagging regions. Professor So Young Kim, dean of the Graduate School of Science and Technology Policy and FIRIC’s Deputy Director, said, “KAIST’s global role regarding the Fourth Industrial Revolution will be expanded in the process of exploring and developing innovative models of technology-policy governance while working jointly with the EU-JRC.”
2019.08.02
View 7375
Flexible User Interface Distribution for Ubiquitous Multi-Device Interaction
< Research Group of Professor Insik Shin (center) > KAIST researchers have developed mobile software platform technology that allows a mobile application (app) to be executed simultaneously and dynamically on multiple smart devices. Its high flexibility and broad applicability can help accelerate a shift from the current single-device paradigm to a multi-device one, enabling users to utilize mobile apps in ways previously unthinkable. Recent trends in mobile and IoT technologies in this era of 5G high-speed wireless communication have been marked by the emergence of new display hardware and smart devices such as dual screens, foldable screens, smart watches, smart TVs, and smart cars. However, the current mobile app ecosystem is still confined to the conventional single-device paradigm, in which users can employ only one screen on one device at a time. Due to this limitation, the real potential of multi-device environments has not been fully explored. A KAIST research team led by Professor Insik Shin from the School of Computing, in collaboration with Professor Steve Ko’s group at the State University of New York at Buffalo, has developed mobile software platform technology named FLUID that can flexibly distribute the user interfaces (UIs) of an app to a number of other devices in real time without requiring any app modifications. The technology provides single-device virtualization and ensures that the interactions between UI elements distributed across multiple devices remain intact. This flexible multimodal interaction enables diverse ubiquitous user experiences (UX), such as with live video streaming and chatting apps including YouTube, LiveMe, and AfreecaTV. FLUID can ensure that the video is not obscured by the chat window by distributing and displaying them separately on different devices, which lets users enjoy the chat function while watching the video at the same time.
In addition, the UI for destination input on a navigation app can be migrated to the passenger’s device with the help of FLUID, so that the destination can be entered easily and safely by the passenger while the driver is at the wheel. FLUID can also support 5G multi-view apps, the latest service that allows sports or games to be viewed from various angles on a single device. With FLUID, the user can watch an event simultaneously from different viewpoints on multiple devices without switching between viewpoints on a single screen. PhD candidate Sangeun Oh, the first author, and his team implemented a prototype of FLUID on Android, the leading open-source mobile operating system, and confirmed that it can successfully deliver the new UX to 20 existing legacy apps. “This new technology can be applied to next-generation products from South Korean companies such as LG’s dual-screen phone and Samsung’s foldable phone, and is expected to bolster their competitiveness by giving them a head start in the global market,” said Professor Shin. This study will be presented at the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019), October 21 through 25 in Los Cabos, Mexico. The research was supported by the National Science Foundation (NSF) (CNS-1350883 (CAREER) and CNS-1618531).
Figure 1. Live video streaming and chatting app scenario
Figure 2. Navigation app scenario
Figure 3. 5G multi-view app scenario
Publication: Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, Steven Y. Ko, and Insik Shin. 2019. FLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction. To be published in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2019). ACM, New York, NY, USA. Article Number and DOI Name TBD.
Video Material: https://youtu.be/lGO4GwH4enA
Profile: Prof. Insik Shin, MS, PhD
ishin@kaist.ac.kr
https://cps.kaist.ac.kr/~ishin
Cyber-Physical Systems (CPS) Lab, School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon 34141, Korea
Profile: Sangeun Oh, PhD Candidate
ohsang1213@kaist.ac.kr
https://cps.kaist.ac.kr/
Cyber-Physical Systems (CPS) Lab, School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon 34141, Korea
Profile: Prof. Steve Ko, PhD
stevko@buffalo.edu
https://nsr.cse.buffalo.edu/?page_id=272
Associate Professor, Networked Systems Research Group
Department of Computer Science and Engineering
State University of New York at Buffalo
Buffalo 14260, USA
(END)
2019.07.20
View 39334
Deep Learning-Powered 'DeepEC' Helps Accurately Understand Enzyme Functions
(Figure: Overall scheme of DeepEC) A deep learning-powered computational framework, ‘DeepEC,’ allows the high-quality and high-throughput prediction of enzyme commission numbers, which is essential for accurately understanding enzyme functions. A team consisting of Dr. Jae Yong Ryu, Professor Hyun Uk Kim, and Distinguished Professor Sang Yup Lee at KAIST reported the deep learning-powered computational framework, which predicts enzyme commission (EC) numbers with high precision in a high-throughput manner. DeepEC takes a protein sequence as input and accurately predicts EC numbers as output. Enzymes are proteins that catalyze biochemical reactions, and EC numbers, which consist of four levels (i.e., a.b.c.d), indicate the biochemical reaction an enzyme carries out. Thus, the identification of EC numbers is critical for accurately understanding enzyme functions and metabolism. EC numbers are usually assigned to a protein sequence encoding an enzyme during a genome annotation procedure. Because of the importance of EC numbers, several EC number prediction tools have been developed, but they have room for improvement with respect to computation time, precision, coverage, and the total size of the files needed for prediction. DeepEC uses three convolutional neural networks (CNNs) as the major engine for predicting EC numbers, and also implements homology analysis if the three CNNs do not produce a reliable EC number for a given protein sequence. DeepEC was developed using a gold standard dataset covering 1,388,606 protein sequences and 4,669 EC numbers. In benchmarking studies against five other representative EC number prediction tools, DeepEC made the most precise and fastest predictions. DeepEC also required the smallest disk space for implementation, which makes it an ideal third-party software component.
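As a dependency-light sketch of the kind of computation a sequence CNN performs (DeepEC's actual trained networks and layer configuration are not reproduced here), one can one-hot encode a protein sequence over the 20 amino acids, slide learned filters along it to score local motifs, and max-pool each filter's response; a trained model would feed these pooled features into a classifier over EC numbers.

```python
# Toy 1D convolution + global max pooling over a one-hot protein sequence,
# the core operation a sequence CNN uses to detect local motifs.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    """Encode a protein sequence as a (length, 20) one-hot matrix."""
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    mat = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        mat[pos, idx[aa]] = 1.0
    return mat

def conv1d_maxpool(x, filters):
    """Valid 1D convolution along the sequence axis, then global max pooling.
    x: (L, 20); filters: (n_filters, k, 20); returns (n_filters,)."""
    n_filters, k, _ = filters.shape
    L = x.shape[0]
    out = np.empty((n_filters, L - k + 1))
    for i in range(L - k + 1):
        # score of each filter against the window starting at position i
        out[:, i] = np.tensordot(filters, x[i:i + k], axes=([1, 2], [0, 1]))
    return out.max(axis=1)

rng = np.random.default_rng(0)
seq = "MKVLAAGICW"  # toy 10-residue sequence
feats = conv1d_maxpool(one_hot(seq), rng.normal(size=(8, 4, 20)))
print(feats.shape)  # one pooled motif score per filter
```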
Furthermore, DeepEC was the most sensitive in detecting enzymatic function loss caused by mutations in the domains and binding site residues of protein sequences; in this comparative analysis, all of the domain or binding site residues were substituted with L-alanine in order to remove the protein function, a technique known as L-alanine scanning. This study was published online in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) on June 20, 2019, entitled “Deep learning enables high-quality and high-throughput prediction of enzyme commission numbers.” “DeepEC can be used as an independent tool and also as a third-party software component in combination with other computational platforms that examine metabolic reactions. DeepEC is freely available online,” said Professor Kim. Distinguished Professor Lee said, “With DeepEC, it has become possible to process ever-increasing volumes of protein sequence data more efficiently and more accurately.” This work was supported by the Technology Development Program to Solve Climate Changes on Systems Metabolic Engineering for Biorefineries from the Ministry of Science and ICT through the National Research Foundation of Korea. It was also funded by the Bio & Medical Technology Development Program of the National Research Foundation of Korea, funded by the Korean government (the Ministry of Science and ICT).
Profile:
-Professor Hyun Uk Kim (ehukim@kaist.ac.kr)
https://sites.google.com/view/ehukim
Department of Chemical and Biomolecular Engineering
-Distinguished Professor Sang Yup Lee (leesy@kaist.ac.kr)
http://mbel.kaist.ac.kr
Department of Chemical and Biomolecular Engineering
2019.07.09
View 36741