Using light to throw and catch atoms to open up a new chapter for quantum computing
The technology to move and arrange atoms, the most basic components of a quantum computer, is central to Rydberg quantum computing research. To place atoms at desired locations, however, each atom must be captured and transported one by one using a highly focused laser beam, commonly referred to as an optical tweezer, during which the quantum information the atoms carry is likely to change.

KAIST (President Kwang Hyung Lee) announced on the 27th that a research team led by Professor Jaewook Ahn of the Department of Physics has developed a technique to throw and catch rubidium atoms one by one using laser beams. The method minimizes the time the optical tweezers are in contact with the atoms, which is when the quantum information the atoms carry may change.

The team exploited the fact that rubidium atoms, cooled to a very low temperature of 40 μK just above absolute zero, respond very sensitively to the electromagnetic force applied by light at the focal point of the optical tweezers. The researchers accelerated the laser focus of one optical tweezer to give an optical kick to an atom, sending it toward a target, then caught the flying atom with another optical tweezer to stop it. The atoms flew at a speed of 65 cm/s and traveled up to 4.2 μm.

Compared to the existing technique of guiding atoms with optical tweezers, throwing and catching atoms eliminates the need to calculate a transport path for the tweezers and makes it easier to fix defects in an atomic arrangement. As a result, it is effective for generating and maintaining large atomic arrays, and when applied to flying atom qubits, it could be used to study new and more powerful quantum computing methods that presuppose structural changes in quantum arrays.
"This technology will be used to develop larger and more powerful Rydberg quantum computers," says Professor Jaewook Ahn. "In a Rydberg quantum computer," he continues, "atoms are arranged to store quantum information and interact with neighboring atoms through electromagnetic forces to perform quantum computing. Throwing an atom away for quick reconstruction of the quantum array can be an effective way to fix an error in a quantum computer that requires the removal or replacement of an atom."

The research, conducted by doctoral students Hansub Hwang and Andrew Byun of the Department of Physics at KAIST and Sylvain de Léséleuc, a researcher at the National Institutes of Natural Sciences in Japan, was published in the international journal Optica on March 9th. (Paper title: Optical tweezers throw and catch single atoms). This research was carried out with the support of the Samsung Science & Technology Foundation.

<Figure 1> A schematic diagram of the atom throwing and catching technique. The optical tweezer on the left kicks the atom onto a trajectory so that the tweezer on the right can catch and stop it.
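Taken at face value, the reported figures imply a flight time of only a few microseconds. A quick back-of-envelope check (an illustrative calculation only, assuming constant velocity over the whole trajectory):

```python
# Back-of-envelope check of the throw-and-catch figures reported above.
# Assumes the atom travels at a constant 65 cm/s over the full 4.2 um
# trajectory (an idealization; the real atom is kicked, then decelerated).

speed = 0.65        # m/s (65 cm/s)
distance = 4.2e-6   # m (4.2 micrometers)

flight_time = distance / speed
print(f"Flight time: {flight_time * 1e6:.1f} microseconds")
```

This is only meant to convey the scale: the tweezers touch the atom briefly at each end, and the free flight in between lasts on the order of microseconds.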
Neuromorphic Memory Device Simulates Neurons and Synapses
Simultaneous emulation of neuronal and synaptic properties promotes the development of brain-like artificial intelligence

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current complementary metal-oxide semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses remains a challenge.

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms by introducing neuron-synapse interactions in a single memory cell, rather than taking the conventional approach of electrically connecting separate artificial neuronal and synaptic devices. The artificial synaptic devices studied previously were often used, much like commercial graphics cards, to accelerate parallel computations, which differs clearly from the operational mechanisms of the human brain.

The research team implemented synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency. The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses.
The functions and structures of neurons and synapses can change flexibly in response to external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist, using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device serves as the volatile memory and a phase-change memory device as the non-volatile memory. The two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in a single neuromorphic memory cell.

Professor Keon Jae Lee explained, "Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of forgotten information by implementing a positive feedback effect between neurons and synapses."

This result, entitled "Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse," was published in the May 19, 2022 issue of Nature Communications.

-Publication: Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im, and Keon Jae Lee (2022) "Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse," Nature Communications, May 19, 2022 (DOI: 10.1038/s41467-022-30432-2)

-Profile: Professor Keon Jae Lee (http://fand.kaist.ac.kr), Department of Materials Science and Engineering, KAIST
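As a software caricature of the idea (not the device physics), one can model such a unit cell as a leaky integrate-and-fire "neuron" whose volatile potential resets when it fires, paired with a non-volatile "synaptic" weight that potentiates each time the cell fires, so repeated stimulation makes firing progressively easier. All class names and parameters below are invented for illustration:

```python
class UnitCell:
    """Toy analogy of a neuron+synapse unit cell: a volatile membrane
    potential (threshold-switch-like, lost on reset) plus a non-volatile
    weight (phase-change-like) that potentiates each time the cell fires."""

    def __init__(self, threshold=1.0, leak=0.9, learn_rate=0.05):
        self.v = 0.0                  # volatile "membrane potential"
        self.w = 0.5                  # non-volatile "synaptic weight"
        self.threshold = threshold
        self.leak = leak
        self.learn_rate = learn_rate

    def step(self, spike_in):
        self.v = self.v * self.leak + self.w * spike_in   # integrate + leak
        if self.v >= self.threshold:                      # fire
            self.v = 0.0                                  # volatile reset
            self.w = min(1.0, self.w + self.learn_rate)   # long-term potentiation
            return 1
        return 0

cell = UnitCell()
out = [cell.step(1) for _ in range(20)]
print(sum(out), "spikes; final weight", round(cell.w, 2))
```

Because the weight grows with every firing event, the cell fires more readily as stimulation continues, loosely echoing the positive feedback between neuron and synapse described above.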
LightPC Presents a Resilient System Using Only Non-Volatile Memory
Lightweight Persistence Centric System (LightPC) ensures both data and execution persistence for energy-efficient full system persistence

A KAIST research team has developed hardware and software technology that ensures both data and execution persistence. The Lightweight Persistence Centric System (LightPC) makes systems resilient against power failures by using only non-volatile memory as the main memory.

"We mounted non-volatile memory on a system board prototype and created an operating system to verify the effectiveness of LightPC," said Professor Myoungsoo Jung. The team confirmed that LightPC preserved its execution state while powering up and down in the middle of execution, and showed up to eight times more memory capacity, 4.3 times faster application execution, and 73% lower power consumption compared to traditional systems. Professor Jung said that LightPC can be utilized in a variety of fields, such as data centers and high-performance computing, to provide large-capacity memory, high performance, low power consumption, and service reliability.

In general, power failures on legacy systems lead to the loss of data stored in DRAM-based main memory. Unlike volatile memory such as DRAM, non-volatile memory retains its data without power. Although non-volatile memory offers lower power consumption and larger capacity than DRAM, it is typically relegated to secondary storage because of its lower write performance, and is therefore often used alongside DRAM. However, modern systems employing non-volatile memory-based main memory experience unexpected performance degradation due to the complicated memory microarchitecture.

To achieve both data and execution persistence on legacy systems, data must be transferred from the volatile memory to the non-volatile memory. Checkpointing is one possible solution: it periodically transfers the data in preparation for a sudden power failure.
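The checkpointing approach can be hedged as a minimal sketch: periodically copy the volatile in-memory state to durable storage so that, after a power failure, execution restarts from the last snapshot. This is illustrative Python, not the LightPC mechanism; LightPC's point is precisely that this copy-and-recover step becomes unnecessary when main memory itself is non-volatile:

```python
import os
import pickle
import tempfile

# Minimal illustration of periodic checkpointing: every N iterations the
# volatile state is serialized to durable storage, so a crash loses at most
# N iterations of work, at the cost of extra time and power for each copy.

CHECKPOINT_EVERY = 100
ckpt_path = os.path.join(tempfile.gettempdir(), "state.ckpt")

state = {"iteration": 0, "accumulator": 0}

def checkpoint(state, path):
    with open(path, "wb") as f:        # copy DRAM state -> durable storage
        pickle.dump(state, f)

def recover(path):
    with open(path, "rb") as f:        # after power loss: reload last snapshot
        return pickle.load(f)

for i in range(1, 501):
    state["iteration"] = i
    state["accumulator"] += i
    if i % CHECKPOINT_EVERY == 0:
        checkpoint(state, ckpt_path)

restored = recover(ckpt_path)          # simulate a restart after failure
print("resumes from iteration", restored["iteration"])
```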
While this technology is essential for ensuring high mobility and reliability for users, checkpointing also has serious drawbacks: it takes additional time and power to move data, and it requires a data recovery process as well as a system restart.

To address these issues, the research team developed a processor and memory controller that raise the performance of non-volatile-memory-only main memory. LightPC matches the performance of DRAM by minimizing the internal volatile memory components of the non-volatile memory, exposing the non-volatile memory (PRAM) media to the host, and increasing parallelism to service on-the-fly requests as soon as possible.

The team also presented operating system technology that quickly makes the execution states of running processes persistent without a checkpointing process. The operating system prevents all modifications to execution states and data by keeping all program executions idle before transferring data, in order to support consistency within a period much shorter than the standard power hold-up time of about 16 milliseconds. When power is recovered, the computer almost immediately revives itself and re-executes all the offline processes without the need for a boot process.

The researchers will present their work (LightPC: Hardware and Software Co-Design for Energy-Efficient Full System Persistence) at the International Symposium on Computer Architecture (ISCA) 2022 in New York in June. More information is available at the CAMELab website (http://camelab.org).

-Profile: Professor Myoungsoo Jung, Computer Architecture and Memory Systems Laboratory (CAMEL) (http://camelab.org), School of Electrical Engineering, KAIST
CXL-Based Memory Disaggregation Technology Opens Up a New Direction for Big Data Solution Frameworks
A KAIST team’s compute express link (CXL) solution provides new insights on memory disaggregation and ensures direct access and high-performance capabilities

A team from the Computer Architecture and Memory Systems Laboratory (CAMEL) at KAIST presented a new compute express link (CXL) solution whose directly accessible, high-performance memory disaggregation opens new directions for big data memory processing. Professor Myoungsoo Jung said the team’s technology significantly improves performance compared to existing remote direct memory access (RDMA)-based memory disaggregation.

CXL is a new dynamic multi-protocol, built on peripheral component interconnect express (PCIe), for efficiently utilizing memory devices and accelerators. Many enterprise data centers and memory vendors are paying attention to it as the next-generation multi-protocol for the era of big data.

Emerging big data applications such as machine learning, graph analytics, and in-memory databases require large memory capacities. However, scaling out memory capacity via a conventional memory interface like double data rate (DDR) is limited by the number of central processing units (CPUs) and memory controllers. Therefore, memory disaggregation, which allows a host to connect to another host’s memory or to standalone memory nodes, has emerged.

RDMA is a way for a host to directly access another host’s memory via InfiniBand, the network protocol commonly used in data centers, and most existing memory disaggregation technologies employ RDMA to obtain a large memory capacity: a host shares another host’s memory by transferring data between local and remote memory. Although RDMA-based memory disaggregation provides a large memory capacity to a host, two critical problems remain. First, scaling out the memory still requires adding extra CPUs, since passive memory such as dynamic random-access memory (DRAM) cannot operate by itself and must be controlled by a CPU.
Second, the redundant data copies and software fabric interventions of RDMA-based memory disaggregation cause long access latencies: remote memory access in RDMA-based memory disaggregation is multiple orders of magnitude slower than local memory access.

To address these issues, Professor Jung’s team developed a CXL-based memory disaggregation framework, including CXL-enabled customized CPUs, CXL devices, CXL switches, and CXL-aware operating system modules. The team’s CXL device is a purely passive, directly accessible memory node that contains multiple DRAM dual inline memory modules (DIMMs) and a CXL memory controller. Since the CXL memory controller manages the memory in the CXL device, a host can utilize the memory node without processor or software intervention. The team’s CXL switch scales out a host’s memory capacity by hierarchically connecting multiple CXL devices, allowing hundreds of devices or more. Atop the switches and devices, the team’s CXL-enabled operating system removes the redundant data copies and protocol conversions exhibited by conventional RDMA, which significantly decreases access latency to the memory nodes.

In a test loading 64B (cacheline) data from memory pooling devices, CXL-based memory disaggregation showed 8.2 times higher data load performance than RDMA-based memory disaggregation, and performance similar to local DRAM. In the team’s evaluations on big data benchmarks such as a machine learning-based test, the CXL-based memory disaggregation technology also showed up to 3.7 times higher performance than prior RDMA-based memory disaggregation technologies.

“Escaping from conventional RDMA-based memory disaggregation, our CXL-based memory disaggregation framework can provide high scalability and performance for diverse datacenters and cloud service infrastructures,” said Professor Jung.
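The latency argument can be made concrete with a toy cost model: an RDMA-style access pays for software stack traversal, protocol conversion, and redundant copies, while a CXL-style access is close to a single memory transaction. All numbers below are invented placeholders for illustration, not measurements from the study:

```python
# Toy cost model contrasting the two access paths described above.
# Every latency figure here is a hypothetical placeholder.

def rdma_access_ns(payload_bytes):
    software_stack = 1500        # syscall + RDMA verbs handling (assumed)
    protocol_conv = 500          # network protocol conversion (assumed)
    copies = 2 * payload_bytes   # redundant local<->remote data copies (assumed)
    return software_stack + protocol_conv + copies

def cxl_access_ns(payload_bytes):
    # Direct load/store to a CXL memory node: no software path, no copy.
    return 300 + payload_bytes // 8   # fixed link latency + transfer (assumed)

cacheline = 64
print("RDMA :", rdma_access_ns(cacheline), "ns")
print("CXL  :", cxl_access_ns(cacheline), "ns")
```

The point of the model is structural, not numerical: the RDMA path carries fixed software overheads on every access, while the CXL path removes them, which is why the gap is largest for small, cacheline-sized loads.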
He went on to stress, “Our CXL-based memory disaggregation research will bring about a new paradigm for memory solutions that will lead the era of big data.”

-Profile: Professor Myoungsoo Jung, Computer Architecture and Memory Systems Laboratory (CAMEL) (http://camelab.org), School of Electrical Engineering, KAIST
Study of T Cells from COVID-19 Convalescents Guides Vaccine Strategies
Researchers confirm that most COVID-19 patients in their convalescent stage carry stem cell-like memory T cells for months

A KAIST immunology research team found that most convalescent patients of COVID-19 develop and maintain T cell memory for over 10 months regardless of the severity of their symptoms. In addition, memory T cells proliferate rapidly after encountering their cognate antigen and accomplish their multifunctional roles. This study provides new insights for effective vaccine strategies against COVID-19, considering the self-renewal capacity and multipotency of memory T cells.

COVID-19 is a disease caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) infection. When patients recover from COVID-19, SARS-CoV-2-specific adaptive immune memory develops. The adaptive immune system consists of two principal components: B cells, which produce antibodies, and T cells, which eliminate infected cells. The current results suggest that the protective immune function of memory T cells will be implemented upon re-exposure to SARS-CoV-2.

Recently, the role of memory T cells against SARS-CoV-2 has been gaining attention as neutralizing antibodies wane after recovery. Although memory T cells cannot prevent the infection itself, they play a central role in preventing the severe progression of COVID-19. However, the longevity and functional maintenance of SARS-CoV-2-specific memory T cells remained unknown.

Professor Eui-Cheol Shin and his collaborators investigated the characteristics and functions of stem cell-like memory T cells, which are expected to play a crucial role in long-term immunity. The researchers analyzed the generation of stem cell-like memory T cells and of multi-cytokine-producing polyfunctional memory T cells, using cutting-edge immunological techniques.
This research is significant in that revealing the long-term immunity of COVID-19 convalescent patients provides an indicator for the long-term persistence of T cell immunity, one of the main goals of future vaccine development, as well as for evaluating the long-term efficacy of currently available COVID-19 vaccines. The research team is presently conducting a follow-up study to identify the memory T cell formation and functional characteristics of those who received COVID-19 vaccines, and to understand the immunological effect of the vaccines by comparing the characteristics of memory T cells from vaccinated individuals with those of convalescent patients.

PhD candidate Jae Hyung Jung and Dr. Min-Seok Rha, a clinical fellow at Yonsei Severance Hospital, who led the study together, explained, “Our analysis will enhance the understanding of COVID-19 immunity and establish an index for COVID-19 vaccine-induced memory T cells.” “This study is the world’s longest longitudinal study on the differentiation and functions of memory T cells among COVID-19 convalescent patients. The research on the temporal dynamics of immune responses has laid the groundwork for building a strategy for next-generation vaccine development,” Professor Shin added.

This work was supported by the Samsung Science and Technology Foundation and KAIST, and was published in Nature Communications on June 30.

-Publication: Jung, J.H., Rha, M.-S., Sa, M. et al. “SARS-CoV-2-specific T cell memory is sustained in COVID-19 convalescent patients for 10 months with successful development of stem cell-like memory T cells,” Nature Communications 12, 4043 (2021). https://doi.org/10.1038/s41467-021-24377-1

-Profile: Professor Eui-Cheol Shin, Laboratory of Immunology & Infectious Diseases (http://liid.kaist.ac.kr/), Graduate School of Medical Science and Engineering, KAIST
T-GPS Processes a Graph with Trillion Edges on a Single Computer
Trillion-scale graph processing simulation on a single computer presents a new concept of graph processing

A KAIST research team has developed a new technology that makes it possible to process a large-scale graph algorithm without storing the graph in main memory or on disks. Named T-GPS (Trillion-scale Graph Processing Simulation) by its developer, Professor Min-Soo Kim from the School of Computing at KAIST, it can process a graph with one trillion edges using a single computer.

Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever. Nowadays, many industrial applications require a graph algorithm to process a large-scale graph (e.g., one trillion edges). When developing and testing graph algorithms for such large-scale graphs, a synthetic graph is usually used instead of a real graph, because sharing and utilizing large-scale real graphs is very limited: they are proprietary or practically impossible to collect.

Conventionally, developing and testing graph algorithms follows a two-step approach: generating and storing a graph, then executing an algorithm on the graph using a graph processing engine. The first step generates a synthetic graph and stores it on disks. The synthetic graph is usually produced either by parameter-based generation methods or by graph upscaling methods. The former extracts a small number of parameters that capture some properties of a given real graph and generates the synthetic graph from those parameters. The latter upscales a given real graph to a larger one, preserving the properties of the original as much as possible.
The second step loads the stored graph into the main memory of a graph processing engine such as Apache GraphX and executes the given graph algorithm on the engine. Since the graph is too large to fit in the main memory of a single computer, the engine typically runs on a cluster of several tens or hundreds of computers, so the cost of the conventional two-step approach is very high.

T-GPS dispenses with this two-step approach. It does not generate and store a large-scale synthetic graph at all. Instead, it loads only the initial small real graph into main memory, then processes the graph algorithm on the small real graph as if the large-scale synthetic graph that would be generated from it actually existed in main memory. After the algorithm finishes, T-GPS returns exactly the same result as the conventional two-step approach.

The key idea of T-GPS is to generate, on the fly, only the part of the synthetic graph that the algorithm needs to access, and to modify the graph processing engine to treat the part generated on the fly as if it belonged to a synthetic graph that had actually been generated.

The research team showed that T-GPS can process a graph of one trillion edges using a single computer, while the conventional two-step approach can only process a graph of one billion edges using a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by 10,000 times in terms of computing resources. The team also showed that T-GPS processes an algorithm up to 43 times faster than the conventional approach, because T-GPS has no network communication overhead, while the conventional approach incurs a lot of communication overhead among computers.
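The on-the-fly idea can be sketched as follows: define the synthetic graph's adjacency deterministically from a seed, and generate a vertex's neighbor list only when the algorithm actually visits it, so the full graph never exists in memory. This hypothetical Python sketch runs a bounded BFS over such an implicit graph; the hash-based generator is merely a stand-in for the paper's upscaling model:

```python
import hashlib
from collections import deque

N = 1_000_000          # nominal synthetic graph size (never materialized)
DEGREE = 4             # neighbors generated per vertex

def neighbors(v, seed=42):
    """Deterministically generate v's adjacency list on the fly.
    The same vertex always yields the same neighbors, so the algorithm
    sees a fixed graph even though it is never stored as a whole."""
    out = []
    for k in range(DEGREE):
        h = hashlib.sha256(f"{seed}:{v}:{k}".encode()).hexdigest()
        out.append(int(h, 16) % N)
    return out

def bfs_reachable(source, limit=10_000):
    """BFS touching at most ~`limit` vertices; edges are produced lazily
    by neighbors() only when a vertex is actually dequeued."""
    seen, queue = {source}, deque([source])
    while queue and len(seen) < limit:
        v = queue.popleft()
        for u in neighbors(v):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return len(seen)

print("vertices reached:", bfs_reachable(0))
```

The memory footprint is proportional to the vertices the algorithm actually touches, not to the size of the nominal graph, which is the essence of simulating a trillion-edge graph on one machine.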
Professor Kim believes that this work will have a large impact on the IT industry, where almost every area utilizes graph data, adding, “T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm.” This work was supported by the National Research Foundation (NRF) of Korea and the Institute of Information & Communications Technology Planning & Evaluation (IITP).

Publication: Park, H., et al. (2021) “Trillion-scale Graph Processing Simulation based on Top-Down Graph Upscaling,” presented at IEEE ICDE 2021 (April 19-22, 2021, Chania, Greece)

Profile: Associate Professor Min-Soo Kim, email@example.com, http://infolab.kaist.ac.kr, School of Computing, KAIST
Professor Sue-Hyun Lee Listed Among WEF 2020 Young Scientists
Professor Sue-Hyun Lee from the Department of Bio and Brain Engineering joined the World Economic Forum (WEF)’s Young Scientists Community on May 26. The class of 2020 comprises 25 leading researchers from 14 countries across the world who are at the forefront of scientific problem-solving and social change. Professor Lee was the only Korean on this year’s roster. The WEF created the Young Scientists Community in 2008 to engage leaders from the public and private sectors with science and the role it plays in society. The WEF selects rising-star academics, 40 and under, from various fields every year, and helps them become stronger ambassadors for science, especially in tackling pressing global challenges including cybersecurity, climate change, poverty, and pandemics. Professor Lee is researching how memories are encoded, recalled, and updated, and how emotional processes affect human memory, in order to ultimately direct the development of therapeutic methods to treat mental disorders. She has made significant contributions to resolving ongoing debates over the maintenance and changes of memory traces in the brain. In recognition of her research excellence, leadership, and commitment to serving society, the President and the Dean of the College of Engineering at KAIST nominated Professor Lee to the WEF’s Class of 2020 Young Scientists Selection Committee. The Committee also acknowledged Professor Lee’s achievements and potential for expanding the boundaries of knowledge and practical applications of science, and accepted her into the Community. During her three-year membership in the Community, Professor Lee will be committed to participating in WEF-initiated activities and events related to promising therapeutic interventions for mental disorders and future directions of artificial intelligence. Seven of this year’s WEF Young Scientists are from Asia, including Professor Lee, while eight are based in Europe. 
Six are based in the Americas, two in South Africa, and the remaining two in the Middle East. Fourteen of the newly announced 25 Young Scientists, more than half, are women. (END)
Professor Byong-Guk Park Named Scientist of October
< Professor Byong-Guk Park > Professor Byong-Guk Park from the Department of Materials Science and Engineering was selected as the ‘Scientist of the Month’ for October 2019 by the Ministry of Science and ICT and the National Research Foundation of Korea. Professor Park was recognized for his contributions to the advancement of spin-orbit torque (SOT)-based magnetic random access memory (MRAM) technology, and received 10 million KRW in prize money.

MRAM, a next-generation non-volatile memory device, consists of thin magnetic films. It can be applied in “logic-in-memory” devices, in which logic and memory functionalities coexist, drastically improving the performance of complementary metal-oxide semiconductor (CMOS) processors. Conventional MRAM technology is limited in its ability to increase the operating speed of a memory device while maintaining high density. Professor Park tackled this challenge by introducing a new material, an antiferromagnet (IrMn), that generates a sizable amount of SOT as well as an exchange-bias field, which makes successful data writing possible without an external magnetic field. This research outcome paved the way for the development of MRAM that has a simple device structure yet features high speed and density.

Professor Park said, “I feel rewarded to have advanced the feasibility and applicability of MRAM. I will continue devoting myself to the development of new materials that can help enhance the performance of memory devices.” (END)
High-Speed Motion Core Technology for Magnetic Memory
(Professor Kab-Jin Kim of the Department of Physics) A joint research team led by Professor Kab-Jin Kim of the Department of Physics at KAIST and Professor Kyung-Jin Lee at Korea University developed technology to dramatically enhance the speed of next-generation domain wall-based magnetic memory. This research was published online in Nature Materials on September 25.

Currently used memory technologies, DRAM and SRAM, are fast but volatile, losing their contents when the power is switched off. Flash memory is non-volatile but slow, while hard disk drives (HDDs) offer greater storage but consume more energy and tolerate physical shock poorly. To overcome the limitations of existing memory, ‘domain wall-based magnetic memory’ is being researched. Its core mechanism is the movement of a domain wall by an electric current. Non-volatility is secured by using magnetic nanowires, and the absence of mechanical rotation reduces power usage, making it a new form of high-density, low-power, next-generation memory.

However, previous studies showed the speed limit of domain wall memory to be hundreds of m/s at maximum due to the ‘Walker breakdown phenomenon’, a velocity breakdown caused by the angular precession of a domain wall. Core technology to remove the Walker breakdown phenomenon and increase the speed was therefore needed for the commercialization of domain wall memory. Most domain wall memory studies used ferromagnets, which cannot overcome the Walker breakdown phenomenon. The team discovered that using the ferrimagnet GdFeCo under certain conditions could overcome the Walker breakdown phenomenon, and with this mechanism they increased domain wall speed to over 2 km/s at room temperature.

Domain wall memory is high-density, low-power, and non-volatile. With the addition of the high-speed property discovered in this research, it could become the leading next-generation memory.
Professor Kim said, “This research is significant in discovering a new physical phenomenon at the point at which the angular momentum of a ferrimagnetic body is 0 and it is expected to advance the implementation of next-generation memory in the future.” This research was funded by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIP) (No. 2017R1C1B2009686, NRF-2016R1A5A1008184) and by the DGIST R&D Program of the Ministry of Science, ICT and Future Planning (17-BT-02). (Figure 1. Concept Map of Domain Wall Memory Material using Ferrimagnetic Body) (Figure 2. Scheme and Experimental Results of Domain Wall Speed Measurements)
KAIST Team Develops Flexible PRAM
Phase change random access memory (PRAM) is one of the strongest candidates for next-generation non-volatile memory for flexible and wearable electronics. To serve as a core memory for flexible devices, the most important issue is reducing the high operating current. The effective solution is to decrease the cell size into the sub-micron region, as in commercialized conventional PRAM. However, scaling to nano-dimensions on flexible substrates is extremely difficult due to the soft nature of, and photolithographic limits on, plastics, so practical flexible PRAM had not been realized.

Recently, a team led by Professors Keon Jae Lee and Yeon Sik Jung of the Department of Materials Science and Engineering at KAIST developed the first flexible PRAM, enabled by self-assembled block copolymer (BCP) silica nanostructures, with ultralow-current operation (below one quarter of that of conventional PRAM without BCP) on plastic substrates. BCP is a mixture of two different polymer materials that can easily create self-ordered arrays of sub-20 nm features through simple spin-coating and plasma treatments. The BCP silica nanostructures successfully lowered the contact area by localizing the volume change of the phase-change materials, resulting in a significant power reduction. Furthermore, ultrathin silicon-based diodes were integrated with the phase-change memories (PCM) to suppress inter-cell interference, demonstrating random access capability for flexible and wearable electronics. Their work was published in the March issue of ACS Nano: "Flexible One Diode-One Phase Change Memory Array Enabled by Block Copolymer Self-Assembly."

Another way to achieve ultralow-power PRAM is to utilize self-structured conductive filaments (CF) instead of the conventional resistor-type heater. The self-structured CF nanoheater, which originates from a unipolar memristor, can generate strong heat toward the phase-change materials due to the high current density through the nanofilament.
This ground-breaking methodology shows that a sub-10 nm filament heater, made without expensive and incompatible nanolithography, achieves a nanoscale switching volume of phase-change materials, resulting in a PCM writing current below 20 μA, the lowest value among top-down PCM devices. This achievement was published in the June online issue of ACS Nano: "Self-Structured Conductive Filament Nanoheater for Chalcogenide Phase Transition."

In addition, thanks to this self-structured low-power technology compatible with plastics, the research team recently succeeded in fabricating a flexible PRAM on wearable substrates. Professor Lee said, "The demonstration of low-power PRAM on plastics is one of the most important issues for next-generation wearable and flexible non-volatile memory. Our innovative and simple methodology represents the strong potential for commercializing flexible PRAM." He also wrote a review paper on nanotechnology-based electronic devices in the June online issue of Advanced Materials, entitled "Performance Enhancement of Electronic and Energy Devices via Block Copolymer Self-Assembly."

Picture Caption: Low-power non-volatile PRAM for flexible and wearable memories enabled by (a) self-assembled BCP silica nanostructures and (b) a self-structured conductive filament nanoheater.
News Article: Flexible, High-performance Nonvolatile Memory Developed with SONOS Technology
Professor Yang-Kyu Choi of KAIST's Department of Electrical Engineering and his team presented a research paper entitled "Flexible High-performance Nonvolatile Memory by Transferring GAA Silicon Nanowire SONOS onto a Plastic Substrate" at the International Electron Devices Meeting (IEDM), held December 15-17, 2014 in San Francisco.

The Electronic Engineering Journal (http://www.eejournal.com/) recently posted an article on the paper:

Electronic Engineering Journal, February 2, 2015
"A Flat-Earth Memory": Another Way to Make the Brittle Flexible
http://www.techfocusmedia.net/archives/articles/20150202-flexiblegaa/?printView=true
KAIST Scientists Create Transparent Memory Chip
--See-Through Semis Could Revolutionize Displays

A group of KAIST scientists led by Professors Jae-Woo Park and Koeng-Su Lim has created a working computer chip that is almost completely clear -- the first of its kind. The new chip, called "transparent resistive random access memory" (TRRAM), is similar in type to an existing technology known as complementary metal-oxide semiconductor (CMOS) memory -- the common commercial chips that provide data storage for USB flash drives and other devices. Like CMOS devices, the new chip provides "non-volatile" memory, meaning that it stores digital information without losing data when it is powered off. Unlike CMOS devices, however, the new TRRAM chip is almost completely clear.

The paper on the new technology, entitled "Transparent resistive random access memory and its characteristics for non-volatile resistive switching," was published in the December issue of Applied Physics Letters (APL), and the American Institute of Physics, the publisher of APL, issued a press release about the breakthrough.

"It is a new milestone of transparent electronic systems," says researcher Jung-Won Seo, the first author of the paper. "By integrating TRRAM devices with other transparent electronic components, we can create a totally see-through embedded electronic system."

Technically, TRRAM devices rely upon an existing technology known as resistive random access memory (RRAM), which is already in commercial development for future electronic data-storage devices. The TRRAM chip is built using metal-oxide materials placed between equally transparent electrodes and substrates. According to the research team, TRRAM devices are easy to fabricate and may be commercially available in just three to four years.

"We are sure that TRRAM will become one of the alternative devices to current CMOS-based flash memory in the near future, after its reliability is proven and once any manufacturing issues are solved," says Professor Jae-Woo Park, a co-author of the paper.
He adds that the new devices have the potential to be manufactured cheaply, because a wide range of transparent materials can be used as the substrate and electrodes. The devices also may not require rare elements such as indium.
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright (C) 2020, Korea Advanced Institute of Science and Technology. All Rights Reserved.