Tomographic Measurement of Dielectric Tensors
Dielectric tensor tomography allows the direct measurement of the 3D dielectric tensors of optically anisotropic structures.

A research team has reported the direct measurement of the dielectric tensors of anisotropic structures, including the spatial variations of the principal refractive indices and directors. The group also demonstrated quantitative tomographic measurements of various nematic liquid-crystal structures and their fast 3D non-equilibrium dynamics using a 3D label-free tomographic method. The method was described in Nature Materials.

Light-matter interactions are described by the dielectric tensor. Despite its importance in basic science and applications, it had not been possible to measure 3D dielectric tensors directly. The main challenge stems from the vectorial nature of light scattering from a 3D anisotropic structure. Previous approaches accessed 3D anisotropic information only indirectly, and were limited to two dimensions, qualitative results, or strict sample conditions and assumptions.

The research team developed a method that enables the tomographic reconstruction of 3D dielectric tensors without special sample preparation or restrictive assumptions. A sample is illuminated with a laser beam at various angles and with different circular polarization states. The light fields scattered by the sample are then measured holographically and converted into vectorial diffraction components. Finally, the 3D dielectric tensor is reconstructed by inversely solving a vectorial wave equation.

Professor YongKeun Park said, “Direct measurement involves a greater number of unknowns than the conventional approach. We addressed this by measuring additional holographic images with the incident angle slightly tilted.” He explained that the slightly tilted illumination provides an additional orthogonal polarization, which turns the underdetermined problem into a determined one.
“Although the scattered fields depend on the illumination angle, the Fourier differentiation theorem enables the extraction of the same dielectric tensor under the slightly tilted illumination,” Professor Park added.

His team’s method was validated by reconstructing well-known liquid crystal (LC) structures, including the twisted nematic, hybrid aligned nematic, radial, and bipolar configurations. Furthermore, the research team demonstrated experimental measurements of the non-equilibrium dynamics of annihilating, nucleating, and merging LC droplets, as well as an LC polymer network with repeating 3D topological defects.

“This is the first experimental measurement of non-equilibrium dynamics and 3D topological defects in LC structures in a label-free manner. Our method enables the exploration of inaccessible nematic structures and interactions in non-equilibrium dynamics,” first author Dr. Seungwoo Shin explained.

-Publication
Seungwoo Shin, Jonghee Eun, Sang Seok Lee, Changjae Lee, Herve Hugonnet, Dong Ki Yoon, Shin-Hyun Kim, Jongwoo Jeong, YongKeun Park, “Tomographic Measurement of Dielectric Tensors at Optical Frequency,” Nature Materials, March 2, 2022 (https://doi.org/10.1038/s41563-022-01202-8)

-Profile
Professor YongKeun Park
Biomedical Optics Laboratory (http://bmol.kaist.ac.kr)
Department of Physics
College of Natural Sciences
KAIST
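The tilted-illumination idea, adding measurements until the per-voxel linear system for the six independent components of a symmetric dielectric tensor becomes solvable, can be illustrated with a toy linear-algebra sketch. The sensing rows below are random placeholders, not the paper's actual vectorial scattering model:

```python
import numpy as np

# A symmetric 3x3 dielectric tensor has 6 independent unknowns per voxel:
# (exx, eyy, ezz, exy, exz, eyz). Each illumination angle / polarization
# combination contributes linear equations relating them to measured fields.
rng = np.random.default_rng(0)

# Normal incidence with circular polarizations: assume only 4 independent
# equations per voxel, so the system is underdetermined (rank < 6).
A_normal = rng.standard_normal((4, 6))
print(np.linalg.matrix_rank(A_normal))  # 4

# A slightly tilted illumination supplies extra, orthogonally polarized
# measurements; stacking them makes the system determined (rank = 6).
A_tilted = rng.standard_normal((2, 6))
A_full = np.vstack([A_normal, A_tilted])
print(np.linalg.matrix_rank(A_full))  # 6

eps_true = rng.standard_normal(6)          # ground-truth tensor components
b = A_full @ eps_true                      # synthetic measurements
eps_est, *_ = np.linalg.lstsq(A_full, b, rcond=None)
assert np.allclose(eps_est, eps_true)      # tensor recovered exactly
```

The row counts (4 equations at normal incidence, 2 more from tilting) are illustrative assumptions; the point is only that extra illumination angles raise the rank of the per-voxel system.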
Label-Free Multiplexed Microtomography of Endogenous Subcellular Dynamics Using Deep Learning
AI-based holographic microscopy allows molecular imaging without introducing exogenous labeling agents.

A research team has upgraded 3D microtomography to observe the dynamics of label-free live cells with the multiplexing of fluorescence imaging. The AI-powered 3D holotomographic microscopy extracts various kinds of molecular information from unlabeled live biological cells in real time, without exogenous labeling or staining agents.

Professor YongKeun Park’s team and the startup Tomocube used the refractive index as the measured quantity, encoding cellular structure in 3D refractive index tomograms. They then decoded this information with a deep learning-based model that infers multiple 3D fluorescence tomograms of the corresponding subcellular targets from the refractive index measurements, thereby achieving multiplexed microtomography. This study was reported in Nature Cell Biology online on December 7, 2021.

Fluorescence microscopy is the most widely used optical microscopy technique due to its high biochemical specificity. However, cells must be genetically manipulated to express fluorescent proteins or stained with fluorescent labels. These labeling processes inevitably affect the intrinsic physiology of the cells. Photobleaching and phototoxicity also make long-term measurements difficult, and the overlapping spectra of multiplexed fluorescence signals hinder viewing multiple structures at the same time. More critically, hours of preparation were needed before the cells could be observed.

3D holographic microscopy, also known as holotomography, provides new ways to quantitatively image live cells without pretreatments such as staining. Holotomography can accurately and quickly measure the morphological and structural information of cells, but it provides only limited biochemical and molecular information. The ‘AI microscope’ created in this work combines the strengths of both holographic microscopy and fluorescence microscopy.
That is, an image specific to a subcellular target, as from a fluorescence microscope, can be obtained without a fluorescent label. The microscope can therefore observe many types of cellular structures in their natural state, in 3D, at frame times as short as one millisecond, and long-term measurements over several days are also possible.

The Tomocube-KAIST team showed that fluorescence images can be directly and precisely predicted from holotomographic images across various cell types and conditions. The AI uncovered a quantitative relationship between the spatial distribution of the refractive index and the major structures in cells, making it possible to decipher molecular structure from refractive index maps alone. Remarkably, this relationship proved constant regardless of cell type.

Professor Park said, “We were able to develop a new concept of microscope that combines the advantages of several microscopes through multidisciplinary research in AI, optics, and biology. It will be immediately applicable to new types of cells not included in the existing data and is expected to be widely applicable to various biological and medical research.”

When the molecular image information extracted by the AI was compared in 3D space with the molecular image information physically obtained by fluorescence staining, the two showed a concordance of 97% or higher, a level difficult to distinguish with the naked eye. “Compared to the sub-60% accuracy of the fluorescence information extracted by the model developed by the Google AI team, this is significantly higher performance,” Professor Park added.

This work was supported by the KAIST Up program, the BK21+ program, Tomocube, the National Research Foundation of Korea, the Ministry of Science and ICT, and the Ministry of Health & Welfare.

-Publication
Hyun-seok Min, Won-Do Heo, YongKeun Park, et al.
“Label-free multiplexed microtomography of endogenous subcellular dynamics using generalizable deep learning,” Nature Cell Biology (https://doi.org/10.1038/s41556-021-00802-x), published online December 7, 2021.

-Profile
Professor YongKeun Park
Biomedical Optics Laboratory
Department of Physics
KAIST
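The encode/decode idea can be sketched with a toy per-voxel regressor standing in for Tomocube's deep network (the "fluorescence" channels and polynomial features below are illustrative assumptions): paired RI/fluorescence volumes train a decoder that then infers several channels at once from an RI tomogram alone.

```python
import numpy as np

# Toy stand-in for the RI-to-fluorescence decoder (NOT Tomocube's network):
# a per-voxel polynomial regressor trained on paired RI/fluorescence volumes.
rng = np.random.default_rng(1)
ri = rng.uniform(1.33, 1.40, size=(16, 16, 16))   # synthetic RI tomogram

# Pretend two fluorescence channels are (unknown) functions of the RI.
x = ri - 1.33
fl_true = np.stack([2.0 * x, x ** 2], axis=-1)

# "Train": fit the decoder on the paired volumes.
X = np.stack([x.ravel(), x.ravel() ** 2, np.ones(x.size)], axis=1)
Y = fl_true.reshape(-1, 2)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Infer": predict both channels at once from the RI measurements alone.
fl_pred = (X @ W).reshape(16, 16, 16, 2)
err = np.abs(fl_pred - fl_true).max()
print(err < 1e-6)  # True: the decoder recovers both channels
```

The real system maps 3D RI neighborhoods to fluorescence tomograms with a deep network rather than a per-voxel fit, but the training/inference split is the same.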
Observing Individual Atoms in 3D Nanomaterials and Their Surfaces
Atoms are the basic building blocks of all materials. To tailor functional properties, it is essential to determine their atomic structures accurately. KAIST researchers have observed the 3D atomic structure of a nanoparticle at the single-atom level via neural network-assisted atomic electron tomography.

Using a platinum nanoparticle as a model system, a research team led by Professor Yongsoo Yang demonstrated that an atomicity-based deep learning approach can reliably identify the 3D surface atomic structure with a precision of 15 picometers (only about one third of a hydrogen atom's radius). The atomic displacement, strain, and facet analysis revealed that the surface atomic structure and strain are related to both the shape of the nanoparticle and the particle-substrate interface. Combined with quantum mechanical calculations such as density functional theory, the ability to precisely identify surface atomic structure will serve as a powerful key for understanding catalytic performance and oxidation effects.

“We solved the problem of determining the 3D surface atomic structure of nanomaterials in a reliable manner. It has been difficult to measure surface atomic structures accurately because of the ‘missing wedge problem’ in electron tomography, which arises from geometrical limitations that allow only part of the full tomographic angular range to be measured. We resolved the problem using a deep learning-based approach,” explained Professor Yang.

The missing wedge problem causes elongation and ringing artifacts, degrading the accuracy of the atomic structure determined from the tomogram, especially for surface structures. It has been the main roadblock to the precise determination of the 3D surface atomic structures of nanomaterials. The team used atomic electron tomography (AET), which is essentially a very high-resolution CT scan for nanomaterials performed with transmission electron microscopes.
AET enables 3D atomic structure determination at the individual-atom level.

“The main idea behind this deep learning-based approach is atomicity: the fact that all matter is composed of atoms. This means that a true atomic-resolution electron tomogram should contain only sharp 3D atomic potentials convolved with the electron beam profile,” said Professor Yang. “A deep neural network can be trained using simulated tomograms that suffer from missing wedges as inputs and the ground-truth 3D atomic volumes as targets. The trained deep learning network effectively augments the imperfect tomograms and removes the artifacts resulting from the missing wedge problem.”

Applying the deep learning-based augmentation enhanced the precision of the 3D atomic structure by nearly 70%, and the accuracy of surface atom identification was also significantly improved. Structure-property relationships of functional nanomaterials, especially those that strongly depend on surface structure, such as catalytic properties for fuel-cell applications, can now be revealed at one of the most fundamental scales: the atomic scale.

Professor Yang concluded, “We would like to fully map out the 3D atomic structure with higher precision and better elemental specificity. And beyond atomic structures, we aim to measure the physical, chemical, and functional properties of nanomaterials at the 3D atomic scale by further advancing electron tomography techniques.”

This research, reported in Nature Communications, was funded by the National Research Foundation of Korea and the KAIST Global Singularity Research M3I3 Project.

-Publication
Juhyeok Lee, Chaehwa Jeong & Yongsoo Yang, “Single-atom level determination of 3-dimensional surface atomic structure via neural network-assisted atomic electron tomography,” Nature Communications

-Profile
Professor Yongsoo Yang
Department of Physics
Multi-Dimensional Atomic Imaging Lab (MDAIL) http://mdail.kaist.ac.kr
KAIST
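The training-pair construction Professor Yang describes (simulated missing-wedge tomograms as inputs, clean atomic volumes as targets) can be sketched in a simplified geometry. The wedge mask and atom model below are illustrative assumptions, not the paper's exact simulation pipeline:

```python
import numpy as np

# Sketch of one supervised training pair for missing-wedge augmentation
# (simplified, illustrative geometry; not the paper's exact pipeline).
rng = np.random.default_rng(2)
N = 32
grid = np.indices((N, N, N)).astype(float)

# Target: sharp Gaussian "atoms" in the volume (the atomicity prior).
vol = np.zeros((N, N, N))
for _ in range(5):
    center = rng.uniform(8, 24, size=3)
    r2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    vol += np.exp(-r2 / 2.0)

# Input: zero out the Fourier wedge that a +/-60 degree tilt range about
# one axis cannot measure, producing elongation/ringing artifacts.
f = np.fft.fftfreq(N)
kz, ky, kx = np.meshgrid(f, f, f, indexing="ij")
wedge = np.arctan2(np.abs(kx), np.abs(kz)) > np.deg2rad(60)
F = np.fft.fftn(vol)
F[wedge] = 0
corrupted = np.fft.ifftn(F).real

# (corrupted, vol) is one (input, target) pair for the network.
print(corrupted.shape)
```

Many such pairs, generated from randomized atomic arrangements, would form the training set; the network then learns to restore the missing Fourier information in experimental tomograms.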
Deep Learning and 3D Holographic Microscopy Beat Scientists at Analyzing Cancer Immunotherapy
Live tracking and analysis of the dynamics of chimeric antigen receptor (CAR) T-cells targeting cancer cells can open new avenues for the development of cancer immunotherapy. However, imaging via conventional microscopy approaches can cause cellular damage, and assessments of cell-to-cell interactions are extremely difficult and labor-intensive. When researchers applied deep learning and 3D holographic microscopy to the task, however, they not only avoided these difficulties but found that the AI was better at it than humans.

Artificial intelligence (AI) is helping researchers decipher images from a new holographic microscopy technique needed to investigate a key process in cancer immunotherapy “live” as it takes place. The AI transformed work that, if performed manually by scientists, would be incredibly labor-intensive and time-consuming into one that is not only effortless but done better than they could have done it themselves. The research, conducted by the team of Professor YongKeun Park from the Department of Physics, appeared in the journal eLife last December.

A critical stage in the development of the human immune system's ability to respond not just generally to any invader (such as a pathogen or cancer cell) but specifically to that particular type of invader, and to remember it should it attempt to invade again, is the formation of a junction between an immune cell called a T-cell and a cell that presents the antigen, the part of the invader causing the problem. This process is like sending a picture of a suspect to a police car so the officers can recognize the criminal they are trying to track down. The junction between the two cells, called the immunological synapse (IS), is the key process by which the immune system learns to recognize a specific type of invader.
Since the formation of the IS junction is such a critical step in initiating an antigen-specific immune response, various techniques that allow researchers to observe the process as it happens have been used to study its dynamics. Most of these live imaging techniques rely on fluorescence microscopy, where genetic tweaking causes part of a protein from a cell to fluoresce, allowing the subject to be tracked via fluorescence rather than via the reflected light used in many conventional microscopy techniques.

However, fluorescence-based imaging can suffer from effects such as photobleaching and phototoxicity, preventing the assessment of dynamic changes in the IS junction over the long term. Fluorescence-based imaging still involves illumination, whereupon the fluorophores (the chemical compounds responsible for the fluorescence) emit light of a different color. Photobleaching and phototoxicity occur when the subject is exposed to too much illumination, resulting in chemical alteration or cellular damage.

One recent option that does away with fluorescent labeling, and thereby avoids such problems, is 3D holographic microscopy, or holotomography (HT). In this technique, the refractive index (the way light changes direction when it enters a substance of different density, which is why a straw looks bent in a glass of water) is recorded in 3D as a hologram.

Until now, HT has been used to study single cells, but never the cell-cell interactions involved in immune responses. One of the main reasons is the difficulty of “segmentation”: distinguishing the different parts of a cell, and thus the interacting cells from each other; in other words, deciphering which part belongs to which cell. Manual segmentation, marking out the different parts by hand, is one option, but it is difficult and time-consuming, especially in three dimensions.
To overcome this problem, automatic segmentation methods were developed in which simple computer algorithms perform the identification. “But these basic algorithms often make mistakes,” explained Professor YongKeun Park, “particularly with adjoining segmentation, which of course is exactly what occurs in the immune response we’re most interested in.”

So the researchers applied a deep learning framework to the HT segmentation problem. Deep learning is a type of machine learning in which artificial neural networks, loosely modeled on the human brain, recognize patterns in a way similar to how humans do. Regular machine learning requires input data that has already been labeled; the AI “learns” from the labeled data and then recognizes the labeled concept when fed novel data. For example, an AI trained on a thousand images labeled “cat” should be able to recognize a cat the next time it encounters an image containing one. Deep learning involves multiple layers of artificial neural networks working on much larger, unlabeled datasets, in which the AI develops its own ‘labels’ for the concepts it encounters. In essence, the deep learning framework the KAIST researchers developed, called DeepIS, came up with its own concepts for distinguishing the different parts of the IS junction.

To validate the method, the research team applied it to the dynamics of the IS junction formed between chimeric antigen receptor (CAR) T-cells and target cancer cells, then compared the results to what they would normally have done: laborious manual segmentation. They found not only that DeepIS defined areas within the IS with high accuracy, but also that it captured information about the total distribution of proteins within the IS that could not easily have been measured using conventional techniques.
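The "adjoining segmentation" failure mode Professor Park mentions can be shown with a toy example (plain thresholding plus connected-component labeling, not DeepIS): two touching cells collapse into a single segment.

```python
import numpy as np
from collections import deque

# Two synthetic circular "cells" that touch each other.
yy, xx = np.indices((64, 64))
cell_a = (yy - 32) ** 2 + (xx - 24) ** 2 < 12 ** 2
cell_b = (yy - 32) ** 2 + (xx - 44) ** 2 < 12 ** 2   # overlaps cell_a
mask = cell_a | cell_b

def count_components(mask):
    """4-connected component count via breadth-first flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        count += 1
        queue = deque([start])
        seen[start] = True
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < 64 and 0 <= nx < 64 and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return count

n_objects = count_components(mask)
print(n_objects)  # 1: the two adjoining cells are fused into one segment
```

A learned segmenter like DeepIS must instead decide, voxel by voxel, which of the two touching cells each part belongs to, which is exactly what simple connectivity cannot do.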
“In addition to allowing us to avoid the drudgery of manual segmentation and the problems of photobleaching and phototoxicity, we found that the AI actually did a better job,” Professor Park added. The next step will be to combine the technique with methods of measuring how much physical force is applied by different parts of the IS junction, such as holographic optical tweezers or traction force microscopy.

-Profile
Professor YongKeun Park
Department of Physics
Biomedical Optics Laboratory http://bmol.kaist.ac.kr
KAIST
Professor YongKeun Park Elected as a Fellow of the Optical Society
Professor YongKeun Park, from the Department of Physics at KAIST, was elected a fellow of the Optical Society (OSA), headquartered in Washington, D.C., on September 12. Fellowship is conferred on members who have made significant contributions to the advancement of optics and photonics. Professor Park was recognized for his research on digital holography and wavefront control technology.

Professor Park has produced outstanding research in holographic technology and light scattering control since joining KAIST in 2010. In particular, he developed and commercialized holographic microscope technology and applied it to various medical and biological research projects, leading the field worldwide. Previously, cells had to be stained with fluorescent materials to capture a 3D image; Professor Park’s holotomography (HT) technology can capture 3D images of living cells and tissues in real time without staining. This enables a wide range of research in the biological and medical fields.

Professor Park established a company, Tomocube, Inc., in 2015 to commercialize the technology. In 2016, he received funding from SoftBank Ventures and Hanmi Pharmaceutical. Major institutes, including MIT, the University of Pittsburgh, the German Cancer Research Center, and Seoul National University Hospital, now use his equipment.

Recently, Professor Park and his team developed technology based on light scattering measurements. With this technology, they established a company called The Wave Talk and received funding from various organizations, including NAVER. Its first product is about to be released.

Professor Park said, “I am glad to become a fellow based on the research outcomes I have produced since I was appointed as a professor at KAIST. I would like to thank the excellent researchers as well as the school for its support.
I will devote myself to continuously producing novel outcomes in both basic and applied fields.” Professor Park has published nearly 100 papers in renowned journals including Nature Photonics, Nature Communications, Science Advances, and Physical Review Letters.
High-Resolution 3D Blood Vessel Endoscope System Developed
Professor Wangyeol Oh of KAIST’s Mechanical Engineering Department has developed an optical imaging endoscope system with an imaging speed up to 3.5 times faster than previous systems. He has also used this endoscope to acquire the world’s first high-resolution 3D images of the inside of an in vivo blood vessel.

Professor Oh’s work is Korea’s first blood vessel endoscope system, combining high imaging speed, resolution, image quality, and image-capture area. The system can also simultaneously perform functional imaging, such as polarization imaging, which helps identify the vulnerability of blood vessel walls.

Endoscopic Optical Coherence Tomography (OCT) provides the highest-resolution imaging used to diagnose cardiovascular diseases, most notably myocardial infarction. However, previous systems were not fast enough to image the insides of vessels, so it was often impossible to accurately identify and analyze the vessel condition. For in vivo blood vessel optical imaging in clinical trials, the endoscope must be inserted, a clear flushing liquid introduced, and the pictures taken within only a few seconds.

The KAIST research team solved this problem by developing a high-speed, high-resolution optical tomographic imaging system, a flexible endoscope with a diameter of 0.8 mm, and a device that scans the imaging light within the blood vessel at high speed. These devices were combined to visualize the internal structure of the vessel wall.

Using the developed system, the researchers obtained high-resolution images of about 7 cm of a rabbit’s aorta, which is similar in size to a human coronary artery. The tomography scan took only 5.8 seconds, at a speed of 350 scans per second, with a resolution of 10-35 µm in all three directions.
If images are taken every 200 µm, as in currently available commercial vascular imaging endoscopes, a vessel 7 cm in length can be imaged in only one second.

Professor Wangyeol Oh said, “Our newly developed blood vessel endoscope system was tested by imaging a live animal’s blood vessels, which are similar to human blood vessels. The result was very successful.”

“Collaborating closely with hospitals, we are preparing to image an animal’s coronary arteries, which are similar in size to those of the human heart,” Professor Oh commented on the future clinical application and commercialization of the endoscope system. He added, “After such procedures, the technique could be applied to clinical patients within a few years.”

Professor Oh’s research was supported by the National Research Foundation of Korea and the Global Frontier Project of the Korean government. The results were published in the January 2014 edition of Biomedical Optics Express.

Figure 1: End portion of the optical endoscope (upper left)
Figure 2: High-speed optical scanning unit of the endoscope (upper right)
Figure 3: High-resolution images of the inside of in vivo animal blood vessels (in the circumferential and longitudinal directions)
Figure 4: High-resolution images of the inside of in vivo animal blood vessels (in the depth direction)
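The imaging-time arithmetic checks out: at 350 cross-sectional scans per second and one frame every 200 µm, a 7 cm vessel takes one second.

```python
# Quick check of the article's imaging-time arithmetic.
vessel_length_m = 0.07       # 7 cm segment, similar to a coronary artery
frame_spacing_m = 200e-6     # one cross-section every 200 micrometers
scan_rate_hz = 350           # 350 cross-sectional scans per second

n_frames = vessel_length_m / frame_spacing_m     # ~350 frames
imaging_time_s = n_frames / scan_rate_hz         # ~1.0 second
print(n_frames, imaging_time_s)
```

The same rate also explains the reported 5.8-second scan: 5.8 s at 350 scans/s is about 2,030 frames over 7 cm, i.e. one frame roughly every 35 µm, matching the quoted 10-35 µm resolution.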
The Key to Alzheimer's Disease: PET-MRI Made in Korea
Professor Kyu-Sung Cho
- Simultaneous PET-MRI imaging system commercialization technology developed entirely from domestic technology -
- An inspiring achievement by KAIST, the National NanoFab Center, Sogang University, and Seoul National University Hospital -

Hopes are high for producing domestic products in the field of state-of-the-art medical imaging equipment, which has long relied on imports. A joint research team (KAIST, Sogang University, and Seoul National University), led by Professor Kyu-Sung Cho of the KAIST Department of Nuclear and Quantum Engineering together with the National NanoFab Center (NNFC; Director Jae-Young Lee), has developed a simultaneous PET-MRI imaging system using domestic technology alone. The team successfully acquired brain images of three volunteers with the newly developed system.

PET-MRI is an integrated, state-of-the-art medical imaging device that combines the advantages of Magnetic Resonance Imaging (MRI), which shows anatomical images of the body, and Positron Emission Tomography (PET), which analyzes cell activity and metabolism. Since anatomical and functional information can be seen simultaneously, the device can be used to diagnose early-onset Alzheimer's disease and is essential in life science research such as new drug development.

Existing equipment had to take MRI and PET images separately, because of the strong magnetic field generated by the MRI, and then combine the images. This was time-consuming and error-prone due to patient movement, so a PET system that functions within a magnetic field was needed to enable simultaneous imaging.

The newly developed integrated PET-MRI has three key technical components: (1) a PET detector free of magnetic interference, (2) a PET-MRI integration system, and (3) PET-MRI image processing. The PET detector is the most important component and accounts for half the cost of the whole system.
The teams of KAIST Professor Cho and NNFC Doctor Woo-Suk Seol successfully developed a Silicon Photomultiplier (which amplifies light entering the radiation detector) that can be used in strong magnetic fields. The developed sensor has a global competitive edge, as its optimized semiconductor processing yields over 95% productivity and roughly 10% gamma-ray energy resolution.

Professor Yong Choi of the Department of Electronic Engineering at Sogang University developed a cutting-edge PET system using a new concept of charge-signal transmission and an imaging-location discrimination circuit. The creativity and excellence of the research findings were recognized, and the work was published on the cover of Medical Physics in June.

Professor Jae-Sung Lee of the Department of Nuclear Medicine at Seoul National University Hospital developed the Silicon Photomultiplier-based PET image reconstruction program, MRI-based PET image correction technology, and the PET-MRI image integration software. In addition, KAIST Department of Electrical Engineering Professor Hyun-Wook Park was responsible for developing the RF shielding technology that enables the simultaneous installation of PET and MRI, and using this technology he developed a head coil for the brain that can be connected to the PET for installation.

Based on the technology described above, the joint research team successfully developed a PET-MRI system for the brain and acquired integrated PET-MRI brain images from three volunteers last June. In particular, the system features a PET module and MRI head coil that attach to an existing whole-body MRI, so simultaneous PET-MRI imaging is possible at a low installation cost.
Professor Cho said, “We have laid the foundation for a domestic commercial PET, and the system is competitive in the global PET-MRI market.” He continued, “It can dramatically reduce the cost of diagnosing the growing number of brain-related diseases, including Alzheimer’s.”

Funded by the Ministry of Trade, Industry and Energy as an Industrial Foundation Technology Development Project (98 billion won over 7 years), the research has produced over 20 patent applications and 20 SCI papers.

Figure 1: Brain phantom images from the developed PET-MRI system
Figure 2: Brain images from the developed PET-MRI system
Figure 3: Domestic PET-MRI clinical trial
Figure 4: Head RF coil and PET detector inserted in the MRI
Figure 5: Insertion-type PET detector module
Figure 6: Silicon Photomultiplier sensor (left) and scintillation crystal block (right)
Figure 7: Silicon Photomultiplier sensor
Figure 8: PET detection principle
KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
Copyright(C) 2020, Korea Advanced Institute of Science and Technology,
All Rights Reserved.