Deep-learning cell imaging through Anderson localizing optical fiber
- Publication Highlights
- Dec. 6, 2019
Real-time visualization of cell morphology and tissue architecture in vivo is of great importance to both biomedical research and clinical practice. It often requires imaging deep within organs or tissues, a formidable task for conventional microscopy. Fiber-optic imaging systems (FOISs) have been applied in this area thanks to their miniature size and flexible image-transfer capability, but conventional FOISs face several challenges: poor compatibility with broadband illumination, bulky distal optics, low imaging quality and speed, and extreme sensitivity to perturbations. These limitations stem mainly from the physical properties of the optical fibers and from the image-processing techniques employed. Widely used optical fibers, such as multicore and multimode fibers, suffer from strong mode coupling and low mode densities; any external mechanical or thermal perturbation can modify the mode coupling and degrade the imaging quality. Many existing image-processing techniques require complicated, expensive, and noise-sensitive experimental configurations, which are typically incompatible with broadband illumination and result in slow imaging speeds.
The glass-air Anderson localizing optical fiber (GALOF) offers many advantages over commercially available optical fibers and is a potential route past the current limitations of FOISs. Although the GALOF is also a highly multimode system, its modes are strongly localized in the transverse plane and exhibit single-mode-like properties such as nearly diffraction-limited beam quality. In contrast to conventional multicore or multimode fibers, the GALOF is exceptionally robust to perturbations and compatible with broadband illumination because transverse Anderson localization is wavelength independent. Recently, a team of researchers led by Prof. Axel Schülzgen at CREOL, the College of Optics and Photonics at the University of Central Florida, and Dr. Jian Zhao at the Boston University Photonics Center developed a new FOIS that integrates image processing based on a deep convolutional neural network (DCNN) with the GALOF. Their DCNN-GALOF system is a simple, robust, and low-cost configuration. It transmits nearly artifact-free cell images at video rate (~20 Hz) using broadband, incoherent light-emitting-diode illumination. The high-quality imaging process tolerates both strong mechanical bending and large temperature variations, and the imaging depth can reach several millimeters (nearly 4 mm) from the fiber facet even without any distal-end optics. This lensless imaging capability minimizes collateral penetration damage to living objects. In addition, the team's studies show that the system's learned reconstruction transfers among different types of cells without any retraining, which increases its potential as a highly versatile imaging tool.
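The article does not detail the network architecture, so the sketch below is only an illustration of the basic building block common to any DCNN-based reconstruction: a learned 2D convolution followed by a nonlinearity, applied to a stand-in for the raw speckle-like pattern delivered at the GALOF output facet. The array sizes, the random "speckle," and the random kernel weights are placeholders, not the authors' actual data or trained network; in the real system the filter weights are optimized on pairs of fiber-output images and ground-truth cell images.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the core operation of a DCNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity used between convolutional layers."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)

# Hypothetical 64x64 intensity pattern at the fiber output (placeholder data).
speckle = rng.random((64, 64))

# One 3x3 filter; random here, but learned from training pairs in practice.
kernel = rng.standard_normal((3, 3))

feature_map = relu(conv2d(speckle, kernel))
print(feature_map.shape)  # (62, 62)
```

A full reconstruction network stacks many such convolution-plus-nonlinearity layers (typically in an encoder-decoder arrangement) so that the final layer outputs the recovered cell image rather than a single feature map.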
a) Schematic of the imaging process through the DCNN-GALOF system. b) Cell imaging results of different types of cells.
This work dramatically boosts the performance of state-of-the-art FOISs and brings them closer to the demanding requirements of real-world applications. Future FOISs based on the DCNN-GALOF combination have the potential to probe biological objects in vivo. Such tools could help answer fundamental scientific questions as well as improve the accuracy of clinical diagnoses.