Convolutional neural networks (CNNs) have achieved significant success in image recognition and segmentation. A model based on a CNN-style U-Net architecture can effectively predict subcellular structures from transmitted light (TL) images after learning the relationship between TL images and fluorescently labeled images. In this paper, we focused on building such prediction models for subcellular mitochondrial structures using the CNN method, and we compared the predictions derived from confocal, Airyscan, z-stack, and time-series images. With multi-model combined prediction, integrated images can be generated from TL inputs alone, which shortens sample preparation and increases temporal resolution. This enables visualization, measurement, and understanding of the morphology and dynamics of mitochondria and mitochondrial DNA.
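To illustrate the core idea of the U-Net-style prediction described above, the toy sketch below maps a TL-like image to a same-sized output through an encode/decode path with a skip connection. This is a hypothetical minimal example in NumPy, not the authors' model: the layer weights are random stand-ins for learned parameters, and `toy_unet` collapses the real multi-level architecture into a single level.

```python
import numpy as np

def conv3x3(x, w):
    """Naive 'same' 3x3 convolution of a single-channel image
    (toy stand-in for a learned convolutional layer)."""
    h, wd = x.shape
    pad = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * w)
    return out

def down(x):
    """2x2 average pooling (encoder downsampling)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling (decoder)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet(tl_image, w_enc, w_dec):
    """One-level U-Net-style pass: encode, compress, decode, and fuse
    via a skip connection, mapping a transmitted-light (TL) image to a
    fluorescence-like prediction of the same size."""
    skip = conv3x3(tl_image, w_enc)   # encoder features kept for the skip
    bottleneck = down(skip)           # spatial compression
    decoded = up(bottleneck)          # restore resolution
    fused = decoded + skip            # skip connection preserves fine detail
    return conv3x3(fused, w_dec)      # final prediction head

rng = np.random.default_rng(0)
tl = rng.random((16, 16))             # stand-in 16x16 TL image
w_enc = rng.random((3, 3)) / 9        # random weights in place of training
w_dec = rng.random((3, 3)) / 9
pred = toy_unet(tl, w_enc, w_dec)
print(pred.shape)                     # same spatial size as the input
```

In a trained model, `w_enc` and `w_dec` would be learned by minimizing the difference between predictions and the paired fluorescently labeled images; the skip connection is what lets the network recover fine structures such as mitochondrial boundaries that pooling would otherwise blur away.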