Poster + Paper
Interpreting microscopic structures in virtually stained histological sections for veterinary oncology applications
12 March 2024
Conference Poster
Abstract
We employed a Pix2Pix generative adversarial network to translate multispectral fluorescence images into colored brightfield representations resembling H&E staining. The model was trained on paired 512×512 pixel image patches, with a manually stained image serving as the reference and the fluorescence images serving as the network input. The baseline model, without any modifications, did not achieve high microscopic accuracy: it attributed incorrect colors to various biological structures and added or removed image features. However, by substituting the simple convolutions in the U-Net generator with dense convolution units, we observed an increase in the similarity of microscopic structures and in the color balance between the paired images. These improvements underscore the potential utility of virtual staining in histopathological analysis for veterinary oncology applications.
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Diāna Dupļevska, Romans Maļiks, Mindaugas Tamošiūnas, Mikus Melderis, Daira Viškere, Blaž Cugmas, Roberts Kadiķis, and Ilze Matīse-van Houtana "Interpreting microscopic structures in virtually stained histological sections for veterinary oncology applications", Proc. SPIE 12831, Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XXII, 128310K (12 March 2024); https://doi.org/10.1117/12.3023420
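To make the architectural change concrete, the sketch below contrasts a plain U-Net convolution block with a DenseNet-style dense convolution unit of the kind the abstract describes swapping in. This is a minimal PyTorch illustration; the class names (PlainConvUnit, DenseConvUnit), growth rate, layer count, and channel sizes are assumptions chosen for demonstration and do not reproduce the authors' exact generator.

```python
# Minimal sketch, assuming PyTorch; channel counts and growth rate are illustrative.
import torch
import torch.nn as nn


class PlainConvUnit(nn.Module):
    """Baseline U-Net block: two plain 3x3 convolutions."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.out_channels = out_channels

    def forward(self, x):
        return self.block(x)


class DenseConvUnit(nn.Module):
    """DenseNet-style block: each 3x3 convolution receives the concatenation of
    all previous feature maps, replacing the plain convolution pair above."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, 3, padding=1, bias=False),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)


if __name__ == "__main__":
    # A 512x512 input patch (3 channels used here as a placeholder for the
    # multispectral fluorescence stack).
    x = torch.randn(1, 3, 512, 512)
    plain = PlainConvUnit(3, 64)
    dense = DenseConvUnit(3, growth_rate=32, num_layers=4)
    print(plain(x).shape)  # torch.Size([1, 64, 512, 512])
    print(dense(x).shape)  # torch.Size([1, 131, 512, 512])  (3 + 4*32)
```

In a standard Pix2Pix setup, blocks like these sit in the encoder and decoder stages of the U-Net generator, which is trained against a PatchGAN discriminator using an adversarial loss plus an L1 term on the paired patches; the specific loss weights and training schedule used in this work are not given in the abstract.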
KEYWORDS: Tumors, Fluorescence, Education and training, Tissues, Autofluorescence, Image processing, Neural networks