We present a fast virtual-staining framework for defocused autofluorescence images of unlabeled tissue that matches the performance of standard virtual-staining models which use in-focus label-free images. In this framework, a virtual-autofocusing neural network first digitally refocuses the defocused autofluorescence images; a successive neural network then transforms these refocused images into virtually stained H&E images. Using coarsely focused autofluorescence images, acquired with 4-fold fewer focus points and 2-fold lower focusing precision, we achieved virtual-staining performance equivalent to that of standard H&E virtual-staining networks that use finely focused images, reducing the total image acquisition time by ~32% and the autofocusing time by ~89% for each whole-slide image.
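To make the two-stage pipeline concrete, the sketch below shows cascaded inference with a refocusing network followed by a virtual-staining network. All module definitions, layer widths, and function names (`TinyConvNet`, `virtually_stain`) are hypothetical placeholders for illustration only; they are not the trained architectures used in this work, and only the refocus-then-stain ordering is taken from the description above.

```python
# Minimal sketch of the cascaded inference described above
# (hypothetical placeholder networks, not the paper's trained models).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Stand-in for either network: a few conv layers, same spatial size."""
    def __init__(self, in_ch: int, out_ch: int, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

# Stage 1: digitally refocus the coarsely focused autofluorescence image.
autofocus_net = TinyConvNet(in_ch=1, out_ch=1)   # defocused -> refocused
# Stage 2: translate the refocused image into a virtually stained H&E RGB image.
staining_net = TinyConvNet(in_ch=1, out_ch=3)    # refocused -> RGB H&E

def virtually_stain(defocused_af: torch.Tensor) -> torch.Tensor:
    """Cascaded inference: refocus first, then virtually stain."""
    with torch.no_grad():
        refocused = autofocus_net(defocused_af)
        he_rgb = staining_net(refocused)
    return he_rgb

# Example: one 256x256 single-channel autofluorescence tile.
tile = torch.rand(1, 1, 256, 256)
print(virtually_stain(tile).shape)  # torch.Size([1, 3, 256, 256])
```

In practice, a whole-slide image would be tiled, each tile passed through this cascade, and the stained tiles stitched back together; that tiling logic is omitted here for brevity.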