In computational pathology, training and inference with conventional deep convolutional neural networks (CNNs) are usually limited to small patches (e.g., 256 × 256 pixels) sampled from whole slide images. In practice, however, diagnostic and prognostic information may lie in the context of the tumor microenvironment across multiple regions, far beyond the scope of individual patches. For instance, the spatial relationship of tumor-infiltrating lymphocytes (TILs) across regions of interest may be prognostic in non-small cell lung cancer (NSCLC). This poses a multi-instance learning (MIL) problem, and a single-patch-driven CNN typically fails to learn the spatial context and relationships between multiple patches. In this work, we present a cell graph-based MIL framework to predict the risk of death in early-stage NSCLC by aggregating the feature representations of TIL-enclosing patches according to their spatial relationship. Inspired by PATCHY-SAN, a graph-embedding framework for CNNs, we use graph kernel-based approaches to embed a bag of patches into a sequence, with their spatial information encoded in the sequence order. A transformer model is then trained to aggregate patch-level features based on this spatial information. We demonstrate the capability of this framework to predict whether a patient with NSCLC will survive for more than 5 years in two cohorts (n=240). The training cohort (n=195) comprised hematoxylin and eosin (H&E)-stained whole slide images (WSIs), while the testing cohort (n=45) comprised H&E-stained tissue microarrays (TMAs). We show that, with the spatial context of multiple patches encoded as an ordered patch sequence, our approach achieves an area under the receiver operating characteristic curve (AUC) of 0.836 (p=0.009; HR=5.62) in the testing cohort, as opposed to a baseline conventional CNN with an AUC of 0.542 (p=0.105; HR=1.66). These results suggest that the transformer is a generic, spatial-information-aware MIL framework that can learn the spatial relationships of multiple TIL-enclosing patches from a graph representation of immune cells.
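A minimal sketch of this sequence-embedding idea follows, assuming patch-level features have already been extracted by a CNN. The BFS-based ordering over a k-NN graph of patch coordinates is a simplified stand-in for the graph kernel-based embedding described above, and names such as `order_patches_bfs` and `SpatialMILTransformer` are illustrative, not the authors' code.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial import cKDTree

def order_patches_bfs(coords, k=5):
    """Order patches by BFS over a k-NN graph of their slide coordinates,
    a simplified stand-in for a PATCHY-SAN-style canonical node ordering."""
    _, nbrs = cKDTree(coords).query(coords, k=k + 1)   # column 0 is the point itself
    start = int(np.argmin(coords.sum(axis=1)))         # canonical start: top-left patch
    order, seen, queue = [], {start}, [start]
    while queue:
        node = queue.pop(0)
        order.append(node)
        for nb in map(int, nbrs[node][1:]):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    order += [i for i in range(len(coords)) if i not in seen]  # disconnected patches
    return order

class SpatialMILTransformer(nn.Module):
    """Transformer that aggregates a spatially ordered bag of patch features."""
    def __init__(self, feat_dim=512, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, 1)             # logit for >5-year survival

    def forward(self, feats):                          # feats: (1, n_patches, feat_dim)
        return self.head(self.encoder(feats).mean(dim=1))

coords = np.random.rand(32, 2) * 10_000                # toy TIL-patch centroids (slide coords)
feats = torch.randn(1, 32, 512)                        # toy CNN features, one per patch
logit = SpatialMILTransformer()(feats[:, order_patches_bfs(coords)])
```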
The tumor microenvironment (TME) comprises multiple cell types, and their spatial organization has previously been studied to identify associations with disease progression and response to therapy. These works, however, have focused on the spatial interactions of a single cell type, ignoring the spatial interplay among the remaining cells. Here, we introduce a framework, spatial connectivity of tumor and associated cells (SpaCell), to quantify complex spatial interactions between multiple cell families simultaneously within the TME on H&E-stained images. First, nuclei are segmented and classified into different families (e.g., cancerous cells and lymphocytes) using a combination of image processing and machine learning techniques. Local clusters of proximal nuclei are then built for each family. Next, quantitative metrics are extracted from these clusters to capture inter- and intra-family relationships, namely: density of clusters, area intersected between clusters, diversity of the clusters surrounding a cluster, and architecture of clusters, among others. When evaluated for predicting the risk of recurrence in HPV-associated oropharyngeal squamous cell carcinoma (n=233; 107 vs 126 patients for training vs testing) and non-small cell lung cancer (n=186; 70 vs 116 patients for training vs testing), SpaCell was able to differentiate between patients at high and low risk of recurrence (p=0.03 and p=0.02, respectively). SpaCell was compared against a deep learning approach and a state-of-the-art approach that uses single-family cell cluster graphs (CCGs). CCG-extracted metrics were not prognostic of disease-free survival (DFS) for oropharyngeal (p=0.98) or lung (p=0.15) cancer, and deep learning was prognostic of DFS for lung (p=0.03) but not for oropharyngeal cancer (p=0.26). SpaCell was not only prognostic for both cancer types but also provided greater explainability in terms of tumor biology.
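As an illustration of two of the cluster metrics named above (local cluster formation and area intersected between clusters), the sketch below builds per-family clusters with DBSCAN and measures the overlap of their convex hulls. The clustering parameters and helper names are assumptions for illustration, not the published SpaCell implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from shapely.geometry import MultiPoint

def family_clusters(centroids, eps=50.0, min_samples=5):
    """Group proximal nuclei of one family into local clusters and
    return the convex hull of each cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centroids)
    hulls = [MultiPoint(centroids[labels == lab]).convex_hull
             for lab in set(labels) - {-1}]            # -1 marks DBSCAN noise
    return [h for h in hulls if h.area > 0]

def intersected_area(hulls_a, hulls_b):
    """Total area of overlap between two families' clusters,
    one of the inter-family relationships described above."""
    return sum(a.intersection(b).area for a in hulls_a for b in hulls_b)

rng = np.random.default_rng(0)
tumor = rng.uniform(0, 1000, (300, 2))                 # toy tumor-cell centroids (µm)
lymph = rng.uniform(0, 1000, (200, 2))                 # toy lymphocyte centroids (µm)
t_hulls, l_hulls = family_clusters(tumor), family_clusters(lymph)
print(len(t_hulls), len(l_hulls), intersected_area(t_hulls, l_hulls))
```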
Non-destructive 3D microscopy enables the accurate characterization of diagnostically and prognostically significant microstructures in clinical specimens, with significantly greater volumetric coverage than traditional 2D histology. We use open-top light-sheet microscopy to image prostate cancer biopsies and investigate the prognostic significance of the 3D spatial features of nuclei within prostate cancer microstructures. Using a previously published 3D nuclear segmentation workflow, we identify a preliminary set of 3D graph-based nuclear features that quantify the 3D spatial arrangement of nuclei in prostate cancer biopsies. Using a machine learning classifier, we identify the features that prognosticate prostate cancer risk and demonstrate agreement with patient outcomes.
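A minimal sketch of what such 3D graph-based nuclear features might look like, assuming 3D nuclear centroids are available from the segmentation workflow; the specific graph construction (Delaunay) and the feature set here are illustrative assumptions, not the study's exact features.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def nuclear_graph_features(centroids_3d):
    """Build a 3D Delaunay graph over nuclear centroids and summarize the
    spatial arrangement of nuclei with simple graph statistics."""
    tri = Delaunay(centroids_3d)
    g = nx.Graph()
    for simplex in tri.simplices:                      # each 3D simplex is a tetrahedron
        for i in range(4):
            for j in range(i + 1, 4):
                a, b = int(simplex[i]), int(simplex[j])
                g.add_edge(a, b, length=float(np.linalg.norm(centroids_3d[a] - centroids_3d[b])))
    lengths = [d["length"] for _, _, d in g.edges(data=True)]
    degrees = [deg for _, deg in g.degree()]
    return {
        "mean_edge_length": float(np.mean(lengths)),   # typical nuclear spacing
        "std_edge_length": float(np.std(lengths)),     # spacing irregularity
        "mean_degree": float(np.mean(degrees)),        # neighborhood crowding
        "avg_clustering": nx.average_clustering(g),    # local connectivity
    }

nuclei = np.random.rand(200, 3) * 100                  # toy 3D nuclear centroids (µm)
print(nuclear_graph_features(nuclei))                  # inputs for a downstream classifier
```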
Glandular architecture is currently the basis for Gleason grading of prostate biopsies. To visualize and computationally analyze glandular features in large 3D pathology datasets, we developed an annotation-free segmentation method for 3D prostate glands that relies upon synthetic 3D immunofluorescence (IF) enabled by generative adversarial networks. By using a fluorescent analog of H&E (a cheap and fast stain) as input, our strategy allows for accurate glandular segmentation that does not rely upon subjective and tedious human annotations or slow and expensive 3D immunolabeling. We aim to demonstrate that this 3D segmentation will enable improved prostate cancer prognostication.
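The core image-to-image idea can be sketched as follows: a generator maps a fluorescent H&E-analog volume to a synthetic IF channel from which glands are segmented. The tiny encoder-decoder below is a placeholder for the published architecture, and the adversarial training loop (with a discriminator) is omitted.

```python
import torch
import torch.nn as nn

class SyntheticIFGenerator(nn.Module):
    """Tiny pix2pix-style encoder-decoder on 3D patches; a placeholder,
    not the published network."""
    def __init__(self, in_ch=2, out_ch=1):             # e.g., two fluorescent H&E-analog channels in
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (B, in_ch, D, H, W)
        return self.net(x)                             # synthetic IF intensities in [0, 1]

volume = torch.randn(1, 2, 32, 64, 64)                 # toy fluorescent H&E-analog patch
synthetic_if = SyntheticIFGenerator()(volume)          # same spatial size, one IF channel
gland_mask = (synthetic_if > 0.5).float()              # threshold into a gland segmentation
```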
Glandular features play an important role in the evaluation of prostate cancer. There has been significant interest in the use of 2D pathomics (feature extraction) approaches for the detection, diagnosis, and characterization of prostate cancer on digitized tissue slide images. With the development of 3D microscopy techniques, such as open-top light-sheet (OTLS) microscopy, there is an opportunity for rapid 3D imaging of large tissue specimens such as whole biopsies. In this study, we sought to investigate whether 3D features of gland morphology (volume and surface curvature) from OTLS images offer superior discrimination between malignant and benign glands compared with the traditional 2D gland features (area and curvature) alone. A cohort of 8 de-identified fresh prostate biopsies was comprehensively imaged in 3D via the OTLS platform. A total of 367 glands were segmented from these images, of which 79 were identified as benign and 288 as malignant. Glands were segmented using a 3D watershed algorithm followed by post-processing steps to filter out false-positive regions. The 2D and 3D features were compared quantitatively and qualitatively. Our experiments demonstrated that a model using 3D features outperformed one using 2D features in differentiating benign from malignant glands. In 3D, both features, gland volume (p = 1.45 × 10⁻³) and surface curvature (p = 3.2 × 10⁻³), were found to be informative, whereas in 2D, only gland area (p = 9 × 10⁻¹⁸) was found to be discriminating (p = 0.79 for 2D curvature). Notable visual and quantitative differences between benign and malignant glands in 3D encourage the development of additional, more sophisticated features in the future.
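A minimal sketch of the 3D watershed segmentation and the 2D-vs-3D feature comparison is shown below; the marker-detection parameters and the surface-area-to-volume ratio used as a crude curvature proxy are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops, marching_cubes, mesh_surface_area

def segment_glands_3d(binary_volume):
    """Watershed on the distance transform to split touching glands."""
    dist = ndimage.distance_transform_edt(binary_volume)
    peaks = peak_local_max(dist, min_distance=5, labels=binary_volume)
    markers = np.zeros(binary_volume.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=binary_volume)

def gland_features(labels_3d):
    """Per-gland 3D volume and a curvature proxy, plus the 2D area of the
    central slice as the 2D analog used for comparison."""
    mid = labels_3d.shape[0] // 2
    feats = []
    for region in regionprops(labels_3d):
        mask = labels_3d == region.label
        verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5)
        feats.append({
            "volume_3d": int(region.area),                          # voxel count
            "surface_to_volume": mesh_surface_area(verts, faces) / region.area,
            "area_2d": int(mask[mid].sum()),                        # central-slice area
        })
    return feats

vol = np.zeros((40, 40, 40), dtype=bool)
vol[10:26, 5:20, 5:20] = True                          # two toy "glands"
vol[14:30, 22:36, 22:36] = True
print(gland_features(segment_glands_3d(vol)))
```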