Prostate cancer (PCa) is one of the most frequent cancers in men. Grading is required before initiating treatment. The Gleason Score (GS) aims at describing and measuring the regularity of gland patterns observed by a pathologist on microscopic or digital images of prostate biopsies and prostatectomies. Deep learning (DL) based models are the state-of-the-art computer vision techniques for Gleason grading, learning high-level features with high classification power. However, obtaining robust models with clinical-grade performance requires a large number of local annotations. Previous research showed that it is feasible to detect low- and high-grade PCa from digitized tissue slides relying only on the less expensive report-level (weakly supervised) labels, i.e., global rather than local labels. Despite this, few articles focus on classifying the finer-grained GS classes with weakly supervised models. The objective of this paper is to compare weakly supervised strategies for classification of the five classes of the GS from the whole slide image, using the global diagnostic label from the pathology reports as the only source of supervision. We compare different models trained on handcrafted features, shallow and deep learning representations. The training and evaluation are done on the publicly available TCGA-PRAD dataset, comprising 341 whole slide images of radical prostatectomies, where small patches are extracted within tissue areas and assigned the global report label as ground truth. Our results show that DL networks and class-wise data augmentation outperform other strategies and their combinations, reaching a kappa score of κ = 0.44, which could be further improved with a larger dataset or by combining strongly and weakly supervised models.
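As a rough illustration of the weak-labelling and evaluation scheme described above, the sketch below assigns the slide-level report label to every extracted patch and scores slide-level predictions, recovered here by a majority vote over patch predictions, with Cohen's quadratic-weighted kappa. The function names and the majority-vote aggregation are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of weak (report-level) supervision and kappa-based evaluation.
from collections import Counter
from sklearn.metrics import cohen_kappa_score

def assign_weak_labels(patches_per_slide, report_labels):
    """Give each patch the global report label of the slide it came from."""
    return [(patch, report_labels[slide_id])
            for slide_id, patches in patches_per_slide.items()
            for patch in patches]

def slide_prediction(patch_predictions):
    """Aggregate patch-level class predictions into one slide-level class."""
    return Counter(patch_predictions).most_common(1)[0][0]

def evaluate(slide_true, slide_pred):
    """Quadratic-weighted Cohen's kappa, the metric reported in the abstract."""
    return cohen_kappa_score(slide_true, slide_pred, weights="quadratic")

if __name__ == "__main__":
    # Toy example with three slides and five Gleason-group classes (0-4).
    y_true = [0, 2, 4]
    y_pred = [slide_prediction(p) for p in ([0, 0, 1], [2, 3, 2], [4, 4, 2])]
    print(evaluate(y_true, y_pred))  # 1.0 for this toy case
```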
The overall lower survival rate of patients with rare cancers can be explained, among other factors, by the limitations resulting from the scarce available information about them. Large biomedical data repositories, such as PubMed Central Open Access (PMC-OA), have been made freely available to the scientific community and could be exploited to advance the clinical assessment of these diseases. A multimodal approach using visual deep learning and natural language processing methods was developed to mine out 15,028 light microscopy human rare cancer images. The resulting data set is expected to foster the development of novel clinical research in this field and help researchers to build resources for machine learning.
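A very reduced sketch of the two-stage mining idea is given below: figure captions are first screened for rare-cancer terms (standing in for the NLP component), and candidate figures are kept only if a visual check accepts them as light microscopy. The keyword list and the crude stain-color heuristic are placeholders for illustration only; the paper uses trained deep models and a much richer vocabulary.

```python
# Illustrative two-stage filter over (image_path, caption) pairs from PMC-OA.
import re
import numpy as np
from PIL import Image

RARE_CANCER_TERMS = re.compile(r"\b(chordoma|mesothelioma|sarcoma)\b", re.I)

def caption_matches(caption: str) -> bool:
    """Text stage, simplified here to keyword matching on the figure caption."""
    return bool(RARE_CANCER_TERMS.search(caption))

def is_light_microscopy(image_path: str) -> bool:
    """Visual stage, reduced to a crude H&E-like color heuristic; the paper
    relies on a trained deep modality classifier instead."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float) / 255
    r, g, b = rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()
    # Bright images with a pink/purple cast: red and blue dominate green.
    return r > 0.5 and b > 0.4 and g < min(r, b)

def mine_figures(figures):
    """Yield image paths that pass both the caption and the visual filter."""
    for path, caption in figures:
        if caption_matches(caption) and is_light_microscopy(path):
            yield path
```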
Grading whole slide images (WSIs) from patient tissue samples is an important task in digital pathology, particularly for diagnosis and treatment planning. However, this visual inspection task, performed by pathologists, is inherently subjective and has limited reproducibility. Moreover, grading of WSIs is time consuming and expensive. Designing a robust and automatic solution for quantitative decision support can improve the objectivity and reproducibility of this task. This paper presents a fully automatic pipeline for tumor proliferation assessment based on mitosis counting. The approach consists of three steps: i) region-of-interest (ROI) selection based on tumor color characteristics, ii) mitosis counting using a deep-network-based detector, and iii) grade prediction from ROI mitosis counts. The full strategy was submitted and evaluated during the Tumor Proliferation Assessment Challenge (TUPAC) 2016. TUPAC is the first digital pathology challenge on grading whole slide images, thus mimicking a real case scenario more closely. The pipeline is extremely fast and obtained 2nd place in the tumor proliferation assessment task and 3rd place in the mitosis counting task among 17 participants. The performance of this fully automatic method is similar to that of pathologists, showing the high quality of automatic solutions for decision support.
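The following sketch mirrors the three-step structure of the pipeline summarised above. The ROI step is reduced to a simple background/tissue color threshold, the mitosis detector is a placeholder for the deep network used in the paper, and the count-to-grade cut-offs are stated here only for illustration.

```python
# Hedged three-step sketch: ROI selection -> mitosis counting -> grade.
import numpy as np

def select_roi(wsi_rgb: np.ndarray, tile: int = 512):
    """Step i: keep tiles whose color suggests stained tumor tissue
    (non-white), as a stand-in for the color-based ROI selection."""
    h, w, _ = wsi_rgb.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = wsi_rgb[y:y + tile, x:x + tile]
            if patch.mean() < 200:  # not mostly white background
                yield patch

def count_mitoses(patch: np.ndarray) -> int:
    """Step ii: placeholder for the deep-network mitosis detector."""
    return 0  # replace with detector inference on the patch

def predict_grade(total_mitoses: int) -> int:
    """Step iii: map the mitosis count to a proliferation grade
    (cut-offs chosen for illustration only)."""
    if total_mitoses <= 7:
        return 1
    if total_mitoses <= 14:
        return 2
    return 3

def grade_slide(wsi_rgb: np.ndarray) -> int:
    return predict_grade(sum(count_mitoses(p) for p in select_roi(wsi_rgb)))
```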
The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the large amounts of pathology data generated daily. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions of interest rather than from manually segmented ones. Relevant parameters for training the deep learning models, such as the size and number of patches and the inclusion or exclusion of data augmentation, are compared between the tested architectures. 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA data set were used. An accuracy of 78% was obtained on a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks on big data sets and to guide the visual inspection of these images.
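As an illustration of how patch-level scores could be arranged into the heatmaps mentioned above, the sketch below slides a patch grid over the WSI and records the predicted probability of the high-Gleason class per position. It is not the authors' code; `model` is assumed to expose a predict_proba-style call returning that probability for each patch, and the background threshold and decision rule are assumptions.

```python
# Illustrative heatmap construction for a 2-class (high vs. low grade) model.
import numpy as np

def heatmap_from_patches(wsi_rgb, model, patch=256, stride=256):
    """Store P(high grade) for every patch position on a coarse grid."""
    h, w, _ = wsi_rgb.shape
    rows, cols = (h - patch) // stride + 1, (w - patch) // stride + 1
    heat = np.zeros((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            tile = wsi_rgb[i * stride:i * stride + patch,
                           j * stride:j * stride + patch]
            if tile.mean() < 220:  # skip mostly-white background tiles
                heat[i, j] = model.predict_proba(tile[None])[0, 1]
    return heat

def slide_decision(heat, threshold=0.5):
    """Binary high vs. low Gleason call from the mean foreground probability."""
    scored = heat[heat > 0]
    return int(scored.mean() > threshold) if scored.size else 0
```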
Medical images contain a large amount of visual information about structures and anomalies in the human body. To make sense of this information, human interpretation is often essential. On the other hand, computer-based approaches can exploit the information contained in the images by numerically measuring and quantifying specific visual features. Annotation of organs and other anatomical regions is an important step before computing numerical features on medical images. In this paper, a texture-based organ classification algorithm is presented, which can be used to reduce the time required for annotating medical images. The texture of organs is analyzed using a combination of state-of-the-art techniques: the Riesz transform and a bag of meaningful visual words. The meaningfulness transformation in the visual word space yields two important advantages that can be seen in the results: the number of descriptors is reduced to 10% of the original size, whereas classification accuracy is improved by up to 25% with respect to the baseline approach.
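The sketch below follows the usual bag-of-visual-words recipe in the spirit of this approach: dense local descriptors, a k-means codebook, per-image word histograms, and a linear classifier. For brevity, the Riesz-transform texture descriptors are replaced by plain raw-pixel patch descriptors, and the meaningfulness transformation is omitted; the function names are illustrative.

```python
# Bag-of-visual-words sketch on grayscale images (2D numpy arrays).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def local_descriptors(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Dense grid of small patches flattened into descriptor vectors
    (stand-in for Riesz-transform texture descriptors)."""
    h, w = image.shape
    return np.array([image[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, size)
                     for x in range(0, w - size + 1, size)])

def bovw_histogram(descriptors: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Normalised histogram of visual-word assignments for one image."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def train(images, labels, n_words: int = 50):
    """Fit the codebook on all descriptors, then a linear SVM on histograms."""
    all_desc = np.vstack([local_descriptors(im) for im in images])
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    features = [bovw_histogram(local_descriptors(im), codebook) for im in images]
    return codebook, SVC(kernel="linear").fit(features, labels)
```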