Analysis of retinal fundus images is essential for physicians, optometrists and ophthalmologists in the diagnosis, care and treatment of patients. Almost all forms of automated fundus analysis begin with the segmentation and subtraction of the retinal vasculature, while analysis of that same structure can aid in the diagnosis of certain retinal and cardiovascular conditions, such as diabetes or stroke. This paper investigates the use of a Convolutional Neural Network (CNN) as a multi-channel classifier of retinal vessels using DRIVE, a database of fundus images. The result of the network with the application of a confidence threshold was slightly below the second observer and gold standard, with an accuracy of 0.9419 and an ROC score of 0.9707. The output of the network with no post-processing achieved the highest sensitivity found in the literature, with a score of 0.9568, and a good ROC score of 0.9689. The high sensitivity of the system makes it suitable for longitudinal morphology assessments, disease detection and other similar tasks.
The segmentation of retinal morphology has numerous applications in assessing ophthalmologic and cardiovascular disease pathologies. The early detection of many such conditions is often the most effective method for reducing patient risk. Computer-aided segmentation of the vasculature has proven to be a challenge, mainly due to inconsistencies such as noise and variations in hue and brightness that can greatly reduce the quality of fundus images. Accurate fundus and/or retinal vessel maps give rise to longitudinal studies able to utilize multimodal image registration and disease/condition status measurements, as well as applications in surgery preparation and biometrics. This paper further investigates the use of a Convolutional Neural Network (CNN) as a multi-channel classifier of retinal vessels using the Digital Retinal Images for Vessel Extraction (DRIVE) database, a standardized set of fundus images used to gauge the effectiveness of classification algorithms. The CNN has a feed-forward architecture and differs from other published architectures in its combination of max-pooling, zero-padding, ReLU layers, batch normalization, two dense layers and a final Softmax activation function. Notably, the use of the Adam optimizer to train a CNN on retinal fundus images was not found in the prior literature. This work builds on the authors' prior work, exploring the use of Gabor filters to boost the accuracy of the system to 0.9478 during post-processing. The mean of a series of Gabor filters with varying frequencies and sigma values is applied to the output of the network and used to determine whether a pixel represents a vessel or non-vessel.
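The abstract describes the network only at the level of its layer types, so the following is a minimal sketch of a patch-based vessel/non-vessel classifier in that spirit; the patch size, filter counts, kernel sizes, dense-layer widths and Gabor frequencies are all assumptions not stated in the text, and the code is illustrative rather than the authors' implementation.

```python
# Hypothetical sketch of a patch-based vessel classifier in the spirit of the
# architecture described above (max-pooling, zero-padding, ReLU, batch
# normalization, two dense layers, Softmax, trained with Adam). All layer
# sizes and Gabor parameters are assumed, not taken from the paper.
import numpy as np
import tensorflow as tf
from skimage.filters import gabor

def build_vessel_classifier(patch_size=31, channels=3):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(patch_size, patch_size, channels)),
        tf.keras.layers.ZeroPadding2D(padding=1),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),  # vessel / non-vessel
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

def gabor_mean(prob_map, frequencies=(0.1, 0.2, 0.3)):
    """Average the real responses of several Gabor filters over the network's
    probability map, as in the post-processing step described above
    (frequencies chosen here purely for illustration)."""
    responses = [gabor(prob_map, frequency=f)[0] for f in frequencies]
    return np.mean(responses, axis=0)
```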
To register three or more images together, current approaches involve registering them two at a time. This pairwise approach can lead to registration inconsistencies. It can also result in diminished accuracy because only a fraction of the total data is being used at any given time. We propose a registration method that simultaneously registers the entire ensemble of images. This ensemble registration of multi-sensor datasets is done using clustering in the joint intensity space. Experiments demonstrate that the ensemble registration method overcomes serious issues that hinder pairwise multi-sensor registration methods.
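The abstract does not give the cost function or the optimizer, so the sketch below only illustrates the general idea of scoring an ensemble alignment by how tightly the joint intensity vectors cluster; integer translations, the cluster count and k-means are placeholder assumptions, not the authors' formulation.

```python
# Illustrative sketch only: score an ensemble alignment by how compactly the
# joint intensity vectors of all images cluster. Translational motion, the
# number of clusters and k-means itself are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def ensemble_cost(images, shifts, n_clusters=8):
    """images: list of N equally sized 2-D arrays; shifts: list of N (dy, dx)
    integer translations. Returns the within-cluster sum of squares of the
    joint intensity vectors (lower = tighter clustering = better alignment)."""
    aligned = [np.roll(im, s, axis=(0, 1)) for im, s in zip(images, shifts)]
    joint = np.stack([im.ravel() for im in aligned], axis=1)  # one row per pixel
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(joint)
    return km.inertia_

# A search over the shifts of all images simultaneously would then minimize
# ensemble_cost, rather than registering image pairs one at a time.
```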
In this paper, we investigate the use of the Stockwell Transform for image compression. The proposed technique uses the Discrete Orthogonal Stockwell Transform (DOST), an orthogonal version of the Discrete Stockwell Transform (DST). These mathematical transforms provide a multiresolution spatial-frequency representation of a signal or image.
First, we give a brief introduction to the Stockwell transform and the DOST. Then we outline a simple compression method based on setting the smallest coefficients to zero. In an experiment, we use this compression strategy on three different transforms: the fast Fourier transform, the Daubechies wavelet transform and the DOST. The results show that the DOST outperforms the two other methods.
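A DOST routine is not widely available in standard libraries, so the sketch below demonstrates the same keep-the-largest-coefficients strategy using the 2-D fast Fourier transform (one of the transforms compared in the experiment) as a stand-in; the keep fraction and the use of PSNR as a quality measure are assumptions.

```python
# Sketch of the thresholding strategy described above, applied to the 2-D FFT
# as a stand-in transform. The keep-fraction and the PSNR quality metric are
# illustrative choices, not values from the paper.
import numpy as np

def compress_by_thresholding(image, keep_fraction=0.05):
    """Zero all but the largest-magnitude transform coefficients and invert."""
    coeffs = np.fft.fft2(image)
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]        # k-th largest magnitude
    coeffs[np.abs(coeffs) < threshold] = 0
    return np.real(np.fft.ifft2(coeffs))

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio of the reconstruction, in dB."""
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(original.max() ** 2 / mse)
```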
It has been shown that the presence of a blood oxygen level dependent (BOLD) signal in high-field (3T and higher) fMRI datasets can cause stimulus-correlated registration errors, especially when using a least-squares registration method. These errors can result in systematic inaccuracies in activation detection. The authors have recently proposed a new method to solve both the registration and activation detection least-squares problems simultaneously. This paper gives an outline of the new method and demonstrates its robustness on simulated fMRI datasets containing various combinations of motion and activation. In addition to a discussion of the merits of the method and details on how it can be efficiently implemented, it is shown that, compared to the standard approach, the new method consistently reduces false-positive activations by two-thirds and false-negative activations by one-third.
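The joint formulation itself is the authors' contribution and is not specified in the abstract, so the sketch below shows only the standard per-voxel least-squares activation model that such methods build on; the regressors in the design matrix are assumptions for illustration.

```python
# Standard per-voxel least-squares activation detection (the baseline that the
# joint registration/activation method is compared against). The design-matrix
# contents here are illustrative assumptions.
import numpy as np

def detect_activation(voxel_ts, stimulus, drift_order=1):
    """voxel_ts: time series for one voxel; stimulus: expected BOLD regressor.
    Returns the estimated activation amplitude from an ordinary least-squares
    fit of stimulus + low-order polynomial drift + intercept."""
    t = np.arange(len(voxel_ts))
    drift = np.vander(t / t.max(), drift_order + 1)  # drift terms + constant
    X = np.column_stack([stimulus, drift])           # design matrix
    beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return beta[0]                                   # stimulus coefficient
```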
Other researchers have proposed that the brain parenchymal fraction (or brain atrophy) may be a good surrogate measure for disease progression in patients with Multiple Sclerosis. This paper considers various factors influencing the measure of the brain parenchymal fraction obtained from dual spin-echo PD and T2-weighted head MRI scans. We investigate the robustness of the brain parenchymal fraction with respect to two factors: brain-mask border placement which determines the brain intra-dural volume, and brain scan incompleteness. We show that an automatic method for brain segmentation produces an atrophy measure which is fairly sensitive to the brain-mask placement. We also show that a robust, reproducible brain atrophy measure can be obtained from incomplete brain scans, using data in a centrally placed subvolume of the brain.
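For reference, the brain parenchymal fraction is commonly defined as the parenchymal volume divided by the total intradural volume; assuming that definition, a minimal sketch of computing it from binary masks follows, with the central-subvolume slice range assumed as an illustration of the incomplete-scan strategy described above.

```python
# Minimal sketch: brain parenchymal fraction (BPF) from binary masks, assuming
# the usual definition BPF = parenchymal volume / intradural volume. The
# central-subvolume fraction is an assumed illustration, not the paper's value.
import numpy as np

def brain_parenchymal_fraction(parenchyma_mask, intradural_mask, voxel_volume=1.0):
    """Both masks are boolean 3-D arrays on the same voxel grid."""
    parenchymal_vol = parenchyma_mask.sum() * voxel_volume
    intradural_vol = intradural_mask.sum() * voxel_volume
    return parenchymal_vol / intradural_vol

def central_subvolume_bpf(parenchyma_mask, intradural_mask, keep=0.6):
    """Recompute the BPF using only a centrally placed block of axial slices."""
    n = parenchyma_mask.shape[0]
    lo, hi = int(n * (1 - keep) / 2), int(n * (1 + keep) / 2)
    return brain_parenchymal_fraction(parenchyma_mask[lo:hi],
                                      intradural_mask[lo:hi])
```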
This paper looks at the difficulties that can confound published T1-weighted Magnetic Resonance Imaging (MRI) brain segmentation methods, and compares their strengths and weaknesses. Using data from the Internet Brain Segmentation Repository (IBSR) as a gold standard, we ran three different segmentation methods with and without correcting for intensity inhomogeneity. We then calculated the similarity index between the brain masks produced by the segmentation methods and the mask provided by the IBSR. The intensity histograms under the segmented masks were also analyzed to see whether a bi-Gaussian model could be fitted to T1 brain data. Contrary to our initial beliefs, our study found that intensity-based T1-weighted segmentation methods were comparable to, or even superior to, methods utilizing spatial information. All methods appear to have parameters that need adjustment depending on the data set used. Furthermore, it seems that the intensity inhomogeneity corrections we tested did not improve the segmentations, due to the nature of the IBSR data set.
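The similarity index between two binary masks is commonly the Dice coefficient, 2|A∩B| / (|A| + |B|); assuming that is the measure meant here, a minimal sketch follows.

```python
# Minimal sketch of the similarity index between two binary brain masks,
# assuming the common Dice-coefficient definition 2|A∩B| / (|A| + |B|).
import numpy as np

def similarity_index(mask_a, mask_b):
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```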