Cystoscopy is the standard procedure for the clinical diagnosis of bladder cancer. Bladder carcinoma in situ is often multifocal and spreads over large areas. In vivo localization and follow-up of these tumors and their surrounding regions are necessary. However, due to the small field of view (FOV) of cystoscopic video images, urologists cannot easily interpret the scene. Bladder mosaicing based on image registration facilitates this interpretation by visualizing entire lesions with respect to anatomical landmarks. The reference white-light (WL) modality is affected by strong variability in texture, illumination conditions, and motion blur. Moreover, in the complementary fluorescence-light (FL) modality, the texture differs visually from that of WL. Existing algorithms were developed for a particular modality and scene conditions. This paper proposes a more general, on-the-fly image registration approach for dealing with these sources of variability in cystoscopy. To do so, we present a novel, robust, and accurate image registration scheme that redefines the data term of the classical total variation (TV) approach. Quantitative results on realistic bladder phantom images verify the accuracy and robustness of the proposed model. The method is also qualitatively assessed by mosaicing patient data in both the WL and FL modalities.
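The TV approach referenced above minimizes an energy consisting of a data term plus a total-variation regularizer on the displacement field. A minimal numerical sketch of such an energy is shown below; note that the plain sum-of-squared-differences (SSD) data term and the weight `lam` are illustrative stand-ins, since the abstract's point is precisely that the data term is redefined to cope with texture and illumination variability.

```python
import numpy as np

def tv_registration_energy(fixed, moving_warped, u, lam=0.1):
    """Energy of a TV-regularized registration (illustrative sketch only).

    fixed, moving_warped : 2-D float arrays (moving image already warped by u)
    u                    : displacement field, shape (2, H, W)
    lam                  : hypothetical weight of the TV regularizer
    """
    # Placeholder SSD data term; the paper replaces this with a more
    # robust formulation suited to cystoscopic variability.
    data = np.sum((moving_warped - fixed) ** 2)
    # Isotropic total variation of each displacement component:
    # the sum of per-pixel gradient magnitudes.
    tv = 0.0
    for comp in u:
        gy, gx = np.gradient(comp)
        tv += np.sum(np.sqrt(gx ** 2 + gy ** 2 + 1e-12))
    return data + lam * tv
```

Minimizing this energy over `u` (e.g., by gradient descent) favors displacement fields that align the images while remaining piecewise smooth.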
Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and appear as small, round, dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low cost and versatility. In this paper, MA detection is modeled as finding interest points in a given image, and several interest-point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique based on the Singular Value Decomposition (SVD) of the image. A Hessian-based candidate selection algorithm is then applied to extract image regions that are likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the intensity-normalized Radon transform are extracted to characterize it. The combined features are then classified with an SVM trained on ten manually annotated images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show that the proposed candidate selection technique is competitive with state-of-the-art methods and that the proposed descriptors are promising for localizing MAs in fundus images.
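The abstract does not specify how the singular values are manipulated, but the general idea of SVD-based contrast enhancement can be sketched as follows: the largest singular value mostly encodes global luminance, so amplifying the remaining singular values relative to it stretches local contrast. The `gain` parameter here is purely illustrative, not the authors' formulation.

```python
import numpy as np

def svd_contrast_enhance(img, gain=1.2):
    """Illustrative SVD-based contrast adjustment (not the paper's exact
    method). Boosts the detail-carrying singular values relative to the
    dominant one, then rescales the result to [0, 1]."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    s = s.copy()
    s[1:] *= gain                      # amplify local-detail components
    out = U @ np.diag(s) @ Vt
    out -= out.min()                   # rescale back to [0, 1]
    rng = out.max()
    return out / rng if rng > 0 else out
```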
Diabetic macular edema (DME), characterized by discrete white-yellow lipid deposits due to
vascular leakage, is one of the most severe complications seen in diabetic patients and causes
vision loss in the affected areas. Such vascular leakage can be treated by laser surgery. Regular
follow-up and laser photocoagulation can reduce the risk of blindness by 90%. In an automated
retina screening system, it is therefore crucial to segment such hard exudates accurately
and to register images taken over time to a reference coordinate system so that the
necessary follow-ups are more precise. We introduce a novel method of an ethnicity-based statistical
atlas for exudate segmentation and follow-up. Ethnic background plays a significant role in
the retinal pigment epithelium, the visibility of the choroidal vasculature, and the overall retinal luminance in
patients and retinal images. Such a statistical atlas can thus help to provide a solution, simplify the
image processing steps, and increase the detection rate. In this paper, bright lesion segmentation
is investigated and experimentally verified against a gold standard built from African American
fundus images.
Forty automatically generated landmark points on the major vessel arches, together with the macula and
optic disk centers, are used to warp the retinal images. PCA is used to obtain a mean shape of the
retinal major arches (both lower and upper). The means of the coordinates of the macula and
optic disk centers are added, resulting in 42 landmark points that together provide a reference
coordinate frame (the atlas coordinate frame) for the images. The retinal fundus images of an
ethnic group without any artifact or lesion are warped to this reference coordinate frame, from
which we obtain a mean image representing the statistical measure of the chromatic distribution
of the pigments in the eyes of that particular ethnic group.
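The atlas construction above averages corresponding landmarks across images and warps each image onto the resulting reference frame. A minimal sketch of the landmark side of this pipeline is given below, with a least-squares affine map used as a simple stand-in for the actual warping model (which the text does not fully specify):

```python
import numpy as np

def mean_shape(landmark_sets):
    """Mean of corresponding landmark coordinates across images.
    landmark_sets: array of shape (n_images, n_points, 2)."""
    return np.mean(landmark_sets, axis=0)

def warp_to_atlas(points, atlas_points):
    """Map `points` onto the atlas landmarks with a least-squares affine
    transform (an illustrative stand-in for the paper's warping)."""
    n = len(points)
    A = np.hstack([points, np.ones((n, 1))])        # homogeneous coords, (n, 3)
    # Solve A @ M ~= atlas_points for the 3x2 affine parameter matrix M.
    M, *_ = np.linalg.lstsq(A, atlas_points, rcond=None)
    return A @ M                                    # warped landmark positions
```

Averaging many lesion-free images warped this way yields the mean (atlas) image against which test images are later compared.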
400 images of African American eyes have been used to build such a gold standard for this ethnic
group. Any test image of a patient of that ethnic group is first warped to the reference frame,
and a distance map with respect to the mean image is then obtained. Finally, post-processing schemes
are applied to the distance map image to enhance the edges of the exudates. Multi-scale and
multi-directional steerable filters combined with the Kirsch edge detector were found to be promising.
Experiments on the publicly available HEI-MED dataset showed the good performance of the
proposed method. We achieved a lesion localization fraction (LLF) of 82.5% at a 35%
non-lesion localization fraction (NLF) on the FROC curve.
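The Kirsch edge detector used in the post-processing step correlates the image with eight compass-rotated kernels and keeps the maximum response per pixel. A self-contained sketch (pure NumPy, edge-padded borders) is:

```python
import numpy as np

def kirsch_edges(img):
    """Kirsch compass edge detector: correlate with 8 rotated kernels
    and keep the maximum response at each pixel."""
    base = np.array([[5, 5, 5],
                     [-3, 0, -3],
                     [-3, -3, -3]], dtype=float)
    # Positions of the 3x3 outer ring, traversed clockwise.
    ring_idx = [(0, 0), (0, 1), (0, 2), (1, 2),
                (2, 2), (2, 1), (2, 0), (1, 0)]
    ring = [base[i] for i in ring_idx]
    # The 8 compass kernels are rotations of the outer ring values.
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring_idx, np.roll(ring, shift)):
            k[r, c] = v
        kernels.append(k)
    padded = np.pad(img.astype(float), 1, mode='edge')
    H, W = img.shape
    best = np.full((H, W), -np.inf)
    for k in kernels:
        resp = np.zeros((H, W))
        for dr in range(3):
            for dc in range(3):
                resp += k[dr, dc] * padded[dr:dr + H, dc:dc + W]
        best = np.maximum(best, resp)
    return best
```

Because each kernel's weights sum to zero, flat regions give zero response, while step edges produce strong positive maxima in the direction of the matching compass kernel.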
Comparing several series of images is not always easy, as the corresponding slices often need
to be selected manually. Now that series contain an ever-increasing number of slices, this
can mean substantial manual work when moving several series to the corresponding slice. Two
situations in particular were identified in this context: (1) patients with a large number of image series over
time (such as patients with cancers that are monitored) frequently require comparison of the series,
for example to assess tumor growth over time. Manually aligning two series is possible, but
with four or more series this means losing time. Automatically retrieving the closest slice
by comparing visual similarity, even in older series with differing slice thickness and inter-slice
distance, can save time and synchronize the viewing instantly. (2) Analyzing visually similar
image series of several patients can profit from being viewed in a synchronized way to compare
the cases, so that when scrolling through the slices of one volume, the corresponding slices in the other
volumes are shown. This application could be employed, for example, after content-based 3D image retrieval
has found similar series. Synchronized viewing can help find or confirm the
most relevant cases quickly.
To allow for synchronized viewing of several image volumes, the test image series are first
registered with an affine transformation for global alignment, followed by diffeomorphic
image registration. Corresponding slices in the two volumes are then estimated based
on visual similarity. Once the registration is finished, the user can move through
the slices of one volume (the reference volume) and view the corresponding slices in the other
volumes. These corresponding slices are obtained from a correspondence match in the registration
procedure. The volumes are synchronized in that the slice closest to the reference
slice is shown even when the slice thicknesses or inter-slice distances differ, and this is done automatically
by comparing the visual image content of the slices. The tool has the potential to
help in a variety of situations and is currently being made available as a plugin for the popular
OsiriX image viewer.
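The slice-correspondence step can be illustrated with a small sketch: for a given reference slice, pick the slice in another (already registered) volume that maximizes a visual-similarity score. Normalized cross-correlation is used here as a plausible stand-in, since the text does not name the exact similarity measure.

```python
import numpy as np

def best_matching_slice(ref_slice, volume):
    """Index of the slice in `volume` most visually similar to
    `ref_slice`, scored by normalized cross-correlation (an assumed
    stand-in for the paper's similarity measure)."""
    a = ref_slice - ref_slice.mean()
    na = np.linalg.norm(a)
    best_idx, best_score = 0, -np.inf
    for i, sl in enumerate(volume):
        b = sl - sl.mean()
        nb = np.linalg.norm(b)
        # Zero-variance slices get a neutral score of 0.
        score = 0.0 if na == 0 or nb == 0 else float((a * b).sum() / (na * nb))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Because the score is computed on image content rather than slice indices, the lookup stays valid even when the two volumes have different slice thicknesses or inter-slice distances.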