In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make the images more suitable for visual inspection and diagnosis by a physician. This results in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what advantages a non-linear, edge-preserving postfilter could offer for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter, and the Catté filtering scheme.
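As a concrete illustration of the edge-preserving class of postfilters compared in this study, the following is a minimal Python/NumPy sketch of one Perona-Malik anisotropic diffusion scheme; the conductance parameter, step size, wrap-around boundary handling and iteration count are illustrative assumptions, not the parameter values optimized in the study.

```python
import numpy as np

def perona_malik(image, n_iter=10, kappa=0.05, step=0.2):
    """Minimal 2D Perona-Malik anisotropic diffusion sketch.

    kappa : conductance (edge threshold) parameter, illustrative value
    step  : integration step (<= 0.25 for stability in 2D)
    """
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences towards the four nearest neighbours (wrap-around borders).
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        # Perona-Malik conduction coefficient g(|grad|) = exp(-(|grad|/kappa)^2)
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        # Explicit diffusion update; diffusion is suppressed across strong edges.
        u += step * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u
```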
The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated from a single noisy background simulation using the bootstrap method, which significantly reduced the simulation time and allowed for a larger observer data set. Lesions were simulated separately and added to the background afterwards. The resulting data were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence.
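The bootstrap step can be sketched as follows: events from the single simulated background list are resampled with replacement and re-histogrammed to form an independent noise realization. The event list, count level and projection size below are hypothetical placeholders.

```python
import numpy as np

def bootstrap_projection(event_bins, n_counts, shape, rng):
    """Draw one bootstrap noise realization from a single simulated event list.

    event_bins : 1D array of projection-bin indices, one entry per detected event
    n_counts   : number of events to resample (with replacement)
    shape      : shape of the projection array to histogram into
    """
    resampled = rng.choice(event_bins, size=n_counts, replace=True)
    proj = np.bincount(resampled, minlength=np.prod(shape))
    return proj.reshape(shape)

rng = np.random.default_rng(0)
events = rng.integers(0, 64 * 64, size=500_000)       # hypothetical simulated event list
realization = bootstrap_projection(events, 500_000, (64, 64), rng)
```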
The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter.
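A minimal sketch of how a simplex (Nelder-Mead) search can be driven by a numerical-observer score; `observer_score` and its toy surrogate figure of merit stand in for the study's actual observer model and filter parameters.

```python
import numpy as np
from scipy.optimize import minimize

def observer_score(params):
    """Hypothetical numerical-observer figure of merit (higher is better)
    for a filter parameterized by `params`; replace with the real observer."""
    sigma, edge_strength = params
    return -((sigma - 1.2) ** 2 + (edge_strength - 0.3) ** 2)  # toy surrogate

# The simplex search maximizes the observer output by minimizing its negative.
result = minimize(lambda p: -observer_score(p),
                  x0=np.array([1.0, 0.5]),
                  method="Nelder-Mead")
print(result.x)   # estimated optimal filter parameters
```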
No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
The main goal of this work is to assess the overall imaging performance of dedicated new solid state devices compared to a traditional scintillation camera for use in SPECT imaging. A solid state detector with a rotating slat collimator will be compared with the same detector mounted with a classical collimator, as opposed to a traditional Anger camera. The solid state materials are characterized by a better energy resolution, while the rotating slat collimator promises a better sensitivity-resolution tradeoff. The evaluation of the different imaging modalities is done using GATE, a recently developed Monte Carlo code. Several imaging performance features were addressed: spatial resolution, energy resolution, and sensitivity; in addition, an ROC analysis was performed to evaluate hot-spot detectability. In this way a difference in performance between the diverse imaging techniques could be established, which allows a task-dependent application of these modalities in future clinical practice.
In this paper, we describe a theoretical model of the spatial uncertainty of a line of response (LOR), due to the imperfect localization of events on the detector heads of a Positron Emission Tomography (PET) camera. We assume a Gaussian distribution of the position of interaction on a detector head, centered at the measured position. The probability that an event originates from a certain point in the FOV is calculated by integrating over all possible LORs through this point, weighted with the Gaussian probability of detection at the LOR's end points. We have calculated these probabilities both for perpendicular and for oblique coincidences. For the oblique coincidence case it was necessary to incorporate the effect of the crystal thickness in the calculations. We found that the probability function cannot be expressed analytically in a closed form, and it was thus calculated by means of numerical integration. A Gaussian was fitted to the probability profiles for a given distance to the detectors. From these fits, we can conclude that the profiles can be accurately approximated by a Gaussian, both for perpendicular and for oblique coincidences. The FWHM reaches a maximum at the detector heads and decreases towards the center of the FOV, as was expected.
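The numerical procedure can be sketched for the perpendicular-incidence case as follows: for a given depth in the FOV, the Gaussian-weighted contributions of all LORs through a transverse position are integrated numerically, and a Gaussian is then fitted to the resulting profile. The detector separation and position uncertainty below are assumed example values, not the camera's actual specifications.

```python
import numpy as np
from scipy.optimize import curve_fit

sigma = 2.0          # assumed detector position uncertainty (mm)
D = 600.0            # assumed head separation (mm)

def profile(x, z, n=2001):
    """Probability that the true LOR passes through transverse position x at
    depth z, obtained by numerically integrating over the Gaussian-distributed
    interaction position x1 on the first head (perpendicular-incidence case)."""
    t = z / D
    x1 = np.linspace(-6 * sigma, 6 * sigma, n)            # integration grid
    # Given x1, the LOR passes through x only if x2 = (x - (1 - t) * x1) / t.
    x2 = (x[:, None] - (1.0 - t) * x1[None, :]) / t
    w = np.exp(-x1 ** 2 / (2 * sigma ** 2)) * np.exp(-x2 ** 2 / (2 * sigma ** 2))
    return np.trapz(w, x1, axis=1)

def gauss(x, a, s):
    return a * np.exp(-x ** 2 / (2 * s ** 2))

xs = np.linspace(-10, 10, 201)
p = profile(xs, z=150.0)
(a_fit, s_fit), _ = curve_fit(gauss, xs, p, p0=(p.max(), sigma))
print("FWHM of fitted Gaussian at z = 150 mm:", 2.355 * abs(s_fit))
```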
The accurate quantification of brain perfusion from emission computed tomography data (PET, SPECT) is limited by partial volume effects (PVE). This study presents a new approach to accurately estimate the true tissue tracer activity within the grey matter tissue compartment. The methodology is based on the availability of additional anatomical side information and on the assumption that the activity concentration within the white matter tissue compartment is constant. Starting from an initial estimate for the white matter and grey matter activity, the true tracer activity within the grey matter tissue compartment is estimated by an alternating ML-EM algorithm. During the updating step the constant activity concentration within the white matter compartment is modelled in the forward projection in order to reconstruct the true activity distribution within the grey matter tissue compartment, hence reducing partial volume averaging. Subsequently the estimate for the constant activity in the white matter tissue compartment is updated based on the newly estimated activity distribution in the grey matter tissue compartment. We have tested this methodology by means of computer simulations. A T1-weighted MR brain scan of a patient was segmented into white matter, grey matter and cerebrospinal fluid, using the segmentation package of the SPM software (Statistical Parametric Mapping). The segmented grey and white matter were used to simulate a SPECT acquisition, modelling the noise and the distance-dependent detector response. Scatter and attenuation were ignored. Following the strategy described above, simulations have shown that it is possible to reconstruct the true activity distribution for the grey matter tissue compartment (activity/tissue volume), assuming constant activity in the white matter tissue compartment.
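A schematic sketch of the alternating update, with a generic toy system matrix standing in for the actual SPECT projector and hypothetical grey/white matter masks; it only illustrates how the constant white matter term enters the forward projection and how it is re-estimated between grey matter updates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_bins = 64, 80
A = rng.random((n_bins, n_vox)) * 0.1          # toy system matrix (projector)
gm = np.zeros(n_vox, bool); gm[20:44] = True   # toy grey-matter mask
wm = ~gm                                       # toy white-matter mask

true = np.where(gm, 4.0, 1.0)                  # GM activity 4, constant WM activity 1
y = rng.poisson(A @ true)                      # noisy projection data

x_gm = np.ones(n_vox) * gm                     # voxel-wise GM activity estimate
c_wm = 0.5                                     # single constant WM activity estimate

for _ in range(100):
    # Forward projection models the constant WM compartment explicitly.
    fp = A @ (x_gm + c_wm * wm) + 1e-12
    ratio = y / fp
    # ML-EM update restricted to the grey-matter voxels.
    x_gm = x_gm * (A.T @ ratio) / (A.T @ np.ones(n_bins) + 1e-12)
    x_gm *= gm
    # Re-estimate the constant WM activity from the same ratio sinogram.
    awm = A @ wm
    c_wm = c_wm * (awm @ ratio) / (awm.sum() + 1e-12)

print("estimated WM constant:", c_wm)
```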
In this paper we introduce a new speckle suppression technique for medical ultrasound images that incorporates morphological properties of speckle as well as tissue classifying parameters. Each individual speckle is located, and, exploiting our prior knowledge of the tissue classification, it is determined whether this speckle is noise or a medically relevant detail. We apply the technique to images of neonatal brains affected by White Matter Damage (leukomalacia). The results show that applying an active contour to a processed image, in order to segment the affected areas, yields a segmentation much closer to that of an expert.
The current state of the art in content description (MPEG-7) does not provide a rich set of tools to create functional metadata (metadata that contains not only the description of the content but also a set of methods that can be used to interpret, change or analyze the content). This paper presents a framework whose primary goal is the integration of functional metadata into the existing standards. Whenever it is important not only what is in the multimedia content, but also what is happening with the information in the content, functional metadata can be used to describe this. Some examples are: news tickers, sport results, online auctions. In order to extend content description schemes with extra functionality, MPEG-7 based descriptors are defined to allow the content creator to add his own properties and methods to the multimedia data, thus making the multimedia data self-describing and manipulable. These descriptors incorporate concepts from object technology such as objects, interfaces and events. Descriptors allow the content creator to add properties to these objects and interfaces; methods can be defined using a descriptor and activated using events. The generic use of these properties and methods is the core of the functional metadata framework. A complete set of MPEG-7 based descriptors and description schemes is presented, enabling the content creator to add functional metadata to the multimedia data. An implementation of the proposed framework has been created, proving the principles of functional metadata. This paper presents a method for adding extra functionality to metadata and hence to multimedia data. It is shown that doing so preserves existing content description methods and that the functional metadata extends the possibilities of the use of content description.
We developed an iterative reconstruction method for SPECT which uses list-mode data instead of binned data and a more accurate model of the collimator structure. The purpose of the study was to evaluate the resolution recovery and to compare its performance to other iterative resolution recovery methods at high noise levels. The source distribution is projected onto an intermediate layer; in doing so we obtain the complete emission radiance distribution as an angular sinogram. This step is independent of the acquisition system. To incorporate the resolution of the system we project the individual list-mode events over the collimator wells to the intermediate layer. This projection onto the angular sinogram defines the probability that a photon from the source distribution reaches this specific location on the surface of the crystal and is thus accepted by the collimator hole. We compared the SPECT list-mode reconstruction to MLEM, OSEM and RBI. We used Gaussian shaped point sources with different FWHM at different noise levels. For these distributions we calculated the reconstructed images at different numbers of iterations. The modeling of the resolution in this algorithm leads to a better resolution recovery compared to the other methods, which tend to overcorrect.
KEYWORDS: Brain, Photons, Data acquisition, Single photon emission computed tomography, Cameras, Imaging systems, Neuroimaging, Monte Carlo methods, Data corrections, Signal attenuation
A practical method for scatter compensation in SPECT imaging is the triple energy window (TEW) technique, which estimates the fraction of scattered photons in the projection data pixel by pixel. This technique requires an acquisition of counts in three windows of the energy spectrum for each projection bin, which is not possible on every gamma camera. The aim of this study is to set up a scatter template for brain perfusion SPECT imaging by means of the scatter data acquired with the triple energy window technique. This scatter template can be used for scatter correction as follows: the scatter template is realigned with the acquired, scatter-degraded and reconstructed image by means of the corresponding emission template, which also includes scatter counts. The ratios between the voxel values of this emission template and the acquired and reconstructed image are used to locally adjust the scatter template. Finally the acquired and reconstructed image is corrected for scatter by subtracting the thus obtained scatter estimates. We compared the template-based approach with the TEW scatter correction technique for data acquired with the same gamma camera system and found a similar performance for both correction methods.
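The correction arithmetic, once the templates have been realigned to the patient image, can be sketched as follows; the array names are placeholders.

```python
import numpy as np

def template_scatter_correct(recon, emission_template, scatter_template, eps=1e-6):
    """Scale the pre-computed scatter template locally by the ratio between the
    patient reconstruction and the emission template (both of which include
    scatter), then subtract the resulting scatter estimate. The templates are
    assumed to be already realigned to the patient image."""
    ratio = recon / (emission_template + eps)
    scatter_estimate = scatter_template * ratio
    return np.clip(recon - scatter_estimate, 0.0, None)
```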
Simulations and measurements of triple-head PET acquisitions of a hot sphere phantom were performed to evaluate the performance of two different reconstruction algorithms (projection based ML-EM and list-mode ML-EM) for triple-head gamma camera coincidence systems. A geometric simulator was used, assuming a detector with 100 percent detection efficiency and detection of trues only. The resolution was set equal to that of the camera system. The measurements were performed with a triple-headed gamma camera. Simulated and measured data were stored in list-mode format, which allowed the flexibility for different reconstruction algorithms. Hot spot detectability was taken as a measure of performance, because tumor imaging is the most important clinical application for gamma camera coincidence systems. The detectability was evaluated by calculating the recovered contrast and the contrast-to-noise ratio. Results show a slightly improved contrast but a clearly higher contrast-to-noise ratio for list-mode reconstruction.
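A minimal sketch of the two figures of merit, computed from a reconstructed volume given sphere and background masks; the definitions below are the conventional ones and are stated as assumptions rather than the exact formulas used in the study.

```python
import numpy as np

def recovered_contrast(img, sphere_mask, bkg_mask, true_ratio):
    """Hot-sphere contrast recovery: measured (S/B - 1) over the true (S/B - 1)."""
    s = img[sphere_mask].mean()
    b = img[bkg_mask].mean()
    return (s / b - 1.0) / (true_ratio - 1.0)

def contrast_to_noise(img, sphere_mask, bkg_mask):
    """CNR: sphere-background difference divided by the background noise."""
    s = img[sphere_mask].mean()
    b = img[bkg_mask].mean()
    return (s - b) / img[bkg_mask].std()
```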
Gamma camera PET (Positron Emission Tomography) offers a low-cost alternative to dedicated PET scanners. However, the sensitivity and count rate capabilities of dual-headed gamma cameras with PET capabilities are still limited compared to full-ring dedicated PET scanners. To improve the geometric sensitivity of these systems, triple-headed gamma camera PET has been proposed. As is the case for dual-headed PET, the sensitivity of these devices varies with the position within the field of view (FOV) of the camera. This variation should be corrected for when reconstructing the images. In earlier work, we calculated the two-dimensional sensitivity variation for any triple-headed configuration. This can be used to correct the data if the acquisition is done using axial filters, which effectively limit the axial angle of incidence of the photons, comparable to 2D dedicated PET. More recently, these results were extended to a fully 3D calculation of the geometric sensitivity variation. In this work, the results of these calculations are compared to the standard approach to correct for 3D geometric sensitivity variation. Current implementations of triple-headed gamma camera PET use two independent corrections to account for three-dimensional sensitivity variations: one in the transaxial direction and one in the axial direction. This approach implicitly assumes that the actual variation is separable into two independent components. We recently derived a theoretical expression for the 3D sensitivity variation, and in this work we investigate the separability of our result. To investigate the separability of the sensitivity variations, an axial and a transaxial profile through the calculated variation were taken, and these two were multiplied, thus creating a separable function. If the variation were perfectly separable, this function would be identical to the calculated variation. As a measure of separability, we calculated the percentage deviation of the separable function from the original variation. We investigated the separability for several camera configurations and rotation radii. We found that, for all configurations, the variation is not separable, and becomes less separable as the rotation radius decreases. This indicates that in this case our sensitivity correction will give better results than the separable correction currently applied.
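The separability test can be sketched as follows: take the central axial and transaxial profiles of the calculated sensitivity map, form their (normalized) outer product as the separable approximation, and report the percentage deviation; the 2D map here is a placeholder for the calculated variation evaluated on an axial-transaxial plane.

```python
import numpy as np

def separability_deviation(sensitivity):
    """Percentage deviation between a 2D (axial x transaxial) sensitivity map
    and the separable approximation built from its central profiles."""
    iz, ix = np.array(sensitivity.shape) // 2
    axial = sensitivity[:, ix]                 # profile along the axial direction
    transaxial = sensitivity[iz, :]            # profile along the transaxial direction
    separable = np.outer(axial, transaxial) / sensitivity[iz, ix]
    return 100.0 * np.abs(separable - sensitivity).max() / sensitivity.max()
```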
A new fuzzy filter is presented for the noise reduction of images corrupted with additive Gaussian noise. The filter consists of two stages. The first stage computes a fuzzy gradient for eight different directions around the currently processed pixel. The second stage uses the fuzzy gradient to perform fuzzy smoothing by taking different contributions of neighboring pixel values. The two stages are both based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. The shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. The fuzzy operators are implemented by the classical min/max. Experimental results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by numerical measures and visual inspection.
An XML-based application was developed that allows multimedia/radiological data to be accessed over a network and visualized in an integrated way within a standard web browser. Four types of data are considered: radiological images, the corresponding speech and text files produced by the radiologist, and administrative data concerning the study (patient name, radiologist's name, date, etc.). Although these different types of data are typically stored on different file systems, their relationship (e.g., image file X corresponds to speech file Y) is described in a global relational database. The administrative data are referred to in an XML file, while links between the corresponding images, speech, and text files (e.g., links between limited text fragments within the text file, the corresponding fragment in the speech file, and the corresponding subset of images) are described as well. Users are able to access all data through a web browser by submitting a form-based request to the server. By using scripting technology, an HTML document containing all data is produced on the fly, which can be presented within the browser of the user. Our application was tested on a real set of clinical data, and it was shown that the goals defined above were realized.
In the near future broadband networks will become available to large groups of people. The amount of bandwidth available to these users will be much greater than it is now. This availability of bandwidth will give birth to a number of new applications. Application developers will need a framework that enables them to utilize the possibilities of these new networks. In this article we present a document type that allows the addition of (meta-)information to data streams and the synchronization of different data streams. It is called SXML (Streaming XML) and is based on the eXtensible Markup Language (XML). The SXML grammar is defined in a document type definition (SXML-DTD). The content of an SXML document can be processed in real time or can be retrieved from disk. XML is used in a completely new manner and in a totally different environment in order to easily describe the structure of the stream. Finally, a preliminary implementation has been developed and is being tested.
The 3D acquisition data from positron coincidence detection on a gamma camera can be stored in list-mode or histogram format. The standard processing of the list-mode data is Single Slice Rebinning (with a maximum acceptance angle) to 2D histogrammed projections, followed by Ordered Subsets Expectation Maximization reconstruction. This method has several disadvantages: sampling accuracy is lost by histogramming events, axial resolution degrades with increasing distance from the center of rotation, and useful events with angles larger than the acceptance angle are not included in the reconstruction. Therefore an iterative reconstruction algorithm, operating directly on list-mode data, has been implemented. The 2D and 3D versions of this iterative list-mode algorithm have been compared with the aforementioned standard reconstruction method. A higher number of events is used in the reconstruction, which results in a lower standard deviation. Resolution is fairly constant over the field of view. The use of a fast projector and backprojector reduces the reconstruction time to clinically acceptable times.
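A schematic sketch of the list-mode ML-EM update, with a dense per-event system-matrix row standing in for the actual gamma-camera coincidence model; the arrays are toy placeholders, and a real implementation would compute the event projections on the fly with the fast projector mentioned above.

```python
import numpy as np

def listmode_mlem(event_rows, sensitivity, n_iter=20):
    """List-mode ML-EM: each detected event contributes one row a_i of the
    system matrix (here a dense toy array of shape [n_events, n_voxels]);
    `sensitivity` is the voxel-wise column sum of the full system matrix."""
    x = np.ones(event_rows.shape[1])
    for _ in range(n_iter):
        fp = event_rows @ x + 1e-12            # expected counts along each event's LOR
        x *= (event_rows.T @ (1.0 / fp)) / sensitivity
    return x
```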
KEYWORDS: Magnetic resonance imaging, Signal processing, In vivo imaging, Magnetism, Image processing, Image restoration, Spatial frequencies, Demodulation, Head, Time metrology
In this paper, we will introduce a resampling method for in vivo projection reconstruction (PR) magnetic resonance (MR) signals. We will describe the physical processes causing the inaccurate sampling of these signals. Based on the theoretical properties of the signal, a technique to reduce the influence of this effect on the signals will be proposed. The method will be validated using simulations and in vivo MR signals. The corrected signals will be shown to be a better approximation of the signals that would be expected on a theoretical basis.
In the near future, it will be possible to perform coincidence detection on a gamma camera with three heads, which increases the geometric sensitivity of the system. Different geometric configurations are possible, and each configuration yields a different geometric sensitivity. The purpose of this work was to calculate the sensitivities for different three-headed configurations as a function of the position in the field of view, the dimensions of the detector heads and the distance of the heads from the center of the field of view. The configurations that were compared are: a regular two-headed configuration (180 deg. opposed), a triple-headed configuration with the three heads in an equilateral triangle (120 deg.), and a triple-headed configuration with two heads in a regular two-headed configuration and the third perpendicular between the first two, which makes a U-shaped configuration. An expression was derived for any planar detector configuration to calculate the geometric sensitivity for each Line Of Response (LOR). This sensitivity was integrated to obtain the sensitivity profile, which gives the geometric sensitivity at a certain distance from the center of rotation. We found that the triangular configuration gave the best sensitivities when the heads were placed very close to each other (nearly full-ring configuration), but for larger fields of view the U-shaped configuration performed better.
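For comparison, the geometric sensitivity at a point can also be estimated by brute-force angular sampling rather than by the analytical expression derived here: count the fraction of emission directions for which the two back-to-back photons strike two different heads. The head dimensions and radius below are hypothetical, and the heads are modeled as flat 2D segments.

```python
import numpy as np

def head(angle_deg, radius, width):
    """Endpoints of a flat detector head (2D segment) tangent to a circle of
    the given radius, centred at angle_deg."""
    a = np.deg2rad(angle_deg)
    centre = radius * np.array([np.cos(a), np.sin(a)])
    along = np.array([-np.sin(a), np.cos(a)])
    return centre - 0.5 * width * along, centre + 0.5 * width * along

def hits(p, d, seg):
    """Does the ray p + t*d (t > 0) intersect the segment seg?"""
    a, b = seg
    m = np.column_stack((d, a - b))
    if abs(np.linalg.det(m)) < 1e-12:
        return False
    t, u = np.linalg.solve(m, a - p)
    return t > 0 and 0.0 <= u <= 1.0

def sensitivity(point, heads, n_dirs=3600):
    """Fraction of emission directions whose two photons hit two different heads."""
    detected = 0
    for phi in np.linspace(0, np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(phi), np.sin(phi)])
        fwd = [i for i, h in enumerate(heads) if hits(point, d, h)]
        bwd = [i for i, h in enumerate(heads) if hits(point, -d, h)]
        if any(i != j for i in fwd for j in bwd):
            detected += 1
    return detected / n_dirs

# Triangular (120 deg) configuration with hypothetical dimensions (mm).
heads_tri = [head(a, radius=250.0, width=400.0) for a in (90, 210, 330)]
print(sensitivity(np.array([0.0, 0.0]), heads_tri))
```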
KEYWORDS: Video compression, Video, Image compression, Medical imaging, 3D modeling, 3D image processing, Data compression, 3D video compression, Image quality standards, Magnetic resonance imaging
Recent advances in digital technology have caused a huge increase in the use of 3D image data. In order to cope with large storage and transmission requirements, data compression is necessary. Although lossy techniques have been shown to achieve higher compression ratios than lossless techniques, the latter are sometimes required, e.g., in medical environments. Many lossless image compressors exist, but most of them do not exploit interframe correlations. In this paper we extend and refine a recently proposed technique which combines intraframe prediction and interframe modeling. However, its performance was still significantly worse than that of current state-of-the-art intraframe methods. After adding techniques often used in those state-of-the-art schemes and other refinements, a fair comparison with state-of-the-art intraframe coders is made. It shows that the refined method achieves considerable gains on video and medical images compared to these purely intraframe methods. The method also shows some good properties such as graceful compression ratio degradation when the interframe gap (medical volume data) or interframe delay (video) increases.
We propose a small field-of-view color image acquisition system for the imaging and measurement of skin lesions and their properties in dermatology. The system consists of a 3-chip CCD camera, a frame grabber, a high-quality halogen annular light source and a Pentium PC. The output images are in a standard device-dependent color space called sRGB or ITU-R BT.709, which has a known relation to the device-independent CIE XYZ color space and provides a fairly realistic view on a modern CRT-based monitor. In order to transform the images from the unknown and variable input RGB color space of the acquisition system to the sRGB space, a profile of the acquisition system is determined based on 24 color targets with known properties. Determination of this profile is simple and quick, and it remains valid for many hours of operation (weeks or even months of normal use). Precision or reproducibility of the system is very good, both short-term (consecutive measurements), with mean ΔE*ab = 0.04 and ΔE*ab < 0.1, and medium-term (measurements under one profile but on different warm-up cycles), with mean ΔE*ab = 0.34 and ΔE*ab < 1.2. Long-term precision (measurements under different profiles) is of the same order. Accuracy was evaluated for profiles based on different RGB to sRGB polynomial transforms, computed both by linear least-squares in the sRGB space and by non-linear optimization in the CIE L*a*b* color space. Results show that, using a set of test targets consisting of 15 paper color targets and 12 real measurements of human skin, the simple linear transform outperforms higher order polynomials and has mean ΔE*ab = 6.53, with ΔE*ab < 11.21. A small study of the pigmentation of the human skin after UV radiation shows that, when measuring areas of at least a few hundred pixels, differences of more than 2 to 3 ΔE units are statistically significant.
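The profile estimation for the linear case can be sketched as an affine least-squares fit from the 24 calibration targets; the data arrays below are placeholders.

```python
import numpy as np

def fit_linear_profile(device_rgb, reference_srgb):
    """Fit an affine device-RGB -> sRGB transform by linear least squares.

    device_rgb     : (N, 3) camera RGB values of the calibration targets
    reference_srgb : (N, 3) known sRGB values of the same targets
    """
    X = np.hstack([device_rgb, np.ones((device_rgb.shape[0], 1))])  # add offset term
    M, *_ = np.linalg.lstsq(X, reference_srgb, rcond=None)          # (4, 3) matrix
    return M

def apply_profile(M, rgb):
    """Transform (N, 3) device RGB values to sRGB using the fitted profile."""
    return np.hstack([rgb, np.ones((rgb.shape[0], 1))]) @ M
```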
In today's digital prepress workflow images are most often stored in the CMYK color representation. In the lossy compression of CMYK color images, most techniques do not take the tonal correlation between the color channels into account, or they are not able to perform a proper color decorrelation in four dimensions. In a first stage a compression method has been developed that takes this type of redundancy into account. The basic idea is to divide the image into blocks. The color information in those blocks is then transformed from the original CMYK color space into a decorrelated color space. In this new color space not all components are of the same magnitude, which makes the gain for compression purposes clear. After the color transformation step, any regular compression scheme meant to reduce the spatial redundancy can be applied. In this paper a more advanced approach for the quantization procedure in the compression algorithm is presented. The proposed scheme controls the quantization parameters differently for all blocks and color components. To this end, the influence on the CIELab ΔE measure is investigated when making a small shift in the four main directions of the decorrelated color space.
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
On the Internet, transmission time of large images is still an important issue. In order to reduce transmission time, this paper introduces an efficient method to send 8-bit greyscale images across the Internet. The method allows progressive transmission up to lossless reconstruction. It also allows the user to select a region of interest. This method is particularly useful when image quality and transmission speed are both desired properties. The method uses TCP/IP as a transport protocol.
Lossless image compression algorithms used in the prepress workflow suffer from the disadvantage that only moderate compression ratios can be achieved. Most lossy compression schemes achieve much higher compression ratios, but there is no easy way to limit the differences they introduce. Near-lossless image compression schemes are based on lossless techniques, but they give an opportunity to put constraints on the unavoidable pixel loss. The constraints are usually expressed in terms of differences within the individual CMYK separations, and this error criterion does not match the human visual system. In this paper, we present a near-lossless image compression scheme which aims at limiting the pixel differences as observed by the human visual system. It uses the subjectively equidistant CIEL*a*b* space to express allowable color differences. Since the CMYK to CIEL*a*b* transform maps a 4D space onto a 3D space, singularities would occur, resulting in a loss of the gray component replacement information; therefore an additional dimension is added. The error quantization is based on an estimated linearization of the CIEL*a*b* transform and on the singular value decomposition of the resulting Jacobian matrix. Experimental results on some representative CMYK test images show that the visual image quality is improved and that higher compression ratios can be achieved before the visual difference is detected by a human observer.
Moiré formation is often a major problem in printing applications. These artifacts introduce new low frequency components which are very disturbing. Some printing techniques, e.g. gravure printing, are very sensitive to moiré. The halftoning scheme used for gravure printing can basically be seen as a 2D non-isotropic subsampling process. The moiré problem is much more important in gravure printing than in conventional digital halftoning since the degree of freedom in constructing halftone dots is much more limited due to the physical constraints of the engraving mechanism.
CMYK color images are used extensively in pre-press applications. When compressing these color images one has to deal with four different color channels. Usually compression algorithms only take into account the spatial redundancy that is present in the image data. This approach does not yield an optimal data reduction, since there exists a high correlation between the different colors in natural images that is not taken into account. This paper shows that a significant gain in data reduction can be achieved by exploiting this color redundancy. Some popular transform coders, including DCT-based JPEG and the SPIHT wavelet coder, were used for reducing the spatial redundancy. The performance of the algorithms was evaluated using a quality criterion based on human perception, such as the CIELab ΔE error.
Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG achieve only moderate compression ratios because they do not fully take into account the special image characteristics. In this paper, we present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the classical halftone image. This is a non-trivial problem for which we propose a fast, close-to-optimal context template selection scheme based on the sorted autocorrelation function of a part of the image. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, as well as with stochastic halftones. For classical halftones, the global improvement with respect to JBIG in its best mode is about 30% to 50%; binary tree modeling increases this by another 5% to 10%. For stochastic halftones, the autocorrelation-based template gives no improvement, though an exhaustive search technique shows that even bigger improvements are feasible using the context modeling technique; introducing binary tree modeling increases the compression ratio by about 10%.
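The template-selection idea can be sketched as follows: compute the autocorrelation of a sample block of the halftone, restrict attention to causal offsets, sort them by correlation strength and keep the strongest ones as the context template; the offsets, sample size and FFT-based (circular) autocorrelation are illustrative simplifications.

```python
import numpy as np

def select_context_template(sample, n_pixels=10, max_offset=16):
    """Pick context-template offsets for a bilevel halftone by sorting the
    sample autocorrelation. Only strictly causal offsets (rows above, or the
    same row to the left) are eligible so the decoder can rebuild the context."""
    s = sample.astype(float) - sample.mean()
    # 2D autocorrelation via FFT (circular, adequate for a large sample block).
    ac = np.fft.ifft2(np.abs(np.fft.fft2(s)) ** 2).real
    candidates = []
    for dy in range(0, max_offset + 1):
        for dx in range(-max_offset, max_offset + 1):
            if dy == 0 and dx <= 0:
                continue                        # not causal (or the pixel itself)
            candidates.append(((-dy, -dx), ac[dy % s.shape[0], dx % s.shape[1]]))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return [offset for offset, _ in candidates[:n_pixels]]
```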
To make the archival and transmission of medical images in PACS (Picture Archiving and Communication Systems) and teleradiology systems user-friendly and economically profitable, the adoption of an efficient compression technique is an important feature in the design of such systems. An important category of lossy compression techniques uses the wavelet transformation for decorrelation of the pixel values, prior to quantization and entropy coding. For the coding of sets of images, the images are mostly compressed independently with a two-dimensional compression scheme. In this way, one discards the similarity between adjacent slices. The aim of this paper is to compare the performance of some two-dimensional and three-dimensional implementations of wavelet compression techniques and to investigate design issues such as the decomposition depth, the choice of wavelet filters and the entropy coding.
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper a new lossless color transform is proposed, based on the Karhunen-Loève Transform (KLT). This transform removes redundancies in the color representation of each pixel and can be combined with many existing compression schemes. In this paper it is combined with a prediction scheme that exploits spatial redundancies. The results presented in this paper show that the color transform effectively decorrelates the color components and that it typically saves about half a bit to two bits per pixel, compared to a purely predictive scheme.
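A minimal sketch of the color KLT itself (a truly lossless scheme additionally requires an integer-reversible implementation of the transform, which is omitted here); the pixel array is a placeholder.

```python
import numpy as np

def klt_color_transform(pixels):
    """Decorrelate the color components of an image with a KLT.

    pixels : (N, C) array, one row per pixel, C color components (e.g. 4 for CMYK).
    Returns the transformed components plus the eigenvector matrix and mean,
    both of which the decoder needs to invert the transform.
    """
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    _, vecs = np.linalg.eigh(cov)              # eigenvectors of the color covariance
    vecs = vecs[:, ::-1]                       # order by decreasing variance
    return (pixels - mean) @ vecs, vecs, mean

def inverse_klt(coeffs, vecs, mean):
    return coeffs @ vecs.T + mean
```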
Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG only achieve moderate compression ratios due to the periodic dot structure of classical halftones. In this paper, we present two improvements on the context modeling scheme. Firstly, we adapt the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast close-to-optimal context selection scheme based on the calculation and sorting of the autocorrelation function on a part of the image. Secondly, increasing the order of the model produces an additional gain in compression. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, and produced by either a digital halftoning algorithm or a digital scanner. The additional coding time for context selection is negligible, while the decoding time remains the same. The global improvement with respect to JBIG (which has features like one adaptive pixel, histogram rescaling and multiresolution decomposition) is about 30% to 65%, depending on the image.
For the compression of color images, the lossy compression standard JPEG has two major shortcomings. Firstly, it codes the color components separately. This reduces the possible compression ratio. Secondly, the image quality at very low bit rates is seriously degraded by blocking artifacts. The paper describes an extension of an existing segmented image coding (SIC) method for monochrome images to images containing a limited number of colors. The new technique codes the colors more efficiently and performs better at high compression. After conversion to the YUV color space, the corresponding luminance image is compressed using monochrome SIC. At the decoder, this image is reconstructed and the gray-values are translated into colors. Because different colors in the image can have the same gray-values and because SIC is lossy, some gray-values may be wrongly reconstructed in the decompressed image. To prevent the introduction of foreign colors in a region, i.e. colors which are not present in the region in the original image, bit vectors are constructed indicating which colors are present. The decoder uses this information to replace foreign colors by colors of the region. The bit vectors are compressed using a lossless bi-level coding scheme. The paper presents experimental results which show that the new method produces a much better subjective image quality than JPEG at high compression, due to the absence of block distortion.
Screening of color-separated continuous-tone photographic images produces large high-resolution black-and-white images (up to 5000 dpi). Storing such images on disk or transmitting them to a remote imagesetter is an expensive and time-consuming task, which makes lossless compression desirable. Since a screened photographic image may be viewed as a rotated rectangular grid of large half-tone dots, each of them being made up of an amount of microdots, we suspect that compression results obtained on the CCITT test images might not apply to high-resolution screened images and that the default parameters of many existing compression algorithms may not be optimal. In this paper we compare the performance of lossless one-dimensional general-purpose byte-oriented statistical and dictionary-based coders, as well as lossless coders designed for the compression of two-dimensional bilevel images, on high-resolution screened images. The general-purpose coders are: GZIP (LZ77 by GNU), TIFF LZW and STAT (an optimized PPM compressor by Bellard). The non-adaptive two-dimensional black-and-white coders are: TIFF Group 3 and TIFF Group 4 (formerly published fax standards by CCITT). The adaptive two-dimensional coders are: BILEVEL coding (by Witten et al.) and JBIG (the latest fax standard). First we compared the methods without tuning their parameters. We found that both in compression ratio (CR) and speed, JBIG (CR 7.3) was best, followed by STAT (CR 6.3) and BILEVEL coding (CR 6.0). Some results are remarkable: STAT works very well, despite its one-dimensional approach; JBIG beats BILEVEL coding on high-resolution images though BILEVEL coding is better on the CCITT images; and finally, TIFF Group 4 (CR 3.2) and TIFF Group 3 (CR 2.7) cannot compete with any of these three methods. Next, we fine-tuned the parameters for JBIG and BILEVEL coding, which resulted in increased compression ratios of 8.0 and 6.7 respectively.
The JPEG lossy compression technique has several disadvantages for medical imagery (at higher compression ratios), mainly due to block distortion. We therefore investigated two methods, the lapped orthogonal transform (LOT) and the DCT/DST coder, for use on medical image data. These techniques are block-based, but they reduce the block distortion by spreading it out over the entire image. These compression techniques were applied to four different types of medical images (MRI image, x-ray image, angiogram and CT scan). They were then compared with results from JPEG and variable block size DCT coders. At a first stage, we determined the optimal block size for each image and for each technique. It was found that for a specific image, the optimal block size was independent of the different transform coders. For the x-ray image, the CT scan and the angiogram an optimal block size of 32 by 32 was found, while for the MRI image the optimal block size was 16 by 16. Afterwards, for all images the rate-distortion curves of the different techniques were calculated, using the optimal block size. The overall conclusion from our experiments is that the LOT is the best transform among the ones investigated for compressing medical images of many different kinds. However, JPEG should be used for very high image qualities, as it then requires almost the same bit rate as the LOT while requiring fewer computations than the LOT technique.
In positron emission tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. The development of statistical (iterative) reconstruction techniques addresses the problem of noise. In this paper we present the results of introducing the simulated annealing technique as a statistical reconstruction algorithm for PET. We have successfully implemented a reconstruction algorithm based upon simulated annealing, paying particular attention to the fine-tuning of various parameters (cooling schedule, granularity, stopping rule, ...). In addition, we have developed a cost function more appropriate to the noise statistics (e.g. Poisson) and the reconstruction method (e.g. ML). The comparison with other reconstruction methods using computer phantom studies demonstrates the potential power of the simulated annealing technique for the reconstruction of PET images.
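A toy sketch of simulated annealing with a Poisson log-likelihood cost: single-voxel perturbations are accepted with the Metropolis criterion under a geometric cooling schedule. The system matrix, step size and cooling parameters are illustrative assumptions, not the tuned values from the paper.

```python
import numpy as np

def anneal_reconstruct(y, A, n_steps=200_000, t0=1.0, cooling=0.99999, rng=None):
    """Toy simulated-annealing PET reconstruction.

    y : measured projection data, A : nonnegative system matrix (n_bins x n_voxels).
    """
    rng = rng or np.random.default_rng(0)
    x = np.ones(A.shape[1])
    fp = A @ x

    def neg_loglik(fp):
        return np.sum(fp - y * np.log(fp + 1e-12))   # Poisson negative log-likelihood

    cost, t = neg_loglik(fp), t0
    for _ in range(n_steps):
        j = rng.integers(A.shape[1])
        delta = rng.normal(0.0, 0.1)                 # granularity of the perturbation
        if x[j] + delta < 0:
            continue                                 # keep the image nonnegative
        new_fp = fp + delta * A[:, j]
        new_cost = neg_loglik(new_fp)
        # Metropolis acceptance: always accept improvements, sometimes accept worse.
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / t):
            x[j] += delta
            fp, cost = new_fp, new_cost
        t *= cooling                                 # geometric cooling schedule
    return x
```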
Due to its iterative nature, the execution of the maximum likelihood expectation maximization (ML-EM) reconstruction algorithm requires a long computation time. To overcome this problem, multiprocessor machines can be used. In this paper, a parallel implementation of the algorithm for positron emission tomography (PET) images is presented. To cope with the difficulties involved with parallel programming, a programming environment based on a visual language has been used.