A new color image segmentation algorithm, invariant to highlights and shading, is presented in this paper. This invariance is achieved in two steps. First, the average pixel intensity is subtracted from each RGB coordinate; this transformation mitigates the effects of highlights. Next, the Mixture of Principal Components (MPC) algorithm is used to perform the segmentation. MPC is implicitly invariant to shading because it uses the inner product, or vector angle, as its similarity measure. Since the new coordinate system contains negative values, the MPC algorithm must be modified: in its original form it does not distinguish between positive and negative color-space coordinates. Results on artificial and real images illustrate the effectiveness of the method. Finally, the total within-cluster variance is investigated as a possible criterion for selecting the number of clusters for the new algorithm.
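The two invariances described above can be illustrated with a minimal NumPy sketch. The helper names (`remove_mean_intensity`, `angle_similarity`) are hypothetical, not from the paper; this only demonstrates why centering removes an additive intensity offset and why an angle-based similarity is unaffected by multiplicative shading.

```python
import numpy as np

def remove_mean_intensity(img):
    """Subtract the mean of (R, G, B) from each channel of every pixel.
    An additive highlight term common to all three channels cancels out,
    which is the first step described in the abstract."""
    return img - img.mean(axis=-1, keepdims=True)

def angle_similarity(x, w):
    """Cosine of the angle between a pixel's color vector x and a class
    direction w. Scaling x by any positive factor (shading) leaves the
    value unchanged, so the measure is shading-invariant."""
    return np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-12)
```

Note that for centered data the cosine is signed: opposite directions give opposite signs, which is why the abstract's modified MPC must treat positive and negative coordinates differently.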
A number of novel adaptive image compression methods have been developed using a new approach to data representation, a mixture of principal components (MPC). MPC, together with principal component analysis (PCA) and vector quantization (VQ), forms a spectrum of representations. The MPC approach still suffers from block-effect distortion. While existing lapped transforms eliminate this distortion, they do not take into account the need for adaptation on a block-to-block basis; further, their basis vectors are fixed, so they cannot be optimally adapted from one image to another. In this paper, a lapped orthogonal projection is used to generate subblocks for both the classic Karhunen-Loeve transform (KLT) and the adaptive MPC. The resulting images are free of block-effect distortion, and the squared error can be reduced. Therefore, both the nonadaptive and adaptive methods under the projection tend to outperform the corresponding block methods in terms of both subjective criteria and squared error.
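The core idea of a lapped decomposition — overlapping, windowed blocks that reconstruct without block-boundary artifacts — can be sketched in 1-D. This is a generic illustration under the standard condition that squared sine windows at 50% overlap sum to one, not the paper's specific 2-D lapped orthogonal projection; the function names are assumptions.

```python
import numpy as np

def lapped_blocks(signal, block, hop):
    """Extract overlapping blocks weighted by a sine window. At 50% overlap
    (hop = block // 2) the squared windows overlap-add to one, the classic
    perfect-reconstruction condition for lapped transforms."""
    win = np.sin(np.pi * (np.arange(block) + 0.5) / block)
    blocks = np.array([signal[i:i + block] * win
                       for i in range(0, len(signal) - block + 1, hop)])
    return blocks, win

def overlap_add(blocks, win, hop, length):
    """Window each block again and overlap-add. Interior samples are
    recovered exactly; there is no block-boundary discontinuity."""
    out = np.zeros(length)
    for k, b in enumerate(blocks):
        out[k * hop:k * hop + len(win)] += b * win
    return out
```

In a compression setting, the transform (KLT or MPC) would be applied to each windowed block between extraction and overlap-add; the windowing is what removes the block-effect distortion the abstract refers to.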
A number of novel adaptive image compression methods have been developed using a new approach to data representation, a mixture of principal components (MPC). MPC, together with principal component analysis and vector quantization, forms a spectrum of representations. The MPC network partitions the space into a number of regions or subspaces; within each subspace, the data are represented by the M principal components of that subspace. While Hebbian learning has been used effectively to extract principal components for the MPC, its stability remains a concern in practice. As a result, computationally more expensive methods such as batch eigendecomposition have produced more consistent results. This paper compares the performance of a number of Hebbian-based training schemes for the MPC network, including training the entire network, network-growing techniques, and a new tree-structured method. In the new tree-structured approach, each level M in the tree corresponds to an M-dimensional representation, and a node together with all its M - 1 parents represents a single M-dimensional subspace or class. The evaluation shows that the tree-structured approach improves training and reduces squared error.
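As a concrete instance of the Hebbian principal-component extraction discussed above, the sketch below implements Oja's rule for the first principal component and compares it against a batch eigendecomposition. This is a generic textbook rule, not the paper's specific MPC training scheme; the function name and hyperparameters are assumptions.

```python
import numpy as np

def oja_pc(X, lr=0.005, epochs=100, seed=0):
    """Estimate the first principal component of centered data X with
    Oja's Hebbian rule: w += lr * y * (x - y * w), where y = w . x.
    The -y^2 * w term keeps ||w|| near 1, but stability still depends on
    the learning rate -- the practical concern the abstract mentions."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)  # Hebbian growth minus normalization
    return w / np.linalg.norm(w)
```

A batch method such as `np.linalg.eigh` on the covariance matrix gives the same direction in one shot, at higher cost per update, which is the trade-off the paper evaluates.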