Morphological associative memories (MAMs) belong to a class of artificial neural networks in which each node performs the morphological operations of erosion or dilation; hence the name morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice-induced matrix operations from the mathematical theory of minimax algebra. Neural models of associative memory are usually concerned with the storage and retrieval of binary or bipolar patterns. Thus far, research on morphological associative memory systems has emphasized binary models, although several notable features of autoassociative morphological memories (AMMs), such as optimal absolute storage capacity and one-step convergence, have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMMs by analyzing their fixed points and basins of attraction. In particular, we showed that the fixed points of binary AMMs correspond to the lattice polynomials in the original patterns. This paper extends these results in two ways. First, we provide an exact characterization of the fixed points of gray-scale AMMs in terms of combinations of the original patterns. Second, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a given input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
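The storage and recall scheme discussed above can be sketched in a few lines of NumPy. In the standard gray-scale AMM construction, the memory W has entries w_ij = min over patterns of (x_i - x_j), and recall is the max-product y_i = max_j (w_ij + x_j); the toy patterns below are illustrative only, not data from the paper:

```python
import numpy as np

def build_memory(X):
    """Autoassociative morphological memory W_XX.

    X holds one stored pattern per row; w_ij = min over patterns of (x_i - x_j).
    """
    diffs = X[:, :, None] - X[:, None, :]   # shape (num_patterns, n, n)
    return diffs.min(axis=0)

def recall(W, x):
    """Max-product recall: y_i = max_j (w_ij + x_j)."""
    return (W + x[None, :]).max(axis=1)

# Two toy gray-scale patterns of length 3.
X = np.array([[0, 1, 2],
              [2, 0, 1]])
W = build_memory(X)
for x in X:
    # Every stored pattern is recovered exactly in a single step,
    # reflecting the optimal absolute storage capacity of AMMs.
    assert np.array_equal(recall(W, x), x)
```

Note that recall is a single matrix operation in the (max, +) semiring, which is why convergence takes exactly one step.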
We recently introduced a class of highly nonlinear associative memories called morphological associative memories (MAMs). Notable features of autoassociative morphological memories (AMMs) include optimal absolute storage capacity and one-step convergence, and their fixed points can be characterized exactly in terms of the original patterns. Unfortunately, the fixed points of an AMM include a large number of spurious memories. In this paper, we combine a basic AMM model with the kernel method to eliminate most of the spurious memories while leaving the other AMM properties intact. Furthermore, our new AMM model is more tolerant to noise than the basic AMM model and less dependent on kernel selection than the original kernel method.
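The spurious memories mentioned above are easy to exhibit: in a basic AMM, lattice combinations of the stored patterns, such as their pointwise maximum, are also fixed points even though they were never stored. A minimal NumPy sketch with toy patterns (illustrative values, not data from the paper):

```python
import numpy as np

# Basic autoassociative morphological memory: w_ij = min over patterns (x_i - x_j),
# with max-product recall y_i = max_j (w_ij + x_j).
X = np.array([[0, 1, 2],
              [2, 0, 1]])
W = (X[:, :, None] - X[:, None, :]).min(axis=0)
recall = lambda x: (W + x).max(axis=1)

# The pointwise maximum of the stored patterns was never stored ...
spurious = np.maximum(X[0], X[1])
# ... yet it is a fixed point of the memory: a spurious memory.
assert np.array_equal(recall(spurious), spurious)
assert not any(np.array_equal(spurious, x) for x in X)
```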
KEYWORDS: Silicon, Image processing, Linear algebra, Digital imaging, Image storage, Image restoration, Information operations, Information technology, Digital image processing, Image analysis
The purpose of this paper is to present a radically new image transform, called the Minimax Eigenvector Decomposition (MED) transform. This novel transform is based on the minimax product of two matrices and is an analogue of the Singular Value Decomposition (SVD) transform of linear algebra. In contrast to the SVD transform, the MED transform requires no eigenvalue computation, since the eigenvalues turn out to be zero, and the computation of eigenvectors is trivial. This makes the MED transform attractive, because the major difficulty with the SVD transform is the computation of the singular values and eigenvectors, which is computationally expensive and often introduces significant numerical errors.
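The minimax product underlying the MED transform is ordinary matrix multiplication with (min, +) in place of (+, ×): (A ⊞ B)_ij = min_k (a_ik + b_kj). A minimal NumPy sketch of this product (the matrix values are illustrative only):

```python
import numpy as np

def minimax_product(A, B):
    """Min-plus (minimax) matrix product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    # Broadcast so entry [i, k, j] holds A[i, k] + B[k, j],
    # then reduce over k with min instead of sum.
    return (A[:, :, None] + B[None, :, :]).min(axis=1)

A = np.array([[0, 1],
              [2, 3]])
B = np.array([[0, 2],
              [1, 0]])
C = minimax_product(A, B)   # -> [[0, 1], [2, 3]]
```

Replacing `min` with `max` gives the dual max-plus product; the two together form the lattice matrix algebra of minimax algebra used throughout this line of work.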
Conference Committee Involvement (1)
Mathematical Methods in Pattern and Image Analysis
3 August 2005 | San Diego, California, United States