The digital structure of 3D discrete volumes makes these objects difficult to exploit and study because of the huge amount of data they store. A general way to address this problem is to transform the discrete surfaces of these volumes into polygonal surfaces in a reversible way (the original object can be retrieved from the polygonal surface). The aim of this article is to present a first reversible and topologically correct solution combining the Marching Cubes algorithm with digital plane segmentation.
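As a hedged illustration of the digital plane side of this construction, the sketch below tests voxels against the standard arithmetic definition of a digital plane; the coefficients and the naive thickness are illustrative assumptions, and the reversible Marching Cubes construction itself is not reproduced here.

```python
# Arithmetic digital plane P(a, b, c, mu, omega) = {(x, y, z) : mu <= a*x + b*y + c*z < mu + omega}.
# Choosing omega = max(|a|, |b|, |c|) gives a "naive" digital plane (illustrative values below).
def in_digital_plane(voxel, a, b, c, mu, omega):
    x, y, z = voxel
    return mu <= a * x + b * y + c * z < mu + omega

voxels = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 1)]
print(all(in_digital_plane(v, 1, 2, -4, 0, 4) for v in voxels))  # True: a valid plane segment
```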
A common problem with volumetric image data is that the amount of data is too large. This paper proposes a method that selects the important data and models it using a wavelet transform and a generalized asymptotic decider criterion. In the pre-processing stage, the important data are selected from the cuberille data based on their wavelet coefficients. We interpret scattered data interpolation as surface interpolation over the given data: the domain of definition of our 4-D surface is a union of tetrahedra. These geometric domains allow us to define surface interpolants for irregularly located points. In trivariate surface interpolation, we need to define a tetrahedral domain decomposition. The initial tetrahedral domain is constructed by Delaunay tetrahedrization. In the case of ambiguity, the asymptotic decider criterion is used instead of the sphere criterion. In this paper, the asymptotic decider criterion is applied to tetrahedral domains instead of cubical ones; to apply the idea to tetrahedra instead of cubes, the criterion needs to be generalized. In the generalized asymptotic decider criterion, the two diagonals of the quadrilateral need not intersect orthogonally. Oblique asymptotes are obtained instead of vertical and horizontal ones, and their values are computed by interpolation over the quadrilateral, rather than a rectangle, using barycentric coordinates.
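As a hedged illustration of interpolation over a tetrahedral domain, the sketch below performs linear interpolation inside a single tetrahedron via barycentric coordinates; the function names and example data are assumptions, and the generalized asymptotic decider itself is not shown.

```python
import numpy as np

def barycentric_coords(p, v0, v1, v2, v3):
    """Barycentric coordinates of point p with respect to tetrahedron (v0, v1, v2, v3)."""
    # Solve p = l0*v0 + l1*v1 + l2*v2 + l3*v3 with l0 + l1 + l2 + l3 = 1.
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    l123 = np.linalg.solve(T, p - v0)
    return np.concatenate(([1.0 - l123.sum()], l123))

def interpolate_in_tet(p, vertices, values):
    """Linearly interpolate scalar values given at the tetrahedron vertices."""
    lam = barycentric_coords(p, *vertices)
    return float(np.dot(lam, values))

# Example: interpolate at the centroid of the unit tetrahedron
verts = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
vals = np.array([0.0, 1.0, 2.0, 3.0])
print(interpolate_in_tet(np.mean(verts, axis=0), verts, vals))  # 1.5
```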
It has recently been claimed that some painters in the early Renaissance employed optical devices, specifically concave mirrors, to project images onto their canvas or other support (paper, oak panel, etc.) which they then traced or painted over. In this way, according to the theory, artists achieved their newfound heightened realism. We apply geometric image analysis to the parts of two paintings specifically adduced as evidence supporting this bold theory: the splendid, meticulous chandelier in Jan van Eyck's “Portrait of Arnolfini and his wife,” and the trellis in the right panel of Robert Campin's “Merode Altarpiece.” It has further been claimed that this trellis is the earliest surviving image captured using the projection of any optical device - a claim that, if correct, would have profound import for the histories of art, science and optical technology. Our analyses show that the Arnolfini chandelier fails standard tests of perspective coherence that would indicate an optical projection. More specifically, for the physical Arnolfini chandelier to be consistent with an optical projection, that chandelier would have to be implausibly irregular, as judged in comparison to surviving chandeliers and candelabras from the same 15th-century European schools. We find that had Campin painted the trellis using projections, he would have performed an extraordinarily precise and complex procedure using the most sophisticated optical system of his day (for which there is no documentary evidence), a conclusion supported by an attempted “re-enactment.” We provide a far simpler, more parsimonious and plausible explanation, which we demonstrate by a simple experiment. Our analyses lead us to reject the optical projection theory for these paintings, a conclusion that comports with the vast scholarly consensus on Renaissance working methods and the lack of documentary evidence for optical projections onto a screen.
A novel approach to angular projection calculation on discrete binary images, referred to as the Hough-Green Transform (HGT) and based on tracing object contours, is introduced. This approach is usually much more computationally efficient than the conventional Standard Hough Transform (SHT) on objects with many interior pixels, as the acceleration factor is proportional to the ratio of area to circumference. The HGT is also more accurate than the SHT.
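For context, here is a minimal sketch of the conventional SHT accumulation over all foreground pixels, the baseline whose cost the HGT reduces by visiting only contour pixels; the array shapes and angular sampling are illustrative assumptions.

```python
import numpy as np

def standard_hough(binary_img, n_theta=180):
    """Conventional SHT: every foreground pixel votes for all sampled angles.
    Cost is O(#foreground pixels * #angles); a contour-based scheme such as the
    HGT would visit only boundary pixels instead."""
    h, w = binary_img.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(binary_img)
    for x, y in zip(xs, ys):
        rho = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rho, np.arange(n_theta)] += 1
    return acc, thetas
```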
In this contribution we study the number of possible configurations, up to discrete rotations, of hyperedges built using image neighborhood hypergraphs. Some results for texture classification are also presented.
Accurate curvature estimation in mesh surfaces is an important problem with numerous applications. Curvature is a significant indicator of ridges and can be used in applications such as recognition, adaptive smoothing, and compression. Common methods for curvature estimation are based on fitting a quadric surface to local neighborhoods. While the use of quadric surface patches simplifies computations, the quadric surface model is too simple to capture the local surface geometry accurately and introduces a strong element of smoothing into the computations, thereby reducing the accuracy of the curvature estimation. The method proposed in this paper models the local surface geometry using a set of quadratic curves. Consequently, as the proposed model has a larger number of degrees of freedom, it is capable of modeling the local surface geometry much more accurately than quadric surface fitting. The experimental setup for evaluating the proposed approach is composed of randomly generated Bezier surfaces, for which the curvature is known, with various levels of additive Gaussian noise. We compare the results obtained by the proposed approach to those obtained by other techniques. It is demonstrated that the proposed approach produces significantly improved estimation results that are less sensitive to noise.
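A minimal sketch of the quadric-patch baseline described above (not the proposed quadratic-curve method), assuming the neighborhood is expressed in a local frame whose z-axis approximates the vertex normal: fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f by least squares and evaluate the Gaussian and mean curvature at the origin.

```python
import numpy as np

def quadric_patch_curvature(points):
    """Fit a quadric patch to a local neighborhood (points as an (n, 3) array in
    a frame where the query vertex is near the origin) and return the Gaussian
    and mean curvature of the fitted patch at (0, 0)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack((x * x, x * y, y * y, x, y, np.ones_like(x)))
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    fx, fy = d, e                      # first derivatives at the origin
    fxx, fxy, fyy = 2 * a, b, 2 * c    # second derivatives at the origin
    denom = 1.0 + fx ** 2 + fy ** 2
    gaussian = (fxx * fyy - fxy ** 2) / denom ** 2
    mean = ((1 + fy ** 2) * fxx - 2 * fx * fy * fxy + (1 + fx ** 2) * fyy) / (2 * denom ** 1.5)
    return gaussian, mean
```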
A novel mathematical framework for topological triangle characterization in 2D meshes is the basis of a system for object detection from images. The system relies on a set of topological operators and their supporting topological data structure to guarantee precise control of the topological changes introduced by inserting and removing triangles from mesh models. The approach enables object models to be created directly from the images without a previous segmentation step. Automatic approaches for modeling objects from images are scarce, partly because the process of creating the models typically involves a costly (and generally user-driven) segmentation step to obtain the necessary geometrical information. This issue is critical, for example, in procedures such as surgical planning and physiological studies, or in the simulation of elastic deformation and fluid flow. Our approach is a first step towards automatic mesh generation from images, which may represent significant progress for a range of applications that handle geometric models. The approach is used to extract robust models from medical images, in order to illustrate how the aggregation of topological information can empower a simple thresholding technique for object detection, making up for the lack of geometrical information in the images.
In this paper we will be concerned with the recognition of 3D objects from a single 2D view obtained via a generalized weak perspective projection. Our methods will be independent of camera/sensor position and any camera/sensor parameters, as well as independent of the choice of coordinates used to express the feature point locations on the object or in the image. Our focus will be on certain natural metrics on the associated shape spaces (which are called object space and image space, respectively). These metrics provide a distance between two object shapes or between two image shapes and are a generalization of the Procrustes metrics of Statistical Shape Theory. They can be shown to be induced from the L2 metric on the space of all n-tuples of feature points via a modified orbit metric, i.e., as the minimum distance between two orbits under the action of the affine group, modified to account for scale and shear. Finally we will define two notions of “distance” between an object and an image (with distance zero being a match under some weak perspective projection). This makes use of the object-image equations and computes the distance entirely in either the object space or in the image space. A Metric Duality Theory shows these two notions of “distance” are the same. Ultimately, we would like to know whether two configurations of a fixed number of points in 2D or 3D are the same if we allow affine transformations. If they are, then we want a distance of zero, and if not, we want a distance that expresses their dissimilarity - always recognizing that we can transform the points. The Procrustes metric, described in the shape theory literature [4], provides such a notion of distance for similarity transformations. However, it does not allow for weak perspective or perspective transformations and is fixed in a particular dimension. By the latter we mean that it cannot be regarded as giving us a notion of “distance” between, say, a 3D configuration of points and a 2D configuration of points, where zero distance corresponds to the 2D points being, say, a generalized weak perspective projection of the 3D points. In this paper, we show that generalizations of the Procrustes metric exist in the above cases. Moreover, these new metrics are quite natural in the context of the algebro-geometric formulation of the object/image equations discussed in Part I of this paper and reviewed below. These metrics also provide a rigorous foundation for error and statistical analysis in the object recognition problem.
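For reference, a standard form of the full Procrustes distance from statistical shape theory, of which the object- and image-space metrics above are generalizations; the exact normalization used by the authors may differ.

```latex
% Full Procrustes distance between n x m point configurations X and Y
% (standard statistical-shape-theory form; the authors' normalization may differ)
d(X, Y) = \min_{s > 0,\; R \in SO(m),\; t \in \mathbb{R}^{m}}
          \left\lVert X - s\, Y R - \mathbf{1}\, t^{\mathsf{T}} \right\rVert_{F}
```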
In this paper we extend previous techniques we developed for efficient morphological processing of 2D document images to the analysis of 3D voxel data obtained by Computed Tomography (CT) scans. The proposed approach is based on a directional interval coding scheme for the voxel data and a basic set of operations that can be applied directly to the encoded data. The scan lines can be chosen in an arbitrary direction so as to fit the directionality inherent in the data. Morphological operations are obtained by manipulating pairs of intervals belonging to the data and the kernel, where such manipulation can result in the addition, removal, or modification of existing intervals. In addition to implementing ordinary morphological operations, we develop a convolution operation that can be applied directly to the encoded data, thus enabling the implementation of regulated morphological operations, which incorporate a variable level of strictness. The computational complexity of the proposed operations is evaluated and compared to that of the standard implementation. The paper concludes with simulation results comparing the execution times of iteratively applied morphological operations under the standard implementation and the proposed encoded implementation.
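A minimal sketch of the interval idea on a single scan line, assuming a half-open run-length representation: dilation by a 1-D structuring element becomes Minkowski addition of intervals followed by a merge of overlaps. The representation and function name are assumptions, not the paper's encoding.

```python
def dilate_intervals(runs, k_lo, k_hi):
    """Dilate a run-length (interval) coded scan line by a 1-D structuring
    element covering offsets [k_lo, k_hi]. Each run is a half-open (start, end)
    interval of foreground samples."""
    grown = sorted((s + k_lo, e + k_hi) for s, e in runs)   # Minkowski addition
    merged = []
    for s, e in grown:                                      # merge overlapping runs
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

# Two nearby runs dilated by a symmetric radius-1 element merge into one
print(dilate_intervals([(2, 4), (5, 8)], -1, 1))  # [(1, 9)]
```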
The level sets of a map are the sets of points with level above a given threshold. Thanks to the inclusion relation, the connected components of the level sets can be organized in a tree structure, called the component tree. This tree, under several variations, has been used in numerous applications. Various algorithms have been proposed in the literature for computing the component tree; the fastest ones have been proved to run in O(n ln n) time. In this paper, we propose a simple-to-implement quasi-linear algorithm for computing the component tree on symmetric graphs, based on Tarjan's union-find principle.
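A minimal union-find sketch of component (max-)tree construction on a 1-D signal, processing samples in decreasing order of level; the 1-D adjacency, the canonical-element bookkeeping, and the output format are illustrative assumptions, simpler than the quasi-linear algorithm of the paper.

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def component_tree_1d(f):
    """Return parent links of a component (max-)tree for a 1-D signal f."""
    n = len(f)
    order = sorted(range(n), key=lambda i: -f[i])   # decreasing level
    parent = list(range(n))        # union-find forest over pixels
    canonical = list(range(n))     # representative pixel of each component
    tree = {}                      # pixel -> its parent pixel in the component tree
    processed = [False] * n
    for p in order:
        processed[p] = True
        for q in (p - 1, p + 1):                    # 1-D neighbours
            if 0 <= q < n and processed[q]:
                rp, rq = find(parent, p), find(parent, q)
                if rp != rq:
                    tree[canonical[rq]] = p         # attach the higher component to p
                    parent[rq] = rp
                    canonical[rp] = p
    return tree

print(component_tree_1d([1, 3, 2, 4, 1]))  # the two maxima hang below the level-2 saddle
```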
On a regular grid, the analysis of digital straight lines (DSL for short) has been intensively studied for nearly half a century. In this article, we turn to multi-scale discrete geometry. More precisely, we are interested in defining geometrical properties on heterogeneous grids, that is, tilings of the Euclidean plane by isothetic squares of different sizes. In some applications, such a heterogeneous grid can be a hierarchical subdivision of a regular unit grid. First of all, we define the objects of such a geometry (heterogeneous digital objects, arcs, curves...). Based on these definitions, we characterize DSL on such grids and then develop algorithms to recognize segments and to decompose a curve into maximal pieces of DSL. Finally, both algorithms are illustrated, and practical examples that have motivated this research are given.
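For the regular-grid case only, a hedged sketch of the arithmetic definition underlying DSL recognition; the heterogeneous-grid generalization developed in the article is not reproduced.

```python
# Arithmetic (Reveilles) definition of a naive digital straight line on a regular grid:
# D(a, b, mu) = {(x, y) : mu <= a*x - b*y < mu + max(|a|, |b|)}.
def on_naive_dsl(x, y, a, b, mu):
    r = a * x - b * y
    return mu <= r < mu + max(abs(a), abs(b))

# Trace the naive DSL of slope 2/5 through the origin for x = 0..9
a, b, mu = 2, 5, 0
points = [(x, y) for x in range(10) for y in range(-2, 6) if on_naive_dsl(x, y, a, b, mu)]
print(points)  # one pixel per column, following the line y = (2/5) x
```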
A new notion of discrete tangent, called the order d discrete tangent, adapted to noisy curves, is proposed. It is based on the definition of discrete tangents given by A. Vialard in 1996, on the definition of fuzzy segments, and on the linear algorithm for fuzzy segment recognition. The algorithm computing the order d discrete tangent at a point of a curve relies on simple calculations and is linear in the number of points of the obtained tangent. From the definition of an order d discrete tangent, we deduce an estimation of the normal vector and of the curvature at a point of a discrete curve for a given order d.
In this paper, we investigate topological watersheds. For that purpose we introduce a notion of “separation between two points” of an image. One of our main results is a necessary and sufficient condition for a map G to be a watershed of a map F; this condition is based on the notion of separation. A consequence of the theorem is that there exists a (greedy) polynomial-time algorithm to decide whether a map G is a watershed of a map F. We also show that, given an arbitrary total order on the minima of a map, it is possible to define a notion of “degree of separation of a minimum” relative to this order. This leads to another necessary and sufficient condition for a map G to be a watershed of a map F. Finally, we derive from our framework a new definition of the dynamics of a minimum.
A new approach to image segmentation is presented. The novelty consists in combining multiple image features (color, texture, and the pixel's geometric location in the spatial domain) to separate regions with homogeneous color and texture and similar spatial position, as well as in grouping the homogeneous clusters in the feature space in a unique manner. The proposed segmentation algorithm contains two main stages. First, a mode-finding and multi-link clustering algorithm converts an image into a map of small primary regions - a region graph representation. The nodes of the graph correspond to the distinguished regions, and the edges correspond to relations between neighboring regions. The region map is then simplified by a secondary graph analysis that merges neighboring regions. The performance of the developed algorithm was tested on various images obtained with a real camera.
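A toy sketch of the secondary graph-merging stage, assuming a primary label map is already available: build a region-adjacency graph and merge neighboring regions whose mean colors are close. The merging criterion, threshold, and three-channel image layout are assumptions, not the exact rule of the paper.

```python
import numpy as np

def merge_similar_regions(labels, image, thresh=20.0):
    """Merge 4-adjacent regions of a primary label map whose mean colours
    (image of shape (h, w, 3)) differ by less than `thresh`."""
    ids = np.unique(labels)
    means = {i: image[labels == i].mean(axis=0) for i in ids}
    parent = {i: i for i in ids}

    def find(i):                                   # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = {tuple(sorted(p)) for p in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel())}
    pairs |= {tuple(sorted(p)) for p in zip(labels[:-1, :].ravel(), labels[1:, :].ravel())}
    for a, b in pairs:                             # region-adjacency graph edges
        if a != b and np.linalg.norm(means[a] - means[b]) < thresh:
            parent[find(a)] = find(b)
    return np.vectorize(find)(labels)

labels = np.array([[0, 0, 1], [0, 2, 1], [2, 2, 1]])
img = np.zeros((3, 3, 3)); img[labels == 1] = 200.0   # region 1 is clearly different
print(merge_similar_regions(labels, img))             # regions 0 and 2 merge, region 1 stays
```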
When the xy-coordinates of any point in a binary image are compared with a standard VIRTUAL line y=(n/wid)x, we can program a computer to find all the contiguous points in the image that have equal vertical distance yd to the line. The starting and ending points of each such run of points are then the corner points of a potential STRAIGHT EDGE detected in the image. The (n/wid) in the virtual line is the “slope constant” of the line; wid is the width of the image frame, n is an integer running from -hgt to +hgt, and hgt is the height of the image frame. If we scan the standard line by running n from -hgt to +hgt in a loop and repeat the above process for each n, we find all potential corner points and all potential straight edges in one half of the image, with slopes running from -hgt/wid to +hgt/wid. Similarly, if we run a similar loop on the standard line x=(m/hgt)y, where m runs from -wid to +wid with equal horizontal distance xd, we obtain all the potential corner points and potential straight edges in the other half of the image frame. By these means, we obtain all the potential corner points and potential straight lines in the binary picture. The program then refines these findings with several sub-programs to find the true geometric lines and true corner points. The result can be saved in a very compact file and recalled very efficiently to reconstruct the “skeleton” of the original object.
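A hedged sketch of one pass of the scheme for a single slope constant n/wid: bucket foreground points by their rounded vertical distance to the virtual line and report runs of consecutive x values as candidate straight edges. The rounding and run-grouping details are illustrative assumptions, and the refinement sub-programs are not shown.

```python
from itertools import groupby

def edge_segments_for_slope(points, n, wid):
    """For the virtual line y = (n/wid)*x, group foreground points by their
    rounded vertical distance yd and return (start, end) corner points of each
    run of consecutive x values."""
    slope = n / wid
    buckets = {}
    for x, y in points:
        yd = round(y - slope * x)          # vertical distance to the virtual line
        buckets.setdefault(yd, []).append(x)
    segments = []
    for yd, xs in buckets.items():
        xs = sorted(set(xs))
        for _, run in groupby(enumerate(xs), key=lambda t: t[1] - t[0]):
            run = [x for _, x in run]      # one maximal run of consecutive x values
            if len(run) > 1:
                segments.append(((run[0], round(slope * run[0]) + yd),
                                 (run[-1], round(slope * run[-1]) + yd)))
    return segments

pts = [(x, x + 3) for x in range(10)] + [(4, 20)]   # a slope-1 edge plus a stray point
print(edge_segments_for_slope(pts, n=10, wid=10))   # [((0, 3), (9, 12))]
```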
TFT-LCD panels generally exhibit intrinsic non-uniformity due to variation of the backlight. A region with perceptible non-uniformity is defined as a defect, called area-mura. In this paper, we present a new segmentation method for detecting area-mura. We first extract area-mura candidates using regression diagnostics and then select the real area-muras among those candidates based on size and the SEMU index, a measure of contrast based on human brightness perception. The performance of the presented method has been evaluated on TFT-LCD panel samples provided by Samsung Electronics Co., Ltd.
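A hedged sketch of the candidate-extraction idea, assuming "regression diagnostics" amounts to fitting a smooth background surface to the luminance image and flagging outlying residuals; the polynomial background, robust scale estimate, and threshold are assumptions, and the size/SEMU-based selection stage is not shown.

```python
import numpy as np

def mura_candidates(lum, order=2, k=3.0):
    """Fit a 2-D polynomial background of the given order to a luminance image
    and flag pixels whose residuals exceed k robust standard deviations."""
    h, w = lum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx.ravel() / w, yy.ravel() / h
    cols = [x ** i * y ** j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, lum.ravel(), rcond=None)
    resid = lum.ravel() - A @ coef
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))   # robust scale (MAD)
    return (np.abs(resid) > k * sigma).reshape(h, w)

img = np.fromfunction(lambda i, j: 100 + 0.05 * i + 0.02 * j, (64, 64))
img[20:28, 30:40] += 5.0                 # a faint bright patch ("area-mura")
print(mura_candidates(img).sum())        # flags the patch pixels (plus perhaps a few strays)
```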