Moire patterns arising from the overlap of a display panel with the viewing-zone-forming optics are one of the major factors degrading the visual image quality of contact-type three-dimensional imaging systems. An analysis shows that the visual effect of the patterns can be minimized at a specific overlapping angle between the panel and the plate. This angle is realized by approximating each side of a pixel cell as a discrete line drawn along the boundaries of the pixels lying along that side of the cell. The slope of the line is expressed as the ratio of pixel counts in the vertical and horizontal directions and equals the tangent of half of the angle. This method allows pixel cells to be created in the shape of parallelograms and rhombi with a desired vertex angle for minimizing the moire pattern, especially in full-parallax imaging systems. The generated image reveals an almost invisible moire pattern within the predefined viewing distance range.
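As a rough illustration of the stated relation between the discrete-line slope and the cell vertex angle, the following Python sketch computes the angle from the pixel-count ratio; the function name and the example counts are ours, not the paper's.

import math

def vertex_angle_deg(vertical_pixels: int, horizontal_pixels: int) -> float:
    # The discrete-line slope is the ratio of pixel counts in the vertical
    # and horizontal directions, and per the abstract it equals tan(angle / 2).
    slope = vertical_pixels / horizontal_pixels
    return 2.0 * math.degrees(math.atan(slope))

# Example: a line rising 1 pixel for every 3 pixels across gives a
# vertex angle of about 36.9 degrees.
print(vertex_angle_deg(1, 3))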
In evaluating three-dimensional images, the depth resolution that an image can provide is an important parameter defining image quality. The depth resolution obtainable with three-dimensional images is mainly determined by the parameters of the cameras used for multiview image acquisition and by the resolution of the viewer's eye.
The depth resolution and the possible image depth range obtainable with parallel, toed-in, and sliding-aperture camera configurations for multiview image acquisition in three-dimensional imaging systems are calculated by assuming that the focusing beam is diffraction limited and that the pixel pitch of the imaging sensor sets the limiting image spot size. The calculation reveals that these parameters are essentially the same for the configurations considered. A comparison with a hologram whose size corresponds to the aperture of the camera objective shows that the hologram provides better depth-resolved images than the multiview systems. The depth resolution of the images in 3-D imaging systems is further reduced by the viewer's eye resolution; the amount of the reduction is proportional to the number of picture elements within the eye resolution spot.
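A minimal numerical sketch of this kind of estimate is given below. It assumes (our model, not the paper's exact derivation) that the limiting lateral spot at the object plane is the larger of the diffraction-limited spot and the sensor pixel pitch back-projected through the lens magnification, and that a point is depth resolved once geometric defocus blur from the aperture exceeds that spot.

def depth_resolution_m(wavelength_m, aperture_m, distance_m,
                       pixel_pitch_m, focal_length_m):
    # Diffraction-limited spot at the object plane for aperture D at distance L.
    diffraction_spot = 1.22 * wavelength_m * distance_m / aperture_m
    # Sensor pixel pitch mapped back to object space via the lens magnification.
    magnification = focal_length_m / (distance_m - focal_length_m)
    pixel_spot = pixel_pitch_m / magnification
    limiting_spot = max(diffraction_spot, pixel_spot)
    # Axial range over which defocus blur stays below the limiting spot.
    return limiting_spot * distance_m / aperture_m

# Example: 25 mm aperture, 50 mm focal length, 2 m distance, 10 um pixels
# gives a depth resolution on the order of 3 cm.
print(depth_resolution_m(550e-9, 25e-3, 2.0, 10e-6, 50e-3))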
Moire fringes appearing in multiview full-parallax three-dimensional imaging systems can be minimized by proper selection of the vertex angle of the pixel cells. Pixel cells with arbitrary vertex angles are built by crossing a pair of discrete line arrays with gradients ±α. The discrete lines are formed by the sides of the pixels lying along the straight lines that the discrete lines approximate. The gradient of the lines is defined as the ratio between the pixel counts in the vertical and horizontal directions. This method allows rhomb-approximating pixel cells with a desired shape to be created.
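One way to picture the crossed discrete-line construction is the integer labeling below: each pixel is assigned to the rhombic cell bounded by two line families with gradients +p/q and -p/q. This is an illustrative construction under our own assumptions (function name, integer indexing, and the vertical spacing parameter), not the authors' exact algorithm.

def cell_index(x: int, y: int, p: int, q: int, period: int):
    # Lines of the first family satisfy q*y - p*x = k*q*period (gradient +p/q);
    # lines of the second satisfy q*y + p*x = k*q*period (gradient -p/q).
    # Floor division assigns the pixel to the strip between consecutive lines.
    u = (q * y - p * x) // (q * period)
    v = (q * y + p * x) // (q * period)
    return (u, v)

# Example: label a small pixel grid with gradient 1/3 and 12-pixel spacing.
labels = [[cell_index(x, y, 1, 3, 12) for x in range(24)] for y in range(24)]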
To allow multiple viewers to see the correct perspective and to provide a single viewer with motion parallax cues during head movement, more than two views are needed. Since it is prohibitive to acquire, process, and transmit a continuum of views, it is preferable to acquire only a minimal set of views and to generate intermediate images from estimated disparities. First, to obtain high-quality generated images, we propose generating the intermediate images using multi-resolution and irregular quadtree decomposition. The irregular quadtree decomposition is aligned with object boundaries, where the disparity is discontinuous. The horizontal and vertical dividing locations of a block are computed by finding the peaks of the absolute values of a high-pass filter applied to the row and column averages. Second, regions of occlusion are identified by similarity comparisons among the matched block alternatives and then filled with pixels from the left or right image according to the principles we propose. Finally, images at arbitrary viewpoints are generated, yielding a 31.1 dB PSNR at the middle location between the two viewpoints.
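The block-splitting step can be sketched as follows: average the block along rows and columns, apply a high-pass filter, and take the location of the peak absolute response as the dividing position. The specific kernel and function name are our assumptions; the abstract does not specify them.

import numpy as np

def split_locations(block: np.ndarray):
    # Row and column averages of the block.
    row_avg = block.mean(axis=1)
    col_avg = block.mean(axis=0)
    # A simple high-pass kernel; the paper's exact filter is not given here.
    kernel = np.array([-1.0, 2.0, -1.0])
    row_resp = np.abs(np.convolve(row_avg, kernel, mode='same'))
    col_resp = np.abs(np.convolve(col_avg, kernel, mode='same'))
    # Peak absolute responses give the horizontal and vertical split positions.
    return int(np.argmax(row_resp)), int(np.argmax(col_resp))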
KEYWORDS: Target recognition, Radar, Signal to noise ratio, Scattering, Time-frequency analysis, Advanced distributed simulations, Principal component analysis, Detection and tracking algorithms, Monte Carlo methods, Data modeling
In this paper, we present the results of target recognition research based on the moment functions of various radar signatures, such as time-frequency signatures, range profiles, and scattering centers. The proposed approach utilizes geometrical moments or central moments of the obtained radar signatures. In particular, we derived exact, closed-form expressions for the geometrical moments of the adaptive Gaussian representation (AGR), one of the adaptive joint time-frequency techniques, and also computed the central moments of range profiles and of one-dimensional (1-D) scattering centers on a target obtained by various super-resolution techniques. The obtained moment functions are further processed to provide low-dimensional, redundancy-free feature vectors, and are classified via a neural network approach or a Bayes classifier. The performance of the proposed techniques is demonstrated using either a simulated radar cross section (RCS) data set or a measured RCS data set of various scaled aircraft models obtained at the Pohang University of Science and Technology (POSTECH) compact range facility. Results show that the techniques in this paper not only provide reliable classification accuracy but also save computational resources.
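As a generic illustration of the moment-feature idea for a 1-D range profile (not the paper's exact feature definition), the sketch below treats the profile magnitude as a distribution over range bins and returns its central moments; the normalization and the orders kept are our assumptions.

import numpy as np

def central_moments(profile: np.ndarray, max_order: int = 4):
    # Normalize the magnitude profile so it behaves like a distribution.
    weights = np.abs(profile)
    weights = weights / weights.sum()
    bins = np.arange(profile.size, dtype=float)
    mean = np.sum(weights * bins)
    # Central moments of order 2..max_order form a small feature vector.
    return [np.sum(weights * (bins - mean) ** n) for n in range(2, max_order + 1)]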
Based on spatially joining the viewing zones of two 8-view TV projection optics without overlap, a 16-view 3D imaging system is designed and its performance is demonstrated. Each 8-view TV projection optics projects different view images time-sequentially, and its operation is synchronized with the other. The system performed well.
We describe a new low-level scheme to achieve high-definition 3D-stereoscopy within the bandwidth of the monoscopic HDTV infrastructure. Our method uses a studio-quality monoscopic high-resolution color camera to generate a transmitted `main stream' view, and a flanking 3D-stereoscopic pair of low-cost, low-resolution monochrome camera `outriggers' to generate a depth map of the scene. The depth map is deeply compressed and transmitted as a low-bandwidth `auxiliary stream'. The two streams are recombined at the receiver to generate a 3D-stereoscopic pair of high-resolution color views from the perspectives of the original outriggers. Alternatively, views from two arbitrary perspectives between (and, to a limited extent, beyond) the low-resolution monoscopic camera positions can be synthesized to accommodate individual viewer preferences. We describe our algorithms, and the design and outcome of initial experiments. The experiments begin with three NTSC color images, degrade the outer pair to low-resolution monochrome, and compare the results of coding and reconstruction to the originals.
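The receiver-side recombination can be pictured with a minimal depth-image-based-rendering sketch: each pixel of the high-resolution color view is shifted horizontally in proportion to its depth-derived disparity toward a virtual viewpoint. The depth-to-disparity mapping and the lack of occlusion handling are simplifications of ours, not the paper's algorithm.

import numpy as np

def synthesize_view(color: np.ndarray, depth: np.ndarray, t: float,
                    max_disparity: int) -> np.ndarray:
    # t in [0, 1] selects a viewpoint between the color camera (t = 0)
    # and an outrigger position (t = 1); the linear mapping is assumed.
    h, w, _ = color.shape
    out = np.zeros_like(color)
    disparity = (depth / depth.max()) * max_disparity * t
    for y in range(h):
        for x in range(w):
            xs = int(round(x + disparity[y, x]))
            if 0 <= xs < w:
                out[y, xs] = color[y, x]
    return out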
We present a new method for solid-object 3-D imaging which can control the emitting diagram of each image point and can be applied to volumetric displays. The method is based on the fact that the visibility of each point of a 3-D object or scene can be described in terms of the point's emitting diagram. It was found that the amount of information required for the emitting diagram generally has the same order of magnitude as that for a sampled hologram. Emitting-diagram control can be applied to many different types of volumetric display using means such as spatial filters. Unlike electroholography, the method handles the 3-D positioning of image points and the forming of the emitting diagram separately, so in special cases it allows the amount of information to be reduced significantly. Experimentally, we obtained a color stereoscopic image with a number of resolvable perspective views distributed within 30 angular degrees, using only two 2-D images and a moving shield as the diagram former.
There is a phenomenon in which a 3D image appears in proportion to the focal distance when an object is viewed through a convex lens. An adjustable-focus lens that can control the focal distance of the convex lens is devised and applied to 3D TV, so that 3D TV can be watched without eyeglasses. The 3D TV image meets the NTSC standard. Parallax data and focus data about the image can be accommodated at the same time. A continuous-image method realizes much wider views, and an anti-3D image effect can be avoided by using this method. At present, an analysis of the prototype lens and experiments are being carried out; as a result, the phantom effect and the viewing area can be improved, and it is possible to watch the 3D TV at any distance. Distance data are triangulated by two cameras. A plan for an AVI prototype using ten thousand lenses is discussed. This method is compared with four major conventional methods, and it is revealed that it can make efficient use of the integral photography and varifocal methods. With integral photography, miniaturization of the system is possible, but it is difficult to obtain actual focus. With the varifocal method, there is no problem with focusing, but miniaturization is impossible. The theory investigated in this paper makes it possible to solve these problems.
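The two-camera triangulation of distance data can be summarized by the standard stereo relation below; the baseline, focal length, and disparity values in the example are ours, since the paper's exact setup is not given in the abstract.

def triangulated_distance_m(baseline_m: float, focal_length_px: float,
                            disparity_px: float) -> float:
    # Classic rectified-stereo relation: distance = baseline * focal / disparity.
    return baseline_m * focal_length_px / disparity_px

# Example: 65 mm baseline, 800 px focal length, 20 px disparity -> 2.6 m.
print(triangulated_distance_m(0.065, 800.0, 20.0))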
The main drawback of a transmission-type holographic screen is the color separation that occurs when color images are projected on it, owing to its high dispersion. This drawback can be overcome by recording the screen with a long, narrow, slit-type diffuser as the object. With this diffuser, a holographic screen of size 30 by 40 cm has been recorded to display a full-color stereoscopic image. The images displayed on the screen show good resolution and are naturally colored, except near the edges of the screen. The color distortions at the edges of the screen are reduced by a lenticular sheet attached to the screen, which also enlarges the viewing zone. The image that appears on the screen is bright enough to watch even in a normally illuminated room.