This paper describes implementation approaches in image acquisition and playback for 3-D computer graphics, 3-D television and 3-D theatre movies without special glasses. Projection lamps, spatial light modulators, CRTs and dynamic scanning are all eliminated by the application of an active image array, all static components and a semi-specular screen. The resulting picture shows horizontal parallax with a wide horizontal view field (up to 360 degrees), giving a holographic appearance in full color with smooth continuous viewing without speckle. Static component systems are compared with dynamic component systems using both linear and circular arrays. Implementations of computer graphic systems are shown that allow complex shaded color images to extend from the viewer's eyes to infinity. Large screen systems visible by hundreds of people are feasible by the use of low f-stops and high gain screens in projection. Screen geometries and special screen properties are shown. Viewing characteristics offer no restrictions on view position over the entire view field and include a "look-around" feature for all the categories of computer graphics, television and movies. Standard video cassettes and optical discs can also interface with the system to generate a 3-D window viewable without glasses. A prognosis is given for technology application to 3-D pictures without glasses that replicate the daily viewing experience. Superposition of computer graphics on real-world pictures is shown feasible.
A liquid-crystal pi-cell is an ideal choice for fabricating a video stereoscope because it can switch between states at speeds that approach vertical blanking times on a standard display monitor. Initial applications, although very good compared with alternative technologies, revealed several areas for improvement. First, extinction ratio measurements made from 470 to 630 nanometers varied between 20:1 and 35:1. This range of ratios falls far short of dichroic polarizer capability. Second, a noticeable color shift can occur in the transmissive state as the image is rastered down the screen. For instance, a white image can appear bluish white at the top and yellowish near the bottom. The problems associated with less than perfect extinction ratios are due to boundary-layer optics of the liquid-crystal device and to the drive waveforms applied to it. The coloration in the transmissive state occurs because the switching time from the extinction state to the transmission state is 2 to 4 msec, which exceeds typical submillisecond vertical blanking times. A second, compensating liquid-crystal cell was introduced to cancel the two extinction-state problems. The compensating cell arrangement also allows simultaneous cell switching without changing the net optical state of the device. By pre-switching both cells prior to vertical blanking, a drive scheme was devised that allows less than 100 μsec switching to either the extinction or transmission state. Although increased by two to three times, extinction ratios still failed to meet expectations. However, the double-cell arrangement accentuated substrate index mismatches that accounted for the last bit of light leakage. Correcting this problem brought the extinction values near the limits of the polarizers and improved the transmission state to 28%.
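The extinction-ratio figures quoted above can be reproduced from raw intensity measurements. A minimal sketch in Python, using hypothetical spectral samples (the numeric values are invented for illustration, not the authors' measured data):

```python
# Hypothetical spectral samples across 470-630 nm; values invented for
# illustration, not measured data.
wavelengths  = [470, 510, 550, 590, 630]
open_state   = [0.26, 0.28, 0.28, 0.27, 0.25]       # transmissive-state intensity
closed_state = [0.013, 0.010, 0.008, 0.009, 0.011]  # extinction-state leakage

def extinction_ratios(open_vals, closed_vals):
    """Per-wavelength extinction ratio: transmitted intensity over leakage."""
    return [o / c for o, c in zip(open_vals, closed_vals)]

for wl, r in zip(wavelengths, extinction_ratios(open_state, closed_state)):
    print(f"{wl} nm: {r:.0f}:1")
```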
A novel stereoscopic depth encoding/decoding process has been developed which considerably simplifies the creation and presentation of stereoscopic images in a wide range of display media. The patented chromostereoscopic process is unique because the encoding of depth information is accomplished in a single image. The depth encoded image can be viewed with the unaided eye as a normal two-dimensional image. The image attains the appearance of depth, however, when viewed by means of the inexpensive and compact depth decoding passive optical system. The process is compatible with photographic, printed, video, slide projected, computer graphic, and laser generated color images. The range of perceived depth in a given image can be selected by the viewer through the use of "tunable depth" decoding optics, allowing infinite and smooth tuning from exaggerated normal depth through zero depth to exaggerated inverse depth. The process is insensitive to the head position of the viewer. Depth encoding is accomplished by mapping the desired perceived depth of an image component into spectral color. Depth decoding is performed by an optical system which shifts the spatial positions of the colors in the image to create left and right views. The process is particularly well suited to the creation of stereoscopic laser shows. Other applications are also being pursued.
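The core encoding step, mapping desired depth into spectral color, can be sketched as a simple hue ramp. This is an illustrative mapping only; the red-near/blue-far ordering and the HSV ramp are assumptions, not the patented process's exact encoding function:

```python
import colorsys

def depth_to_color(depth, near=0.0, far=1.0):
    """Map a depth value to a spectral hue: red at `near`, blue at `far`.
    Illustrative HSV ramp only -- an assumption, not the patented
    chromostereoscopic encoding function."""
    t = (depth - near) / (far - near)
    t = min(max(t, 0.0), 1.0)                        # clamp to encodable range
    return colorsys.hsv_to_rgb(0.66 * t, 1.0, 1.0)   # hue 0.0 = red, 0.66 = blue
```

Rendering each scene element with a color chosen this way yields a single image that still reads normally to the unaided eye, while the decoding optics displace each hue horizontally to form the left and right views.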
Within the framework of the research on High Definition Television (HDTV) at the Heinrich-Hertz-Institut in West Berlin/West Germany, the feasibility of recording and projection onto lenticular screens of images with three dimensional effects is investigated. As the selectivity of a wide-angle lenticular screen has to be increased enormously in order to minimize crosstalk, progress had to be made in the design of the projection system.
Stereoscopic images provide unique visual information not available in the planar computer-graphics images which have come to be called "3-D." However, special care must be taken in generating stereoscopic pairs of images for computer graphics. The creation of stereoscopic images requires observing special constraints which do not occur in generating planar "3-D" graphics, in order to produce three-dimensional images which are not distorted, are comfortable to view for long periods, and which appear to have appropriate z-axis depth. The interrelated stereoscopic factors of convergence, accommodation, homologous points, retinal disparity, and binocular symmetry are explained. The effects on the stereoscopic CRT image of positive, negative, zero, and uncrossed parallax values are described, and various special problems and solutions related to images with negative parallax are discussed. Limitations prescribed by the accommodation/convergence ratio and by the use of off-screen effects are proposed. Procedures are offered to writers of software for the creation of properly-prepared images for display on a flickerless time-multiplexed stereoscopic CRT display system, with consideration given to extrastereoscopic cues, screen geometry, and initialization parameters. A useful stereoscopic camera model is described, in which the mapping of points in world coordinates is designated, along with parallax governors to prevent the display of discomfort-producing images. Algorithms are given for the generation of images in the correct format for the system described.
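The parallax relationships in such a stereoscopic camera model can be illustrated with a simple converged two-camera geometry. A sketch under assumed parameter values (65 mm camera separation, 1 m convergence distance, neither taken from the paper): points at the convergence distance get zero parallax, nearer points crossed (negative) parallax, and farther points positive parallax approaching the camera separation.

```python
def screen_parallax(z, eye_sep=0.065, conv_dist=1.0):
    """Horizontal screen parallax (in the units of eye_sep) of a point at
    depth z, for two cameras separated by eye_sep and converged at
    conv_dist.  Zero at the convergence plane, negative (crossed) in front
    of it, approaching eye_sep at infinity.  Defaults are assumptions."""
    return eye_sep * (1.0 - conv_dist / z)

def governed_parallax(z, max_parallax=0.03, **kw):
    """A crude 'parallax governor': clamp parallax so off-screen effects
    stay within a comfort limit (the limit value is an assumption)."""
    p = screen_parallax(z, **kw)
    return max(-max_parallax, min(max_parallax, p))
```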
A new concave semi-cylindrical holographic stereogram format, called the "alcove" hologram, offers 3-D real images that project to the viewer's fingertips with an angle of view of nearly 180 degrees. The hologram is area-multiplexed in a way that is extendable to large holograms that can be quickly and automatically produced, perhaps by a computer peripheral "3-D hard copy" device. In order to produce undistorted images, the input images must be intensively processed, or derived from a digital data base, so that the method for hologram production is particularly suited to a digital imaging and design environment.
Little if any research has been done on the acceptability of the VISIDEP (tm) three-dimensional image to viewers. The authors have had experiences which indicated to them that such acceptance would be readily obtained and that viewers would find the depth illusion satisfactory in almost every video screen format. However, no controlled data collection has previously been carried out. This paper reports the first controlled investigation, which yields evidence confirming the views held by the authors concerning the viewer and VISIDEP (tm) presentations. Several avenues of needed study were identified. As an initial investigation, the study attempts to establish future questions and trends rather than extensive amounts of hard data and findings. The results are reported as such.
Fusional vergence eye movements depend upon the interocular correlations of stimuli viewed on a video "3-D" display. The display was driven by a special computer interface herein described.
Two factors contributing to "ghosting" (image doubling) in plano-stereoscopic CRT displays are phosphor decay and dynamic range of the shutters. A ghosting threshold must be crossed before comfortable fusion can take place. The ghosting threshold changes as image brightness increases and with higher-contrast subjects and those with larger parallax values. Because of the defects of existing liquid crystal shutters, we developed a liquid-crystal shutter with high dynamic range, good transmission, and high speed. With these shutters, residual ghosting is a result of phosphor persistence.
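The two ghosting contributions named above, shutter leakage and phosphor persistence, can be combined in a toy additive model. The function, its parameter values, and the fixed visibility threshold are all assumptions for illustration, not the authors' measurements (the paper notes the real threshold varies with brightness, contrast, and parallax):

```python
def ghost_fraction(extinction_ratio, phosphor_residual=0.0):
    """Fraction of the unintended eye's image reaching the viewer: shutter
    leakage (1 / extinction ratio) plus residual phosphor glow carried
    over from the previous field.  Toy additive model, assumed values."""
    return 1.0 / extinction_ratio + phosphor_residual

def ghost_visible(extinction_ratio, phosphor_residual, threshold=0.01):
    """True if total leakage crosses an assumed fixed visibility threshold."""
    return ghost_fraction(extinction_ratio, phosphor_residual) > threshold
```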
There is only one real world. We "see" that world as extending into three dimensions because we look at it with two eyes. We are not presented with two "pictures" of the real world, but with two separate views. Views, not pictures. The analogy of the eye as a camera has done violence to the development of concepts of human vision. The eye is a dynamic sensing apparatus that supplies the brain with inputs from which the brain constructs the scene we "see", and so is responsible for our perceptual structuring of the real world. These visual perceptions are dependent upon our other sensory inputs as well. Indeed, our body senses control and direct, to some degree, where our eyes look and what we "see". This process of conceptualization is thoroughly egocentric. This paper addresses the processes by which our mind/eye/senses interact to form our perception (and concepts) of the world (real or illusionary) and the advantages (and problems) of our egocentric reduction of the data inputs.
Three professional stereo photointerpreters assessed the relative utilities of dynamic 2-D and binocular 3-D imagery sequences of an M48 tank and a truck imaged with fixed-position 8-12 micron FLIR sensors during night operations at a 2 km proving range. The imagery assessment underscores the increased information content of the 3-D sequences, providing the potential for improving: spatial orientation, operator confidence in scene interpretation and the ability to navigate through irregular terrain, and the accuracy and consistency of moving target relative distance judgments.
Conventional two-channel stereoscopic 3D displays fall short of duplicating real-world viewing in a number of ways. For example, although an observer's convergence changes when looking at objects at varying distances in a 3D display, the ocular focus required remains constant, due to the fixed optical distance of the display's images. This can produce an undesirable mismatch between focus and fixation. Further, if the observer's head moves up/down, left/right, fore/aft, or tilts, the two retinal images do not transform as they normally would in direct viewing of the scene. The degree of "remote presence" or "telepresence" achieved by these systems is related to how well the display duplicates the retinal images and visual-motor feedback that would be experienced if directly viewing the remote scene. This paper describes two real-time head-coupled remote telepresence display concepts: a helmet-mounted display, and a "virtual window" display. These can, in principle, not only provide vertical, horizontal, and longitudinal motion parallax linked to changing observer position, but can also recreate the normal visual-motor relationship between focus and convergence in the observer's eyes. Existing hardware systems are discussed as an available means for implementing demonstration prototypes for these concepts.
This paper reports several results from an on-going research program designed to examine the utility of alternate input device technologies for 3-dimensional (3-D) computer display workstations. In this paper, operator performance levels on a 3-D cursor-positioning task were compared using three input devices: (1) a trackball that allowed unrestricted (i.e., free-space) movements within the display space, (2) a mouse that provided selectable two-axis (i.e., plane) movements, and (3) a set of thumbwheels that provided separate controls for orthogonal single-axis (i.e., vector) movements. In addition, the input device evaluation was conducted for two operationally distinct 3-D display techniques: (1) a linear perspective encoding of image depth information and (2) a field-sequential stereoscopic encoding of depth information. Results are discussed in terms of input device selection and general design considerations for the user interface to 3-D computer workstations.
By combining stereoscopic aspects of vision with other optical clues, the pilot of a flight simulator is able to perceive true three-dimensional representations of pictorial display formats or simulated visual scenes. Three-dimensional (3-D) stereographic pictorial formats and their corresponding display systems are being developed and evaluated in order to determine the payoffs of the 3-D computer-generated display formats in the cockpit. The objectives of this research in true three-dimensional cockpit imagery are 1) to determine whether a pilot can better interpret complex pictorial display formats or visual scenes when the third dimension is added and 2) to determine how motion and depth cues can be used to tightly couple the human responses of the pilot to the aircraft control systems. This paper reviews current research, development, and evaluation of easily modifiable 3-D stereographic pictorial display systems being used at the Advanced Cockpit Display Laboratory (ACDL), Lockheed-Georgia Company and at the Flight Dynamics Laboratory, Wright-Patterson AFB. This research includes the analysis and development of true 3-D pictorial formats representing the entire 3-D flight profile; e.g., displays for terrain following/terrain avoidance/threat avoidance and air-to-air and air-to-surface weapon delivery. Electro-optical shuttering systems; e.g., active and passive liquid crystal shutters (LCSs), stereographic display systems, and high-performance pseudo 3-D computer graphics workstations (Silicon Graphics IRIS), are being used to generate stereo pairs. Sidestick and throttle controllers are used to fly through the visual database. These near real-time simulations will be performed in realistic fighter and transport cockpit shells, which may evolve into 1995 designs.
The technique of Image Plane Integral (IPI) Holography has been demonstrated as a useful method for providing autostereoscopic three-dimensional viewing for conventional cineangiography. IPI holograms have a bright, high resolution image that presents proper spatial perception and faster comprehension of orientation, shape and distribution of vascular anatomy. The actual holographic conversion can be made routinely, economically and quickly by photographic technicians with a moderate amount of retraining. The format of the hologram is a Mylar film that is easily seen on a modified light viewbox. Holographic techniques previously attempted with angiography and planar imaging modalities have shown many limitations including low S/N ratios, narrow viewing angles, narrow depth-of-field, distorted spatial relations and various optical aberrations. IPI holography offered at a single imaging center could provide a convenient source of three-dimensional hard copy that would enhance various specific modalities such as conventional cineangiography, DSA, X-ray CT, magnetic resonance imaging (MR) and single photon emission computed tomography (SPECT).
A Heads-Up Stereo Display device is described for stereo microscope image presentation. Liquid crystal viewing screens are used to provide large exit pupils in the left and right channel. The observer, therefore, is presented with an eye level, three-dimensional image. The optical system is described showing how scattering is used to accomplish the pupil enlargement without mechanically moving parts, but rather by using the liquid crystal screen as an active optical diffuser.
This paper describes an optical projection system where the images float in space and are perceived with enhanced depth. The optical system can be used by itself or in conjunction with other 3D enhancing techniques.
A single electronic camera mounted on a moving platform can be used to generate three-dimensional images for remote sensing in near-real time. Preliminary tests with an airborne motion picture camera and VISIDEP (tm) alternating frame technology have been used to demonstrate the feasibility of such a system. With the continued development of high density digital video, high quality three-dimensional video images may be generated in near-real time with delays of less than three or four seconds.
The VISIDEP (TM) method of three-dimensional display has been applied to object and cel animation by the author. The VISIDEP method differs from other three-dimensional imaging techniques in that it gives parallax information without requiring viewers to wear special glasses. The illusion of movement created by the various forms of animation has been used in motion pictures since the late 1800s. Based on persistence of vision, animation requires artwork replacement at a rate of 12 to 24 changes per second. Through the use of perspective and shading, animators have been able to add a three-dimensional "look" to their films. This two-dimensional depth is "read" by the viewer through a learned process based on their cultural and sociological background. Until recently a true three-dimensional "look" could be achieved only through stereoscopic filming. Most of the systems available require special projection and viewing optics. The subject matter is usually live-action. Only one animated feature film, "Starchaser" (TM), has been produced using a stereoscopic process.
The increased demand for multi-resolution displays of simulated scene data for aircraft training or mission planning has led to a need for digital databases of 3-dimensional topography and geographically positioned objects. This data needs to be at varying resolutions or levels of detail as well as be positionally accurate to satisfy close-up and long distance scene views. The generation and maintenance processes for this type of digital database requires that relative and absolute spatial positions of geographic and cultural features be carefully controlled in order for the scenes to be representative and useful for simulation applications. Autometric, Incorporated has designed a modular Analytical Image Matching System (AIMS) which allows digital 3-D terrain feature data to be derived from cartographic and imagery sources by a combination of automatic and man-machine techniques. This system provides a means for superimposing the scenes of feature information in 3-D over imagery for updating. It also allows for real-time operator interaction between a monoscopic digital imagery display, a digital map display, a stereoscopic digital imagery display and automatically detected feature changes for transferring 3-D data from one coordinate system's frame of reference to another for updating the scene simulation database. It is an advanced, state-of-the-art means for implementing a modular, 3-D scene database maintenance capability, where original digital or converted-to-digital analog source imagery is used as a basic input to perform accurate updating.
Holographic stereograms, varifocal mirrors, stereo pairs and alternating pairs are evaluated as to their appropriateness for modeling digital cartographic data as produced by the Defense Mapping Agency. After describing the data and establishing criteria for what constitutes an acceptable three-dimensional display system, we present each technology and evaluate it based on the criteria.
Three-dimensional (3-D) treatment planning has been widely recognized as the ultimate method for radiation therapy for several decades. Recently, interest in developing 3-D treatment planning has been stimulated by the advent of computed tomography (CT), magnetic resonance imaging, and advanced computer technology. A 3-D treatment planning system requires an interactive computer system capable of performing the following functions: demonstration of the tumor volume and normal anatomy in three dimensions; calculation of the tumor volume; definition of the target volume; measurement of the distances and angles from outer surface reference points (e.g., external meatus) to specific anatomic points of interest (e.g., center of tumor); projection of the spatial relationship between the therapy beam and normal anatomy; and calculation and display of the dose distribution in three dimensions. We have used a commercially available computer display system with a host microcomputer (M68000) to satisfy the above display and interaction requirements except for the calculation of 3-D dose distributions. The system has been applied to several cases which used CT as the imaging modality. A scanning protocol was established which called for contiguous 5 mm thick slices from 2 cm above to 2 cm below the skin markers for the designated treatment field. Each patient was scanned in the treatment position, possibly using a fixation device. The outer skin contours, the tumor and adjacent contours were manually traced using a digitizing pen. The surfaces of the skin, the tumor, and normal anatomic structures were reconstructed in the display computer, which then allowed a variety of interactions with the data, including beam definition and real-time positioning of the beam. After beam positions were established, the dose distribution within the treatment volume was computed, reconstructed, and then displayed along with the anatomic structures.
The Fluid Dynamics Division of the NASA Ames Research Center is using high definition (high spatial and color resolution) computer graphics to help visualize flow fields from computer simulations of air flow about vehicles such as the Space Shuttle. Computational solutions of the flow field are obtained from Cray supercomputers. These solutions are then transferred to Silicon Graphics Workstations for creation and interactive viewing of dynamic 3D displays of the flow fields. The scientist's viewing position in the 3D space can be interactively changed while the fluid flow is either frozen in time or moving in time. Specific animated sequences can be created for viewing on the workstation or for recording on video tape or 16mm movies with the aid of specialized software that permits easy editing and automatic "tweening" of the sequences. This paper will describe the software developed for creating the 3D flow field displays and for creating the animation sequences. It will also specify the hardware required to generate these displays, to record them on video tape, and to record them on 16mm film. A video tape will be shown to illustrate the capabilities of the hardware and software with examples.
Realistic 3-D scene generation is now a possibility for many applications. One barrier to increased use of this technique is the large amount of computer processing time needed to render a scene. With the advent of parallel processors that barrier may be overcome if efficient parallel scene generation algorithms can be developed. In general, this has not been true because of restrictions imposed by non-shared memory and limited processor interconnect architectures. In addition, vector processors do not efficiently support the adaptive nature of many of the algorithms. A new parallel computer, the NYU Ultracomputer, has been developed which features a shared memory with a combining network. The combining network permits simultaneous reads and writes to the same memory location using a new instruction, the Fetch-and-Op. These memory references are resolved in the memory access network and result in particularly efficient shared data structures. Basic elements of this architecture are also being used in the design of the gigaflop-range RP3 at IBM. Some algorithms typical of image synthesis are explored in the paper and a class of equivalent queue-based algorithms is developed. These algorithms are particularly well suited to the Ultracomputer class of processor and hold promise for many new applications of realistic scene generation.
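A queue-based parallel algorithm built on an atomic fetch-and-add counter can be sketched as follows. This is an illustration of the Fetch-and-Op idea, not the paper's implementation: in Python the atomic update is emulated with a lock, whereas on the Ultracomputer the combining network resolves concurrent updates in the memory access network.

```python
# Illustrative sketch: workers claim scanlines from a shared counter via
# fetch-and-add, in the spirit of the Ultracomputer's Fetch-and-Op.
# (Atomicity emulated here with a lock.)
import threading

class FetchAndAddCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, increment):
        """Atomically return the old value and add increment."""
        with self._lock:
            old = self._value
            self._value += increment
            return old

def render_scanlines(counter, height, results):
    """Each worker claims the next unrendered scanline until none remain."""
    while True:
        y = counter.fetch_and_add(1)
        if y >= height:
            break
        results[y] = y * y  # stand-in for rendering scanline y

counter = FetchAndAddCounter()
results = [None] * 8
workers = [threading.Thread(target=render_scanlines, args=(counter, 8, results))
           for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each fetch-and-add returns a distinct old value, no two workers ever claim the same scanline, and the queue needs no further coordination.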
Pairs of digital images have been generated for stereoscopic viewing. The images simulate perspective views from a selected observation point, looking in a selected direction. The perspective scenes were generated from monoscopic digital image data plus grid elevation data. Stereoscopic scenes have been generated for both fixed and moving observation points, with motion simulated at 30 frames per second. The stereo scenes provide the observer with a strong stereoscopic impression when displayed using an electronic stereoscopic display.
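The core of stereo pair generation can be sketched as projecting each terrain point from two horizontally offset observation points. This is a minimal pinhole-projection illustration under assumed parameter names (eye separation, focal length); the paper's actual projection details are not specified here.

```python
# Illustrative sketch: project a 3-D point into left- and right-eye
# perspective views by offsetting the observation point horizontally.

def project(point, eye, focal=1.0):
    """Simple pinhole projection of a 3-D point onto an image plane."""
    x, y, z = (p - e for p, e in zip(point, eye))
    return (focal * x / z, focal * y / z)  # assumes z > 0 (point in front)

def stereo_pair(point, eye, separation=0.065):
    """Project one point from two eyes separated along the x axis."""
    half = separation / 2.0
    left_eye = (eye[0] - half, eye[1], eye[2])
    right_eye = (eye[0] + half, eye[1], eye[2])
    return project(point, left_eye), project(point, right_eye)

left, right = stereo_pair((0.0, 0.0, 10.0), (0.0, 0.0, 0.0))
# The horizontal disparity between the two views encodes depth:
print(right[0] - left[0])
```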
A true four-dimensional (4D) graphics laboratory is under development to display data sampled from real-world 4D structures. The system consists of multiple processors, 4D display devices, voice input and output, mechanical and eye-driven cursors, and projection programs that produce images with stereo, motion, and focal depth cues. There are two binocular stereo 4D displays, one using two graphics generators that are optically overlapped and the other using the Tektronix stereo liquid crystal shutter. The fourth dimension is provided by 16 megabytes of image memory in each generator that hold sixty-four 512x512 stereo pairs. 3D hard copy is produced with NIMSLO lenticular and Polaroid Vectograph processes. 3D presentations use binocular stereo slide projectors, and 4D presentations use the Tektronix shutter and video tapes.
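The stated image memory capacity is internally consistent under one plausible reading, which can be checked with quick arithmetic: if each generator stores one eye's view of each of the 64 pairs at 8 bits per pixel (an assumption, not stated in the abstract), the total comes to exactly 16 megabytes.

```python
# Arithmetic check on the image memory figure (assuming one generator
# stores one eye's 512x512 view of each pair at 1 byte per pixel):
frames, width, height = 64, 512, 512
bytes_needed = frames * width * height
print(bytes_needed == 16 * 2**20)  # True: exactly 16 megabytes
```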