This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal mappings from data attributes to visual features. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
Chernoff faces have been proposed as a tool for scientific and information visualization. However, the effectiveness of this form of visualization is still open to speculation. Chernoff faces, it is suggested, make use of humans' apparently inherent ability to recognize faces and small changes in facial characteristics. Limited research has been conducted to assess how well Chernoff faces make use of this ability. So far, it is still unclear how humans recognize faces and whether or not a specific set of rules governs the process. A particular area of interest is whether or not certain features are pre-attentive. Furthermore, what effect a certain number of distractors have on the attentiveness of various features is also of concern. This information could be used to maximize the effectiveness of Chernoff faces by providing an indication of which applications would be best served by the use of Chernoff faces. In order to address this issue, we have conducted a user study, which tested the effectiveness and pre-attentiveness of several features of Chernoff faces. Our user study indicated that the perception of eye size, a specific face, eyebrow slant, and the combination of eyebrow slant and eye size is a serial process. Our study also indicated that for longer viewing times, eye size and eyebrow slant were the most accurate features. These initial results indicate that Chernoff faces may not have a significant advantage over other iconic visualization techniques for multidimensional information visualization.
The surface of the human face can be represented by a set of facets. The Phase Fourier Transform (PFT) can be used to transform a facet in the space domain to a peak in the frequency domain. The position and the distribution of the peak represent the orientation and shape of the facet, respectively. The PFT of the human face provides a new signature of the face. The intensity of the PFT is invariant to shift and to out-of-plane rotation within a certain angle. It is also scale invariant within a certain range. We have used Circular Harmonic m-r filtering to achieve in-plane partial rotation invariance. The recognition decision is based on the intensity and performance of the correlation peak.
Computer-assisted interactive visualization has become a valuable tool for discovering the underlying meaning of tabular data, including categorical tabular data. The capabilities of traditionally mundane displays such as scatter plots can be expanded to usefully depict categorical tabular data by incorporating annotations and transforms, and by integrating these extensions into an interactive system.
Color is commonly used in data visualization in order to convey a wide variety of types of information: metric values, pattern, extrema, emphasis, and others. A distressingly large percentage of these visualizations appear to use the default color scale for the visualization package used to create them. While sometimes this is an appropriate choice, careful consideration of the implications of color scale selection can often result in a more effective visualization. Factors which should be considered include the characteristics of the data, the questions of interest about the data, and the expected viewers of the representation.
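The abstract's advice can be made concrete with a small sketch. The decision rules and colormap names below follow common matplotlib conventions and are illustrative assumptions, not the paper's own recommendations; the helper `choose_color_scale` is hypothetical.

```python
# Hypothetical helper illustrating the point above: the "right" color
# scale depends on the data's characteristics, not the package default.
# Colormap names are matplotlib conventions; the rules are illustrative.

def choose_color_scale(vmin, vmax, categorical=False, n_categories=0):
    """Pick a colormap family from simple data characteristics."""
    if categorical:
        # Unordered classes call for distinguishable hues, not a ramp.
        return "tab10" if n_categories <= 10 else "tab20"
    if vmin < 0 < vmax:
        # Data diverging around a meaningful zero: a two-sided scale.
        return "coolwarm"
    # Ordered magnitudes: a perceptually uniform sequential ramp.
    return "viridis"

print(choose_color_scale(0.0, 5.0))        # sequential data -> viridis
print(choose_color_scale(-3.0, 3.0))       # diverging data  -> coolwarm
print(choose_color_scale(0, 0, True, 7))   # 7 categories    -> tab10
```

The remaining factor the abstract names, the expected viewers, is harder to encode; for example, red-green diverging scales are a poor choice when color-deficient viewers are anticipated.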
The goal of our image-based rendering group is to accurately render scenes acquired from the real world. To achieve this goal, we capture scene data by taking 3D panoramic photographs from multiple locations and merge the acquired data into a single model from which real-time 3D rendering can be performed. In this paper, we describe our acquisition hardware and rendering system that seeks to achieve this goal, with particular emphasis on the techniques used to support interactive exploration.
With advances in panoramic image-based rendering techniques and the rapid expansion of web advertising, new techniques are emerging for visualizing remote locations on the WWW. Success of these techniques depends on how easy and inexpensive it is to develop a new type of web content that provides pseudo-3D visualization at home, 24 hours a day. Furthermore, the acceptance of this new visualization medium depends on how effective the familiarization tools are for a segment of the population never before exposed to this type of visualization. This paper addresses various hardware and software solutions available to collect, produce, and view panoramic content. While the cost and effectiveness of building the content are addressed using a few commercial hardware solutions, the effectiveness of familiarization tools is evaluated using a few sample data sets.
Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and provide quantitative measurements such as accurate relative positioning of crime-scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that helps law enforcement agents quickly document and accurately record a crime scene.
In this paper, we present a method for automatically registering a 3D range image and a 2D color image using the χ²-similarity metric. The goal of this registration is to allow the reconstruction of a scene using multi-sensor information. Traditional registration algorithms use invariant image features to drive the registration process. This approach limits the applicability to multi-modal data, since features of interest may not appear in each modality. However, the χ²-similarity metric is an intensity-based approach that has interesting multi-modal characteristics. We explore this metric as a mechanism to govern the registration search. Using range data from a Perceptron laser camera and color data from a Kodak digital camera, we present results using this automatic registration with the χ²-similarity metric.
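As a rough sketch of an intensity-based χ² comparison in the spirit of the metric described above (the paper's exact formulation may differ), two images can be compared through their intensity histograms: a low χ² value means the intensity distributions agree, and a registration search can minimize it over candidate transforms.

```python
# Illustrative chi-squared similarity between intensity histograms.
# The helper functions are assumptions for this sketch, not the
# paper's implementation.

def histogram(pixels, bins=8, lo=0, hi=256):
    counts = [0] * bins
    width = (hi - lo) / bins
    for p in pixels:
        counts[min(int((p - lo) / width), bins - 1)] += 1
    return counts

def chi2_distance(h1, h2, eps=1e-10):
    # Symmetric chi-squared distance between two histograms.
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

img_a = [10, 12, 200, 210, 205, 11]   # flattened grayscale pixels
img_b = [11, 13, 198, 212, 207, 10]   # nearly the same scene
img_c = [90, 100, 110, 95, 105, 98]   # different intensity structure

ha, hb, hc = (histogram(i) for i in (img_a, img_b, img_c))
print(chi2_distance(ha, hb))  # small: distributions agree
print(chi2_distance(ha, hc))  # larger: distributions differ
```

Because the score depends only on intensity statistics rather than on detecting the same features in both modalities, this style of metric remains usable when, say, range and color imagery share no obvious common features.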
An ordnance-disposal expert was called to a remote site to dispose of a buried cache of explosives which had been hidden by a felon. The cache was buried in a forest and consisted of a large number of old sticks of dynamite and blasting caps. The explosives and caps had been collected from abandoned mines and were old, corroded and fragile. The ordnance expert declined to explode the cache in place (a common and safe way of disposing of old explosives) because he feared starting a forest fire. Instead, the explosives were removed from the burial site and moved to a dry streambed. The dynamite was burned. A small hole was dug in the streambed and the blasting caps as well as a few other small explosive devices were placed inside. Witnesses state that they were sent away from the site in preparation for the disposal. Instead of the usual shouting of "fire in the hole" followed by an explosion, there was simply an explosion. The witnesses returned to the site and found the explosives expert lying by the small pit in extremis. He died shortly thereafter of massive blast and shrapnel wounds. An important question in accidents such as this is whether the accident is the result of singular circumstances, lack of adherence to standard procedures, or failure of standard procedures. While the particulars of the full investigation are beyond the scope of this paper, a number of procedural questions were raised; these included the possibility of generation of static electricity from clothing, the possible presence of transmitters in the immediate area, and other factors.
The task of accurately reconstructing scenes for interpretation has frequently proved problematic. Conventional methods of reconstructing 3D scenes often involve sacrificial trade-offs among several parameters: (1) completeness of scene reconstruction, (2) geometric accuracy, (3) time for capturing information and creating 3D computer models, (4) the need to not disturb the original scene during data capture, and (5) cost. A compelling, new technology, Large-scale 3D Laser Scanning, Modeling and Visualization, promises to have a major impact on the field by delivering 3D computer models of scenes that are both highly accurate and complete, in a more timely and cost-effective manner.
Contemporary automatic target recognition (ATR) technology programs require reasoning not only in space but also across time and sensor type. If contemporary scale-space techniques are also considered then up to a 5D processing regime is required. Algorithm development under such conditions will greatly benefit from advanced visualization tools. In this paper, examples of 4D processing for ATR will be given using a prototype tool developed in Java 1.2. Limitations of current tools and software for visualization of ATR applications will be addressed. Future tools to accelerate the algorithm development process will also be discussed.
This paper describes the representation and navigation of large, multi-resolution, georeferenced datasets in VRML97. This requires resolving nontrivial issues such as how to represent deep level-of-detail hierarchies efficiently in VRML; how to model terrain using geographic coordinate systems instead of only VRML's Cartesian representation; how to model georeferenced coordinates to sub-meter accuracy with only single-precision floating point support; how to enable the integration of multiple terrain datasets for a region, as well as cultural features such as buildings and roads; how to navigate efficiently around a large, global terrain dataset; and finally, how to encode metadata describing the terrain. We present solutions to all of these problems. Consequently, we are able to visualize geographic data on the order of terabytes or more, from the globe down to millimeter resolution, and in real time, using standard VRML97.
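The sub-meter-accuracy problem the abstract mentions can be illustrated numerically. A 32-bit float carries only about seven significant decimal digits, so a global coordinate millions of meters from the origin cannot hold millimeters; a common remedy (sketched here as an assumption about the general technique, not necessarily this paper's exact solution) is to keep each tile's origin in double precision and store vertices as small single-precision offsets from it.

```python
# Demonstrate why single-precision global coordinates lose sub-meter
# accuracy, and how a per-tile origin plus local offsets recovers it.
import struct

def to_float32(x):
    # Round a Python double to the nearest 32-bit float.
    return struct.unpack("f", struct.pack("f", x))[0]

easting = 6378137.123          # a coordinate ~6.4e6 m from the origin
origin = 6378000.0             # tile origin, kept in double precision

naive = to_float32(easting)                     # global coord as float32
local = origin + to_float32(easting - origin)   # offset scheme

print(abs(naive - easting))   # error on the order of 0.1 m
print(abs(local - easting))   # error far below 1 mm
```

Near 6.4e6 the spacing between adjacent float32 values is 0.5 m, so the naive encoding cannot do better than quarter-meter rounding; the 137.123 m offset, by contrast, is represented to a few microns.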
In this paper, we present a visualization system and method for measuring, inspecting, and analyzing motion in video. Starting from a simple motion video, the system creates a still-image representation which we call a digital strobe photograph. Similar to visualization techniques used in conventional film photography to capture high-speed motion using strobe lamps or very fast shutters, and to capture time-lapse motion where the shutter is left open, this methodology creates a single image showing the motion of one or a small number of objects over time. The method is based on digital background subtraction, and we assume that the background is stationary or at most slowly changing and that the camera position is fixed. The method is capable of displaying the motion based on a parameter indicating the time step between successive movements. It can also overcome problems of visualizing movement that is obscured by previous movements. The method is used in an educational software tool for children to measure and analyze various motions. Examples are given using simple physical objects such as balls and pendulums, astronomical events such as the path of the stars around the north pole at night, and the different types of locomotion used by snakes.
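A minimal sketch of the digital-strobe idea under the stated assumptions (fixed camera, stationary background): pixels that differ from the background beyond a threshold are pasted into one composite, so the moving object's positions from successive frames appear in a single still image. The frames here are tiny grayscale grids standing in for real video; the thresholding rule is an illustrative assumption.

```python
# Illustrative background-subtraction strobe compositing.

def strobe(background, frames, step=1, threshold=20):
    h, w = len(background), len(background[0])
    composite = [row[:] for row in background]
    for frame in frames[::step]:            # time step between movements
        for y in range(h):
            for x in range(w):
                if abs(frame[y][x] - background[y][x]) > threshold:
                    # Later frames overwrite earlier ones, a crude
                    # stand-in for handling obscured movements.
                    composite[y][x] = frame[y][x]
    return composite

bg = [[0, 0, 0, 0] for _ in range(3)]
f1 = [[0, 0, 0, 0], [255, 0, 0, 0], [0, 0, 0, 0]]  # object at x=0
f2 = [[0, 0, 0, 0], [0, 0, 255, 0], [0, 0, 0, 0]]  # object moved to x=2
result = strobe(bg, [f1, f2])
print(result[1])  # [255, 0, 255, 0]: both positions in one image
```

The `step` parameter mirrors the time-step control the abstract describes: sampling every Nth frame spaces out the ghosted object positions in the final image.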
We have developed and demonstrated a vision-based pose determination and reality registration system for identifying objects in an unstructured visual environment. A wire-frame template of the object to be identified is compared to the input images from one or more cameras. If the object is found, an output of the object's position and orientation is computed. The placement of the template can be performed by a human in the loop, or through an automated real-time front-end system. The three steps for classification and pose determination comprise two estimation modules and a module which refines the estimates to determine an answer. The first module in the sequence uses input images and models to generate a coarse pose estimate for the object. The second module uses the estimates from the coarse pose estimation module, input images, and the model to further refine the pose. The last module uses the fine pose estimate, the images, and the model to determine an exact match between the model and the image.
Ongoing work in Activity Monitoring (AM) for the Airborne Video Surveillance (AVS) project is described. The goal for AM is to recognize activities of interest involving humans and vehicles using airborne video. AM consists of three major components: (1) moving object detection, tracking, and classification; (2) image to site-model registration; (3) activity recognition. Detecting and tracking humans and vehicles from airborne video is a challenging problem due to image noise, low GSD, poor contrast, motion parallax, motion blur, and camera jitter. We use frame-to-frame affine-warping stabilization and temporally integrated intensity differences to detect independent motion. Moving objects are initially tracked using nearest-neighbor correspondence, followed by a greedy method that favors long track lengths and assumes locally constant velocity. Object classification is based on object size, velocity, and periodicity of motion. Site-model registration uses GPS information and camera/airplane orientations to provide an initial geolocation with +/- 100m accuracy at an elevation of 1000m. A semi-automatic procedure is utilized to improve the accuracy to +/- 5m. The activity recognition component uses the geolocated tracked objects and the site-model to detect pre-specified activities, such as people entering a forbidden area and a group of vehicles leaving a staging area.
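The initial nearest-neighbor correspondence step can be sketched simply: each detection in frame t is linked to the closest detection in frame t+1 within a gating distance. The function below is an illustrative assumption, only the first stage; the paper follows it with a greedy pass favoring long tracks and locally constant velocity.

```python
# Illustrative nearest-neighbor correspondence between detections in
# two consecutive frames. Detections are (x, y) image coordinates.

def nearest_neighbor_links(dets_t, dets_t1, max_dist=50.0):
    links = {}
    for i, (x, y) in enumerate(dets_t):
        best, best_d = None, max_dist
        for j, (u, v) in enumerate(dets_t1):
            d = ((x - u) ** 2 + (y - v) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best        # detection i continues as detection j
    return links

frame_t  = [(10.0, 10.0), (100.0, 40.0)]   # two tracked objects
frame_t1 = [(98.0, 44.0), (14.0, 11.0)]    # both moved slightly
links = nearest_neighbor_links(frame_t, frame_t1)
print(links)  # {0: 1, 1: 0}
```

The gating radius `max_dist` plays the role of the locally-constant-velocity assumption in miniature: an object is not allowed to jump arbitrarily far between frames.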
A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide a most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space; these form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.
Content addressable memories (CAMs) store both key and association data. A key is presented to the CAM when it is searched, and all of the addresses are scanned in parallel to find the address referenced by the key. When a match occurs, the corresponding association is returned. With the explosion of telecommunications packet-switching protocols, large database servers, routers, and search engines, a new generation of dense sub-micron high-throughput CAMs has been developed. The introduction of this paper presents a brief history and tutorial on CAMs, their many uses and advantages, and describes the architecture and functionality of several of MUSIC Semiconductors' CAM devices. In subsequent sections of the paper we address using Associative Processing to accommodate the continued increase in sensor resolution, number of spectral bands, required coverage, the desire to implement real-time target cueing, and the data flow and image processing required for optimum performance of reconnaissance and surveillance Unmanned Aerial Vehicles (UAVs). To be competitive, the system designer must provide the most computational power, per watt, per dollar, per cubic inch, within the boundaries of cost-effective UAV environmental control systems. To address these problems we demonstrate leveraging DARPA- and DoD-funded Commercial Off-the-Shelf technology to integrate CAM-based Associative Processing into a real-time heterogeneous multiprocessing system for UAVs and other platforms with limited weight, volume, and power budgets.
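The key/association search semantics described above can be modeled in a few lines. This is a toy functional model, not MUSIC Semiconductors' architecture: a Python loop stands in for the hardware's parallel comparators, and the example keys are invented.

```python
# Toy model of a content addressable memory: each entry holds a
# (key, association) pair; a search compares the presented key
# against all entries and returns the association on a match.

class CAM:
    def __init__(self, size):
        self.entries = [None] * size        # slot: (key, association)

    def write(self, addr, key, association):
        self.entries[addr] = (key, association)

    def search(self, key):
        # Hardware scans all addresses in parallel; we emulate serially.
        for entry in self.entries:
            if entry is not None and entry[0] == key:
                return entry[1]
        return None                         # miss

cam = CAM(4)
cam.write(0, key=0xC0A80001, association="port 3")  # e.g. an IP route
cam.write(1, key=0xC0A80002, association="port 7")
print(cam.search(0xC0A80001))   # port 3
print(cam.search(0xDEADBEEF))   # None: no match
```

The contrast with ordinary RAM is the direction of the lookup: RAM maps an address to data, while a CAM maps data (the key) back to its stored association in a single search operation.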
This work represents the convergent evolution of a number of technologies and research 'threads'. A project called MetaMAP, which developed early hypermedia imagemap technology, dates back to 1986. Work on creating a new paradigm for doing client-server visualization over the Internet began in 1992. Another major project began in 1993 to turn the Web into a platform for interactive applications. A project to develop multidimensional imagemap technology began in 1995. Finally, work on a scalable computational server architecture called 'Dark Iron' began in 1997. The MultiVIS project represents the intersection of these various research efforts to create a new kind of navigable knowledge space that leverages the advantages of each of its constituent technologies.
A growing need for more advanced training capabilities and the proliferation of government standards into the commercial market has inspired Cybernet to create an advanced, distributed 3D Simulation Toolkit. This system, called OpenSkies, is a truly open, realistic distributed system for 3D visualization and simulation. One of the main strengths of OpenSkies is its capability for data collection and analysis. Cybernet's Data Collection and Analysis Environment is closely integrated with OpenSkies to produce a unique, quantitative, performance-based measurement system. This system provides the capability for training students and operators on any complex equipment or system that can be created in a simulated world. OpenSkies is based on the military standard HLA networking architecture. This architecture allows thousands of users to interact in the same world across the Internet. Cybernet's OpenSkies simulation system brings the power and versatility of the OpenGL programming API to the simulation and gaming worlds. On top of this, Cybernet has developed an open architecture that allows the developer to produce almost any kind of new technique in their simulation. Overall, these capabilities deliver a versatile and comprehensive toolkit for simulation and distributed visualization.
Interactive visualization of multi-dimensional biological images has revolutionized diagnosis and therapy planning. Extracting complementary anatomical and functional information from different imaging modalities provides a synergistic analysis capability for quantitative and qualitative evaluation of the objects under examination. We have been developing NIHmagic, a visualization tool for research and clinical use, on the SGI OnyxII Infinite Reality platform. Images are reconstructed into a 3D volume by volume rendering, a display technique that employs 3D texture mapping to provide a translucent appearance to the object. A stack of slices is rendered into a volume by an opacity mapping function, where the opacity is determined by the intensity of the voxel and its distance from the viewer. NIHmagic incorporates 3D visualization of time-sequenced images, manual registration of 2D slices, segmentation of anatomical structures, and color-coded re-mapping of intensities. Visualization of MRI, PET, CT, ultrasound, and 3D reconstructed electron microscopy images has been accomplished using NIHmagic.
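The opacity-mapping idea can be sketched for a single viewing ray: voxel opacity is derived from intensity, attenuated with distance from the viewer, and samples are alpha-composited front to back. The specific mapping below is an illustrative assumption, not NIHmagic's actual transfer function.

```python
# Front-to-back compositing along one ray, with opacity a function of
# voxel intensity and distance from the viewer (both assumptions).

def composite_ray(voxels, intensity_scale=1 / 255.0, falloff=0.05):
    color, remaining = 0.0, 1.0        # accumulated color, transmittance
    for depth, intensity in enumerate(voxels):
        # Opacity grows with intensity, shrinks with viewer distance.
        alpha = min(1.0, intensity * intensity_scale) / (1.0 + falloff * depth)
        color += remaining * alpha * intensity
        remaining *= (1.0 - alpha)
        if remaining < 1e-3:           # early ray termination
            break
    return color

front_heavy = [200, 50, 50, 50]   # bright tissue near the viewer
back_heavy  = [50, 50, 50, 200]   # same voxels, bright tissue far away
print(composite_ray(front_heavy) > composite_ray(back_heavy))  # True
```

Because nearer samples both contribute first and occlude what lies behind them, the same voxel values yield a brighter result when the bright tissue is near the viewer, which is what gives the rendered volume its translucent, depth-ordered appearance.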
Prostate cancer is the second most common cause of cancer deaths and is the most frequently detected form of cancer in males in the US. Death rates can be greatly reduced by early treatment. Consequently, it is important to understand the cause and progression of this disease in order to improve detection and treatment methods. As part of the Cancer Genome Anatomy Project, work is underway to produce a 'molecular fingerprint' of prostate cancer.
In the documentation of forensically relevant injuries, forensic 3D/CAD-supported photometry plays an important role from the reconstructive point of view, particularly when a detailed 3D reconstruction is vital. This was demonstrated with an experimentally produced 'injury' to a head model, the 'skin-skull-brain model'. The injury-causing instrument, drawn from a real forensic case, was a specifically formed weapon.
A new method of myocardial SPECT image segmentation is presented in this document. As low-level processing, it extracts the crest lines of the surface defined by the gray-level distribution in a given slice. This technique is robust with respect to defects. An active contour is then used to delineate the left-ventricle myocardial centerline. The choice of radial slices, instead of parallel ones, allows efficient covering of the left ventricle for myocardium 'mid-surface' reconstruction. A direct application of this method is 3D visualization of the radionuclide distribution, with perfusion information mapped onto this surface, which is expected to simplify the interpretation of cardiac SPECT studies.
In this paper, we present an automated multi-modality registration algorithm based on hierarchical feature extraction. The approach, which has not been used previously, can be divided into two distinct stages: feature extraction and geometric matching. Two kinds of corresponding features - edge and surface - are extracted hierarchically from various image modalities. The registration is then performed using least-squares matching of the automatically extracted features. Both the robustness and accuracy of the feature extraction and geometric matching steps are evaluated using simulated and patient images. The preliminary results show the error is on average about one voxel. We have shown that the proposed 3D registration algorithm provides a simple and fast method for automatic registration of MR-to-CT and MR-to-PET image modalities. Our results are comparable to other techniques and require no user interaction.
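The least-squares geometric-matching stage can be illustrated in closed form for the 2D rigid case: given corresponding feature points from two modalities, recover the rotation and translation that best align them. This Procrustes/Kabsch-style fit is a stand-in sketch for the paper's 3D matching of edge and surface features, not its actual implementation.

```python
# Least-squares rigid (rotation + translation) fit between 2D point
# sets with known correspondence.
import math

def fit_rigid_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sum((x - csx) * (u - cdx) + (y - csy) * (v - cdy)
              for (x, y), (u, v) in zip(src, dst))
    sxy = sum((x - csx) * (v - cdy) - (y - csy) * (u - cdx)
              for (x, y), (u, v) in zip(src, dst))
    theta = math.atan2(sxy, sxx)          # least-squares rotation angle
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# dst is src rotated by 90 degrees and shifted by (2, 3).
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]
theta, tx, ty = fit_rigid_2d(src, dst)
print(round(math.degrees(theta)), round(tx, 6), round(ty, 6))  # 90 2.0 3.0
```

With automatically extracted features the correspondences are noisy, so the least-squares formulation matters: the same closed form averages out per-feature localization error rather than fitting any single pair exactly.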
While image guidance is now routinely used in the brain in the form of frameless stereotaxy, it is beginning to be more widely used in other clinical areas such as the spine. At Georgetown University Medical Center, we are developing a program to provide advanced visualization and image guidance for minimally invasive spine procedures. This is a collaboration between an engineering-based research group and physicians from the radiology, neurosurgery, and orthopaedics departments. A major component of this work is the ISIS Center Spine Procedures Imaging and Navigation Engine, which is a software package under development as the base platform for technical advances.
Congenital malformations and diseases have challenged the humanistic and intellectual resources of mankind for ages. Generations of scientists have labored to understand their causes and effect their treatments. Efforts such as the Human Genome Project will produce the raw information base from which many cures and treatments may emerge. In order to begin to put meaning to the mountain of data being produced by the Genome Project, one must analyze this information within the context of the developing organism. The discipline of study of the nature of organism development has been referred to as embryology for over a century.