Patient registration enables image guidance by establishing a transformation between patient (physical) space and image space. In this study, we present an automatic patient registration method using intraoperative stereovision (iSV). First, an iSV system attached to a surgical microscope was used to acquire multiple iSV image pairs of the patient's head/face from various angles after the patient was positioned on the operating table but before incision, and the reconstructed iSV surfaces were concatenated to form a composite field-of-view. Second, the composite iSV surface was registered to the surface profile extracted from preoperative MR (pMR) images as an initial approximation of patient registration. Third, another iSV image pair of the cortical surface was acquired after dural opening, and the reconstructed iSV cortical surface was re-registered to pMR to refine the patient registration using automatically segmented surface features such as blood vessels. We retrospectively evaluated the performance of iSV-based patient registration in 6 patient cases in terms of accuracy and computational efficiency. The computational cost was ~10 min for the initial registration using skin iSV data and <10 s for the re-registration using the cortical surface. Target registration errors (TRE) were assessed using landmarks (e.g., blood vessels) on the cortical surface that were identifiable in both iSV and pMR, and the average TRE across the 6 cases was 1.91±0.61 mm. These results suggest potential OR applications of intraoperative stereovision for automatic patient registration in image-guided open cranial surgery.
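The TRE evaluation above presumes a rigid patient-to-image transform estimated from paired landmarks. As an illustration only (not the authors' implementation, and with hypothetical landmark values), a least-squares rigid fit and per-landmark TRE can be sketched with the Kabsch algorithm:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation (det = +1)
    t = c_dst - R @ c_src
    return R, t

# Synthetic example: landmarks in physical space mapped into image space
rng = np.random.default_rng(0)
pts_phys = rng.uniform(-50, 50, size=(6, 3))     # mm, hypothetical landmarks
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 2.0])
pts_img = pts_phys @ R_true.T + t_true

R, t = rigid_register(pts_phys, pts_img)
tre = np.linalg.norm(pts_phys @ R.T + t - pts_img, axis=1)  # per-landmark TRE
print(tre.max())  # ~0 for noise-free landmarks
```

With real iSV/pMR landmarks the residual TRE is nonzero and its mean is the figure of merit reported above.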
Preoperative magnetic resonance images (pMR) are typically used for intraoperative guidance in image-guided neurosurgery, the accuracy of which can be significantly compromised by brain deformation. Biomechanical finite element models (FEM) have been developed to estimate whole-brain deformation and produce model-updated MR (uMR) that compensates for brain deformation at different surgical stages. Early stages of surgery, such as after craniotomy and after dural opening, have been well studied, whereas later stages after tumor resection begins remain challenging. In this paper, we present a method to simulate tumor resection by incorporating data from intraoperative stereovision (iSV). The amount of tissue resection was estimated from iSV using a "trial-and-error" approach, and the cortical shift was measured from iSV through a surface registration method using projected images and an optical flow (OF) motion tracking algorithm. The measured displacements were employed to drive the biomechanical brain deformation model, and the estimated whole-brain deformation was subsequently used to deform pMR and produce uMR. We illustrate the method using one patient example. The results show that the uMR aligned well with iSV and the overall misfit between model estimates and measured displacements was 1.46 mm. The overall computational time was ~5 min, including iSV image acquisition after resection, surface registration, modeling, and image warping, with minimal interruption to the surgical flow. Furthermore, we compare uMR against intraoperative MR (iMR) that was acquired following iSV acquisition.
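The optical-flow surface tracking step relies on the brightness-constancy assumption. A minimal one-dimensional, Lucas-Kanade-style sketch (a toy stand-in, not the authors' OF implementation) recovers a sub-pixel shift between two intensity profiles:

```python
import numpy as np

# Two samples of the same intensity profile, the second shifted by d_true
x = np.linspace(-10, 10, 2001)
d_true = 0.25                               # shift, same units as x
profile = lambda s: np.exp(-s**2 / 4.0)     # smooth synthetic "texture"
I1, I2 = profile(x), profile(x - d_true)

# Brightness constancy: I2(x) ~= I1(x) - d * dI1/dx for small d,
# so a least-squares estimate of a constant shift is:
Ix = np.gradient(I1, x)
It = I2 - I1
d_est = -np.sum(Ix * It) / np.sum(Ix * Ix)
print(d_est)  # close to 0.25
```

In the 2D surgical case, the same normal equations are solved per pixel neighborhood to yield a dense displacement map of the cortical surface.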
Robot-assisted laparoscopic partial nephrectomies (RALPN) are performed to treat patients with locally confined renal carcinoma. There are well-documented benefits to performing partial (as opposed to radical) kidney resections and to using robot-assisted laparoscopic (as opposed to open) approaches. However, there are challenges in identifying tumor margins and critical benign structures, including blood vessels and collecting systems, during current RALPN procedures. The primary objective of this effort is to couple multiple image and data streams together to augment the visual information currently provided to surgeons performing RALPN and ultimately ensure complete tumor resection and minimal damage to functional structures (i.e., renal vasculature and collecting systems). To meet this challenge we have developed a framework and performed initial feasibility experiments to couple pre-operative high-resolution anatomic images with intraoperative MRI, ultrasound (US) and optical-based surface mapping and kidney tracking. With these registered images and data streams, we aim to overlay the high-resolution contrast-enhanced anatomic (CT or MR) images onto the surgeon’s view screen for enhanced guidance. To date we have integrated the following components of our framework: 1) a method for tracking an intraoperative US probe to extract the kidney surface and a set of embedded kidney markers, 2) a method for co-registering intraoperative US scans with pre-operative MR scans, and 3) a method for deforming pre-operative scans to match intraoperative scans. These components have been evaluated through phantom studies to demonstrate protocol feasibility.
Accurate measurement of soft tissue material properties is critical for characterizing biomechanical behavior but can be challenging, especially for the human brain. Recently, we have applied stereovision to track motion of the exposed cortical surface noninvasively for patients undergoing open skull neurosurgical operations. In this paper, we conduct a proof-of-concept study using a tofu phantom to evaluate the feasibility of the technique for measuring material properties of soft tissue in vivo. A block of soft tofu was prepared with black pepper randomly sprinkled on the top surface to provide texture to facilitate image-based displacement mapping. A disk-shaped indenter made of high-density tungsten was placed on the top surface to induce deformation through its weight. Stereoscopic images were acquired before and after indentation using a pair of stereovision cameras mounted on a surgical microscope with its optical path perpendicular to the imaging surface. Rectified left camera images obtained from stereovision reconstructions were then co-registered using optical flow motion tracking, from which a 2D surface displacement field around the indenter disk was derived. A corresponding finite element model of the tofu was created and subjected to the indenter weight, and a hyperelastic material model was chosen to account for large deformation around the indenter edges. By successively assigning different shear stiffness constants and computing the resulting tofu surface deformation, an optimal shear stiffness was obtained that matched the model-derived surface displacements with those measured from the images. The resulting quasi-static, long-term shear stiffness for the tofu was 1.04 kPa, similar to that reported in the literature. We show that the stereovision and free-weight indentation techniques coupled with an FE model are feasible for in vivo measurement of human brain material properties, and the approach may also be feasible for other soft tissues.
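The stiffness identification above is a one-parameter inverse problem: sweep the shear stiffness, run the forward model, and keep the value minimizing the misfit against the measured displacements. A toy sketch with a hypothetical linear surrogate forward model (the study itself used a hyperelastic FE model) illustrates the loop:

```python
import numpy as np

# Hypothetical linear surrogate: surface displacement ~ load / stiffness
def forward_model(k_kpa, load=10.0):
    """Predicted peak indentation displacement (mm) for stiffness k (kPa)."""
    return load / k_kpa

u_measured = forward_model(1.04)             # pretend 1.04 kPa is the truth

# Grid search over candidate stiffnesses; keep the best-fitting one
candidates = np.arange(0.5, 2.0, 0.01)
misfit = np.array([(forward_model(k) - u_measured) ** 2 for k in candidates])
k_best = candidates[np.argmin(misfit)]
print(round(float(k_best), 2))  # 1.04
```

In practice each misfit evaluation is an FE solve, so the sweep is coarse-to-fine rather than exhaustive.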
KEYWORDS: 3D modeling, Data modeling, Brain, Magnetic resonance imaging, Image processing, Process modeling, Neuroimaging, Surgery, Ultrasonography, Tumors
Dartmouth and Medtronic have established an academic-industrial partnership to develop, validate, and evaluate a multimodality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. Previous studies have shown that brain shift compensation through a modeling framework using intraoperative ultrasound and/or visible light stereovision to update preoperative MRI appears to result in improved accuracy in navigation. However, image updates have thus far only been produced retrospective to surgery in large part because of gaps in the software integration and information flow between the co-registration and tracking, image acquisition and processing, and image warping tasks which are required during a case. This paper reports the first demonstration of integration of a deformation-based image updating process for brain shift modeling with an industry-standard image guided surgery platform. Specifically, we have completed the first and most critical data transfer operation to transmit volumetric image data generated by the Dartmouth brain shift modeling process to the Medtronic StealthStation® system. StealthStation® comparison views, which allow the surgeon to verify the correspondence of the received updated image volume relative to the preoperative MRI, are presented, along with other displays of image data such as the intraoperative 3D ultrasound used to update the model. These views and data represent the first time that externally acquired and manipulated image data has been imported into the StealthStation® system through the StealthLink® portal and visualized on the StealthStation® display.
In image-guided neurosurgery, intraoperative brain shift significantly degrades the accuracy of neuronavigation that is based solely on preoperative magnetic resonance images (pMR). To compensate for brain deformation and to maintain the accuracy in image guidance achieved at the start of surgery, biomechanical models have been developed to simulate brain deformation and to produce model-updated MR images (uMR) that compensate for brain shift. To date, most studies have focused on shift compensation at early stages of surgery (i.e., updated images are only produced after craniotomy and durotomy). Simulating surgical events at later stages, such as retraction and tissue resection, is, perhaps, clinically more relevant because of the typically much larger magnitudes of brain deformation. However, these surgical events are substantially more complex in nature, thereby posing significant challenges to model-based brain shift compensation strategies. In this study, we present results from an initial investigation to simulate retractor-induced brain deformation through a biomechanical finite element (FE) model, in which whole-brain deformation assimilated from intraoperative data was used to produce uMR for improved accuracy in image guidance. Specifically, intensity-encoded 3D surface profiles at the exposed cortical area were reconstructed from intraoperative stereovision (iSV) images before and after tissue retraction. Retractor-induced surface displacements were then derived by coregistering the surfaces and served as sparse displacement data to drive the FE model. With one patient case, we show that our technique is able to produce uMR that agrees well with the reconstructed iSV surface after retraction. The computational cost to simulate retractor-induced brain deformation was approximately 10 min. In addition, our approach introduces minimal interruption to the surgical workflow, suggesting the potential for its clinical application.
KEYWORDS: 3D image processing, 3D acquisition, Image registration, 3D modeling, Ultrasonography, Transducers, Data modeling, Brain, Model-based design, Calibration
True three-dimensional (3D) volumetric ultrasound (US) acquisitions stand to benefit intraoperative neuronavigation on multiple fronts. While traditional two-dimensional (2D) US and its tracked, hand-swept variant have been recognized for many years to significantly benefit image-guided neurosurgery, especially when coregistered with preoperative MR scans, their unregulated and incomplete sampling of the surgical volume of interest has limited certain intraoperative uses of the information; these limitations are overcome through direct volume acquisition (i.e., through 2D scan-head transducer arrays). In this paper, we illustrate several of these advantages, including image-based intraoperative registration (and re-registration) and automated, volumetric displacement mapping for intraoperative image updating. These applications of 3D US are enabled by algorithmic advances in US image calibration, and in volume rasterization and interpolation for multi-acquisition synthesis, which will also be highlighted. We expect to demonstrate that coregistered 3D US is well worth incorporating into the standard neurosurgical navigational environment relative to traditional tracked, hand-swept 2D US.
KEYWORDS: Brain, Neuroimaging, Magnetic resonance imaging, Data modeling, Volume rendering, Image registration, 3D image processing, Neodymium, Photography, 3D modeling
Intraoperative brain deformation can significantly degrade the accuracy of image guidance using preoperative MR
images (pMR). To compensate for brain deformation, biomechanical models have been used to assimilate intraoperative
displacement data, compute whole-brain deformation field, and to produce updated MR images (uMR). Stereovision
(SV) is an important technique to capture both geometry and texture information of exposed cortical surface at the
craniotomy, from which surface displacement data (known as sparse data) can be extracted by registering with pMR to
drive the computational model. Approaches that solely utilize geometrical information (e.g., closest point distance (CPD)
and iterative closest point (ICP) method) do not seem to capture surface deformation accurately especially when
significant lateral shift occurs. In this study, we have developed a texture intensity-based method to register cortical
surface reconstructed from stereovision after dural opening with pMR to extract 3D sparse data. First, a texture map was created from pMR using the surface geometry before dural opening. Second, a mutual information (MI)-based registration was performed between the texture map and the corresponding stereo image after dural opening to capture the global
lateral shift. A block-matching algorithm was then executed to differentiate local displacements in smaller patches. The
global and local displacements were finally combined and transformed in 3D following stereopsis. We demonstrate the
application of the proposed method with a clinical patient case, and show that the accuracy of the technique is 1-2 mm in
terms of model-data misfit with a computation time <10 min.
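The local block-matching step can be sketched as an integer-shift search minimizing the sum of squared differences (SSD) between a patch of one image and candidate patches of the other; the images, patch size, and search radius here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
fixed = rng.random((128, 128))                       # "pre" texture image
moving = np.roll(fixed, shift=(2, 3), axis=(0, 1))   # known displacement

def block_match(fixed, moving, y, x, size=16, search=5):
    """Return the (dy, dx) minimizing SSD for the patch at (y, x)."""
    patch = fixed[y:y + size, x:x + size]
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[y + dy:y + dy + size, x + dx:x + dx + size]
            ssd = np.sum((patch - cand) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

print(block_match(fixed, moving, 50, 50))  # (2, 3)
```

In the proposed method, such local matches refine the global lateral shift found by MI registration before the combined displacements are lifted to 3D via stereopsis.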
KEYWORDS: Image registration, Data modeling, Brain, Ultrasonography, Magnetic resonance imaging, Tissues, Neuroimaging, Surgery, 3D image processing, 3D modeling
Compensating for brain shift as surgery progresses is important to ensure sufficient accuracy in patient-to-image
registration in the operating room (OR) for reliable neuronavigation. Ultrasound has emerged as an important and
practical imaging technique for brain shift compensation either by itself or through computational modeling that
estimates whole-brain deformation. Using volumetric true 3D ultrasound (3DUS), it is possible to nonrigidly (e.g., based
on B-splines) register two temporally different 3DUS images directly to generate feature displacement maps for data
assimilation in the biomechanical model. Because of the large amount of data and the number of degrees-of-freedom (DOFs)
involved, however, a significant computational cost may be required that can adversely influence the clinical feasibility
of the technique for efficiently generating model-updated MR (uMR) in the OR. This paper parametrically investigates
three B-splines registration parameters and their influence on the computational cost and registration accuracy: number
of grid nodes along each direction, floating image volume down-sampling rate, and number of iterations. A simulated
rigid body displacement field was employed as a ground-truth against which the accuracy of displacements generated
from the B-splines nonrigid registration was compared. A set of optimal parameters was then determined empirically that results in a registration computational cost of less than 1 min and sub-millimetric accuracy in displacement
measurement. These resulting parameters were further applied to a clinical surgery case to demonstrate their practical
use. Our results indicate that the optimal set of parameters results in sufficient accuracy and computational efficiency in
model computation, which is important for future application of the overall biomechanical modeling to generate uMR for
image-guidance in the OR.
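The validation methodology above, scoring a registration-derived displacement field against a simulated rigid-body ground truth, reduces to evaluating vector residuals over a grid. A minimal sketch with hypothetical rotation, translation, and noise values:

```python
import numpy as np

# Voxel-center grid of a small image volume (mm)
gx, gy, gz = np.meshgrid(np.arange(0, 50, 5.0),
                         np.arange(0, 50, 5.0),
                         np.arange(0, 50, 5.0), indexing="ij")
pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

# Ground-truth rigid-body field: small rotation about z plus a translation
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, -0.5, 0.8])
u_true = pts @ R.T + t - pts

# A hypothetical registration output to score (here: truth plus noise)
u_est = u_true + np.random.default_rng(2).normal(0, 0.1, u_true.shape)

rmse = np.sqrt(np.mean(np.sum((u_est - u_true) ** 2, axis=1)))
print(rmse)  # ~0.17 mm for 0.1 mm per-axis noise
```

Sweeping grid-node count, down-sampling rate, and iteration count while monitoring this RMSE (and wall-clock time) yields the empirical optimum described above.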
Preoperative magnetic resonance images are typically used for neuronavigation in image-guided neurosurgery. However,
intraoperative brain deformation (e.g., as a result of gravitation, loss of cerebrospinal fluid, retraction, resection, etc.)
significantly degrades the accuracy in image guidance, and must be compensated for in order to maintain sufficient
accuracy for navigation. Biomechanical finite element models are effective techniques that assimilate intraoperative data
and compute whole-brain deformation from which to generate model-updated MR images (uMR) to improve accuracy in
intraoperative guidance. To date, most studies have focused on early surgical stages (i.e., after craniotomy and
durotomy), whereas simulation of more complex events at later surgical stages has remained a challenge using
biomechanical models. We have developed a method to simulate partial or complete tumor resection that incorporates
intraoperative volumetric ultrasound (US) and stereovision (SV), and the resulting whole-brain deformation was used to
generate uMR. The 3D ultrasound and stereovision systems are complementary to each other because they capture
features deeper in the brain beneath the craniotomy and at the exposed cortical surface, respectively. In this paper, we
illustrate the application of the proposed method to simulate brain tumor resection at three temporally distinct surgical
stages throughout a clinical surgery case using sparse displacement data obtained from both the US and SV systems. We
demonstrate that our technique is able to produce uMR that agrees well with intraoperative US and SV images after
dural opening, after partial tumor resection, and after complete tumor resection. Currently, the computational cost to
simulate tumor resection can be up to 30 min because of the need for re-meshing and the trial-and-error approach to
refine the amount of tissue resection. However, this approach introduces minimal interruption to the surgical workflow,
which suggests the potential for its clinical application with further improvement in computational efficiency.
Maximal tumor resection without damaging healthy tissue in open cranial surgeries is critical to the prognosis for
patients with brain cancers. Preoperative images (e.g., preoperative magnetic resonance images (pMR)) are typically
used for surgical planning as well as for intraoperative image-guidance. However, brain shift even at the start of surgery
significantly compromises the accuracy of neuronavigation, if the deformation is not compensated for. Compensating for
brain shift during surgical operation is, therefore, critical for improving the accuracy of image-guidance and ultimately,
the accuracy of surgery. To this end, we have developed an integrated neurosurgical guidance system that incorporates
intraoperative three-dimensional (3D) tracking, acquisition of volumetric true 3D ultrasound (iUS), stereovision (iSV)
and computational modeling to efficiently generate model-updated MR image volumes for neurosurgical guidance. The
system is implemented with real-time LabVIEW to provide high efficiency in data acquisition, as well as with MATLAB to
offer computational convenience in data processing and development of graphical user interfaces related to
computational modeling. In a typical patient case, the patient in the operating room (OR) is first registered to pMR
image volume. Sparse displacement data extracted from coregistered intraoperative US and/or stereovision images are
employed to guide a computational model that is based on consolidation theory. Computed whole-brain deformation is
then used to generate a model-updated MR image volume for subsequent surgical guidance. In this paper, we present the
key modular components of our integrated, model-based neurosurgical guidance system.
In image-guided neurosurgery, preoperative images are typically used for surgical planning and intraoperative guidance.
The accuracy of preoperative images can be significantly compromised by intraoperative brain deformation. To
compensate for brain shift, biomechanical finite element models have been used to assimilate intraoperative data to
simulate brain deformation. The clinical feasibility of the approach strongly depends on its accuracy and efficiency. In
order to facilitate and streamline data flow, we have developed graphical user interfaces (GUIs) to provide efficient
image updates in the operating room (OR). The GUIs are organized in a top-down hierarchy with a main control panel
that invokes and monitors a series of sub-GUIs dedicated to performing the tasks involved in various aspects of computing whole-brain deformation. These GUIs are used to segment the brain, generate case-specific brain meshes, and assign and
visualize case-specific boundary conditions (BC). Registration between intraoperative ultrasound (iUS) images acquired
pre- and post-durotomy is also facilitated by a dedicated GUI to extract sparse displacement data used to drive a
biomechanical model. Computed whole-brain deformation is then used to morph preoperative MR images (pMR) to
generate a model-updated image set (i.e., uMR) for intraoperative guidance (accuracy of 1-2 mm). These task-driven
GUIs have been designed to be fault-tolerant, user-friendly, and with sufficient automation. In this paper, we present the
modular components of the GUIs and demonstrate the typical workflow through a clinical patient case.
Recent evidence suggests a correlation between extent of tumor resection and patient prognosis, making maximal tumor
resection a clinical ideal for neurosurgeons. Our group is currently undertaking a clinical study using fluorescence-based
detection of tumor coupled with a standard 3-D image guidance system to study the effectiveness of fluorescence-based
detection in the neurosurgical operating room. For fluorescence-based detection, we used 5-aminolevulinic acid to
induce accumulation of protoporphyrin IX in malignant tissues. In this paper, we chose one prototypical, highly
fluorescent case of glioblastoma multiforme, a high-grade glioma, to highlight some of the key findings and
methodology used in our study of fluorescence-based detection and resection of brain tumors.
Intraoperative brain shift compensation is important for improving the accuracy of neuronavigational systems and
ultimately, the accuracy of brain tumor resection as well as patient quality of life. Biomechanical models are practical
methods for brain shift compensation in the operating room (OR). These methods assimilate incomplete deformation
data on the brain acquired from intraoperative imaging techniques (e.g., ultrasound and stereovision), and simulate
whole-brain deformation under loading and boundary conditions in the OR. Preoperative images of the patient's head
(e.g., preoperative magnetic resonance images (pMR)) are then deformed accordingly based on the computed
displacement field to generate updated visualizations for subsequent surgical guidance. Clearly, the clinical
feasibility of the technique depends on the efficiency as well as the accuracy of the computational scheme. In this paper,
we identify the major steps involved in biomechanical simulation of whole-brain deformation and demonstrate the
efficiency and accuracy of each step. We show that a combined computational cost of 5 minutes with an accuracy of 1-2 millimeters can be achieved, which suggests that the technique is feasible for routine application in the OR.
We present the methods that are being used in the scope of an on-going clinical trial designed to assess the usefulness of
ALA-PpIX fluorescence imaging when used in conjunction with pre-operative MRI. The overall objective is to develop
imaging-based neuronavigation approaches to aid in maximizing the completeness of brain tumor resection, thereby
improving patient survival rate. In this paper we present the imaging methods that are used, emphasizing technical
aspects relating to the fluorescence optical microscope, including initial validation approaches based on phantom and
small-animal experiments. The surgical workflow is then described in detail based on a high-grade glioma resection we
performed.
Intraoperative ultrasound (iUS) has emerged as a practical neuronavigational tool for brain shift compensation in image-guided
tumor resection surgeries. The use of iUS is optimized when coregistered with preoperative magnetic resonance
images (pMR) of the patient's head. However, the fiducial-based registration alone does not necessarily optimize the
alignment of internal anatomical structures deep in the brain (e.g., tumor) between iUS and pMR. In this paper, we
investigated and evaluated an image-based re-registration scheme to maximize the normalized mutual information (nMI)
between iUS and pMR to improve tumor boundary alignment using the fiducial registration as a starting point for
optimization. We show that this scheme significantly (p<<0.001) reduces tumor boundary misalignment pre-durotomy.
The same technique was employed to measure tumor displacement post-durotomy, and the locally measured tumor
displacement was assimilated into a biomechanical model to estimate whole-brain deformation. Our results demonstrate
that the nMI re-registration pre-durotomy is critical for obtaining accurate measurement of tumor displacement, which
significantly improved model response at the craniotomy when compared with stereopsis data acquired independently
from the tumor registration. This automatic and computationally efficient (<2 min) re-registration technique is feasible
for routine clinical use in the operating room (OR).
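The registration metric above, normalized mutual information, can be computed from a joint intensity histogram. A minimal sketch (toy images, hypothetical bin count; not the authors' optimizer):

```python
import numpy as np

def normalized_mi(a, b, bins=32):
    """nMI = (H(A) + H(B)) / H(A, B) from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(pa) + entropy(pb)) / entropy(pab)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
nmi_same = normalized_mi(img, img)               # perfectly aligned: 2.0
nmi_rand = normalized_mi(img, rng.random((64, 64)))  # unrelated: near 1.0
print(nmi_same, nmi_rand)
```

The re-registration maximizes this quantity over rigid transform parameters, starting from the fiducial registration.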
Intraoperative ultrasound (iUS) has emerged as a practical neuronavigational tool in image-guided open cranial
procedures because of its low cost, easy implementation and real time image acquisition. Two-dimensional iUS (2DiUS)
is currently the most common ultrasonic imaging tool used in the operating room (OR). However, gaps between imaging
planes and limited volumetric sampling with 2DiUS often result in incomplete imaging of the internal anatomical
structures of interest (e.g., tumor). In this paper, we investigate and evaluate the use of coregistered volumetric true
three-dimensional iUS (3DiUS) generated from a broadband matrix array transducer (X3-1) attached to a Philips iU22
intelligent ultrasound system. This 3DiUS scheme is able to provide full 3D sampling over a frustum-shaped volume
with high resolution dicom images directly recovered by the ultrasound system without the need for free-hand sweeps or
3D reconstruction. Volumetric 3DiUS images were co-registered with preoperative magnetic resonance (pMR) images
by tracking the spatial location and orientation of an infrared light-emitting tracker rigidly attached to the US scan-head
following a fiducial registration and an iUS scan-head calibration. The registration was further refined using an image-based scheme to maximize the inter-image normalized mutual information. In addition, we have utilized a coordinate
system nomenclature and developed a set of static visualization techniques to present 3D US image data in the OR,
which will be important for qualitative and quantitative analyses of the performance of 3DiUS in image-guided
neurosurgery in the future. We show that 3DiUS significantly improves the imaging efficiency and enhances integration
of iUS into the surgical workflow, making it promising for routine use in the OR.
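Mapping a 3DiUS point into pMR space is a chain of homogeneous transforms: scan-head calibration, tracked probe pose, and the fiducial patient registration. A sketch with made-up pure-translation matrices (real calibrations include rotation and scale):

```python
import numpy as np

def translation(t):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Hypothetical 4x4 transforms (all values illustrative):
T_img_to_probe = translation([1.0, 0.0, 0.0])    # US scan-head calibration
T_probe_to_trk = translation([0.0, 2.0, 0.0])    # tracked probe pose
T_trk_to_mr = translation([0.0, 0.0, 3.0])       # fiducial registration

# Compose: iUS image coordinates -> pMR coordinates
T_img_to_mr = T_trk_to_mr @ T_probe_to_trk @ T_img_to_probe

p_us = np.array([10.0, 20.0, 30.0, 1.0])         # homogeneous iUS point
p_mr = T_img_to_mr @ p_us
print(p_mr[:3])  # [11. 22. 33.]
```

The image-based nMI refinement then perturbs `T_trk_to_mr` to improve internal anatomical alignment.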
Biomechanical models of brain deformation are useful tools for estimating the shift that occurs during neurosurgical
interventions. Incorporation of intra-operative data into the biomechanical model improves the accuracy of the
registration between the patient and the image volume. The representer method to solve the adjoint equations (AEM)
for data assimilation has been developed. In order to improve the computational efficiency and to process more intraoperative
data, we modified the adjoint equation method by changing the way in which intraoperative data is applied.
The current formulation is developed around a point-based data-model misfit. A surface-based data-model misfit could
be a more robust and computationally efficient technique. Our approach is to express the surface misfit as the volume
between the measured surface and model predicted surface. An iterative method is used to solve the adjoint equations.
The surface misfit criterion is tested in a cortical distension clinical case and compared to the results generated with the
prior point-based methodology solved either iteratively or with the representer algorithm. The results show that solving
the adjoint equations with an iterative method improves computational efficiency dramatically over the representer
approach and that reformulating the minimization criterion in terms of a surface description is even more efficient.
Applying intra-operative data in the form of a surface misfit is computationally very efficient and appears promising
with respect to its accuracy in estimating brain deformation.
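On a common grid, the volume between the measured and model-predicted surfaces is simply the integral of the height difference over the footprint. A discrete sketch with hypothetical surfaces:

```python
import numpy as np

# Height maps (mm) of measured and model-predicted surfaces on a 1 mm grid
x, y = np.meshgrid(np.linspace(-10, 10, 21), np.linspace(-10, 10, 21))
z_measured = 0.05 * (x**2 + y**2)                # hypothetical measured surface
z_model = z_measured + 0.5                       # model offset by 0.5 mm

cell_area = 1.0 * 1.0                            # mm^2 per grid cell
misfit_volume = np.sum(np.abs(z_model - z_measured)) * cell_area
print(misfit_volume)  # 21*21 cells * 0.5 mm = 220.5 mm^3
```

Minimizing this single scalar, rather than many point-wise misfits, is what makes the surface criterion computationally attractive.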
Brain shift poses a significant challenge to accurate image-guided neurosurgery. To this end, finite element (FE) brain
models have been developed to estimate brain motion during these procedures. The significance of the brain-skull
boundary conditions (BCs) for accurate predictions in these models has been explored in dynamic impact and inertial
rotation injury computational simulations where the results have shown that the brain mechanical response is sensitive to
the type of BCs applied. We extend the study of brain-skull BCs to quasi-static brain motion simulations which prevail
in neurosurgery. Specifically, a frictionless brain-skull BC using a penalty-based master-slave contact method is
incorporated into our existing deformation forward model (forced displacement method). The initial brain-skull gap
(CSF thickness) is assumed to be 2 mm for demonstration purposes. The brain surface nodes are assigned as either fixed
(at bottom along the gravity direction), free (at brainstem), with prescribed displacement (at craniotomy) or as slave
nodes potentially in contact with the skull (all the remaining). Each slave node is assigned a penalty parameter (β=5)
such that when the node penetrates the rigid body skull inner-surface (master surface), a contact force is introduced
proportionally to the penetration. Effectively, brain surface nodes are allowed to move towards or away from the
cranium wall, but are ultimately restricted from penetrating the skull. We show that this scheme improves the model's
ability to represent the brain-skull interface.
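The penalty contact rule, a restoring force proportional to penetration beyond the skull inner surface, can be sketched for an idealized spherical skull; the radius, β, and node positions are illustrative, not the model's actual geometry:

```python
import numpy as np

def contact_force(node, skull_radius=80.0, beta=5.0):
    """Penalty force on a slave node penetrating a spherical skull wall.

    Zero while the brain-skull gap is open; penetration is pushed back
    along the inward surface normal, proportional to its depth.
    """
    r = np.linalg.norm(node)
    penetration = r - skull_radius
    if penetration <= 0.0:
        return np.zeros(3)                  # gap open: node moves freely
    normal = node / r                       # outward unit normal
    return -beta * penetration * normal     # push back toward the cranium

print(contact_force(np.array([0.0, 0.0, 50.0])))   # no contact: zero force
print(contact_force(np.array([0.0, 0.0, 82.0])))   # 2 mm penetration: [0, 0, -10]
```

In the FE model the master surface is the actual skull inner surface rather than a sphere, but the per-node logic is the same.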
Shift of brain tissues during surgical procedures affects the precision of image-guided neurosurgery (IGNS). To improve the accuracy of the alignment between the patient and images, finite element model-based non-rigid registration methods have been investigated. The best prior estimate (BPE), the forced displacement method (FDM), the weighted basis solutions (WBS), and the adjoint equations method (AEM) are versions of this approach that have appeared in the literature. In this paper, we present a quantitative comparison study on a set of three patient cases. Three-dimensional displacement data from the surface and subsurface were extracted using intraoperative ultrasound (iUS) and intraoperative stereovision (iSV). These data were then used as the "ground truth" in a quantitative study to evaluate the accuracy of estimates produced by the finite element models. Different types of clinical cases are presented, including distension and a combination of sagging and distension. In each case, the performance of the four methods is compared. The AEM method, which recovered 26-62% of surface brain motion and 20-43% of the subsurface deformation, produced the best fit between the measured data and the model estimates.
Brain shift during neurosurgery currently limits the effectiveness of stereotactic guidance systems that rely on preoperative image modalities like magnetic resonance (MR). The authors propose a process for quantifying intraoperative brain shift using spatially tracked freehand intraoperative ultrasound (iUS). First, a distinct feature (tumor, ventricle, cyst, or falx) is segmented from the preoperative MR and a faceted surface is extracted using the marching cubes algorithm. Planar contours are then semi-automatically segmented from two sets of iUS b-planes obtained (a) prior to the dural opening and (b) after the dural opening. These two sets of contours are reconstructed in the reference frame of the MR, composing two distinct sparsely sampled surface descriptions of the same feature segmented from MR. Using the iterative closest point (ICP) algorithm, discrete estimates of the feature deformation are obtained by point-to-surface matching. Vector subtraction of the matched points can then be used as sparse deformation data inputs for inverse biomechanical brain tissue models. The results of these simulations are then used to modify the preoperative MR to account for intraoperative changes. The proposed process has undergone preliminary evaluation in a phantom study and was applied to data from two clinical cases. In the phantom study, the process recovered controlled deformations with an RMS error of 1.1 mm. These results also suggest that clinical accuracy would be on the order of 1-2 mm, a finding consistent with prior work by the Dartmouth Image-Guided Neurosurgery (IGNS) group. In the clinical cases, the deformations obtained were used to produce qualitatively reasonable updated guidance volumes.
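The ICP matching step iterates nearest-neighbor correspondence and a closed-form rigid fit. A compact point-to-point sketch on synthetic data (the paper matches contour points to a surface, but the loop is the same; all geometry here is hypothetical):

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form least-squares rigid transform src -> dst (SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Iterate nearest-neighbor matching and rigid re-fitting."""
    cur = src.copy()
    for _ in range(iters):
        # Nearest neighbor in dst for every current source point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[np.argmin(d2, axis=1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

# Hypothetical "MR" feature surface: a regular 5 mm grid of points
g = np.arange(0.0, 30.0, 5.0)
gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
surface = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

# Sparse "iUS" contour samples of the same feature, shifted by a known offset
contours = surface[::3] + np.array([1.5, -1.0, 0.5])

aligned = icp(contours, surface)
resid = np.linalg.norm(aligned - surface[::3], axis=1)
print(resid.max())  # ~0: the known shift is recovered
```

The matched-point displacement vectors, rather than the rigid fit itself, are what feed the inverse biomechanical model.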
KEYWORDS: Data modeling, Brain, Magnetic resonance imaging, Motion models, Ultrasonography, Tissues, Tumors, Error analysis, 3D modeling, Systems modeling
Model-based approaches to correct for brain shift in image-guided neurosurgery systems have shown promising results. Despite the initial success of such methods, the complex mechanical behavior of the brain under surgical loads makes it likely that model predictions could be improved with the incorporation of real-time measurements of tissue shift in the OR. To this end, an inverse method has been developed using sparse data and model constraints to generate estimates of brain motion. Based on methodology from ocean circulation modeling, this computational scheme combines estimates of statistical error in forcing conditions with a least-squares minimization of the model-data misfit to directly estimate the full displacement solution. The method is tested on a 2D simulation based on clinical data in which ultrasound images were co-registered to the preoperative MR stack. Calculations from the 2D forward model are used as the 'gold standard' against which the inverse scheme is compared. Initial results are promising, though further study is needed to ascertain the method's value for 3D shift estimation.
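The core idea of this inverse scheme, fusing sparse measured displacements with model constraints through least squares to estimate a full field, can be illustrated with a deliberately simplified 1-D sketch. Note this uses a plain Tikhonov smoothness penalty of my own as a stand-in for the paper's statistically weighted formulation, and all names are invented here:

```python
import numpy as np

def estimate_full_field(n, meas_idx, meas_val, alpha=1e-4):
    """Estimate an n-node field from sparse point measurements by
    minimizing ||S u - d||^2 + alpha * ||L u||^2, where S samples the
    field at the measured nodes and L is a second-difference operator
    standing in for the model (smoothness) constraint."""
    S = np.zeros((len(meas_idx), n))
    S[np.arange(len(meas_idx)), meas_idx] = 1.0
    # interior rows of the tridiagonal [1, -2, 1] second-difference stencil
    L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))[1:-1]
    # normal equations of the regularized least-squares problem
    A = S.T @ S + alpha * (L.T @ L)
    return np.linalg.solve(A, S.T @ np.asarray(meas_val, dtype=float))
```

With eight samples of a smooth field, the regularized solve recovers the field everywhere, which is the behavior the inverse method exploits with sparse OR measurements.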
KEYWORDS: Brain, Surgery, Ultrasonography, Neuroimaging, Magnetic resonance imaging, Data modeling, Tissues, Human-machine interfaces, Finite element methods, Head
Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from the OR to the image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the Matlab GUI displays ultrasound image overlays on patient-specific, preoperative MR images. Registration matrices are also applied to patient-specific anatomical models used for image updating. After displaying the re-oriented brain model in OR coordinates and digitizing the edge of the craniotomy, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits, and the interface was relatively easy to use after a brief training period. The techniques described had been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room and provide near real-time feedback to the surgeon.
Patient registration, a key step in establishing image guidance, has to be performed in real-time after the patient is anesthetized in the operating room (OR) prior to surgery. We propose to use cortical vessels as landmarks for registering the preoperative images to the operating space. To accomplish this, we have attached a video camera to the optics of the operating microscope and acquired a pair of images by moving the scope. The stereo imaging system is calibrated to obtain both intrinsic and extrinsic camera parameters. During neurosurgery, immediately after the dura is opened, a pair of stereo images is acquired. The 3-D locations of blood vessels are estimated via stereo vision techniques. The same set of vessels is localized in the preoperative image volume. From these 3-D coordinates, the transformation matrix between the preoperative images and the operating space is estimated. Using a phantom, we have demonstrated that patient registration from cortical vessels is not only feasible but also more accurate than using conventional scalp-attached fiducials. The Fiducial Registration Error (FRE) has been reduced from 1 mm using implanted fiducials to 0.3 mm using cortical vessels. By replacing implanted fiducials with cortical features, we can automate the registration procedure and reduce invasiveness to the patient.
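The stereo estimation of 3-D vessel locations described here rests on standard two-view triangulation from a calibrated stereo pair. A minimal linear (DLT) version is sketched below, assuming 3x4 projection matrices are available for the two microscope views; the function name and the example matrices are illustrative, not from the authors' system:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one 3-D point from two pinhole views.
    # P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    # Each pixel constraint u * P[2] - P[0] (and v * P[2] - P[1]) must
    # annihilate the homogeneous point; stack them and take the SVD null vector.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize
```

Given several vessel points triangulated this way and their counterparts picked in the MR volume, the rigid transform between the two spaces follows from a standard least-squares point fit.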
Image-guided neurosurgery systems rely on rigid registration of the brain to preoperative images, not taking into account the displacement of brain tissue during surgery. Co-registered ultrasound appears to be a promising means of detecting tissue shift in the operating room. Although the use of ultrasound images alone may be insufficient for adequately describing intraoperative brain deformation, they could be used in conjunction with a computational model to predict full-volume deformation. We rigorously test the assumption that co-registered ultrasound is an accurate source of sparse displacement data. Our co-registered ultrasound system is studied in both clinical applications and a series of porcine experiments. Qualitative analysis of patient data indicates that ultrasound correctly depicts displaced tissue. The porcine studies demonstrate that features from co-registered ultrasound and CT or MR images are properly aligned to within approximately 1.7 mm. Tissue tracking in pigs suggests that the magnitude of displaced tissue may be more accurately predicted than the actual location of features. We conclude that co-registered ultrasound is capable of detecting brain tissue shift, and that incorporating displacement data into a computational model appears feasible.
Microscope-based image-guided neurosurgery can be divided into three steps: calibration of the microscope optics; registration of the pre-operative images to the operating space; and tracking of the patient and microscope over time. Critical to this overall system is the temporal retention of accurate camera calibration. Classic calibration algorithms are routinely employed to find both intrinsic and extrinsic camera parameters. The accuracy of this calibration, however, is quickly compromised by the complexity of the operating room, the long duration of a surgical procedure, and inaccuracies in the tracking system. To compensate for the changing conditions, we have developed an adaptive procedure that responds to accruing registration error. The approach utilizes miniature fiducial markers implanted on the bony rim of the craniotomy site, which remain in the field of view of the operating microscope. A simple error function that enforces the registration of the known fiducial markers is used to update the extrinsic camera parameters. The error function is minimized using gradient descent. This correction procedure reduces RMS registration errors for cortical features on the surface of the brain by an average of 72%, or 1.5 mm. These errors were reduced to less than 0.6 mm after each correction during the entire surgical procedure.
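The correction step, minimizing a fiducial reprojection error over the extrinsic camera parameters by gradient descent, can be sketched as below. This is a generic reimplementation with details the abstract does not specify (numerical gradients, a backtracking step size, an axis-angle rotation parameterization), so all names and choices here are my own assumptions:

```python
import numpy as np

def rodrigues(r):
    # axis-angle vector -> rotation matrix
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    k = r / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * (Kx @ Kx)

def reproj_mse(p, K, pts, pix):
    # mean squared pixel error of the fiducials under extrinsics p = (rvec, t)
    cam = pts @ rodrigues(p[:3]).T + p[3:]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    return ((uv - pix) ** 2).sum(axis=1).mean()

def refine_extrinsics(p0, K, pts, pix, iters=300, eps=1e-6):
    # gradient descent with central-difference gradients and a
    # backtracking line search to guarantee monotone error decrease
    p = p0.astype(float).copy()
    for _ in range(iters):
        g = np.zeros(6)
        for i in range(6):
            d = np.zeros(6)
            d[i] = eps
            g[i] = (reproj_mse(p + d, K, pts, pix)
                    - reproj_mse(p - d, K, pts, pix)) / (2 * eps)
        step, c = 1.0, reproj_mse(p, K, pts, pix)
        while step > 1e-12 and reproj_mse(p - step * g, K, pts, pix) >= c:
            step *= 0.5
        p = p - step * g
    return p
```

Run periodically against the bone-mounted fiducials, such a refinement keeps the overlay registration from drifting as OR conditions change.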
The purpose of this study was to evaluate the thermal and tissue changes associated with the use of electrocautery hemostasis on dependent tissues such as the penis. Circumcision was performed on twelve male sheep using a Gomco clamp and surgical removal. Electrocautery was then applied circumferentially to 6 separate sites along the circumcision incision for a duration of 2 seconds per application, with a 10-second interval between applications. Coagulation electrocautery power was set at either 25 or 50 Watts. Temperature changes were monitored by fiber-optic temperature probes placed immediately beneath the circumcision site in the penile urethra and at the base of the penis. Three animals were sacrificed acutely, while the remaining 15 animals were sacrificed one month post-procedure. Penises were assessed by gross and histologic observation. Immediately post-procedure, epidermal and superficial dermal necrosis was observed at the cautery sites. By 32 days post-procedure, the primary tissue change at the cautery sites consisted of scarring and mild inflammation. Energy level (25 vs 50 Watts) did not significantly affect the level of tissue damage. Penile tissue subadjacent or distant to the cautery sites showed no gross or histologic change at the acute or chronic assessment period. During electrocautery, a maximum increase of 7°C (1-2 mm from the site of electrocautery) was observed regardless of the energy level. Temperature elevations commonly returned to near baseline within 5 minutes post-application. Smaller increases in temperature (range 1-2°C) occurred at the base of the penis. Our results support the long-held perception that electrocautery hemostasis can be safely employed in penile surgery if the appropriate technique is used. Significant temperature and tissue changes are confined to very small, localized regions immediately adjacent to the cautery applicator.
During neurosurgery, intraoperative brain shift compromises the accuracy of image-guided techniques. We are investigating the use of ultrasound as an inexpensive means of gaining 3D data on subsurface tissue deformation. Measured displacement of easily recognizable features can then be used to drive a computational model for a description of full-volume deformation. Subsurface features identified in the ultrasound image plane are located in world space using a 3D optical tracking system mounted to the ultrasound scanhead. This tracking system is also co-registered with the model space derived from preoperative MR, allowing the ultrasound image plane to be reconstructed in MR space and the corresponding oblique MR slice to be obtained. The ultrasound image tracker has been calibrated with a novel strategy involving multiple scans of N-shaped wires positioned at several depths. Mean calibration error is found to range from 0.43 mm to 0.76 mm in plane and 0.86 mm to 1.51 mm out of plane for the two ultrasound image scales calibrated. Improved ultrasound calibration and co-registration facilitates subsurface feature tracking as a first step in obtaining model constraints for intraoperative image compensation. Estimation of and compensation for brain shift through the low-cost, efficient technology of ultrasound, combined with computational modeling, is feasible and appears to be a promising means of improving intraoperative image-guided techniques.
Distortion between the operating field and preoperative images increases as image-guided surgery progresses. Retraction is a typical early-stage event that causes significant tissue deformation, which can be modeled as an intraoperative compensation strategy. This study compares the predictive power of incremental versus single-step retraction models in the porcine brain. In vivo porcine experiments were conducted that involved implanting markers in the brain whose trajectories were tracked in CT scans following known incremental deformations induced by a retractor blade placed interhemispherically. Studies were performed using a 3D consolidation model of brain deformation to investigate the relative predictive benefits of incremental versus single-step retraction simulations. The results show that both models capture greater than 75% of tissue loading due to retraction. We have found that the incremental approach outperforms the single-step method with an average improvement of 1.5%-3%. More importantly, it also preferentially recovers the directionality of movement, providing better correspondence to intraoperative surgical events. A new incremental approach to tissue retraction has been developed and shown to improve data-model match in retraction experiments in the porcine brain. Incremental retraction modeling is an improvement over previous single-step models and does not incur additional computational overhead. Results in the porcine brain show that even when the overall displacement magnitudes between the two models are similar, directional trends of the displacement field are often significantly improved with the incremental method.
Compensation for intraoperative tissue motion in the registration of preoperative image volumes with the OR is important for improving the utility of image guidance in the neurosurgery setting. Model-based strategies for neuroimage compensation are appealing because they offer the prospect of retaining the high-resolution preoperative information without the expense and complexity associated with full-volume intraoperative scanning. Further, they present opportunities to integrate incomplete or sparse, partial-volume sampling of the surgical field as a guide for full-volume estimation and subsequent compensation of the preoperative images. While potentially promising, there are a number of unresolved difficulties associated with deploying computational models for this purpose. For example, to date they have only been successful in representing the tissue motion that occurs during the earliest stages of neurosurgical intervention and have not addressed the later, more complex events of tissue retraction and resection. In this paper, we develop a mathematical framework for implementing retraction and resection within the context of finite element modeling of brain deformation using the equations of linear consolidation. Specifically, we discuss the critical boundary conditions applied at the new tissue surfaces created by these respective interventions and demonstrate the ability to model compound events where updated image volumes are generated in succession to represent the significant occurrences of tissue deformation which take place during the course of surgery. In this regard, we show image compensation for an actual OR case involving the implantation of a subdural electrode array for recording neural activity.
The desire for noninvasive monitoring of thermal therapy is readily apparent given its intent to be a minimally-invasive form of treatment. Electromagnetic properties of tissue vary with temperature; hence, the opportunity exists to exploit these variations as a means of following thermally-based therapeutic interventions. The review describes progress in electrical impedance tomography and active microwave imaging towards the realization of noninvasive temperature estimation. Examples are drawn from the author's experiences with these technologies in order to illustrate the principles and practices associated with electromagnetic imaging in the therapy monitoring context.
KEYWORDS: Tissues, Brain, Motion models, Protactinium, Magnetic resonance imaging, In vivo imaging, Data modeling, Computed tomography, Ultrasonography, 3D modeling
For more than a decade, surgical procedures have benefited significantly from the advent of OR (operating room) coregistered preoperative CT (computed tomographic) and MR (magnetic resonance) imaging. Despite advances in imaging and image registration, one of the most challenging problems is accounting for intraoperative tissue motion resulting from surgical loading conditions. Due to the considerable expense and cumbersome nature of intraoperative MR/CT scanners and the lack of high spatial definition of intracranial anatomy with ultrasound, we have elected to pursue a physics-based computational approach to account for tissue deformation in the context of frameless stereotactic neurosurgery. We have developed a computational model of the brain based on porous media physics and have begun to quantify subsurface deformation due to comparable surgical loads using an in vivo porcine model. Templates of CT-observable markers are implanted in a grid-like fashion in the pig brain to quantify tissue motion. Preliminary results based on the simplest of model assumptions are encouraging and have predicted displacement within 15% of measured values. In this paper, a series of computations is compared to experimental data to further understand the impact of material properties and pressure gradients within a homogeneous model of brain deformation. The results show that the best fits are obtained with Young's moduli and Poisson's ratios which are smaller than those values typically reported in the literature. As the Poisson ratio decreases towards 0.4, the corresponding Young's modulus increases towards the low end of the values contained in the literature. The optimal pressure gradient is found to be within physiological limits but generally higher than literature values would suggest for a given level of imparted loading, although differences between our experiments and those in the literature with respect to tissue loading conditions are noted.
KEYWORDS: Tissues, Analog electronics, Electrodes, Data acquisition, Image restoration, Injuries, Digital electronics, In vivo imaging, Tumors, Radiotherapy
Electrical properties of tissues in the 10 kHz to 10 MHz range are known to be temperature sensitive, making the monitoring and assessment of thermal insult delivered for therapeutic purposes possible through imaging schemes which spatially resolve these changes. We have been developing electrical impedance imaging technology from both the hardware data acquisition and software image reconstruction perspectives in order to realize the capability of spectroscopically examining the electrical property response of tissues undergoing hyperthermia therapy. Results from simulations, in vitro phantom experiments, and in vivo studies, including in human patients, are presented. Specifically, a new prototype multi-frequency data acquisition system which is functional to 1 MHz in both voltage and current modes is described. In addition, recent advances in image reconstruction methods which include the enhancement techniques of total variation minimization, dual meshing, and spatial filtering are discussed. It is also clear that the electrical impedance spectrum of tissue has the potential to monitor other types of treatment-induced injury. Preliminary in vivo electrical impedance measurements in a rat leg model suggest that the tissue damage from radiation therapy can be tracked with this technique. Both dose- and time-dependent responses have been observed in the electrical impedance data when compared to measurements recorded in an untreated control. Correlations with histological examination have also been performed and indicate that electrical impedance spectroscopy may provide unique information regarding tissue functional status and cellular morphology. Representative results from these studies are reported.