The evaluation of head malformations plays an essential role in the early diagnosis of craniosynostosis, the decision to perform surgery, and the assessment of surgical outcomes. Clinicians rely on two metrics to evaluate head shape: head circumference (HC) and cephalic index (CI). However, these metrics suffer from high inter-observer variability and do not account for the location of head abnormalities. In this study, we present an automated framework to objectively quantify head malformations, HC, and CI from three-dimensional (3D) photography, a radiation-free, fast, and non-invasive imaging modality. Our method automatically extracts the head shape using a set of landmarks identified by registering the head surface of a patient to a reference template in which the position of the landmarks is known. Then, we quantify head malformations as the local distances between the patient’s head and its closest normal shape from a normative statistical head shape multi-atlas. We calculated cranial malformations, HC, and CI for 28 patients with craniosynostosis and compared them with those computed from the normative population. Malformation differences between the two populations were statistically significant (p<0.05) at the head regions with abnormal development due to suture fusion. We also trained a support vector machine classifier using the calculated malformations and obtained an accuracy of 91.03% in the detection of craniosynostosis, an improvement over the 78.21% obtained with HC or CI. This method has the potential to assist in the longitudinal evaluation of cranial malformations after surgical treatment of craniosynostosis.
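As a rough illustration of the classification step described in this abstract, the following sketch trains an RBF-kernel SVM on per-region malformation features. The feature layout (one mean malformation value per head region), the synthetic data, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): detecting craniosynostosis from
# regional malformation features with an SVM. Data and feature layout are
# synthetic and purely illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per subject, one column per head region (mean local distance in mm
#    between the subject's head surface and its closest normative shape).
# y: 1 = craniosynostosis, 0 = normative subject.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.7, 0.4, (50, 6)),    # synthetic normative features
               rng.normal(2.7, 0.8, (28, 6))])   # synthetic patient features
y = np.concatenate([np.zeros(50), np.ones(28)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)        # cross-validated accuracy
print(f"mean accuracy: {scores.mean():.3f}")
```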
Ultrasound (US)-guided renal biopsy is a critically important tool in the evaluation and management of non-malignant renal pathologies with diagnostic and prognostic significance. It requires good biopsy technique and skill to safely and consistently obtain high-yield biopsy samples for tissue analysis. This project aims to develop a virtual trainer to help clinicians improve procedural skill competence in real-time ultrasound-guided renal biopsy. This paper presents a cost-effective, high-fidelity trainer built using low-cost hardware components and open-source visualization and interactive simulation libraries: the interactive medical simulation toolkit (iMSTK) and 3D Slicer. We used a physical mannequin to simulate the tactile feedback that trainees experience while scanning a real patient and to provide trainees with spatial awareness of the US scanning plane with respect to the patient’s anatomy. The ultrasound probe and biopsy needle were modeled on commonly used clinical tools and were instrumented to communicate with the simulator. 3D Slicer was used to visualize an image sliced from a pre-acquired 3D ultrasound volume based on the location of the probe, with realistic needle rendering. The simulation engine in iMSTK modeled the interaction between the needle and the virtual tissue to generate visual deformations of the tissue and tactile forces that are transmitted to the needle held by the user. Initial testing has shown promising results with respect to the quality of simulated images and system responsiveness. Further evaluation by clinicians is planned for the next stage.
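The image slicing described above can be illustrated with a minimal sketch, assuming the tracked probe pose is available as a 4x4 transform into the volume's voxel space; this does not reflect the trainer's actual 3D Slicer/iMSTK code, and the plane size and spacing are arbitrary.

```python
# Minimal sketch (illustrative assumptions): resample a 2D US image plane from a
# pre-acquired 3D volume, given the tracked probe pose in volume voxel space.
import numpy as np
from scipy.ndimage import map_coordinates

def slice_volume(volume, probe_to_volume, width=256, height=256, spacing=0.5):
    """Sample the z=0 plane of the probe frame inside `volume` (z, y, x order)."""
    u = (np.arange(width) - width / 2) * spacing     # lateral axis
    v = np.arange(height) * spacing                  # depth axis
    uu, vv = np.meshgrid(u, v)
    # Points on the imaging plane in probe coordinates (homogeneous).
    pts = np.stack([uu.ravel(), vv.ravel(),
                    np.zeros(uu.size), np.ones(uu.size)])
    vol_pts = probe_to_volume @ pts                  # map into volume voxel space
    coords = vol_pts[[2, 1, 0], :]                   # reorder to (z, y, x)
    img = map_coordinates(volume, coords, order=1, mode="constant", cval=0.0)
    return img.reshape(height, width)

# Example: an identity pose simply re-slices near the first plane of a toy volume.
vol = np.random.rand(64, 128, 128).astype(np.float32)
frame = slice_volume(vol, np.eye(4))
```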
Despite strong evidence of the clinical and economic benefits of minimally invasive surgery (MIS) for many common surgical procedures, there is a gross underutilization of MIS in many US hospitals, potentially due to its steep learning curve. Intraoperative videos captured using a camera inserted into the body during MIS procedures are emerging as an invaluable resource for MIS education, skill assessment, and quality assurance. However, these videos often last several hours, and there is a pressing need for automated tools to help surgeons quickly find key semantic segments of interest within MIS videos. In this paper, we present a novel integrated approach for facilitating content-based retrieval of video segments that are semantically similar to a query video within a large collection of MIS videos. We use state-of-the-art deep 3D convolutional neural network (CNN) models pre-trained on large public video classification datasets to extract spatiotemporal features from MIS video segments, and we employ an iterative query refinement (IQR) strategy wherein a support vector machine (SVM) classifier, trained online from relevance feedback provided by the user, is used to refine the search results iteratively. We show that our method outperforms the state of the art on the SurgicalActions160 dataset containing 160 video clips of typical surgical actions in gynecologic MIS procedures.
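One round of the iterative query refinement described above could look like the following sketch, where an SVM is retrained on user feedback and its decision scores re-rank pre-computed CNN descriptors; the descriptor dimensionality and all names are assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative) of SVM-based relevance feedback for re-ranking
# video segments described by pre-computed 3D-CNN features.
import numpy as np
from sklearn.svm import LinearSVC

def refine_ranking(features, relevant_idx, irrelevant_idx):
    """features: (n_segments, d) descriptors; index lists come from user feedback."""
    X = np.vstack([features[relevant_idx], features[irrelevant_idx]])
    y = np.concatenate([np.ones(len(relevant_idx)), np.zeros(len(irrelevant_idx))])
    svm = LinearSVC(C=1.0).fit(X, y)
    scores = svm.decision_function(features)   # higher = more similar to the query
    return np.argsort(-scores)                 # segment indices sorted by relevance

feats = np.random.rand(160, 512)               # e.g., one descriptor per clip
ranking = refine_ranking(feats, relevant_idx=[3, 7], irrelevant_idx=[10, 42, 99])
```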
Surgical simulators are powerful tools for providing advanced training in complex craniofacial surgical procedures and for objective skills assessment, such as those needed to perform Bilateral Sagittal Split Osteotomy (BSSO). One of the crucial steps in simulating BSSO is accurately cutting the mandible in a specific area of the jaw, where surgeons rely on high-fidelity visual and haptic cues. In this paper, we present methods to simulate drilling and cutting of the bone using a burr and a motorized oscillating saw, respectively. Our method enables low-computational-cost bone drilling and cutting while providing high-fidelity haptic feedback suitable for real-time virtual surgery simulation.
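One common way to realize bone drilling in such simulators, shown below as an assumption rather than the authors' method, is voxel-based material removal under a spherical burr, with a simple penetration-depth force used for haptics. The sketch is deliberately unoptimized.

```python
# Minimal sketch (an assumption about one common approach, not necessarily the
# authors' method): voxel-based bone removal for a spherical burr plus a crude
# penetration-depth force estimate for haptic feedback.
import numpy as np

def drill_step(bone_mask, voxel_size, tip_pos, burr_radius, stiffness=0.5):
    """bone_mask: 3D boolean array of remaining bone; tip_pos in mm."""
    zz, yy, xx = np.indices(bone_mask.shape)
    centers = np.stack([zz, yy, xx], axis=-1) * voxel_size
    dist = np.linalg.norm(centers - tip_pos, axis=-1)
    inside = (dist < burr_radius) & bone_mask
    penetration = burr_radius - dist[inside]
    force_magnitude = stiffness * penetration.sum()   # crude resistance estimate
    bone_mask[inside] = False                          # remove drilled voxels
    return bone_mask, force_magnitude

bone = np.ones((32, 32, 32), dtype=bool)
bone, f = drill_step(bone, voxel_size=0.5, tip_pos=np.array([8.0, 8.0, 8.0]),
                     burr_radius=2.0)
```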
There has been a recent emphasis in surgical science on supplementing surgical training outside of the Operating Room (OR). Combining simulation training with the current surgical apprenticeship enhances surgical skills in the OR without increasing the time spent practicing in the OR. Computer-assisted surgical (CAS) planning consists of performing operative techniques virtually using three-dimensional (3D) computer-based models reconstructed from 3D cross-sectional imaging. The purpose of this paper is to present a CAS system to rehearse, visualize, and quantify osteotomies, and to demonstrate its usefulness in two different osteotomy procedures: cranial vault reconstruction and femoral osteotomy. We found that the system could adequately simulate both procedures. Our system takes advantage of the high-quality visualizations possible with 3D Slicer and implements new infrastructure to allow direct 3D interaction (cutting and positioning) with the bone models. We see the proposed osteotomy planner evolving towards incorporating different cutting templates to help depict several surgical scenarios, help 'trained' surgeons maintain operating skills, help rehearse a surgical sequence before heading to the OR, or even help surgical planning for specific patient cases.
The evaluation of cranial malformations plays an essential role both in the early diagnosis of craniosynostosis and in the decision to perform surgical treatment. In clinical practice, both cranial shape and suture fusion are evaluated using CT images, which involve the use of harmful radiation in children. Three-dimensional (3D) photography offers noninvasive, radiation-free, and anesthetic-free evaluation of craniofacial morphology. The aim of this study is to develop an automated framework to objectively quantify cranial malformations in patients with craniosynostosis from 3D photography. We propose a new method that automatically extracts the cranial shape by identifying a set of landmarks from a 3D photograph. Specifically, it registers the 3D photograph of a patient to a reference template in which the position of the landmarks is known. Then, the method finds the closest cranial shape to that of the patient from a normative statistical shape multi-atlas built from 3D photographs of healthy cases, and uses it to objectively quantify cranial malformations. We calculated the cranial malformations for 17 patients with craniosynostosis and compared them with the malformations of the normative population used to build the multi-atlas. The average malformation of the craniosynostosis cases was 2.68 ± 0.75 mm, significantly higher (p<0.001) than the average malformation of 1.70 ± 0.41 mm obtained from the normative cases. Our approach can support the quantitative assessment of surgical procedures for cranial vault reconstruction without exposing pediatric patients to harmful radiation.
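The closest-shape selection and local malformation measurement described above can be sketched as follows, under the simplifying assumptions that all shapes are already registered to a common frame and are represented as point sets; this is illustrative, not the authors' code.

```python
# Minimal sketch (illustrative assumptions): pick the normative atlas shape
# closest to the patient's cranial surface, then report per-point distances to
# that shape as local malformations.
import numpy as np
from scipy.spatial import cKDTree

def malformation(patient_pts, atlas_shapes):
    """patient_pts: (n, 3) cranial surface points (already registered/aligned);
    atlas_shapes: list of (m_i, 3) point sets from the normative multi-atlas."""
    best = None
    for shape in atlas_shapes:
        d, _ = cKDTree(shape).query(patient_pts)   # distance to nearest atlas point
        if best is None or d.mean() < best.mean():
            best = d
    return best                                    # per-point malformation in mm

atlas = [np.random.rand(500, 3) * 100 for _ in range(5)]
patient = np.random.rand(400, 3) * 100
local_malformation = malformation(patient, atlas)
print(f"average malformation: {local_malformation.mean():.2f} mm")
```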
Cochlear implantation is the standard of care for infants born with severe hearing loss. Current guidelines approve the surgical placement of implants as early as 12 months of age. Implantation at a younger age poses a greater surgical challenge, since the underdeveloped mastoid tip, along with thin calvarial bone, leaves less room for surgical navigation and can increase surgical risk. We have been developing a temporal bone dissection simulator based on actual clinical cases for training otolaryngology fellows in this delicate procedure. The simulator system is based on pre-procedure CT (computed tomography) images from pediatric infant cases (<12 months old) at our hospital. The simulator includes: (1) a simulation engine providing a virtual-reality temporal bone surgery environment, (2) a newly developed haptic interface for holding the surgical drill, (3) an Oculus Rift providing a microscope-like view of the temporal bone surgery, and (4) a user interface for interacting with the simulator through the Oculus Rift and the haptic device. To evaluate the system, we collected 10 representative CT datasets and segmented the key structures: cochlea, round window, facial nerve, and ossicles. The simulator presents these key structures to the user and warns the user when needed by continuously calculating the distances between the tip of the surgical drill and the key structures.
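The proximity warning described at the end of this abstract amounts to a nearest-neighbor distance check per structure; the sketch below assumes each structure is available as a point set and uses a KD-tree per structure, which is one reasonable way to do this, not necessarily the simulator's.

```python
# Minimal sketch (illustrative): warn when the drill tip comes within a safety
# margin of any segmented key structure, using one KD-tree per structure.
import numpy as np
from scipy.spatial import cKDTree

structures = {
    "facial_nerve": cKDTree(np.random.rand(300, 3) * 40),   # surface points (mm)
    "cochlea": cKDTree(np.random.rand(300, 3) * 40),
}

def proximity_warnings(drill_tip, margin_mm=2.0):
    warnings = {}
    for name, tree in structures.items():
        dist, _ = tree.query(drill_tip)       # distance to nearest structure point
        if dist < margin_mm:
            warnings[name] = dist
    return warnings                            # e.g. {"facial_nerve": 1.3} -> alert

print(proximity_warnings(np.array([20.0, 20.0, 20.0])))
```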
Stenosis of the upper airway affects approximately 1 in 200,000 adults per year [1], and occurs in neonates as well [2]. Its treatment is often dictated by institutional factors and clinicians’ experience or preferences [3]. Objective and quantitative methods of evaluating treatment options hold the potential to improve care for stenosis patients, and virtual surgical planning software tools are critically important for this. The Virtual Pediatric Airway Workbench (VPAW) is a software platform designed and evaluated for upper airway stenosis treatment planning. It couples computational fluid dynamics (CFD) simulation with geometric authoring and derives objective metrics from both to support informed evaluation and planning. However, the planner currently lacks physiological information that could impact surgical planning outcomes. In this work, we integrated BioGears, a lumped-parameter, model-based human physiology engine, with VPAW, and demonstrated the use of a physiology-informed virtual surgical planning platform for patient-specific stenosis treatment planning. The preliminary results show that incorporating patient-specific physiology in the pretreatment plan would play an important role in patient-specific surgical trainers and planners for airway surgery and other types of surgery that are significantly impacted by physiological conditions during surgery.
Laparoscopic surgery is a minimally invasive surgical approach in which surgical instruments are passed through ports placed at small incisions. This approach can benefit patients by reducing recovery times and scarring. Surgeons have gained greater dexterity, accuracy, and vision through the adoption of robotic surgical systems. However, in some cases a preselected set of ports cannot be accommodated by the robot: the robot’s arms may collide during the procedure, or the surgical targets may not be reachable through the selected ports. In such cases, the surgeon must either make more incisions for additional ports or abandon the laparoscopic approach entirely. To assist with this, we are building an easy-to-use system which, given a surgical task and preoperative medical images of the patient, recommends a suitable port placement plan for the robotic surgery. This work makes two main contributions: 1) a high-level user interface that assists the surgeon in operating the complicated underlying planning algorithm; and 2) an interface to assist the surgical team in implementing the recommended plan in the operating room. We believe that such an automated port placement system would reduce setup time for robotic surgery and reduce the morbidity to patients caused by unsuitable surgical port placement.
The skull of young children is made up of bony plates that enable growth. Craniosynostosis is a birth defect that causes one or more sutures of an infant’s skull to close prematurely. Corrective surgery focuses on cranial and orbital rim shaping to return the skull to a more normal shape. Functional problems caused by craniosynostosis, such as speech and motor delay, can improve after surgical correction, but a post-surgical analysis of brain development in comparison with age-matched healthy controls is necessary to assess surgical outcome. Full brain segmentations obtained from pre- and post-operative computed tomography (CT) scans of 8 patients with single-suture, nonsyndromic sagittal (n=5) and metopic (n=3) craniosynostosis, aged 41 to 452 days, were included in this study. Age-matched controls, obtained via 4D acceleration-based regression of a cohort of 402 full brain segmentations from magnetic resonance images (MRI) of healthy controls (ages 38 to 825 days), were also used for comparison. 3D point-based models of the patient and control cohorts were obtained using the SPHARM-PDM shape analysis tool. From the full dataset of regressed shapes, 240 healthy regressed shapes between 30 and 588 days of age (time step = 2.34 days) were selected. Volumes and shape metrics were obtained for craniosynostosis and healthy age-matched subjects. Volumes and shape metrics in single-suture craniosynostosis patients were larger than those of age-matched controls both pre- and post-surgery. The 3D shape and volumetric measurements show that brain growth is not normal in patients with single-suture craniosynostosis.
Ultrasound is widely used intra-operatively to provide real-time feedback in image-guided intervention procedures, and registration of pre- and intra-operative images is a crucial step in these procedures. Unfortunately, real-time US images often have a poor signal-to-noise ratio and suffer from imaging artifacts. Hence, registration using US images can be challenging, and significant preprocessing is often required to make the registrations robust. The amount of preprocessing required can be reduced by incorporating the physics of the ultrasound imaging process. However, progress in this research is hampered by the lack of a publicly available database for training and testing image analysis algorithms that take the ultrasound physical process into consideration. We present here a new database that we are building to archive and distribute ultrasound images of an abdominal phantom acquired with different image acquisition parameters. The database contains tracking information of the transducer in addition to the 2D ultrasound image slices. We believe a publicly available database like this one will provide a valuable resource for the research community and will be instrumental in developing the collaborative scientific community needed to advance the field.
KEYWORDS: Surgery, Visualization, Laparoscopy, 3D modeling, 3D acquisition, Cameras, Image registration, Medical imaging, Data modeling, Detection and tracking algorithms
Laparoscopic surgery is a minimally invasive surgical approach in which abdominal surgical procedures are performed through trocars placed via small incisions. Patients benefit from reduced postoperative pain, shortened hospital stays, improved cosmetic results, and faster recovery times. Optimal port placement can improve surgeon dexterity and avoid the need to move the trocars, which would cause unnecessary trauma to the patient. We are building an intuitive open-source visualization system to help surgeons identify ports. Our methodology is based on an intuitive port placement visualization module and an atlas-based registration algorithm to transfer port locations to individual patients. The methodology follows three steps: 1) use the port placement visualization module to manually place ports in an abdominal organ atlas, generating a port-augmented abdominal atlas (this is done only once for a given patient population); 2) register the atlas data with the patient CT data to transfer the prescribed ports to the individual patient; 3) review and adjust the transferred port locations using the port placement visualization module. Tool maneuverability and target reachability can be tested using the visualization system. Our methodology would decrease the amount of physician input necessary to optimize port placement for each patient case. In follow-up work, we plan to use the transferred ports as a starting point for further optimization of the port locations by formulating a cost function that takes into account factors such as tool dexterity and the likelihood of collision between instruments.
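Step 2 of the methodology, transferring port locations via the atlas-to-patient registration, reduces to mapping points through the estimated transform. The sketch below assumes an affine (4x4 homogeneous) transform has already been computed by the registration; it is illustrative, not the system's implementation.

```python
# Minimal sketch (illustrative assumptions): map atlas port locations into the
# patient's coordinate frame using a precomputed atlas-to-patient transform.
import numpy as np

def transfer_ports(atlas_ports, atlas_to_patient):
    """atlas_ports: (n, 3) port positions in atlas space (mm);
    atlas_to_patient: 4x4 homogeneous transform from the registration."""
    homog = np.hstack([atlas_ports, np.ones((atlas_ports.shape[0], 1))])
    return (atlas_to_patient @ homog.T).T[:, :3]

atlas_ports = np.array([[120.0, 85.0, 40.0],
                        [150.0, 60.0, 35.0]])
T = np.eye(4); T[:3, 3] = [5.0, -3.0, 2.0]       # toy translation-only transform
patient_ports = transfer_ports(atlas_ports, T)    # then reviewed/adjusted manually
```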
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by using the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
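The per-node update that makes TLED-style solvers attractive for GPU acceleration is an explicit central-difference step; the sketch below shows the undamped form only, with the per-element internal force computation stubbed out, and is an illustration rather than the simulator's GPU kernels.

```python
# Minimal sketch (illustrative): explicit central-difference time integration of
# the kind used in TLED solvers. Damping is omitted and f_int would normally come
# from a total Lagrangian per-element loop.
import numpy as np

def tled_step(u_curr, u_prev, f_ext, f_int, lumped_mass, dt):
    """All arrays are (n_nodes, 3); lumped_mass is (n_nodes, 1)."""
    accel = (f_ext - f_int) / lumped_mass
    u_next = dt * dt * accel + 2.0 * u_curr - u_prev   # explicit central difference
    return u_next

n = 1000
u_curr = np.zeros((n, 3)); u_prev = np.zeros((n, 3))
f_ext = np.zeros((n, 3)); f_ext[:, 2] = 1e-3           # toy external load
f_int = np.zeros((n, 3))                                # placeholder internal forces
m = np.full((n, 1), 1e-3)
u_next = tled_step(u_curr, u_prev, f_ext, f_int, m, dt=1e-4)
```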
Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative images (typically MRI or CT scans). IGS systems use localization systems to track and visualize surgical tools overlaid on preoperative images of the patient during surgery. The most commonly used localization systems in the operating room (OR) are optical tracking systems (OTS), due to their ease of use and cost effectiveness. However, OTSs suffer from the major drawback of line-of-sight requirements. State-space approaches based on different implementations of the Kalman filter have recently been investigated to compensate for short line-of-sight occlusions. However, the proposed parameterizations of the rigid body orientation suffer from singularities at certain rotation angles. The purpose of this work is to develop a quaternion-based Unscented Kalman Filter (UKF) for robust optical tracking of both the position and orientation of surgical tools, in order to compensate for marker occlusion. This paper presents preliminary results towards a Kalman-based Sensor Management Engine (SME) that will filter and fuse multimodal tracking data streams. This work was motivated by our experience with robot-based applications for keyhole neurosurgery (the ROBOCAST project). The algorithm was evaluated using real data from an NDI Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a maximum error of 2.5° in orientation and 2.36 mm in position. The proposed approach will be useful in crowded state-of-the-art ORs, where achieving continuous visibility of all tracked objects is difficult.
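To make the UKF idea concrete, the sketch below shows one prediction step for the position part of the state only (sigma-point generation and propagation through a constant-velocity model); the quaternion orientation requires a dedicated error-state treatment and is deliberately omitted, so this is an illustration of the filter's structure rather than the paper's algorithm.

```python
# Minimal sketch (illustrative, position/velocity only): one UKF prediction step
# of the kind used to bridge short marker occlusions.
import numpy as np

def ukf_predict(x, P, Q, dt, kappa=0.0):
    """x: state [px, py, pz, vx, vy, vz]; P, Q: 6x6 covariance matrices."""
    n = x.size
    F = np.eye(n); F[:3, 3:] = dt * np.eye(3)        # constant-velocity model
    S = np.linalg.cholesky((n + kappa) * P)
    sigmas = np.vstack([x, x + S.T, x - S.T])        # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    prop = sigmas @ F.T                               # propagate each sigma point
    x_pred = w @ prop
    diff = prop - x_pred
    P_pred = (w[:, None] * diff).T @ diff + Q
    return x_pred, P_pred

x0 = np.zeros(6); P0 = np.eye(6) * 0.01; Q = np.eye(6) * 1e-4
x_pred, P_pred = ukf_predict(x0, P0, Q, dt=1.0 / 60.0)
```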
This paper presents ongoing research that addresses uncertainty along the Z-axis in image-guided surgery, for applications with large surgical workspaces, including those found in veterinary medicine. Veterinary medicine lags human medicine in the use of image guidance, despite the availability of MR and CT scans of animals. The positional uncertainty of a surgical tracking device can be modeled as an octahedron with one long axis coinciding with the depth axis of the sensor, where the short axes are determined by pixel resolution and workspace dimensions. The further a 3D point is from the device, the more elongated this long axis becomes, and the greater the uncertainty along Z of the point's position relative to its components along X and Y. Moreover, for a triangulation-based tracker, the position error degrades with the square of the distance. Our approach is to use two or more Micron Trackers that communicate with each other, and to combine this feature with flexible positioning. Prior knowledge of the type of surgical procedure and, if applicable, of the species of animal that determines the scale of the workspace would allow the surgeon to pre-operatively configure the trackers in the OR for optimal accuracy. Our research also leverages the open-source Image-Guided Surgery Toolkit (IGSTK).
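A simple way to picture the benefit of combining two trackers, sketched below under stated assumptions, is inverse-covariance fusion of the same 3D point, where each measurement carries an anisotropic covariance whose Z variance grows with the square of the distance to the tracker; rotating each covariance into a common world frame is omitted for brevity, and the noise parameters are invented.

```python
# Minimal sketch (illustrative assumptions): fuse two position estimates of the
# same point with depth-dependent Z uncertainty via inverse-covariance weighting.
import numpy as np

def tracker_covariance(distance_mm, sigma_xy=0.1, sigma_z0=0.3):
    """Diagonal covariance in the tracker frame; Z error grows ~ distance^2."""
    sigma_z = sigma_z0 * (distance_mm / 1000.0) ** 2 + sigma_z0
    return np.diag([sigma_xy**2, sigma_xy**2, sigma_z**2])

def fuse(p1, C1, p2, C2):
    """Information-form fusion, assuming both estimates share a common frame."""
    I1, I2 = np.linalg.inv(C1), np.linalg.inv(C2)
    C = np.linalg.inv(I1 + I2)
    return C @ (I1 @ p1 + I2 @ p2), C

p1, p2 = np.array([10.0, 5.0, 800.0]), np.array([10.2, 5.1, 801.5])
C1 = tracker_covariance(np.linalg.norm(p1))
C2 = tracker_covariance(np.linalg.norm(p2))
p_fused, C_fused = fuse(p1, C1, p2, C2)
```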
In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of image-guided intervention systems is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with the widespread use of extreme programming methodology, which emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention, such as a tracking device.

Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application programming interface (API) is cross-platform and runs on Windows, Linux, and macOS.

We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery, and showed that regression testing of tracking devices can be performed at low cost and significantly improves the quality of the software.
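The kind of regression test this framework enables can be sketched as follows: drive the robot through known poses, record the tracker output, and fail the test if the error statistics exceed a tolerance. The function and data below are illustrative assumptions, not the toolkit's API.

```python
# Minimal sketch (illustrative, not the framework's C++ API): compare commanded
# robot positions with tracker-reported positions and apply a pass/fail tolerance.
import numpy as np

def check_tracking_accuracy(commanded_mm, measured_mm, tol_rms_mm=1.0):
    """commanded_mm, measured_mm: (n, 3) arrays of corresponding positions."""
    errors = np.linalg.norm(commanded_mm - measured_mm, axis=1)
    rms = np.sqrt(np.mean(errors**2))
    return rms <= tol_rms_mm, rms

commanded = np.array([[0, 0, 0], [50, 0, 0], [50, 50, 0], [0, 50, 0]], float)
measured = commanded + np.random.normal(0, 0.3, commanded.shape)   # stand-in data
ok, rms = check_tracking_accuracy(commanded, measured)
print(f"RMS error: {rms:.2f} mm -> {'PASS' if ok else 'FAIL'}")
```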
KEYWORDS: Video, Ultrasonography, Image-guided intervention, Medical imaging, 3D video streaming, Computed tomography, 3D image processing, Imaging systems, Visualization, Surgery
The Image-Guided Surgery Toolkit (IGSTK) is an open-source C++ library that provides the basic components required for developing image-guided surgery applications. While the initial version of the toolkit has been released, additional functionality is required for certain applications. With increasing demand for real-time intraoperative image data in image-guided surgery systems, we are adding a video grabber component to IGSTK to access intraoperative imaging data such as video streams. Intraoperative data could be acquired from real-time imaging modalities such as ultrasound or endoscopic cameras. The acquired image could be displayed as a single slice in a 2D window or integrated into a 3D scene. For accurate display of the intraoperative image relative to the patient's preoperative image, proper interaction and synchronization with IGSTK's tracker and other components is necessary. Several issues must be considered during the design phase: 1) the functions of the video grabber component; 2) the interaction of the video grabber component with existing and future IGSTK components; and 3) the layout of the state machine in the video grabber component. This paper describes the design of the video grabber component and presents example applications that use it.
The Image-Guided Surgery Toolkit (IGSTK) is an open-source C++ software library that provides the basic components needed to develop image-guided surgery applications. The focus of the toolkit is on robustness, achieved through a state machine architecture. This paper presents an overview of the project based on a recent book, which can be downloaded from igstk.org. The paper includes an introduction to open-source projects, a discussion of our software development process and the best practices that were developed, and an overview of requirements. The paper also presents the architecture framework and main components, followed by a discussion of the state machine model that was incorporated and the associated rationale. The paper concludes with an example application.
The objective of this research is to evaluate and compare the performance of our automated detection algorithm on isolated and attached nodules in whole-lung CT scans. Isolated nodules are surrounded by the lung parenchyma with no attachment to large solid structures such as the chest wall or mediastinum surface, while attached nodules are adjacent to these structures.
The detection algorithm involves three major stages. First, the region of the image space where pulmonary nodules are to be found is identified; this involves segmenting the lung region and generating the pleural surface. In the second stage, the hypothesis generation stage, nodule candidate locations are identified and their sizes are estimated. The nodule candidates are then successively refined in the third stage by a sequence of filters of increasing complexity.
The algorithm was tested on a dataset containing 250 low-dose whole-lung CT scans with 2.5 mm slice thickness. A scan is composed of images covering the whole lung region for a single person. The dataset was partitioned into 200 scans for training and 50 scans for testing the algorithm. Only solid nodules were considered in this study. Experienced chest radiologists identified a total of 447 solid nodules, of which 345 were from the training dataset and 102 from the testing dataset. 126 (28.2%) of the nodules in the dataset were attached nodules.
Detection performance was then evaluated separately for isolated and attached nodules over different size ranges. For nodules 3 mm and larger, the algorithm achieved a sensitivity of 97.8% with 2.0 false positives (FPs) per scan for isolated nodules and 95.7% with 19.3 FPs per scan for attached nodules. For nodules 4 mm and larger, a sensitivity of 96.6% with 1.5 FPs per scan and a sensitivity of 100% with 13 FPs per scan were obtained for isolated and attached nodules, respectively. The results show that our algorithm detects isolated and attached nodules with comparable sensitivity but with differing numbers of false positives per scan. The high number of false positives for attached nodule detection was mainly due to the complexity of the mediastinal lung surface.
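Sensitivity and false positives per scan, as reported above, follow directly from matching detections against radiologist-marked ground truth. The sketch below uses a simple distance-based hit criterion; the matching rule and threshold are illustrative assumptions, not the study's scoring protocol.

```python
# Minimal sketch (illustrative scoring): compute sensitivity and FPs per scan
# given per-scan detections and ground-truth nodule centers.
import numpy as np

def evaluate(detections_by_scan, truths_by_scan, hit_dist_mm=5.0):
    tp, fp, n_truth = 0, 0, 0
    for scan_id, truths in truths_by_scan.items():
        n_truth += len(truths)
        dets = detections_by_scan.get(scan_id, [])
        matched = set()
        for d in dets:
            dists = [np.linalg.norm(np.asarray(d) - np.asarray(t)) for t in truths]
            j = int(np.argmin(dists)) if dists else -1
            if j >= 0 and dists[j] <= hit_dist_mm and j not in matched:
                tp += 1
                matched.add(j)
            else:
                fp += 1
    sensitivity = tp / n_truth if n_truth else 0.0
    fp_per_scan = fp / max(len(truths_by_scan), 1)
    return sensitivity, fp_per_scan

truth = {"scan01": [(30.0, 40.0, 50.0)], "scan02": [(10.0, 20.0, 30.0)]}
dets = {"scan01": [(31.0, 41.0, 49.0), (80.0, 80.0, 80.0)], "scan02": []}
print(evaluate(dets, truth))   # -> (0.5, 0.5)
```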