A novel technology for estimating both the pose and the joint flexion from a single musculoskeletal X-ray image is presented for automatic quality assessment of patient positioning. The method is based on convolutional neural networks and does not require pose or flexion labels of the X-ray images during training. The task is split into two steps: (i) detection of relevant bone contours in the X-ray by a feature-detection network, and (ii) regression of the pose and flexion parameters from the detected contours by a pose-estimation network. This separation enables the pose-estimation network to be trained on synthetic contours, generated by projecting an articulated 3D model of the target anatomy. It is demonstrated that data-augmentation techniques used during training of the pose-estimation network contribute significantly to the robustness of the algorithm. Feasibility of the approach is illustrated on lateral ankle X-ray exams. Validation was performed on X-rays of an anthropomorphic phantom of the foot-ankle joint, imaged in various controlled positions. Reference pose parameters were established by an expert using an interactive tool to align the articulated 3D joint model with the phantom image. Pose-estimation errors are on the order of 2 degrees per pose angle, comparable to expert performance. Because the foot phantom is rigid, the flexion parameter was constant, but the overall results indicate that this parameter is also estimated accurately.
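The two-step pipeline can be sketched in miniature: a stand-in 2-D "bone contour" is projected at a known pose angle, augmented copies form a synthetic training set, and a simple least-squares regressor plays the role of the pose-estimation network. All shapes, parameters, and the linear regressor are illustrative assumptions; the actual method uses convolutional networks and an articulated 3-D model.

```python
import numpy as np

def project_contour(angle_deg, n_pts=32):
    """Project a stylized 2-D bone contour (an ellipse) rotated by the pose
    angle. Stand-in for projecting the articulated 3-D joint model."""
    t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    pts = np.stack([2.0 * np.cos(t), 1.0 * np.sin(t)], axis=1)
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return pts @ rot.T

def augment(contour, rng, jitter=0.05):
    """Data augmentation: random point jitter plus a random translation."""
    return contour + rng.normal(0, jitter, contour.shape) + rng.uniform(-0.2, 0.2, 2)

# Build a synthetic training set: contour coordinates -> pose angle.
rng = np.random.default_rng(0)
angles = rng.uniform(-30, 30, 500)
X = np.stack([augment(project_contour(a), rng).ravel() for a in angles])

# Linear least-squares regressor as a stand-in for the pose-estimation network.
W, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], angles, rcond=None)

def estimate_pose(contour):
    return float(np.r_[contour.ravel(), 1.0] @ W)

err = abs(estimate_pose(project_contour(10.0)) - 10.0)  # error in degrees
```

Because the pose-estimation step never sees real images, only contours, it can be trained entirely on such synthetic projections, which is the key point of the separation.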
The quality of chest radiographs is a practical concern because deviations from quality standards cost radiologists' time, may lead to misdiagnosis, and carry legal risks. Automatic, reproducible assessment of the most important quality figures on every acquisition enables a radiology department to measure, maintain, and improve quality rates on an everyday basis. A method is proposed here to automatically quantify the quality of a chest PA radiograph with respect to (i) collimation, (ii) patient rotation, and (iii) inhalation state by localizing a number of anatomical features and computing quality figures in accordance with international standards. The anatomical features related to these quality aspects are robustly detected by a combination of three convolutional neural networks and two probabilistic anatomical atlases. An error analysis demonstrates the accuracy and robustness of the method. The proposed implementation runs in real time (under one second) on a CPU without any GPU support.
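To illustrate how quality figures can be derived from localized anatomical features, the sketch below computes a rotation-asymmetry figure from hypothetical clavicle and spine landmarks, and the collimation margins between a lung-field and a collimation bounding box. The specific definitions and landmark names are assumptions, not the exact standards-based formulas of the method.

```python
import numpy as np

def rotation_figure(clavicle_l, clavicle_r, spine_top, spine_bottom):
    """Patient-rotation surrogate: signed asymmetry of the medial clavicle
    ends relative to the spinous-process line (hypothetical definition).
    Returns 0 for a perfectly symmetric, non-rotated patient."""
    p, q = np.asarray(spine_top, float), np.asarray(spine_bottom, float)
    d = q - p
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to spine line
    dist = lambda x: float(np.dot(np.asarray(x, float) - p, n))
    dl, dr = dist(clavicle_l), dist(clavicle_r)
    return (abs(dl) - abs(dr)) / (abs(dl) + abs(dr))

def collimation_margins(lung_bbox, collimation_bbox):
    """Margins (in pixels) between the lung-field bounding box and the
    collimated field, as (left, top, right, bottom)."""
    lx0, ly0, lx1, ly1 = lung_bbox
    cx0, cy0, cx1, cy1 = collimation_bbox
    return (lx0 - cx0, ly0 - cy0, cx1 - lx1, cy1 - ly1)

rot = rotation_figure((-50, 100), (50, 100), (0, 0), (0, 300))  # symmetric -> 0
margins = collimation_margins((20, 30, 100, 200), (0, 0, 120, 220))
```

Once the feature-detection networks have localized the landmarks, such figures are cheap to compute, which is consistent with the sub-second CPU runtime reported above.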
Automated interpretation of CT scans is an important, clinically relevant area, as the number of such scans is increasing rapidly and their interpretation is time consuming. Anatomy localization is an important prerequisite for any such interpretation task. It can be performed by image-to-atlas registration, where the atlas serves as a reference space for annotations such as organ probability maps. Tissue-type-based atlases allow fast and robust processing of arbitrary CT scans. Here we present two methods that significantly improve tissue-type-based organ localization. The first problem is the definition of the tissue types themselves, which until now has been done heuristically, based on experience. We present a method to determine suitable tissue types automatically from sample images. The second problem is the restriction of the transformation space: all prior approaches use global affine maps. We present a hierarchical strategy that refines this global affine map: for each organ or region of interest, a localized tissue-type atlas is computed and used in a subsequent local affine registration step. A three-fold cross-validation on 311 CT images with different fields of view demonstrates a 33% reduction of the organ localization error.
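The hierarchical refinement can be illustrated with point correspondences: a global affine map is fitted to all matches, and a second, local affine map is fitted only inside one organ's region of interest, where the simulated anatomy deviates locally from the global transform. This is a minimal 2-D sketch under assumed data; the actual method registers tissue-type atlases, not point sets.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map (homogeneous 3x3, 2-D for brevity)
    taking src points onto dst points."""
    src_h = np.c_[src, np.ones(len(src))]
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (3, 2)
    A = np.eye(3)
    A[:2, :] = M.T
    return A

def apply_affine(A, pts):
    return np.c_[pts, np.ones(len(pts))] @ A[:2, :].T

rng = np.random.default_rng(1)
atlas_pts = rng.uniform(0, 100, (200, 2))
true_global = np.array([[1.1, 0.05, 3.0], [-0.02, 0.95, -2.0], [0, 0, 1]])
image_pts = apply_affine(true_global, atlas_pts)

# Simulate a local anatomical deviation inside one organ's ROI.
roi = atlas_pts[:, 0] < 30
local_def = np.array([[1.0, 0, 4.0], [0, 1.0, -3.0], [0, 0, 1]])
image_pts[roi] = apply_affine(local_def, image_pts[roi])

# Hierarchical strategy: global affine first, then local refinement on the ROI.
A_global = fit_affine(atlas_pts, image_pts)
A_local = fit_affine(atlas_pts[roi], image_pts[roi])

err_global = np.abs(apply_affine(A_global, atlas_pts[roi]) - image_pts[roi]).max()
err_local = np.abs(apply_affine(A_local, atlas_pts[roi]) - image_pts[roi]).max()
```

The local map recovers the ROI exactly in this toy setting, while the single global affine must compromise across the whole image, mirroring the reported reduction of the organ localization error.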
A fully automatic method for generating a whole-body atlas from CT images is presented. The atlas serves as a reference space for annotations. It is built from a large collection of partially overlapping medical images by a registration scheme. The atlas itself consists of probabilistic tissue-type maps and can represent anatomical variations. The registration scheme is based on an entropy-like measure of these maps and is robust with respect to field-of-view variations. In contrast to other atlas-generation methods, which typically rely on a sufficiently large set of annotated training cases, the presented method requires only the images. An iterative refinement strategy automatically stitches the images together to build the atlas.
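A minimal sketch of an entropy-like measure on probabilistic tissue-type maps is the voxel-wise Shannon entropy, which is low wherever a single tissue type dominates. The exact functional used for registration is not specified in this abstract, so the formula below is an assumed stand-in.

```python
import numpy as np

def map_entropy(prob_maps, eps=1e-12):
    """Voxel-wise Shannon entropy of probabilistic tissue-type maps.
    prob_maps: array of shape (T, ...) summing to 1 over the tissue axis T.
    Low entropy = confident tissue assignment."""
    p = np.clip(prob_maps, eps, 1.0)
    return -(p * np.log(p)).sum(axis=0)

uncertain = map_entropy(np.array([[0.5], [0.5]]))  # maximal for two tissues
confident = map_entropy(np.array([[1.0], [0.0]]))  # near zero
```

Minimizing such a measure over the stitched collection rewards alignments in which each atlas voxel is dominated by one tissue type, without requiring any manual annotations.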
Affine registration of unseen CT images to the probabilistic atlas can be used to transfer reference annotations, e.g. organ models for segmentation initialization or reference bounding boxes for field-of-view selection. The robustness and generality of the method are shown by a three-fold cross-validation of the registration on a set of 316 CT images of unknown content and large anatomical variability. As an example, 17 organs are annotated in the atlas reference space and their localization in the test images is evaluated. The method yields a recall (sensitivity), specificity, and precision of at least 96% and thus performs excellently in comparison to competing methods.
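The recall/precision evaluation of organ localization can be sketched with a simple overlap criterion: a transferred bounding box counts as a hit if its intersection-over-union with the reference box exceeds a threshold. The evaluation protocol and threshold here are illustrative assumptions, not the paper's exact criterion.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def localization_metrics(preds, gts, thr=0.5):
    """preds/gts: dicts organ -> box, or None when the organ is absent or
    not localized. Returns (recall, precision)."""
    tp = fp = fn = 0
    for organ in gts:
        p, g = preds.get(organ), gts[organ]
        if g is not None and p is not None and iou(p, g) >= thr:
            tp += 1
        elif g is not None:
            fn += 1
        elif p is not None:
            fp += 1
    recall = tp / (tp + fn) if tp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 1.0
    return recall, precision

preds = {"liver": (0, 0, 10, 10), "spleen": None}
gts = {"liver": (1, 1, 10, 10), "spleen": None}
recall, precision = localization_metrics(preds, gts)
```

With 17 organs per image, such per-organ counts aggregate directly into the dataset-level recall, specificity, and precision figures quoted above.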
KEYWORDS: Cartilage, Image segmentation, Bone, 3D modeling, Data modeling, Magnetic resonance imaging, Image processing, 3D image processing, Error analysis, Medical research
We present a fully automatic method for segmentation of knee-joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iteratively adapting the model according to image gradients. Thin-plate-spline interpolation is then used to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are finely adjusted by automatic iterative adaptation to the image data based on gray-value gradients. The method was validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and achieved a sensitivity of 83±6% relative to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9±7% as the secondary endpoint. Because cartilage is a thin structure, even small distance deviations produce large errors on a per-voxel basis, making the primary endpoint a demanding criterion.
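Both endpoints can be computed directly from binary segmentation masks; this sketch, on assumed toy masks, also shows why a thin structure punishes small boundary deviations so strongly on a per-voxel basis.

```python
import numpy as np

def cartilage_endpoints(auto_mask, manual_mask):
    """Per-voxel sensitivity (primary endpoint) and relative volume error
    (secondary endpoint) of an automatic vs. a manual segmentation mask."""
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.logical_and(auto, manual).sum()
    sensitivity = tp / manual.sum()
    vol_err = abs(int(auto.sum()) - int(manual.sum())) / manual.sum()
    return float(sensitivity), float(vol_err)

# Toy example: a 4-voxel "thin" manual structure; the automatic result
# misses a single boundary voxel, yet sensitivity already drops to 75%.
manual = np.zeros((4, 4), bool); manual[:2, :2] = True
auto = np.zeros((4, 4), bool);   auto[0, 0] = auto[0, 1] = auto[1, 0] = True
sens, verr = cartilage_endpoints(auto, manual)
```

For a structure only a few voxels thick, a one-voxel surface offset removes a large fraction of all voxels from the overlap, which is exactly why the per-voxel endpoint is the harder of the two.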
A novel and robust method for automatic scan planning of MRI examinations of knee joints is presented. Clinical knee examinations require the acquisition of a 'scout' image, in which the operator manually specifies the scan volume orientations (off-centres, angulations, field of view) for the subsequent diagnostic scans. This planning task is time-consuming and requires skilled operators. The proposed automated planning system determines orientations for the diagnostic scans from a set of anatomical landmarks derived by adapting active shape models of the femur, patella, and tibia to the acquired scout images. The expert knowledge required to position scan geometries is learned from previously planned scans, allowing individual preferences to be taken into account. The system automatically discriminates between left and right knees. This makes it possible to use and merge training data from both left and right knees, and to automatically transform all learned scan geometries to the side for which a plan is required, providing convenient integration of the automated scan planning system into the clinical routine. Assessment of the method on 88 images from 31 different individuals, exhibiting strong anatomical and positional variability, demonstrates the success, robustness, and efficiency of all parts of the proposed approach, which thus has the potential to significantly improve the clinical workflow.
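The left/right merging idea can be sketched by mirroring a learned scan geometry across the sagittal plane: a plan learned on one knee side is reflected for use on the other. The geometry representation (off-centre plus slice normal) and function names are illustrative assumptions.

```python
import numpy as np

def mirror_scan_geometry(centre, normal, mirror_x=0.0):
    """Reflect a planned scan volume (off-centre + slice normal) across the
    sagittal plane x = mirror_x, so that plans learned on one knee side can
    be reused for the other."""
    c = np.asarray(centre, float).copy()
    n = np.asarray(normal, float).copy()
    c[0] = 2 * mirror_x - c[0]  # reflect the off-centre position
    n[0] = -n[0]                # flip the x component of the slice normal
    return c, n

c, n = mirror_scan_geometry((10.0, 5.0, 3.0), (0.6, 0.0, 0.8))
c2, n2 = mirror_scan_geometry(c, n)  # mirroring twice restores the original
```

Because reflection is an involution, training plans from both sides can be pooled in one canonical side and mapped back on demand, which is what makes the merged training set usable in routine.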
KEYWORDS: Data modeling, Magnetic resonance imaging, Brain, 3D modeling, Scanners, Neuroimaging, Diagnostics, Process modeling, Head, Image acquisition
In clinical MRI examinations, the geometry of diagnostic scans is defined in an initial planning phase. The operator plans the scan volumes (off-centre, angulation, field-of-view) with respect to patient anatomy in 'scout' images. Often multiple plans are required within a single examination, distracting attention from the patient waiting in the scanner. A novel and robust method is described for automated planning of neurological MRI scans, capable of handling strong shape deviations from healthy anatomy. The expert knowledge required to position scan geometries is learned from previous example plans, allowing site-specific styles to be readily taken into account. The proposed method first fits an anatomical model to the scout data, and then new scan geometries are positioned with respect to extracted landmarks. The accuracy of landmark extraction was measured to be comparable to the inter-observer variability, and automated plans are shown to be highly consistent with those created by expert operators using clinical data. The results of the presented evaluation demonstrate the robustness and applicability of the proposed approach, which has the potential to significantly improve clinical workflow.
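Positioning a new scan geometry from extracted landmarks can be sketched in 2-D: a learned offset and angle, derived from previous example plans, are applied in the coordinate frame spanned by two landmarks (an assumed AC-PC line here; all names and values are illustrative).

```python
import numpy as np

def plan_from_landmarks(ac, pc, learned_offset, learned_angle_deg):
    """Position a 2-D scan geometry relative to the AC-PC landmark line.
    learned_offset = (along-axis, perpendicular) distances and
    learned_angle_deg are hypothetical values learned from example plans."""
    ac, pc = np.asarray(ac, float), np.asarray(pc, float)
    axis = pc - ac
    axis /= np.linalg.norm(axis)
    normal_dir = np.array([-axis[1], axis[0]])  # perpendicular direction
    centre = ac + learned_offset[0] * axis + learned_offset[1] * normal_dir
    angle = np.degrees(np.arctan2(axis[1], axis[0])) + learned_angle_deg
    return centre, angle

# A horizontal AC-PC line: the plan lands 5 units along and 2 above it.
centre, angle = plan_from_landmarks((0, 0), (10, 0), (5.0, 2.0), 15.0)
```

Expressing plans in a landmark-relative frame is what lets site-specific styles, captured once from example plans, transfer automatically to each new patient's anatomy.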