Cotton balls are a versatile and efficient tool commonly used in neurosurgical procedures to absorb fluids and manipulate delicate tissues. However, their use carries the risk of accidental retention in the brain after surgery. Retained cotton balls can trigger dangerous immune responses and complications such as adhesions and textilomas. In a previous study, we showed that ultrasound can safely detect cotton balls in the operating area because the acoustic properties of cotton differ markedly from those of the surrounding tissue. In this study, we enhance the experimental setup with a 3D-printed custom depth box and a Butterfly iQ handheld ultrasound probe. Cotton balls were placed in a variety of positions to evaluate size and depth detectability limits. Recorded images were then analyzed using a novel algorithm that implements the recently released YOLOv4, a state-of-the-art real-time object recognition system. According to the radiologists' review, the algorithm detected the cotton ball correctly 61% of the time, at approximately 32 FPS. The algorithm could accurately detect cotton balls as small as 5 mm in diameter, the size of the surgical balls used by neurosurgeons, making it a promising candidate for regular intraoperative use.
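The abstract does not include code; as a rough illustration of the detection pipeline it describes, the following is a minimal sketch of running a trained YOLOv4 model over recorded ultrasound frames using OpenCV's DNN module. The file names (`yolov4-cotton.cfg`, `yolov4-cotton.weights`, `ultrasound_clip.mp4`) and the confidence/NMS thresholds are hypothetical placeholders, not the authors' settings.

```python
import cv2

# Hypothetical file names; the trained model from the study is not public.
net = cv2.dnn.readNetFromDarknet("yolov4-cotton.cfg", "yolov4-cotton.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture("ultrasound_clip.mp4")  # recorded probe footage (assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Threshold values are illustrative, not taken from the paper.
    classes, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for score, box in zip(scores, boxes):
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"cotton {score:.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A single-pass detector of this kind is what makes the reported ~32 FPS plausible on commodity hardware, since each frame requires only one network forward pass.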
Efficiency and patient safety are top priorities in any surgical operation. One effective way to advance both is to automate many of the logistical and routine tasks that occur in the operating room. Inspired by smart-assistant technology already commonplace in the consumer sector, we engineered the Smart Hospital Assistant (SHA), a voice-controlled virtual assistant that handles natural speech recognition while executing a variety of functions to aid surgery. Simulated surgeries showed that the SHA reduced operating time, optimized surgical staff resources, and reduced the number of major touch points that can lead to surgical site infections. The SHA shows potential not only in the operating room but also in other healthcare environments that may benefit from virtual smart-assistant technology.
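The SHA's internals are not described at this level of detail; purely as a sketch of the general listen-recognize-dispatch pattern such an assistant implies, here is a minimal example using the `speech_recognition` Python library. The command phrases and handler functions are invented stand-ins, not the SHA's actual capabilities.

```python
import speech_recognition as sr

# Hypothetical command handlers standing in for the SHA's surgical functions.
def start_timer():
    print("Timer started.")

def page_staff():
    print("Paging on-call staff.")

COMMANDS = {
    "start timer": start_timer,
    "page staff": page_staff,
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Listening...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()  # cloud speech-to-text
    # Naive keyword matching; a production assistant would use real NLU.
    for phrase, handler in COMMANDS.items():
        if phrase in text:
            handler()
            break
    else:
        print(f"No matching command in: {text!r}")
except sr.UnknownValueError:
    print("Speech was unintelligible.")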
We propose a prostate computer-aided diagnosis (CAD) system based on a random forest that detects prostate cancer using a combination of spatial, intensity, and texture features extracted from three sequences: T2W, ADC, and B2000 images. The random forest training uses instance-level weighting so that small and large cancerous lesions, as well as small and large prostate backgrounds, are treated equally. Two other approaches, based on an AutoContext pipeline intended to make better use of sequence-specific patterns, were also considered: one pipeline applies a random forest to each individual sequence, while the other uses an image filter designed to produce probability-map-like images. These were compared with a previously published CAD approach based on a support vector machine (SVM) evaluated on the same data. The random forest, features, sampling strategy, and instance-level weighting improve prostate cancer detection performance [area under the curve (AUC) 0.93] compared with the SVM (AUC 0.86) on the same test data. Using a simple image-filtering technique as a first-stage detector to highlight likely regions of prostate cancer yields more stable learning than a learning-based first stage, owing to the varying visibility and ambiguity of annotations in each sequence.
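As a hedged illustration of the instance-level weighting idea described above (not the authors' implementation), the sketch below uses scikit-learn's `sample_weight` argument so that every lesion or background region contributes equally to training regardless of how many voxels it contains. All arrays are synthetic stand-ins for the study's feature data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: X holds per-voxel spatial/intensity/texture features
# (from T2W, ADC, B2000); y is 1 for cancer, 0 for background; lesion_id
# labels which lesion or background region each voxel belongs to.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 2, size=1000)
lesion_id = rng.integers(0, 20, size=1000)

# Instance-level weighting: each region's voxels share weight 1/|region|,
# so small and large lesions (and backgrounds) count equally in training.
counts = np.bincount(lesion_id)
weights = 1.0 / counts[lesion_id]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y, sample_weight=weights)
prob_map = clf.predict_proba(X)[:, 1]  # per-voxel cancer probability
```

Without such weighting, a forest trained on raw voxels is dominated by the largest lesions and the large background, which is exactly the imbalance the abstract's weighting scheme is meant to correct.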
Prostate cancer (PCa) is the second most common cause of cancer-related death in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that automatically detects lesions on mpMRI for readers to use as a reference. We investigated a deep convolutional neural network (DCNN) architecture to find an improved solution for PCa detection on mpMRI, adopting a network architecture from a state-of-the-art edge detector that takes an image as input and produces a probability map. Two-fold cross validation along with receiver operating characteristic (ROC) and free-response ROC (FROC) analyses were used to determine the performance of our deep-learning-based prostate CAD (CADDL). Its efficacy was compared with an existing prostate CAD system based on hand-crafted features, evaluated on the same test set. CADDL had an 86% detection rate at a 20% false-positive rate, while the top-down learning CAD had an 80% detection rate at the same false-positive rate; this translated to 94% and 85% detection rates, respectively, at 10 false positives per patient on the FROC. A CNN-based CAD can detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.
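For readers unfamiliar with reading a detection rate off a ROC curve at a fixed false-positive rate, the following is a minimal sketch using scikit-learn. The scores and labels are synthetic; in the study they would come from the CADDL probability maps under two-fold cross validation, and the FROC (false positives per patient) additionally requires patient-level grouping not shown here.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Synthetic lesion-level labels and detector scores (stand-ins only).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=500)
scores = labels * 0.5 + rng.normal(0.3, 0.25, size=500)

fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.2f}")

# Detection rate at a fixed 20% false-positive rate, as quoted in the abstract:
# take the last operating point whose FPR does not exceed 0.20.
idx = np.searchsorted(fpr, 0.20, side="right") - 1
print(f"Detection rate at 20% FPR: {tpr[idx]:.0%} (threshold {thresholds[idx]:.3f})")
```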