Reflection of photoacoustic (PA) signals from strong acoustic heterogeneities in biological tissue leads to reflection artifacts (RAs) in B-mode PA images. In practice, RAs often clutter clinically obtained PA images, making these images difficult to interpret in the presence of hypoechoic or anechoic biological structures. Towards PA artifact removal, several researchers have exploited 1) the frequency/spectrum content of time-series photoacoustic data in order to separate the true signal from artifacts, and 2) the multi-wavelength response of photoacoustic targets, assuming that the spectral nature of RAs correlates well with that of their corresponding source signals. These approaches require extensive offline processing and sometimes fail to correctly identify artifacts in deep tissue. This study demonstrates the use of a deep neural network with the U-Net architecture to detect and reduce RAs in B-mode PA images. To train the proposed deep learning model for the RA reduction task, a program was designed to randomly generate anatomically realistic digital phantoms of human fingers with the capacity to produce RAs when subjected to PA imaging. In-silico PA imaging experiments on these digital finger phantoms, modeling both photon transport and acoustic wave propagation, enabled the generation of 1800 training samples. The algorithm was tested on both PA images generated from digital phantoms and in-vivo PA data acquired from human fingers using a hand-held LED-based PA imaging system. Our results suggest that robust reduction of RAs with a deep neural network is possible if the network is trained with sufficiently realistic simulated images.
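For readers unfamiliar with the U-Net family of models referenced above, the sketch below shows a minimal encoder-decoder with skip connections trained on (artifact-laden, artifact-free) image pairs. The layer counts, channel widths, and single-channel input/output are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal U-Net-style sketch for reflection-artifact reduction in B-mode PA
# images. Depth, channel widths, and image size are assumptions for
# illustration only.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as used at each U-Net level."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class SmallUNet(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 1, kernel_size=1)  # artifact-reduced image

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)


# Training pairs: simulated B-mode PA image containing RAs -> artifact-free target.
model = SmallUNet()
x = torch.randn(4, 1, 128, 128)   # batch of artifact-laden images
y_hat = model(x)                  # predicted artifact-reduced images
```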
In photoacoustic imaging, accurate spectral unmixing is required to reveal functional and molecular information about tissue from multispectral photoacoustic imaging data. A significant challenge in deep-tissue photoacoustic imaging is the nonlinear dependence of the received photoacoustic signals on the local optical fluence and molecular distribution. To overcome this, we have developed an end-to-end unsupervised neural network based on autoencoders. The proposed method employs physical properties as constraints on the neural network, which effectively performs the unmixing and outputs the individual molecular concentration maps without a priori knowledge of their absorption spectra. The algorithm is tested on a set of simulated multispectral photoacoustic images comprising oxyhemoglobin, deoxyhemoglobin, and indocyanine green targets embedded inside a tissue-mimicking medium. These in silico experiments demonstrate promising photoacoustic spectral unmixing results using a completely unsupervised deep learning approach.
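One common way to realize such a physics-constrained, unsupervised unmixing network is a linear-mixing autoencoder, sketched below. The three-chromophore setting (HbO2, Hb, ICG) matches the abstract, but the layer sizes, the softmax abundance constraint, and the non-negative decoder weights standing in for the unknown absorption spectra are assumptions about how the physical constraints might be encoded, not the authors' exact network.

```python
# Sketch of an unsupervised linear-mixing autoencoder for PA spectral unmixing.
# The number of wavelengths, hidden width, and constraint choices are
# illustrative assumptions.
import torch
import torch.nn as nn


class UnmixingAutoencoder(nn.Module):
    def __init__(self, n_wavelengths=5, n_chromophores=3, hidden=32):
        super().__init__()
        # Encoder: per-pixel multispectral PA spectrum -> relative concentrations.
        self.encoder = nn.Sequential(
            nn.Linear(n_wavelengths, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_chromophores),
        )
        # Decoder weights act as the (unknown) absorption spectra to be learned.
        self.spectra = nn.Parameter(torch.rand(n_chromophores, n_wavelengths))

    def forward(self, x):
        # Non-negativity and sum-to-one constraints on abundances via softmax.
        abundances = torch.softmax(self.encoder(x), dim=-1)
        # Linear mixing model: reconstruction = abundances @ non-negative spectra.
        recon = abundances @ self.spectra.clamp(min=0)
        return recon, abundances


model = UnmixingAutoencoder()
pixels = torch.rand(1024, 5)                   # multispectral PA pixels (5 wavelengths)
recon, abundances = model(pixels)
loss = nn.functional.mse_loss(recon, pixels)   # self-supervised reconstruction loss
```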
Photoacoustic imaging shows great promise for clinical environments where real-time position feedback is critical, including the guidance of minimally invasive surgery, drug delivery, stem cell transplantation, and the placement of metal implants such as stents, needles, staples, and brachytherapy seeds. Photoacoustic imaging techniques generate high-contrast, label-free images of human vasculature, leveraging the high optical absorption of hemoglobin to generate measurable longitudinal pressure waves. However, the depth-dependent decreases in optical fluence and lateral resolution affect the visibility of deeper vessels or other absorbing targets. This poses a problem when the precise locations of vessels are critical for the application at hand, such as navigational tasks during minimally invasive surgery. To address this issue, a novel deep neural network was designed, developed, and trained to predict the location of circular chromophore targets in a strongly scattering tissue-mimicking background, given measurements of photoacoustic signals from a linear array of ultrasound elements. The network was trained on 16,240 samples of simulated sensor data and tested on a separate set of 4,060 samples. Both our training and test sets consisted of optical-fluence-dependent photoacoustic signal measurements from point sources at varying locations. Our network was able to predict the location of point sources with a mean axial error of 4.3 μm and a mean lateral error of 5.8 μm.
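The localization task described above is essentially a regression from raw channel data to a coordinate pair. The sketch below shows one plausible form of such a regressor; the array and time-sample dimensions (128 elements by 1024 samples) and the layer configuration are assumptions for illustration, not the authors' network.

```python
# Sketch of a CNN regressor mapping raw linear-array PA channel data to the
# (lateral, axial) position of a point-like chromophore target. Dimensions
# and layers are illustrative assumptions.
import torch
import torch.nn as nn


class SourceLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # (lateral, axial) coordinates

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


model = SourceLocator()
# One batch of simulated acquisitions: 128 transducer elements x 1024 time samples.
channel_data = torch.randn(8, 1, 128, 1024)
predicted_xy = model(channel_data)   # trained with an L2 loss against true positions
```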