Modern face ID systems often come at the cost of user privacy. To address this, some face ID systems incorporate image transformations in the detection pipeline. In particular, we consider transforms that convert human face images into non-face images (such as landscape images) to mask sensitive and bias-prone facial features and preserve privacy while maintaining identifiability.
We propose two metrics for studying the effectiveness of face image transformations used in privacy-preserving face ID systems. These metrics measure the invertibility of the transformations, ensuring that the metadata of the face (e.g. race, sex, age) cannot be inferred from the transformed image.
In this paper, we create mix-and-matched generative networks to address privacy and bias concerns in face recognition systems. Such systems have increasingly exhibited bias with respect to religion, gender, and race. To preserve the robustness of face ID systems while masking these bias-inducing facial features, we map the faces to neutral natural landscape images. This still leaves the possibility of estimating facial features from the landscape images. We address this issue with decorrelation shuffling functions between the latent spaces of the encoder and the generator networks, which decorrelate facial and landscape features and prevent adversarial recovery of the original face.
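As an illustration, the shuffling idea can be sketched as a fixed secret permutation applied to the encoder's latent vector before it modulates the generator. This is a minimal sketch under assumed names and mechanics, not the paper's actual implementation:

```python
import numpy as np

def make_shuffle(latent_dim, seed):
    """Fixed secret permutation of latent coordinates (a hypothetical
    stand-in for the decorrelation shuffling function)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(latent_dim)
    inv = np.argsort(perm)  # inverse permutation, held by authorized matchers
    return perm, inv

# A face-encoder latent is shuffled before it reaches the landscape
# generator, so latent coordinates no longer align with facial attributes.
latent_dim = 8
perm, inv = make_shuffle(latent_dim, seed=42)
z = np.arange(latent_dim, dtype=float)  # stand-in encoder output
z_shuffled = z[perm]
z_recovered = z_shuffled[inv]           # invertible only with the secret key
```

Without the inverse permutation, an attacker observing the generator's input cannot align latent coordinates with facial features.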
Many datasets in important fields like healthcare and finance are in a tabular format, where each observation is expressed as a vector of feature values. While there exist several competitive algorithms for such data, such as random forests and gradient boosting, convolutional neural networks (CNNs) are making tremendous strides in terms of new research and applications. In order to exploit the power of CNNs for these tabular datasets, we propose two vector-to-image transformations. One is a direct transformation, while the other is an indirect mechanism that first modulates the latent space of a trained generative adversarial network (GAN) with the observation vectors and then generates the images using the generator. On both simulated and real datasets, we show that CNNs trained on images produced by our proposed transforms achieve better predictive performance than random forests and neural networks trained on the raw tabular datasets.
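The direct transformation can be illustrated with a minimal sketch that zero-pads a feature vector to the next perfect square and reshapes it into a single-channel image. This mapping is an assumption for illustration; the paper's actual transform may differ:

```python
import numpy as np

def vector_to_image(x):
    """Direct vector-to-image transform (illustrative): zero-pad a
    feature vector to the next perfect square, then reshape it into
    a 2D single-channel image that a CNN can consume."""
    n = len(x)
    side = int(np.ceil(np.sqrt(n)))   # smallest square that fits n features
    padded = np.zeros(side * side, dtype=float)
    padded[:n] = x
    return padded.reshape(side, side)

img = vector_to_image(np.arange(10, dtype=float))  # 10 features -> 4x4 image
```

Each tabular row becomes one image, so a standard CNN classifier can be trained on the resulting image dataset without architectural changes.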
Propagation of action potentials occurs on millisecond timescales, motivating the development of methods capable of commensurately fast volume rendering for in vivo brain mapping. In practice, beam-scanning multiphoton microscopy is widely used to probe brain function, striking a balance between simplicity and penetration depth. However, conventional beam-scanning platforms generally do not provide full volume renderings at the speeds necessary to map propagation of action potentials. By combining a sparse sampling strategy based on Lissajous trajectory microscopy with temporal multiplexing for simultaneous imaging of multiple focal planes, whole volumes of cells are potentially accessible every millisecond.
KEYWORDS: 3D modeling, Denoising, Optical spheres, Aluminum, Reconstruction algorithms, Data modeling, Electron tomography, Tomography, Computer simulations, Transmission electron microscopy
Many important imaging problems in materials science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. In practice, however, determining a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms such as non-local means and BM3D, which are known to successfully capture non-local redundancies in images. But because these denoising operations are not explicitly formulated as cost functions, it is unclear how to incorporate them into the MBIR framework.
In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors, which decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high-quality tomographic reconstructions of a simulated aluminum-spheres dataset and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed and that edges are preserved. We also report lower RMSE values compared to conventional MBIR reconstruction using qGGMRF as the prior model.
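The decoupling idea can be sketched with a plug-and-play ADMM loop in which the prior enters only through a black-box denoiser. The example below uses an identity forward model and a Gaussian filter as the denoiser purely for illustration; the paper's tomographic forward model and 3D NLM prior are far more involved:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, denoise, rho=1.0, iters=50):
    """Plug-and-play ADMM sketch (identity forward model): the prior
    appears only through the `denoise` operator, so any off-the-shelf
    denoiser (NLM, BM3D, ...) can replace an explicit log-prior term."""
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (v - u)) / (1.0 + rho)  # data-fidelity proximal step
        v = denoise(x + u)                     # prior step: plug in the denoiser
        u = u + x - v                          # scaled dual update
    return x

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
noisy = clean + rng.normal(0, 0.3, clean.shape)
x_hat = pnp_admm(noisy, lambda z: gaussian_filter(z, sigma=1.0))
```

The key design point is that swapping the prior model requires changing only the `denoise` callable, leaving the data-fidelity step untouched.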
A beam-scanning microscope based on Lissajous trajectory imaging is described for achieving streaming 2D imaging with continuous frame rates up to 1.4 kHz. The microscope utilizes two fast-scan resonant mirrors to direct the optical beam on a circuitous trajectory through the field of view. By separating the full Lissajous trajectory time-domain data into sub-trajectories (partial, undersampled trajectories), effective frame rates much higher than the repeat rate of the full Lissajous trajectory are achieved, albeit with many unsampled pixels. A model-based image reconstruction (MBIR) 3D in-painting algorithm is then used to interpolate the missing data for the unsampled pixels and recover full images. The MBIR algorithm uses maximum a posteriori estimation with a generalized Gaussian Markov random field prior model for image interpolation. Because images are acquired using photomultiplier tubes or photodiodes, parallelization for multi-channel imaging is straightforward. Preliminary results show that, when combined with the MBIR in-painting algorithm, this technique can generate kHz-frame-rate images across six total dimensions of space, time, and polarization for SHG, TPEF, and confocal reflective birefringence data on a multimodal imaging platform for biomedical imaging. The use of a multichannel data acquisition card allows for multimodal imaging with perfect image overlay. Higher frame rates also reduce image blur due to sample motion.
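A Lissajous sub-trajectory and the sparse sampling mask it induces can be sketched as follows. The mirror frequencies, sample count, and grid size here are illustrative assumptions, not the instrument's actual parameters:

```python
import numpy as np

def lissajous_mask(fx, fy, n_samples, grid=64):
    """Mark which pixels a Lissajous beam trajectory visits during a
    short sub-trajectory. fx and fy stand in for the two resonant-mirror
    scan frequencies (illustrative values only)."""
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x = 0.5 * (1 + np.sin(2 * np.pi * fx * t))   # normalized beam position
    y = 0.5 * (1 + np.sin(2 * np.pi * fy * t))
    ix = np.minimum((x * grid).astype(int), grid - 1)
    iy = np.minimum((y * grid).astype(int), grid - 1)
    mask = np.zeros((grid, grid), dtype=bool)
    mask[iy, ix] = True
    return mask

# A short sub-trajectory covers only a fraction of the field of view;
# the unsampled pixels are what the MBIR in-painting step fills in.
mask = lissajous_mask(fx=63, fy=64, n_samples=2000, grid=64)
coverage = mask.mean()
```

Shorter sub-trajectories raise the effective frame rate but lower `coverage`, which is the trade-off the in-painting algorithm compensates for.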