Ultraviolet-visible (UV-Vis) spectroscopy is a well-established technique for real-time analysis of contaminants in finished drinking water and wastewater. However, it has struggled in surface water, because surface water such as river water has a more complex chemical composition than drinking water and lower concentrations of nutrient contaminants such as nitrate. Previous spectrophotometric analyses that use the absorbance peak in the UV region to estimate nitrate in drinking water perform poorly in surface water because of interference from suspended particles and dissolved organic carbon, which absorb light at similar wavelengths. To overcome these challenges, this paper develops a machine learning approach that utilizes the entire range of spectral wavelengths for accurate estimation of low concentrations of dissolved nutrients against the surface water background. The spectral training data used in this research are obtained by analyzing water samples collected from the bi-nationally (US-Canada) regulated Detroit River during agricultural seasons, using A.U.G. Signals' dual-channel spectrophotometer system. Reference concentrations of dissolved nitrate in these samples are validated by laboratory analysis. Several commonly used supervised learning techniques, including linear regression, support vector machines (SVM), and deep learning using convolutional neural networks (CNN) and long short-term memory (LSTM) networks, are studied and compared in this work. The results show that the SVM with a linear kernel, the CNN with a linear activation function, and the LSTM network are the best regression models, achieving a cross-validation root-mean-squared error (RMSE) of less than 0.17 ppm. These results demonstrate the effectiveness of the machine learning approach and the feasibility of real-time UV-Vis spectral analysis for monitoring dissolved nutrient levels in surface watersheds.
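As an illustration of the full-spectrum regression idea (not the paper's actual data or pipeline), the sketch below fits an ordinary least-squares model, a stand-in for the linear-kernel SVM, to synthetic absorbance spectra and reports a hold-out RMSE. All sizes, weights, and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset: each row is a full UV-Vis
# absorbance spectrum (50 wavelength bins here), and the target is
# the dissolved nitrate concentration in ppm.
n_samples, n_wavelengths = 200, 50
true_weights = rng.normal(0.0, 0.05, n_wavelengths)
X = rng.normal(1.0, 0.2, (n_samples, n_wavelengths))
y = X @ true_weights + rng.normal(0.0, 0.02, n_samples)  # 0.02 ppm noise

# Hold-out split; the paper's cross validation repeats such a fit
# over several folds and averages the error.
train, test = slice(0, 150), slice(150, None)

# Ordinary least squares over all wavelengths at once -- a linear
# model over the whole spectrum, like a linear-kernel SVM, rather
# than a single absorbance-peak calibration.
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
rmse = float(np.sqrt(np.mean((X[test] @ w - y[test]) ** 2)))
```

Because the model weights every wavelength, interference that is spread across the spectrum can be partially cancelled, which a single-peak calibration cannot do.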
The presented work is an extension of previous work carried out at A.U.G. Signals Ltd. The problem of vessel identification/verification is approached herein using deep learning neural networks in a persistent surveillance scenario. Using images with vessels in the scene, deep learning neural networks were set up to detect vessels in still imagery (visible wavelength). Different neural network designs were implemented for vessel detection and compared on the basis of learning performance (speed and required training set size) and estimation accuracy. Unique features from these designs were combined to create an optimized solution. This paper presents a comparison of the deep learning approaches implemented and their relative capabilities in vessel verification.
This paper studies the problem of achieving watermark semi-fragility in multimedia authentication through a composite hypothesis testing approach. The embedding of a semi-fragile watermark serves to distinguish legitimate distortions, caused by signal processing manipulations, from illegitimate ones caused by malicious tampering. This leads us to consider authentication verification as a composite hypothesis testing problem with the watermark as a priori information. Based on the hypothesis testing model, we investigate the best embedding strategy to assist the watermark verifier in making correct decisions. Our results show that the quantization-based watermarking method is more appropriate than the spread spectrum method for achieving the best tradeoff between the two error probabilities. This observation is confirmed by a case study of an additive white Gaussian noise channel with a Gaussian source, using two figures of merit: the relative entropy of the two hypothesis distributions and the receiver operating characteristic. Finally, we focus on common signal processing distortions such as JPEG compression and image filtering, and investigate the best test statistic and optimal decision regions for distinguishing legitimate from illegitimate distortions. The results show that our approach provides insights for authentication watermarking and allows better control of semi-fragility in specific applications.
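For the Gaussian case, both figures of merit mentioned above are easy to make concrete. The sketch below uses the closed-form relative entropy between two Gaussian hypothesis distributions and estimates one ROC operating point by Monte Carlo; the means, variances, and threshold are invented purely for illustration.

```python
import numpy as np

def kl_gaussians(mu0, var0, mu1, var1):
    # Closed-form relative entropy D( N(mu0,var0) || N(mu1,var1) ).
    return 0.5 * (np.log(var1 / var0)
                  + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Hypothetical test statistic: legitimate distortion H0 ~ N(0, 1)
# versus malicious tampering H1 ~ N(2, 1). Larger relative entropy
# means the two hypotheses are easier to separate.
d = float(kl_gaussians(0.0, 1.0, 2.0, 1.0))  # equals (mu1 - mu0)^2 / 2

# One point on the receiver operating characteristic at threshold t:
# false-alarm rate (legitimate flagged as tampered) and miss rate
# (tampering accepted as legitimate).
rng = np.random.default_rng(1)
h0 = rng.normal(0.0, 1.0, 100_000)
h1 = rng.normal(2.0, 1.0, 100_000)
t = 1.0
p_fa = float(np.mean(h0 > t))
p_miss = float(np.mean(h1 <= t))
```

Sweeping the threshold t traces out the full ROC curve, and the tradeoff between p_fa and p_miss is exactly the tradeoff between the two error probabilities discussed in the abstract.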
This paper focuses on the analysis and enhancement of watermarking-based security strategies for multimedia authentication. Based on an authentication game among a transmitter, its authorized receiver, and an opponent, the security of authentication watermarking is measured by the opponent's inability to launch a successful attack. In this work, we consider two traditional classes of security for authentication: computational security and unconditional security. First, we identify authentication watermarking as an error detection problem, which is different from the error correction coding used in robust watermarking. Then we analyze the computational and unconditional security requirements of an error detection code structure associated with quantization-based authentication watermarking schemes. We propose a novel security enhancement strategy that results in efficient and secure quantization-based embedding and verification algorithms. For computational security, cryptographic message authentication codes are incorporated, while unconditional security is obtained by using unconditionally secure authentication codes. Both theoretical analysis and experimental results are presented. They show that with our approach, protection is achieved without a significant increase in embedding distortion and without sacrificing the computational efficiency of the embedding and verification algorithms.
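The computational-security side can be illustrated with a standard HMAC construction; the feature bytes and key below are hypothetical, and the paper's scheme embeds such a tag via quantization-based watermarking rather than storing it separately.

```python
import hashlib
import hmac

def authentication_tag(features: bytes, key: bytes) -> bytes:
    # Computationally secure message authentication code over the
    # content features; a quantization-based scheme would embed this
    # tag as the watermark payload and recompute/compare it at
    # verification time.
    return hmac.new(key, features, hashlib.sha256).digest()

key = b"shared-secret-key"  # hypothetical key shared by transmitter/receiver
tag = authentication_tag(b"content-features", key)

# Verification succeeds only if the recomputed tag matches; any
# change to the features (or the tag) makes the comparison fail.
ok = hmac.compare_digest(tag, authentication_tag(b"content-features", key))
tampered = hmac.compare_digest(tag, authentication_tag(b"altered-features", key))
```

Without the key, forging a valid tag for modified content is computationally infeasible, which is precisely the game-theoretic security measure described above.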
This paper focuses on the use of nested lattice codes for the effective analysis and design of semi-fragile watermarking schemes for content authentication applications. We provide a design framework for digital watermarking that is semi-fragile to any form of acceptable distortion, random or deterministic, such that both objectives, robustness and fragility, can be effectively controlled and achieved. Robustness and fragility are characterized as two types of authentication errors. The encoder and decoder structures of semi-fragile schemes are derived and implemented using nested lattice codes to minimize these two types of errors. We then extend the framework to allow the legitimate and illegitimate distortions to be modelled as random noise. In addition, we investigate semi-fragile signature generation methods such that the signature is invariant to watermark embedding and legitimate distortion. A new approach, called MSB signature generation, is proposed and shown to be more secure than the traditional dual subspace approach. Simulations of semi-fragile systems on real images are provided to demonstrate the effectiveness of nested lattice codes in achieving the design objectives.
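The scalar special case of a nested-lattice embedder (quantization-index modulation) makes the coset structure and the two error types concrete; the step size and sample values below are arbitrary.

```python
import numpy as np

def qim_embed(x: float, bit: int, delta: float = 1.0) -> float:
    # Scalar QIM: the coarse lattice is delta*Z, and the watermark bit
    # selects one of two cosets, offset by 0 or delta/2. The host
    # sample is quantized to the nearest point of the chosen coset.
    offset = (delta / 2.0) * bit
    return float(delta * np.round((x - offset) / delta) + offset)

def qim_detect(y: float, delta: float = 1.0) -> int:
    # Minimum-distance decoding: pick the coset whose nearest lattice
    # point is closest to the received sample.
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)

y = qim_embed(3.37, 1)           # embed bit 1
bit_clean = qim_detect(y)        # no distortion: bit survives
bit_legit = qim_detect(y + 0.1)  # small ("legitimate") distortion: survives
bit_attack = qim_detect(y + 0.4) # large distortion pushes y into the
                                 # other coset and is flagged
```

Distortions smaller than delta/4 never change the decoded coset (robustness), while larger ones can (fragility), so choosing the lattice and its nesting controls the two authentication error types.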
This paper addresses the issue of robust data hiding in the presence of perceptual coding. Two common classes of data hiding schemes are considered: spread spectrum and quantization-based techniques. We identify analytically the advantages of both approaches under the lossy-compression class of attacks. Based on our mathematical model, a novel hybrid data hiding algorithm that exploits the best of both worlds is presented. Theoretical and simulation results demonstrate the superior robustness of the resulting hybrid scheme.
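The first of the two classes, additive spread spectrum, can be sketched with a correlation detector; the signal length, pattern, and embedding strength alpha below are invented for the example, and a hybrid scheme would combine such an embedder with a quantization-based one.

```python
import numpy as np

rng = np.random.default_rng(2)

def ss_embed(x, bit, pattern, alpha=0.5):
    # Additive spread spectrum: y = x + alpha * (+/-1) * pattern,
    # spreading one bit across all host coefficients.
    return x + alpha * (2 * bit - 1) * pattern

def ss_detect(y, pattern):
    # Correlation detector: the sign of <y, pattern> recovers the bit,
    # since the host signal is roughly uncorrelated with the pattern.
    return int(np.dot(y, pattern) > 0)

n = 256
pattern = rng.choice([-1.0, 1.0], size=n)  # shared pseudorandom key
host = rng.normal(0.0, 1.0, n)             # host signal coefficients

bit1 = ss_detect(ss_embed(host, 1, pattern), pattern)
bit0 = ss_detect(ss_embed(host, 0, pattern), pattern)
```

Because the bit energy alpha*n grows with the signal length while the host-pattern correlation only grows like sqrt(n), the detector tolerates substantial additive noise such as that introduced by lossy compression.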