Deep learning has developed rapidly in recent years, and deep learning-based steganography and steganalysis have achieved fruitful results. However, the ever-expanding architectures of deep learning-based steganalyzers incur huge computational and storage costs. In this article, we propose image steganalysis based on model compression, applying model compression to shrink the network structure of existing large, over-parameterized deep learning-based steganalyzers. We conducted extensive experiments on the BOSSBase+BOWS2 dataset. The experiments show that, compared with the original steganalysis model, the proposed model structure achieves comparable performance with fewer parameters and floating-point operations, and offers better portability and scalability.
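The abstract does not specify which compression technique is used, so as a minimal, hypothetical sketch, the snippet below illustrates one common approach: magnitude pruning, which zeroes the smallest-magnitude fraction of a weight tensor to reduce the effective parameter count.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A generic magnitude-pruning sketch; the paper does not detail its
    compression method, so this only illustrates the general idea of
    removing low-importance parameters from an over-parameterized model.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half the weights of a random 64x64 layer
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned = magnitude_prune(w, 0.5)
```

In practice such pruning is usually followed by fine-tuning to recover accuracy; the resulting sparse model needs less storage and fewer floating-point operations at inference time.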
KEYWORDS: Convolution, Steganalysis, Feature extraction, Convolutional neural networks, Steganography, Mobile devices, Network architectures, Information fusion, Algorithm development, Signal-to-noise ratio
With the continuous improvement in the accuracy of steganalysis based on convolutional neural networks (CNNs), network scale has grown explosively; consequently, CNNs demand substantial hardware resources and training time. To reduce the number of CNN parameters and improve the efficiency of steganalysis, we propose a lightweight steganalysis CNN called W-Net. The proposed W-Net first uses grouped convolution and channel-shuffle units to extract noise residuals, strengthening the information exchange between groups and improving feature extraction. In addition, depthwise separable convolution is applied to fuse positional information across different channels and spatial locations, achieving the effect of conventional convolution while reducing the number of network parameters. We verified the effect of activation functions on steganalysis accuracy through experiments. The proposed W-Net can detect payloads embedded by the S-UNIWARD spatial steganography algorithm at an embedding rate of 0.4 bpp. Compared with Xu-Net and Zhu-Net, the proposed W-Net improves the detection accuracy by 12.70% and 6.38%, respectively.
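The two building blocks named in the abstract can be sketched as follows. The channel shuffle mimics the ShuffleNet-style permutation that lets information cross group boundaries after a grouped convolution, and the parameter-count helpers show why a depthwise separable factorization is cheaper than a conventional convolution. This is an illustrative sketch, not W-Net's actual implementation; all function names are ours.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Permute channels across groups (ShuffleNet-style) so that a
    following grouped convolution sees channels from every group.
    x has shape (N, C, H, W); C must be divisible by `groups`."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)   # interleave the groups
             .reshape(n, c, h, w))

def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution -- the factorization
    that approximates a standard convolution with far fewer weights."""
    return c_in * k * k + c_in * c_out

# Example: a 3x3 layer mapping 32 -> 64 channels
standard = conv_params(32, 64, 3)                 # 18432 weights
separable = depthwise_separable_params(32, 64, 3) # 2336 weights
```

For this example layer the separable form uses roughly 8x fewer weights, which is the kind of saving that lets a lightweight steganalyzer approach the accuracy of a conventional one at a fraction of the cost.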