The interaction of dispersion and nonlinear effects gives rise to a wide variety of pulse dynamics and is a fundamental bottleneck for high-speed communications. Traditionally, time-consuming and computationally inefficient algorithms have been used to model these effects,1 so research in nonlinear optics and optical communications is increasingly adopting machine-learning-based methods.2–5 We present a comprehensive comparison of different neural network (NN) architectures for learning the nonlinear Schrödinger equation (NLSE). We use an NN-based approach to reconstruct the transmitted pulse (in both the temporal and spectral domains) from the pulse received through a highly nonlinear fiber (HNLF), without prior knowledge of the fiber parameters. Additionally, the trained network can predict the dispersion and nonlinear parameters of an unknown fiber. The proposed NN also eliminates the need for iterative reconstruction methods, which are computationally expensive and slow. A detailed comparison of six NN-based techniques is presented: the fully connected NN (FCNN), cascade NN (CaNN), convolutional NN (CNN), long short-term memory network (LSTM), bidirectional LSTM (BiLSTM), and gated recurrent unit (GRU). To our knowledge, the literature does not contain a detailed discussion of which NN architecture is most suitable for learning the transfer function of the fiber. Our study covers all popular NN architectures and enables estimation of the pulse profile for arbitrary pulse width, chirp, second- and third-order dispersion, nonlinearity, and fiber length, which can benefit nonlinear optics experiments and coherent optical communications. The growing popularity of NNs is driving the design and development of hardware optimized for NN processing. Given this flexibility and optimized hardware, the popularity of NNs in optics is set to increase.
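The core idea of the abstract — training a network to map the received field back to the transmitted pulse without knowing the fiber parameters — can be illustrated with a minimal sketch. The toy "fiber" below applies only second-order dispersion (a quadratic spectral phase), not the full NLSE, and the one-hidden-layer fully connected network, the pulse shapes, and all parameter values are illustrative assumptions, not the architecture or data of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fiber propagation: a quadratic spectral phase
# (second-order dispersion only; the real NLSE also has nonlinearity).
N = 32
beta2 = 0.5  # hypothetical dispersion coefficient (arbitrary units)
w = 2 * np.pi * np.fft.fftfreq(N)
H = np.exp(-1j * beta2 * w**2 / 2)

def propagate(x):
    """Apply the dispersive transfer function in the frequency domain."""
    return np.fft.ifft(np.fft.fft(x) * H)

def make_batch(n):
    """Random Gaussian transmitted pulses and their received versions,
    with real/imaginary parts concatenated into feature vectors."""
    t = np.linspace(-1, 1, N)
    X, Y = [], []
    for _ in range(n):
        width = rng.uniform(0.1, 0.4)
        tx = np.exp(-t**2 / (2 * width**2)).astype(complex)
        rx = propagate(tx)
        X.append(np.concatenate([rx.real, rx.imag]))
        Y.append(np.concatenate([tx.real, tx.imag]))
    return np.array(X), np.array(Y)

X, Y = make_batch(64)

# One-hidden-layer fully connected network trained with plain gradient
# descent to invert the propagation (receiver field -> transmitter field).
W1 = rng.normal(0, 0.1, (2 * N, 64))
W2 = rng.normal(0, 0.1, (64, 2 * N))
lr = 1e-2

def forward(X):
    h = np.tanh(X @ W1)
    return h, h @ W2

_, pred = forward(X)
loss_before = np.mean((pred - Y)**2)

for _ in range(500):
    h, pred = forward(X)
    err = 2 * (pred - Y) / Y.size          # gradient of the MSE
    W2 -= lr * h.T @ err
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - h**2))

_, pred = forward(X)
loss_after = np.mean((pred - Y)**2)
```

The same training loop structure carries over to the recurrent variants (LSTM, BiLSTM, GRU) compared in the paper; only the mapping from received samples to the prediction changes.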
Monocular depth cues present in images and videos aid depth perception in two-dimensional content. Our objective is to preserve the defocus depth cue, along with the salient regions, during video compression. We provide a method for opportunistic bit allocation during video compression using visual saliency information that combines image features, such as color and contrast, with the defocus-based depth cue. The method consists of two steps: saliency computation followed by compression. A nonlinear method combines the pure and defocus saliency maps into a final saliency map, and quantization values are then assigned across each frame on the basis of these saliency values. Experimental results show that the proposed scheme outperforms standard H.264 compression as well as the pure and defocus saliency methods used alone.
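The two-step pipeline above can be sketched as follows. The fusion rule, frame size, and the mapping from saliency to H.264-style quantization parameters (QP) are all illustrative assumptions; the paper's exact nonlinear combination and bit-allocation rule may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two per-macroblock saliency maps (values in [0, 1]).
H, W = 36, 64                      # hypothetical frame size in macroblocks
image_sal = rng.random((H, W))     # "pure" saliency: colour and contrast
defocus_sal = rng.random((H, W))   # defocus-based depth saliency

# One possible nonlinear fusion: favour whichever cue is stronger at each
# location, with a multiplicative term rewarding agreement between cues.
alpha = 0.7                        # assumed blend weight
fused = (alpha * np.maximum(image_sal, defocus_sal)
         + (1 - alpha) * image_sal * defocus_sal)
fused = (fused - fused.min()) / (np.ptp(fused) + 1e-9)  # normalize to [0, 1]

# Opportunistic bit allocation: salient macroblocks get a lower QP
# (finer quantization, more bits), non-salient ones a higher QP.
QP_MIN, QP_MAX = 22, 38            # assumed QP range
qp = np.round(QP_MAX - fused * (QP_MAX - QP_MIN)).astype(int)
```

In a real encoder, the per-macroblock `qp` map would be passed to the rate controller so that bit savings in non-salient regions offset the extra bits spent where the viewer is likely to look.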