Image super-resolution is a key problem in image restoration. While many methods focus on known degradations (such as bicubic downsampling), in real-world use cases the degradation models are complex and unknown, and therefore difficult to prepare for. Training such a model in a supervised manner requires paired data with real-world degradations. Because usually only unpaired high- and low-resolution data are available, owing to the high cost of collecting and aligning real paired data, methods that tackle blind super-resolution face the problem of choosing degradations for training-data generation. Some existing methods provide a degradation pipeline that includes noise injection, JPEG compression, downsampling with different kernels, etc. These methods may be effective in some cases, but they offer no mechanism for the pipeline to adapt to a real-world scenario and therefore lose performance. The approach presented in this paper instead simulates the degradation directly, without trying to construct it from a predefined list. This can be done with modern generative models such as diffusion models, which have strong generalization capabilities and are known to model data distributions well. The proposed method uses a diffusion model trained for low-resolution image generation to simulate the degradations and construct paired data from high-resolution data. We compare the proposed diffusion-based method with existing paired-data generation techniques and show that it yields a performance boost.
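The paired-data construction described in the abstract can be sketched as follows. This is a minimal, runnable illustration, not the authors' implementation: the `simulate_lr` function stands in for sampling from a diffusion model trained on real low-resolution images, and its "reverse diffusion" loop is a hypothetical placeholder so the pipeline runs end to end.

```python
import numpy as np

def simulate_lr(hr, scale=4, steps=10, rng=None):
    """Stand-in for a diffusion-based degradation simulator.

    A real implementation would run the reverse process of a diffusion
    model trained on real low-resolution images, conditioned on `hr`.
    Here we approximate it with average pooling plus a toy iterative
    refinement loop, purely so the sketch is executable.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = hr.shape[:2]
    # crop so the image is divisible by the scale, then average-pool
    hr_c = hr[:h - h % scale, :w - w % scale]
    lr = hr_c.reshape(hr_c.shape[0] // scale, scale,
                      hr_c.shape[1] // scale, scale).mean(axis=(1, 3))
    # toy "reverse diffusion": start from noise, iteratively denoise
    # toward the conditioning signal (hypothetical update rule)
    x = rng.standard_normal(lr.shape)
    for t in range(steps):
        x = x + (lr - x) * (t + 1) / steps
    return x

def build_paired_dataset(hr_images, scale=4):
    """Construct (HR, simulated-LR) training pairs from unpaired HR data."""
    return [(hr, simulate_lr(hr, scale)) for hr in hr_images]
```

The resulting pairs can then train a super-resolution network in the usual supervised fashion, with the simulated LR images playing the role of real-world degraded inputs.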
Mixed reality systems that integrate the real and digital worlds have recently gained popularity. The development of mixed reality devices in wearable form has been a significant trend in recent years. Key challenges in designing wearable mixed reality devices include reducing power consumption, enhancing performance, and minimizing device size. To address these challenges, neural processors, specialized hardware accelerators for convolutional neural network (CNN) processing, are being incorporated into mixed reality systems. Neural processors are used because most environmental analysis and visualization in mixed reality relies on CNNs. These energy-efficient, high-performance devices make wearables more comfortable to use by processing environmental data quickly on compact hardware.
The paper is devoted to the design of the microarchitecture of a neural processor for hardware acceleration of CNN processing, based on the authors' own processor architecture. The paper presents various microarchitectural solutions that can be used to accelerate CNN processing. We explore methods to optimize hardware resources and reduce the time required for CNN processing. To achieve high throughput in pipelined computation, different algorithms for convolution calculations in a systolic array are examined. Based on the results of this research, we provide estimates of the characteristics of the neural processor with the proposed microarchitecture.
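To illustrate the kind of pipelined convolution a systolic array performs, here is a small software simulation of an output-stationary 1D convolution. This is a generic textbook scheme, not the authors' microarchitecture: each processing element (PE) accumulates one output sample while inputs are streamed past it, one per cycle.

```python
def systolic_conv1d(x, w):
    """Simulate an output-stationary systolic array computing
    y[j] = sum_i w[i] * x[j + i]  (valid cross-correlation).

    One PE per output sample; each cycle a single input value is
    streamed in, and every PE that still needs it multiplies it by
    the appropriate weight and accumulates. In hardware, each PE
    would see the input delayed by one cycle relative to its
    neighbor; here the inner loop models all PEs in one cycle.
    """
    k = len(w)
    n_out = len(x) - k + 1
    acc = [0.0] * n_out              # output-stationary accumulators
    for t, xv in enumerate(x):       # stream one input per cycle
        for j in range(n_out):       # PE j is responsible for y[j]
            i = t - j                # which tap of the kernel applies
            if 0 <= i < k:
                acc[j] += w[i] * xv  # multiply-accumulate (MAC)
    return acc
```

Once the pipeline is full, such an array produces one MAC per PE per cycle, which is the source of the high throughput discussed in the paper.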