Paper
15 December 2023
Video quality assessment based on deep learning
Li Yan Zhang, Zhen Gang Lang
Proceedings Volume 12971, Third International Conference on Optics and Communication Technology (ICOCT 2023); 129710T (2023) https://doi.org/10.1117/12.3017706
Event: Third International Conference on Optics and Communication Technology (ICOCT 2023), 2023, Changchun, China
Abstract
This article proposes a no-reference video quality assessment method based on deep learning that aims to simulate human perception of video quality. The method evaluates video quality by learning effective feature representations in the spatiotemporal domain. First, in the spatial domain, a 2D-CNN is used to extract spatial quality features from video frames. Then, in the temporal domain, a recurrent neural network (RNN) and a pyramid feature aggregation (PFA) module are used to model temporal dependencies and aggregate the frame-level quality features. Experiments show that the proposed method performs well on the KoNViD-1k and CVD2014 datasets and exhibits strong generalization ability.
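The abstract does not include code, but the described pipeline (a 2D-CNN for per-frame spatial features, an RNN over the frame sequence, and pyramid-style aggregation of the frame-level features into a clip-level quality score) can be sketched roughly as follows. This is a minimal illustration under assumptions not stated in the paper: a ResNet-50 backbone stands in for the 2D-CNN, a GRU for the RNN, and multi-scale temporal average pooling for the PFA module; all model names, dimensions, and pooling levels are hypothetical.

# Minimal sketch of a no-reference VQA pipeline of the kind described above.
# Assumptions (not from the paper): ResNet-50 as the 2D-CNN, a GRU as the RNN,
# and multi-scale temporal average pooling standing in for the PFA module.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class NRVQASketch(nn.Module):
    def __init__(self, hidden_dim=128, pyramid_levels=(1, 2, 4)):
        super().__init__()
        # 2D-CNN backbone: per-frame spatial features (2048-d before the classifier).
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        # RNN over the frame sequence models temporal dependencies.
        self.rnn = nn.GRU(input_size=2048, hidden_size=hidden_dim, batch_first=True)
        self.pyramid_levels = pyramid_levels
        # Linear head regresses the aggregated features to a single quality score.
        self.head = nn.Linear(hidden_dim * sum(pyramid_levels), 1)

    def forward(self, video):
        # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).flatten(1)   # (b*t, 2048)
        seq, _ = self.rnn(feats.view(b, t, -1))             # (b, t, hidden_dim)
        # Pyramid-style aggregation: average-pool the temporal axis at several
        # scales and concatenate the pooled bins into one clip descriptor.
        pooled = [
            F.adaptive_avg_pool1d(seq.transpose(1, 2), level).flatten(1)
            for level in self.pyramid_levels
        ]
        return self.head(torch.cat(pooled, dim=1)).squeeze(-1)  # (b,) quality score


if __name__ == "__main__":
    model = NRVQASketch().eval()
    clip = torch.rand(1, 8, 3, 224, 224)   # one clip of 8 frames
    with torch.no_grad():
        print(model(clip))                  # predicted quality score

In practice the regression head would be trained against subjective quality scores such as the MOS values provided with KoNViD-1k or CVD2014, which is consistent with the datasets reported in the abstract.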
© (2023) Published by SPIE.
Li Yan Zhang and Zhen Gang Lang "Video quality assessment based on deep learning", Proc. SPIE 12971, Third International Conference on Optics and Communication Technology (ICOCT 2023), 129710T (15 December 2023); https://doi.org/10.1117/12.3017706
KEYWORDS
Video, Education and training, Feature extraction, Deep learning, Video compression, Data modeling, Video acceleration