Video frame interpolation based on depthwise over-parameterized recurrent residual convolution
Xiaohui Yang, Weijing Liu, Shaowen Wang
Abstract

To effectively address the challenges of large motion, complex backgrounds, and large occlusions in videos, we introduce an end-to-end video frame interpolation method based on recurrent residual convolution and depthwise over-parameterized convolution. Specifically, we devise a U-Net architecture that employs recurrent residual convolution to improve the quality of interpolated frames. First, the recurrent residual U-Net feature extractor extracts features from the input frames and yields a kernel for each pixel. Next, an adaptive collaboration of flows warps the input frames, which are then fed into the frame synthesis network to generate an initial interpolated frame. Finally, the network incorporates depthwise over-parameterized convolution to further enhance the quality of the interpolated frame. Experimental results on various datasets demonstrate that our method outperforms state-of-the-art techniques in both objective and subjective evaluations.
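To make the two building blocks named in the title concrete, the sketch below shows minimal PyTorch versions of a recurrent residual convolution unit and a depthwise over-parameterized convolution (DO-Conv) layer. This is an illustrative reconstruction under common formulations (R2U-Net-style recurrence; DO-Conv with depth multiplier equal to k*k), not the authors' released implementation; all class and parameter names here are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentResidualBlock(nn.Module):
    """Recurrent residual convolution unit (R2U-Net style): the same conv is
    applied t times, each step re-adding the block input, with an identity
    skip connection around the whole unit. A sketch, not the paper's code."""

    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.t = t
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match channels for the skip
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.proj(x)
        h = self.conv(x)
        for _ in range(self.t):
            h = self.conv(x + h)  # recurrent refinement of the features
        return x + h              # residual connection


class DOConv2d(nn.Module):
    """DO-Conv sketch: a trainable depthwise kernel D is folded into a
    conventional kernel W before each forward pass, so inference costs the
    same as one standard convolution. Minimal case with D_mul = k*k."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        k2 = kernel_size * kernel_size
        self.kernel_size, self.padding = kernel_size, padding
        # Conventional kernel W: (out_ch, in_ch, k*k)
        self.W = nn.Parameter(torch.randn(out_ch, in_ch, k2) * 0.02)
        # Depthwise kernel D: (in_ch, k*k, k*k), initialized to the identity
        self.D = nn.Parameter(torch.eye(k2).repeat(in_ch, 1, 1))

    def forward(self, x):
        out_ch, in_ch, _ = self.W.shape
        # Fold D into W, then run one ordinary convolution
        W_eff = torch.einsum('oim,imk->oik', self.W, self.D)
        W_eff = W_eff.reshape(out_ch, in_ch, self.kernel_size, self.kernel_size)
        return F.conv2d(x, W_eff, padding=self.padding)


# Smoke test on a random feature map
x = torch.randn(1, 16, 64, 64)
y = DOConv2d(16, 32)(RecurrentResidualBlock(16, 16)(x))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Because D is initialized to the identity, the layer starts out equivalent to a plain convolution; the extra depthwise parameters only add capacity during training and fold away at inference.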

© 2024 SPIE and IS&T
Xiaohui Yang, Weijing Liu, and Shaowen Wang "Video frame interpolation based on depthwise over-parameterized recurrent residual convolution," Journal of Electronic Imaging 33(4), 043036 (7 August 2024). https://doi.org/10.1117/1.JEI.33.4.043036
Received: 29 March 2024; Accepted: 16 July 2024; Published: 7 August 2024
KEYWORDS: Interpolation, Video, Convolution, Feature extraction, Optical flow, Education and training, Motion estimation