With the emergence of Neural Radiance Fields (NeRF), arbitrary view synthesis has made significant progress. However, most existing methods perform well only on low-resolution inputs, and they typically suffer from blurred synthesized views and high memory footprints as the input resolution increases, especially for dynamic scenes. To this end, this paper proposes a novel and effective framework that achieves super-resolution dynamic NeRF for high-resolution arbitrary view rendering. Specifically, we first use a dynamic NeRF with a HexPlane representation to learn a low-resolution neural model of the dynamic scene, which can synthesize low-resolution images at arbitrary views and times. Then, a spatiotemporally consistent super-resolution module is designed to reconstruct high-resolution views from these synthesized images; it adopts a staged training strategy that equips our model with the ability to both perceive local geometric context and refine fine details. Experimental results demonstrate that our method effectively generates high-quality super-resolution images from arbitrary viewpoints and times in dynamic scenes.
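To make the first stage concrete, the sketch below illustrates how a HexPlane-style representation encodes a 4D spacetime point: the (x, y, z, t) domain is factorized into six 2D feature planes, each queried by bilinear interpolation, with complementary plane pairs fused by elementwise product. This is a minimal illustrative sketch under stated assumptions; the function names (`make_planes`, `query_hexplane`), the plane resolution, and the exact fusion rule are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical minimal sketch of a HexPlane-style feature query.
# Spacetime (x, y, z, t) is factorized into six 2D feature planes; a 4D point
# is encoded by bilinearly sampling each plane and fusing complementary pairs.

PLANE_AXES = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]  # xy, xz, yz, xt, yt, zt
# Each spatial plane is paired with the spatio-temporal plane
# that covers the remaining two axes.
PAIRS = [(0, 5), (1, 4), (2, 3)]  # xy*zt, xz*yt, yz*xt


def make_planes(feat_dim=16, res=64):
    """Six learnable feature planes, each of shape (1, C, res, res)."""
    return [torch.randn(1, feat_dim, res, res, requires_grad=True)
            for _ in PLANE_AXES]


def query_hexplane(planes, pts):
    """Encode 4D points pts (N, 4), coords in [-1, 1], into (N, C) features."""
    feats = []
    for plane, (a, b) in zip(planes, PLANE_AXES):
        # grid_sample expects a (1, 1, N, 2) grid of normalized coordinates.
        grid = pts[:, [a, b]].view(1, 1, -1, 2)
        sampled = F.grid_sample(plane, grid, align_corners=True)  # (1, C, 1, N)
        feats.append(sampled.view(plane.shape[1], -1).t())        # (N, C)
    # Fuse complementary pairs by elementwise product, then sum.
    return sum(feats[i] * feats[j] for i, j in PAIRS)


if __name__ == "__main__":
    planes = make_planes()
    pts = torch.rand(1024, 4) * 2 - 1  # random (x, y, z, t) samples
    print(query_hexplane(planes, pts).shape)  # torch.Size([1024, 16])
```

In practice, the fused feature would then be decoded by a small MLP into density and color before volume rendering produces the low-resolution frame.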
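Similarly, a rough sketch of the staged training idea for the super-resolution module: a first stage trains on a plain pixel loss so the network learns coarse local context, and a second stage fine-tunes at a lower learning rate with an added gradient term to sharpen details. `SRNet`, `edge_loss`, the loss weights, and the stage split are all illustrative assumptions, not the paper's training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRNet(nn.Module):
    """Toy x4 upsampler standing in for the spatiotemporal SR module (assumption)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3 * 16, 3, padding=1), nn.PixelShuffle(4),
        )

    def forward(self, x):
        return self.body(x)


def edge_loss(pred, target):
    """Illustrative detail term: match horizontal/vertical image gradients."""
    dx = lambda im: im[..., :, 1:] - im[..., :, :-1]
    dy = lambda im: im[..., 1:, :] - im[..., :-1, :]
    return F.l1_loss(dx(pred), dx(target)) + F.l1_loss(dy(pred), dy(target))


def train_staged(model, batches, steps1=1000, steps2=1000):
    # Stage 1: learn coarse local context with a plain pixel loss.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _, (lr_img, hr_img) in zip(range(steps1), batches):
        loss = F.l1_loss(model(lr_img), hr_img)
        opt.zero_grad(); loss.backward(); opt.step()
    # Stage 2: refine fine details at a lower learning rate with an edge term.
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    for _, (lr_img, hr_img) in zip(range(steps2), batches):
        pred = model(lr_img)
        loss = F.l1_loss(pred, hr_img) + 0.1 * edge_loss(pred, hr_img)
        opt.zero_grad(); loss.backward(); opt.step()


if __name__ == "__main__":
    def fake_batches():
        while True:  # infinite stream of random (low-res, high-res) pairs
            hr = torch.rand(2, 3, 64, 64)
            yield F.interpolate(hr, scale_factor=0.25, mode="bilinear"), hr

    train_staged(SRNet(), fake_batches(), steps1=5, steps2=5)
```

In the actual pipeline, the low-resolution inputs would come from the HexPlane renderer rather than downsampled ground truth, which is what ties the super-resolution module to the rendered views it must upsample consistently across time.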