Feb 2022: Congratulations to Dr. Huan Zhang, whose paper "Deep Learning-based Perceptual Video Quality Enhancement for 3D Synthesized View" was accepted by IEEE Transactions on Circuits and Systems for Video Technology (IEEE T-CSVT).

Fig. 1 Example of spatial and temporal distortions in synthesized video.


Fig. 2 Proposed framework of CNN-based synthesized video denoising in a 3D system.


Due to occlusion among views and temporal inconsistency in depth video, spatio-temporal distortion occurs in 3D synthesized video generated with depth image-based rendering. In this paper, we propose a deep Convolutional Neural Network (CNN)-based synthesized video denoising algorithm to reduce temporal flicker distortion and improve the perceptual quality of 3D synthesized video. First, we analyze the spatio-temporal distortion and formulate its elimination as a perceptual video denoising problem. Then, a deep learning-based synthesized video denoising network is proposed, in which a CNN-friendly spatio-temporal loss function is derived from a synthesized video quality metric and integrated with a single-image denoising network architecture. Finally, specific schemes, i.e., specific Synthesized Video Denoising Networks (SynVD-Nets), and a general scheme, i.e., a General SynVD-Net (GSynVD-Net), are developed on top of existing CNN-based denoising models to handle synthesized video with different distortion levels more effectively. Experimental results show that the proposed SynVD-Net and GSynVD-Net outperform deep learning-based counterparts and conventional denoising methods, and significantly enhance the perceptual quality of 3D synthesized video.
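For readers curious how a temporal term can be folded into a standard denoising loss, the PyTorch sketch below illustrates the general idea: a per-frame spatial fidelity term plus a flicker penalty on frame-to-frame differences. This is not the paper's actual loss (which is derived from a synthesized video quality metric); the class name SpatioTemporalLoss, the weight alpha, and the plain MSE terms are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpatioTemporalLoss(nn.Module):
    """Illustrative spatio-temporal denoising loss (not the paper's metric-derived loss).

    Combines per-frame spatial fidelity with a temporal term that penalizes
    flicker, i.e., frame-to-frame changes in the denoised output that do not
    match those of the reference video. `alpha` is a hypothetical weight.
    """

    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # pred, target: (batch, time, channels, height, width)
        spatial = torch.mean((pred - target) ** 2)  # per-frame MSE

        # Temporal gradients: differences between consecutive frames.
        pred_dt = pred[:, 1:] - pred[:, :-1]
        target_dt = target[:, 1:] - target[:, :-1]
        temporal = torch.mean((pred_dt - target_dt) ** 2)  # flicker penalty

        return spatial + self.alpha * temporal
```

In a training loop, such a loss would be applied to short clips, e.g. `loss = SpatioTemporalLoss()(denoised_clip, reference_clip)`, so that the single-image denoising backbone also receives a gradient signal from temporal consistency across frames.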