Three-dimensional wavelet video coding has been investigated by many scholars because its multiresolution nature can support spatial and temporal scalabilities simultaneously. Most wavelet video coding schemes fall into two categories, "t+2D" and "2D+t".1 The major difference between them is whether the temporal transform is implemented before or after the spatial decomposition. Since motion-compensated temporal filtering (MCTF) is usually used for the temporal transform, the "t+2D" scheme is also called a spatial domain MCTF (SDMCTF) scheme and the "2D+t" scheme is called an in-band MCTF (IBMCTF) scheme. The IBMCTF scheme is particularly attractive because of its inherent spatial scalability and flexible coding framework. In the IBMCTF scheme, the coefficients of each spatial band obtained by 2-D spatial wavelet decomposition carry some perceptual redundancy. At a given bitrate, fully coding such visually redundant coefficients reduces the bits available for the comparatively important coefficients in the band, so the overall perceptual quality of the coded video deteriorates. In fact, redundant coefficients below the just-noticeable-distortion (JND) value can be removed safely, since human eyes cannot sense changes below the JND threshold around a coefficient, owing to their underlying spatial/temporal sensitivity and masking properties.2 From the signal compression viewpoint, removing the visually redundant coefficients increases the coding bits available for the visually important coefficients, improving visual quality. In this paper, we propose a perceptually adaptive preprocessing method for in-band MCTF-based 3-D wavelet video coding. A locally adaptive wavelet domain JND profile is first proposed; it is then incorporated into a preprocessor of the in-band MCTF to remove the visually redundant coefficients before the MCTF of each spatial band is performed.
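For illustration, the 2-D spatial wavelet decomposition that produces the spatial bands discussed above can be sketched as follows. This is a minimal, dependency-light example using the Haar filter for brevity (the scheme described in this paper uses the 9/7 filter); the function name is hypothetical.

```python
import numpy as np

def haar_dwt2(frame):
    """One-level 2-D Haar DWT splitting a frame into LL, HL, LH, HH bands.

    Haar is used here only to keep the sketch short; a practical codec
    would use a biorthogonal filter such as the 9/7.
    """
    f = frame.astype(np.float64)
    # Filter along rows: low-pass = pairwise average, high-pass = pairwise difference.
    lo = (f[:, 0::2] + f[:, 1::2]) / 2.0
    hi = (f[:, 0::2] - f[:, 1::2]) / 2.0
    # Filter along columns to obtain the four orientation bands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # low-low-pass
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # low-high-pass
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # high-low-pass
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # high-high-pass
    return ll, hl, lh, hh
```

Applying `haar_dwt2` recursively to the LL band yields the multiresolution pyramid that gives the IBMCTF scheme its inherent spatial scalability.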
Figure 1 shows the framework of the proposed perceptually adaptive in-band preprocessing scheme for 3-D wavelet video coding. The spatial wavelet transform is first applied to the original video sequence, generating multiple spatial bands. Each spatial band is then preprocessed to remove the visually insignificant coefficients, guided by a wavelet domain JND profile built from both the local property of each wavelet coefficient and the quantization noise visibility of each spatial band. After preprocessing, MCTF is performed to exploit the temporal correlation within each spatial band. For each temporal band of a given spatial band, the spatial transform can be further employed to exploit the spatial correlation. Finally, the residual coefficients, motion vectors, and modes of each spatiotemporal band are coded independently, so that the server can simply drop unnecessary spatiotemporal bands according to the resolution requested by the client. Since human eyes have underlying spatial/temporal sensitivity and masking properties, an appropriate JND model can significantly improve the performance of video coding algorithms. Several methods for finding JND have been proposed, based on intensive research in subbands as well as some work in the image domain.3, 4, 5 Watson et al.3 constructed a model of discrete wavelet transform (DWT) noise visibility thresholds as a function of scale, orientation, and display visual resolution. Their threshold model is based on the psychovisual detection of noise injected into wavelet bands. Because the local property of each wavelet coefficient was not considered, every coefficient in a spatial band shares the same threshold. Based on Watson's threshold model, we formulate a locally adaptive wavelet domain JND profile as given in Eq.
(1), in which Watson's band-wise thresholds are modulated by the local activity factor of each wavelet coefficient:

\mathrm{JND}_{s,\theta}(x,y) = T(s,\theta)\,F_{s,\theta}(x,y),  (1)

where T(s,\theta) is the threshold of the quantization noise visibility of each spatial band, F_{s,\theta}(x,y) is a local activity factor, s denotes the scale of the spatial wavelet transform, \theta denotes the different spatial band after each spatial wavelet transform, with possible values LL, HL, HH, and LH corresponding to the spatial low-low-pass, high-low-pass, high-high-pass, and low-high-pass bands, and x and y denote the coordinates of the coefficient in each spatial band. The threshold T(s,\theta) can be computed as follows3:

\log_{10} T(s,\theta) = \log_{10} a + k\left(\log_{10}\frac{r\,2^{-s}}{g_{\theta}\,f_{0}}\right)^{2},  (2)

where r\,2^{-s} denotes the spatial frequency of the band at scale s and r is determined by the viewing condition (maximum display resolution and viewing distance); in our implementation, different values of r are used for the luminance and the chrominance components. In log coordinates, Eq. (2) is a parabola with a minimum at the spatial frequency g_{\theta} f_{0} and a width determined by k. The optimized parameters a, k, f_{0}, and g_{\theta} follow the corresponding values in Ref. 3. Considering that variance is a good indication of local activity, we define the local activity factor of each wavelet coefficient as follows:

F_{s,\theta}(x,y) = 2 - \frac{1}{1 + \vartheta_{s,\theta}\,\sigma_{s,\theta}^{2}(x,y)},  (3)

where \sigma_{s,\theta}^{2}(x,y) is the local variance in a window centered at (x,y) in the spatial band (s,\theta). The second term in Eq. (3) is the best-known form of the empirical noise visibility function (NVF) in image restoration applications,6 which is the basic prototype for many adaptive regularization algorithms in the image domain.7, 8 Since wavelet coefficients still exhibit strong local activity even in the spatial high-frequency bands, we can apply this prototype in the wavelet domain. Here \vartheta_{s,\theta} is a subband-dependent contrast adjustment parameter computed as in Eq. (4), assuming that the noise can be modeled by a nonstationary Gaussian process7:

\vartheta_{s,\theta} = \frac{D}{\sigma_{\max}^{2}(s,\theta)},  (4)

where \sigma_{\max}^{2}(s,\theta) is the maximum local variance in the spatial band (s,\theta) and D is an empirical parameter. This adjustment factor makes the JND values in highly textured and edged areas larger than those in flat regions of the same subband.
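The band threshold and the NVF-based local activity factor can be sketched as follows. The parameter values are the commonly cited fits from Watson et al. (Ref. 3) and the NVF form of Ref. 7; treat both the values and the function names as illustrative assumptions, not this paper's exact implementation.

```python
import numpy as np

# Watson-style band threshold: log10 T is a parabola in log10 frequency with
# a minimum at g_theta * f0 and a width controlled by k.  The values below
# follow commonly cited fits (assumed, for illustration only).
A, K, F0 = 0.495, 0.466, 0.401
G = {'LL': 1.501, 'HL': 1.0, 'LH': 1.0, 'HH': 0.534}

def band_threshold(scale, orient, r=32.0):
    """Quantization-noise visibility threshold T(s, theta); r is the display
    visual resolution, so r * 2**-scale is the band's spatial frequency."""
    f = r * 2.0 ** (-scale)
    return A * 10.0 ** (K * np.log10(f / (G[orient] * F0)) ** 2)

def local_activity(band, win=3, D=100.0):
    """Local activity factor F = 2 - NVF, where NVF = 1 / (1 + theta * var)
    and theta = D / (maximum local variance in the band)."""
    pad = win // 2
    p = np.pad(band.astype(np.float64), pad, mode='edge')
    # Sliding-window local variance (naive loop, kept simple for clarity).
    var = np.empty(band.shape, dtype=np.float64)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            var[i, j] = p[i:i + win, j:j + win].var()
    theta = D / max(var.max(), 1e-12)
    nvf = 1.0 / (1.0 + theta * var)
    return 2.0 - nvf
```

With this form, flat regions (zero local variance) give F = 1, while the most textured regions approach F = 2, so the JND is roughly doubled where masking is strongest.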
With the above wavelet domain JND, we define the following perceptually adaptive in-band preprocessor:

\hat{c}_{s,\theta}(x,y) = \begin{cases} c_{s,\theta}(x,y), & |c_{s,\theta}(x,y)| \ge \mathrm{JND}_{s,\theta}(x,y) \\ 0, & \text{otherwise,} \end{cases}  (5)

where c_{s,\theta}(x,y) is the coefficient value at coordinate (x,y) in the spatial band (s,\theta). In this preprocessor, a coefficient below the wavelet domain JND value is regarded as visually insignificant and set to zero. Since the JND profile is locally adaptive, the preprocessor removes the visually insignificant coefficients while preserving the visually significant ones. This benefits the subsequent processing of each spatial band, since the coding bits for the visually important coefficients are increased, and thus the overall visual quality of the coded video is improved. We validated the perceptually adaptive in-band preprocessing scheme in the MPEG scalable video coding (SVC) reference software of the wavelet ad hoc group.9 In the experiments, the video is first decomposed into four spatial bands with the 9/7 filter. The coefficients of each spatial band are then perceptually preprocessed with the proposed scheme, in which the local variance is computed over a small window and the contrast adjustment parameter is set to 100. After the preprocessing step, a four-level MCTF with the 5/3 filter is performed in each spatial band. Figure 2 compares the visual quality of decoded Foreman sequences with and without preprocessing; the label of each decoded sequence indicates the image size (e.g., QCIF), frame rate, and bitrate at which the bit-stream of the "Foreman" sequence was decoded. The visual quality is consistently better for the video decoded with the proposed preprocessing method across different resolutions, frame rates, and bitrates. As shown in the figure, some artifacts and noise are removed, making flat areas, such as Foreman's face and neck, look smoother and more comfortable.
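A minimal sketch of this thresholding step, assuming the JND profile has already been computed as a per-coefficient array of the same shape as the band (names are hypothetical):

```python
import numpy as np

def preprocess_band(band, jnd):
    """Zero out coefficients whose magnitude falls below the local JND value;
    all other coefficients pass through unchanged, as in the preprocessor above."""
    out = band.astype(np.float64).copy()
    out[np.abs(out) < jnd] = 0.0
    return out
```

The surviving coefficients are exactly those the JND model deems visually significant; the zeroed ones cost almost nothing in the subsequent entropy coding, freeing bits for the rest.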
In addition, some important detail texture becomes clearer, such as Foreman's mouth, teeth, and ears. To further confirm the visual quality improvement of the proposed scheme, we performed a subjective quality evaluation according to the double stimulus continuous quality scale method in Rec. ITU-R BT.500.10 The mean opinion score (MOS) scales for viewers to vote on the quality after viewing are: excellent (100–80), good (80–60), fair (60–40), poor (40–20), and bad (20–0). Five observers took part in the experiments. The subjective visual quality assessment was performed in a typical laboratory environment, using a 21-in. SONY G520 professional color monitor. The viewing distance was approximately six times the image height. Difference mean opinion scores (DMOS) are calculated as the difference between the MOSs of the original video and the decoded video; the smaller the DMOS, the higher the perceptual quality of the decoded video. Table 1 shows the DMOSs averaged over all five subjects for the decoded Foreman sequences, where schemes I and II denote the IBMCTF without preprocessing and with preprocessing, respectively. The subjective rating is consistently better for the sequences decoded with the proposed scheme, which achieves an average subjective quality gain of 6.71 in DMOS. Table 1: Average objective and subjective performance for the Foreman (300 frames) sequence without preprocessing (scheme I) and with preprocessing (scheme II).
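The DMOS computation described above amounts to a simple difference of opinion scores averaged over subjects. A minimal sketch, with purely hypothetical per-subject scores (not the paper's data):

```python
def dmos(mos_original, mos_decoded):
    """Difference mean opinion score; smaller values mean the decoded
    video is perceptually closer to the original."""
    return mos_original - mos_decoded

# Hypothetical per-subject MOS values for one decoded sequence (illustrative only).
orig_scores = [90.0, 85.0, 88.0, 92.0, 86.0]
dec_scores = [80.0, 78.0, 82.0, 85.0, 79.0]
avg_dmos = sum(dmos(o, d) for o, d in zip(orig_scores, dec_scores)) / len(orig_scores)
```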
The PSNR results for the decoded Foreman sequences are also listed in Table 1. The IBMCTF scheme with the proposed preprocessing has almost the same PSNR performance as the IBMCTF scheme without preprocessing; interestingly, the objective coding performance does not increase. The underlying reason may be that, while the signal distortion of a conventional IBMCTF is introduced by the embedded quantization of wavelet coefficients, the proposed scheme adds distortion from the JND-adaptive preprocessing. Therefore, although removing the visually insignificant coefficients saves bits for coding the visually significant coefficients, it cannot guarantee an improvement of the overall objective quality measured by PSNR, owing to this additional distortion. For the motion-compensated residue preprocessor in the closed-loop predictive coding paradigm,5 a method for determining the optimum parameter has been devised to improve PSNR at a given bitrate for nonscalable video coding. Such an optimization, however, is inapplicable to the open-loop MCTF coding paradigm, which has to adapt to a wide range of bitrates and spatiotemporal resolutions. The proposed preprocessing scheme therefore ensures an improvement of the overall subjective quality rather than the objective quality.

Acknowledgment
This work was supported by the National Natural Science Foundation of China under Grant Nos. 60332030 and 60502034, and by the Shanghai Rising-Star Program under Grant No. 05QMX1435.

References
1. J.-R. Ohm, "Advances in scalable video coding," Proc. IEEE 93(1), 42–56 (2005).
2. N. S. Jayant, J. D. Johnston, and R. J. Safranek, "Signal compression based on models of human perception," Proc. IEEE 81, 1385–1422 (1993). https://doi.org/10.1109/5.241504
3. A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, "Visibility of wavelet quantization noise," IEEE Trans. Image Process. 6(8), 1164–1175 (1997). https://doi.org/10.1109/83.605413
4. I. S. Hontsch and L. J. Karam, "Adaptive image coding with perceptual distortion control," IEEE Trans. Image Process. 11(3), 213–222 (2002).
5. X. K. Yang, W. S. Lin, Z. K. Lu, E. P. Ong, and S. S. Yao, "Motion-compensated residue preprocessing in video coding based on just-noticeable-distortion profile," IEEE Trans. Circuits Syst. Video Technol. 15(6), 742–752 (2005).
6. S. Efstratiadis and A. Katsaggelos, "Adaptive iterative image restoration with reduced computational load," Opt. Eng. 29(12), 1458–1468 (1990). https://doi.org/10.1117/1.2169036
7. S. Voloshynovskiy, A. Herrigel, N. Baumgärtner, and T. Pun, "A stochastic approach to content adaptive digital image watermarking," International Workshop on Information Hiding, 212–236 (1999).
8. L. Song, J. Z. Xu, H. K. Xiong, and F. Wu, "Content adaptive update steps for lifting-based motion compensated temporal filtering," 589–593 (2004).
9. R. Q. Xiong, X. Y. Ji, D. D. Zhang, J. Z. Xu, G. Pau, M. Trocan, and V. Bottreau, "Vidwav wavelet video coding specifications," (2005).
10. ITU-R, "Methodology for the subjective assessment of the quality of television pictures," Rec. ITU-R BT.500-9 (1999).