We develop a full-reference (FR) video quality assessment framework that integrates analysis of space–time slices (STSs) with frame-based image quality measurement (IQA) to form a high-performance video quality predictor. The approach first arranges the reference and test video sequences into a space–time slice representation. To more comprehensively characterize space–time distortions, a collection of distortion-aware maps is computed on each reference–test video pair. These maps are then processed using a standard image quality model, such as peak signal-to-noise ratio (PSNR) or Structural Similarity (SSIM). A simple learned pooling strategy is used to combine the multiple IQA outputs into a final video quality score. This leads to an algorithm called Space–Time Slice PSNR (STS-PSNR), which we thoroughly tested on three publicly available video quality assessment databases and found to deliver significantly elevated performance relative to state-of-the-art video quality models. Source code for STS-PSNR is freely available at.

The many significant leaps in camera, computer, and network technologies over the past decade have led to an explosion of video content being delivered and shared over the Internet. However, methods to measure, monitor, and control the perceptual quality of video content remain imperfect, and continue to evolve. Although subjective tests provide the most accurate assessments of video quality, they are impractical for deployment in most real-world video processing systems. However, objective video quality assessment (VQA) algorithms that correlate well with human judgments are well suited for this purpose.

Objective video quality assessment models can be roughly divided into two categories: (1) those that investigate quality of service (QoS), and (2) those that measure quality of experience (QoE). QoS methods mainly deal with measurable performance factors of delivery platforms (such as telecommunication services), and are designed to strike a balance between system capacity and the needs of the users of the service. These methods typically focus on the performance of the physical system and rarely model the user directly. For example, Adas designed a dynamic network bandwidth allocation strategy that sustains variable-bitrate video traffic, Liu et al. considered the interaction between peak signal-to-noise ratio (PSNR) and compression quantization parameters, and Chong et al. proposed a dynamic transmission bandwidth allocation strategy for real-time variable-bitrate video transport in Asynchronous Transfer Mode (ATM) networks.

The main concern of QoE methods is the degree of user satisfaction with respect to a video service. Because the factors that affect user experience may originate outside the system, such methods often have to deal with different types of input. Raake et al. proposed a HyperText Transfer Protocol (HTTP) adaptive streaming quality assessment model that serves as a component of the P.1203 standard framework. Other work has considered relationships between visual consistency theory and buffer constraints, as well as the free energy principle and how it relates to visual speed perception, to design video coding strategies controlled by video quality. Lin et al. designed a compressed video quality assessment metric that takes three major factors (quantization, motion, and bit allocation) into consideration. Bampis et al. trained a variety of recurrent neural networks to predict QoE. These models consider multiple factors that may affect the visual experience, such as rebuffering measurements and video quality scores. Compared with QoS methods, QoE methods have a wider range of application scenarios and a greater variety of solutions. A basic element of video QoE methods is a VQA model that analyzes spatial–temporal video content. The VQA problem is much more complicated than image quality assessment (IQA): spatial information, temporal information, and the interactions between them all affect the perception of quality, so directly applying an IQA method to individual video frames often leads to poor results.
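The pipeline described above — slicing the videos along space and time, scoring each slice with an IQA model, then pooling the scores — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names `space_time_slices` and `sts_psnr` are hypothetical, PSNR is the only slice-level quality model shown, and a plain average stands in for the paper's learned pooling strategy.

```python
import numpy as np

def psnr(ref, dis, max_val=255.0):
    """PSNR between two equally sized arrays, in dB."""
    mse = np.mean((ref.astype(np.float64) - dis.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def space_time_slices(video):
    """Yield the three slice families of a (T, H, W) grayscale video:
    ordinary spatial frames (x-y), and the two space-time slice
    orientations (t-y and t-x)."""
    t, h, w = video.shape
    for i in range(t):
        yield video[i, :, :]   # x-y frame
    for j in range(w):
        yield video[:, :, j]   # t-y slice
    for k in range(h):
        yield video[:, k, :]   # t-x slice

def sts_psnr(ref_video, dis_video):
    """Hypothetical STS-PSNR sketch: apply PSNR to every reference/test
    slice pair, then pool with a simple average (the paper instead
    learns the pooling weights)."""
    scores = [psnr(r, d) for r, d in
              zip(space_time_slices(ref_video), space_time_slices(dis_video))]
    return float(np.mean(scores))
```

Any frame-based quality model (e.g., SSIM) could be substituted for `psnr` at the slice level, and the final average replaced by a learned combination of the per-family scores, which is the design choice the paper makes.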