Video Quality Research Data

AGH/NTIA Dataset

Subjective experiment AGH/NTIA includes multiple instances of the same stimuli, rated three or six times by the same subject. The goal is to provide insights into the suitability of subject screening methods, the impact of source video reuse on subjective data, and the behavior of subjects when repeatedly rating the same stimuli. The AGH/NTIA dataset was the starting point for later experiments designed around unrepeated scenes (i.e., designs that avoid scene reuse).

The experiment design can be found here. The dataset is distributed on CDVL as a ZIP file that includes videos, individual subject ratings from one lab, and the questionnaire.

AGH/NTIA/Dolby Dataset

Subjective experiment AGH/NTIA/Dolby compares and contrasts the traditional full matrix experiment design with two novel experiment designs that do not re-use scenes. This dataset provides important insights into how to design experiments to study camera impairments and other topics where scene re-use is not possible. AGH/NTIA/Dolby expands on the concepts explored in the AGH/NTIA dataset.

This dataset is distributed on CDVL in six (6) zip files; search for the key word "AGH-NTIA-Dolby". The AGH/NTIA/Dolby dataset ZIP file includes videos, individual subject ratings from three labs, questionnaire, and demographics. The dataset is described here.

American Sign Language (ASL) Videos with English Translations

This set of American Sign Language (ASL) videos was filmed and distributed to enable research into the use of modern video systems for ASL communication.

The videos depict Deaf signers and one teacher at a school for the Deaf. These people are conversing at a natural pace, using American Sign Language (ASL). The goal during filming was to characterize various signing behaviors by seeking a large variety of ages, genders, skin tones, health conditions, and places of origin (e.g., vocabulary). These videos are distributed on CDVL.

CCRIQ Dataset

Traditional 35mm film cameras are no longer the main devices today’s consumers use to capture images. Though the dominant technology has shifted to digital cameras and displays that differ widely in pixel count and resolution, our understanding of the quality impact of these variables lags. The CCRIQ dataset explores the quality impact of resolution. Images were collected from 23 cameras, ranging from a 1 megapixel (MP) mobile phone to a 20 MP digital single-lens reflex camera (DSLR). Subjective ratings from three labs were used to explore the relationship between the camera’s pixel count, the display resolution, and the overall perceived quality.

This dataset is distributed on CDVL; search for the keyword "CCRIQ". The dataset ZIP file contains images and individual subject ratings from three labs. The dataset is described here.

CCRIQ2 and VIME1 Dataset

CCRIQ2 and VIME1 are a pair of image datasets that explore experiment design for no-reference metrics. The first dataset, CCRIQ2, uses a strict experiment design, which is more suitable for camera performance evaluation. The second dataset, VIME1, uses a loose experiment design that resembles the behavior of consumer photographers.

CCRIQ2 and VIME1 are distributed on CDVL as separate records; search for the keywords "CCRIQ2" and "VIME1". Each ZIP file contains images and individual subject ratings from one lab. The dataset is described here.

COCRID Dataset

The Challenging Optical Character Recognition Image Dataset (COCRID) is an experiment designed to challenge optical character recognition with common environment or capture conditions. The design goals of the COCRID dataset are (1) to train no-reference (NR) metrics that track the quality of recognized text, (2) to understand characteristics of images that are particularly difficult for Optical Character Recognition (OCR) algorithms, and (3) to develop a metric that responds strongly to the effects of impaired text. The lessons learned from this dataset will help researchers learn how to design datasets for other computer vision algorithms. To find the dataset, go to CDVL and search for the keyword "COCRID".

ITS 2010 Audiovisual Dataset

ITS 2010 is an audiovisual dataset that explores how audio quality and video quality contribute to overall audiovisual quality. The most important overall conclusion is that only the cross term (audio × video) is needed to predict overall audiovisual quality.
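As a minimal sketch, a cross-term model of this form can be fit by ordinary least squares. The ratings below are invented for illustration and are not taken from the ITS 2010 dataset; the coefficient values are likewise hypothetical.

```python
import numpy as np

# Hypothetical per-clip ratings (NOT from the ITS 2010 dataset):
# audio-only MOS, video-only MOS, and overall audiovisual MOS.
audio_mos = np.array([4.5, 3.0, 2.0, 4.0, 1.5])
video_mos = np.array([4.0, 2.5, 3.5, 1.5, 2.0])
av_mos = np.array([4.3, 2.1, 2.3, 2.0, 1.3])

# Cross-term model: AV = a + b * (A * V), with no separate A or V terms.
cross = audio_mos * video_mos
design = np.column_stack([np.ones_like(cross), cross])
coef, *_ = np.linalg.lstsq(design, av_mos, rcond=None)
a, b = coef

predicted = a + b * cross
rmse = np.sqrt(np.mean((predicted - av_mos) ** 2))
print(f"a={a:.3f}, b={b:.3f}, RMSE={rmse:.3f}")
```

With real data, the fit quality of this two-parameter model can be compared against models that add separate audio and video terms.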

To find the dataset, go to CDVL and search for the key word "ITS 2010". The ZIP file contains audio files, video files, audiovisual files, and individual subject ratings from one lab. The dataset is described here. This paper draws upon data from this dataset and 12 prior experiments.

ITS AV-Synch 2010 Dataset

ITS AV-Synch 2010 is an audiovisual dataset that explores the relationship between audio quality, video quality, and delay in the overall audiovisual quality.

To find the dataset, go to CDVL and search for the key word "ITS AV-Synch 2010". The dataset is divided into four (4) ZIP files that contain audiovisual files and individual subject ratings from one lab. The dataset is described here.

its4s Dataset

Its4s is the first in a series of subjective video and image quality experiments designed specifically for no-reference metric development. Its4s was designed as a proof of concept for many novel design choices, including 4 second video sequences, the "skip" rating option, unrepeated scenes, and a few videos where the original production quality is poor or worse.

To download the dataset, go to CDVL and search for the key word "its4s". The its4s dataset is distributed in two ways: compressed and uncompressed. The compressed distribution contains the videos as viewed and rated. The uncompressed distribution also contains original videos that were not rated. All dataset ZIP files include subject ratings from two labs, although one lab only rated two of six sessions. The experiment design and a link to the dataset can be found here.

its4s2 Dataset

Its4s2 is the second in a series of subjective video and image quality experiments designed specifically for no-reference metric development. Its4s2 contains a diverse selection of images with camera impairments. The experiment design is described here.

To download the dataset, go to CDVL and search for the key word "ITS4S2". The ZIP file contains the photographs, the videos that were viewed and rated by subjects (i.e., each image presented as a 4 second video), attribution for each photograph, and individual subject ratings. The dataset is described here.

its4s3 Dataset

Its4s3 is the third in a series of subjective video and image quality experiments designed specifically for no-reference metric development. Its4s3 contains six sessions, each depicting camera impairments in the context of a first responder application (e.g., fireground, crime scene, search & rescue). The videos in each session were rated by different subjects.

To download the dataset, go to CDVL and search for the key word "ITS4S3". The ZIP file contains the videos, individual subject ratings, attribution information for each video, and subject demographics. The dataset is described here.

its4s4 Dataset

Its4s4 is the fourth in a series of subjective video and image quality experiments designed specifically for no-reference metric development. Its4s4 contains camera pans, real and simulated. The goal is to train a no-reference metric that analyzes the quality impact of camera pan speed. Most of the videos depict first responder applications.

To download the dataset, go to CDVL and search for the keyword "ITS4S4". CDVL distributes two versions of the its4s4 dataset: compressed videos (as seen by subjects) and uncompressed videos (AVI files). The ZIP files also contain attribution for each video, individual subject ratings, and subject demographics. The dataset is described here.

ITSnoise Dataset

ITSnoise is the fifth in a series of subjective video and image quality experiments designed specifically for no-reference metric development. ITSnoise contains images that depict camera capture noise and other low light impairments. This dataset was designed to train a no-reference (NR) metric that analyzes the impact of camera capture noise on quality. The dataset was designed for first responder applications.

To download the dataset, go to CDVL and search for the keyword "ITSnoise". The ZIP file contains the original photographs, images scaled to the test monitor resolution, individual subject ratings, and subject demographics. The dataset is described here.

NTIAcolor

The subjective dataset NTIAcolor is available for download in Microsoft Excel® format or as comma-separated values (CSV) files for Colordatapool1 and Colordatapool2. This dataset is described in the conference paper "A missing factor in objective video quality models: a study of color" by Margaret H. Pinson, presented at the Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics - VPQM 2015, Chandler, AZ, February 5-6, 2015.

Disclaimer: The commercial software program Microsoft Excel® was used to create the spreadsheet containing this dataset for the convenience of the author; such use does not imply recommendation or endorsement by the National Telecommunications and Information Administration, nor does it imply that the software used is necessarily the best available for the particular application or uses.

Public Safety #1 Dataset

The Public Safety #1 (PS1) dataset depicts how first responders use video systems. First responders rated each video on the 5-level Absolute Category Rating (ACR) scale, and then rated whether the depicted video quality was acceptable for public safety applications, on a binary [0, 1] scale. ITS conducted this subjective test in late 2005 / early 2006, using standard definition television systems. The dataset includes frame rate and resolution reduction impairments that remain of interest.

The ps1 dataset is available on CDVL; search for key word "ps1". The ZIP file includes videos and individual subject ratings from first responders. The experiment results are described here.
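A per-clip summary of this two-part rating design can be sketched as below. The ratings are invented for illustration and are not drawn from the ps1 data.

```python
from statistics import mean

# Hypothetical ratings for one video clip (NOT from the ps1 dataset):
# ACR scores on the 1-5 scale, and binary acceptability votes (1 = acceptable).
acr_ratings = [4, 3, 4, 5, 3, 4]
acceptable = [1, 1, 1, 1, 0, 1]

mos = mean(acr_ratings)            # mean opinion score (MOS)
acceptance_rate = mean(acceptable)  # fraction judged acceptable

print(f"MOS = {mos:.2f}, acceptable to {acceptance_rate:.0%} of raters")
```

Comparing MOS against the acceptance rate per clip shows how the ACR scale relates to the acceptability threshold for public safety use.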

Public Safety #2 Dataset

The Public Safety #2 (PS2) dataset builds upon the Public Safety #1 experiment and uses a very similar experiment design. ITS conducted this subjective test in 2006.

The ps2 dataset is available on CDVL; search for keyword "ps2". The ZIP file includes videos, individual subject ratings from first responders, and the experiment design notes. The results of this experiment were not published.

VCRDCI Dataset

The VMAF Compression Ratings that Disregard Camera Impairments (VCRDCI) dataset was designed similarly to a video quality subjective experiment, but the VMAF metric was used to create simulated subjective data. The VCRDCI dataset contains 130 scenes that have been rescaled to eight resolutions and compressed at 10 bitrates with three codecs.

The VCRDCI dataset is available on CDVL; search for keyword "vcrdci". The dataset is too large for a single download; each ZIP file includes part of the videos, with their VMAF ratings. These videos must be converted to uncompressed AVI before they can be used by the NRMetricFramework. The results of this experiment were not published. The experiment design is described here.
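One way to script the conversion to uncompressed AVI is to build an ffmpeg command per clip. This is a hedged sketch: the file names are placeholders, and the yuv420p pixel format is an assumption; consult the NRMetricFramework documentation for the format it actually expects.

```python
from pathlib import Path

def ffmpeg_avi_command(src: Path, dst: Path) -> list[str]:
    """Build an ffmpeg command that decodes src to uncompressed AVI.

    Sketch only: yuv420p is an assumed pixel format, not confirmed by
    the NRMetricFramework documentation.
    """
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "rawvideo",     # store the video stream uncompressed
        "-pix_fmt", "yuv420p",  # assumed pixel format
        str(dst),
    ]

# Placeholder file names, for illustration only.
cmd = ffmpeg_avi_command(Path("clip.mp4"), Path("clip.avi"))
print(" ".join(cmd))
```

The resulting list can be handed to subprocess.run() for each downloaded clip.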

Standards Development Organization (SDO) Datasets

ANSI T1.801.01

In 1995, the American National Standards Institute (ANSI) created a collection of test scenes for subjective assessment and objective assessment of video teleconferencing. This dataset contains standard definition video sequences (also known as 525-line or NTSC). These public domain videos are available on the Consumer Digital Video Library (CDVL). Select the advanced search option and choose dataset "ANSI T1.801.01".

ANSI T1.801.02

In 1996, the American National Standards Institute (ANSI) created terms and definitions for digital video impairments, and an informative video presenting examples. This video is available on the Consumer Digital Video Library (CDVL). Select the advanced search option and choose dataset "ANSI T1.801.02".

FRTV Phase I

The Video Quality Experts Group (VQEG) Full Reference Television (FRTV) Phase I dataset is the first validation experiment conducted by VQEG. This test examined full reference and no reference objective video quality models that predicted the quality of standard definition television (625-line and 525-line). Models were submitted in 1999 and VQEG's Final Report was approved June, 2000.

All video sequences are available to researchers. Models trained on these datasets must not be compared to the models submitted to VQEG for independent validation in 1999; such a comparison is misleading, because the experiments contain mainly source scenes and HRCs that were unknown to the model developers.

The videos and individual subject ratings are available on the Consumer Digital Video Library (CDVL). Search for key word "FRTV".

FRTV Phase II

The VQEG Full Reference Television (FRTV) Phase II dataset built upon the FRTV Phase I experiment. FRTV Phase II has a similar design as FRTV Phase I, but the datasets span a wider range of quality. Models were submitted in 2002 and VQEG's Final Report was approved August 25, 2003. The video sequences cannot be distributed, but individual subject ratings are available on the Consumer Digital Video Library (CDVL). Search for key word "FRTV".

HDTV

The VQEG High Definition Television (HDTV) experiment validated objective video quality models for 1080i 29.97fps, 1080p 29.97fps, 1080i 25fps, and 1080p 25fps. The HDTV Final Report was approved June 10, 2010. Five of six datasets are available on CDVL. Each dataset is distributed as a ZIP file that contains the video files and individual subject ratings. Go to the Consumer Digital Video Library (CDVL) and search for key word "vqeghd".

Hybrid

The VQEG Hybrid Perceptual / Bit-Stream (Hybrid) experiment validated objective video quality models that use both the processed video sequence and bit-stream information. This test examined WVGA/VGA video and also HDTV video. The Hybrid Final Report was approved July 10, 2014. Currently, only individual subject ratings are available on the Consumer Digital Video Library (CDVL). Search for key word "Hybrid".

ITU-R Rec. BT.802

In 1994, the International Telecommunications Union (ITU) created a standard set of video test sequences. These public domain videos are available on the Consumer Digital Video Library (CDVL). Select the advanced search option and choose dataset "ITU-R Rec BT.802, 525-line" or "ITU-R Rec BT.802, 625-line".

Note: When ITS obtained the 625-line sequences on a hard drive from the ITU, some of the 625-line standard sequences were missing. The hard drive also contained a few non-standard sequences. We have not been able to find copies of the missing sequences. ITS does not distribute the non-standard sequences, because their licensing terms are unknown.

RRNR-TV Ratings

The VQEG reduced reference and no reference television (RRNR-TV) dataset was used to validate objective video quality models that predict the quality of standard definition television (625-line and 525-line). Models were submitted in 2008 and VQEG's Final Report was approved June 22, 2009. The videos cannot be distributed, but the individual subject ratings are available on the Consumer Digital Video Library (CDVL). Search for the keyword "RRNR".

T1A1 Dataset

The T1A1 subjective dataset was created in 1993-1994 by a subcommittee of the American National Standards Institute (ANSI)-accredited Alliance for Telecommunications Industry Solutions (ATIS). This dataset contains 625 standard definition (NTSC) video sequences. The videos and mean opinion scores (MOS) are available on the Consumer Digital Video Library (CDVL). Search for keyword "t1a1".

See also Margaret H. Pinson and Arthur Webster, "T1A1 Validation Test Database," VQEG eLetter, vol. 1, no. 2, 2015. Available: VQEG_eLetter_vol01_issue2.pdf