University of Surrey


QESTRAL (Part 3): system and metrics for spatial quality prediction

Jackson, PJB, Dewhirst, M, Conetta, R, Zielinski, S, Rumsey, F, Meares, D, Bech, S and George, S (2008) QESTRAL (Part 3): system and metrics for spatial quality prediction. In: 125th Audio Engineering Society Convention, 2008-10-02 - ?, San Francisco, CA.

PDF: JacksonEtAl_AES08.pdf (239 kB). Available under licence: see the attached licence file, licence.txt (1 kB).

Abstract

The QESTRAL project aims to develop an artificial listener for comparing the perceived quality of a spatial audio reproduction against a reference reproduction. This paper presents implementation details for simulating the acoustics of the listening environment and the listener’s auditory processing. Acoustical modeling is used to calculate binaural signals and simulated microphone signals at the listening position, from which a number of metrics corresponding to different perceived spatial aspects of the reproduced sound field are calculated. These metrics are designed to describe the location, width and envelopment attributes of a spatial sound scene. Each provides a measure of the perceived spatial quality of the impaired reproduction compared to the reference reproduction. As validation, individual metrics computed from listening test signals are shown to match closely the subjective results obtained, and can be used to predict spatial quality for arbitrary signals.
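To illustrate the kind of binaural metric the abstract describes, the sketch below computes an interaural cross-correlation (IACC) coefficient, a standard indicator related to perceived source width and envelopment. This is a minimal illustrative example, not the QESTRAL implementation: the function name, lag range, and signals are assumptions for demonstration only.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the peak of the
    normalized cross-correlation between the two ear signals over
    lags within +/- max_lag_ms (roughly the interaural delay range).
    Values near 1 indicate a narrow, coherent auditory image;
    lower values suggest a wider or more enveloping sound field."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    if denom == 0.0:
        return 0.0
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            num = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            num = np.sum(left[:len(left) + lag] * right[-lag:])
        corrs.append(num / denom)
    return max(corrs)

# Identical signals at both ears give an IACC of 1 (fully coherent);
# independent noise at the two ears gives a much lower value.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
print(round(iacc(tone, tone, fs), 3))           # 1.0
rng = np.random.default_rng(0)
print(iacc(rng.standard_normal(fs),
           rng.standard_normal(fs), fs) < 0.5)  # True
```

In a quality-prediction system of the kind described, such a metric would be evaluated for both the reference and the impaired reproduction, and the difference fed to a model calibrated against listening-test scores.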

Item Type: Conference or Workshop Item (Paper)
Additional Information: The papers at this Convention have been selected on the basis of a submitted abstract and extended precis that have been peer reviewed by at least two qualified anonymous reviewers. This convention paper has been reproduced from the author’s advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional papers may be obtained by sending request and remittance to Audio Engineering Society, 60 East 42nd Street, New York, New York 10165-2520, USA; also see www.aes.org. All rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.
Divisions: Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing
Depositing User: Symplectic Elements
Date Deposited: 27 Jun 2012 09:29
Last Modified: 23 Sep 2013 18:51
URI: http://epubs.surrey.ac.uk/id/eprint/7761


© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800