University of Surrey


A System Architecture for Semantically Informed Rendering of Object-Based Audio

Franck, Andreas, Francombe, Jon, Woodcock, James, Hughes, Richard, Coleman, Philip, Menzies-Gow, Robert, Cox, Trevor J. and Jackson, Philip J. B. (2019) A System Architecture for Semantically Informed Rendering of Object-Based Audio. Journal of the Audio Engineering Society, 67 (7/9), pp. 1-11.

Text: JAES-D-18-00068Revised.pdf - Accepted Version Manuscript
Available under License Creative Commons Attribution.
Download (739kB)

Abstract

Object-based audio promises format-agnostic reproduction and extensive personalization of spatial audio content. However, in practical listening scenarios, such as in consumer audio, ideal reproduction is typically not possible. To maximize the quality of the listening experience, a different approach is required: for example, modifying object metadata to adjust for the reproduction layout or for personalization choices. In this paper we propose a novel system architecture for semantically informed rendering (SIR) that combines object audio rendering with high-level processing of object metadata. In many cases, this processing uses novel, advanced metadata describing the objects to optimally adjust the audio scene to the reproduction system or to listener preferences. The proposed system is evaluated with several adaptation strategies, including a semantically motivated downmix to layouts with few loudspeakers, manipulation of perceptual attributes, perceptual reverberation compensation, and orchestration of mobile devices for immersive reproduction. These examples demonstrate how SIR can significantly improve the media experience and provide advanced personalization controls, for example by maintaining smooth object trajectories on systems with few loudspeakers, or by providing personalized envelopment levels. An example implementation of the proposed system architecture is described and provided as an open, extensible software framework that combines object-based audio rendering and high-level processing of advanced object metadata.
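The abstract's core idea, a chain of high-level metadata adaptation stages running before the low-level object renderer, can be sketched roughly as follows. This is a minimal illustration only, under assumed names and data fields; it is not the API of the paper's actual framework (available at the DOI listed below), and the stereo panner stands in for whatever renderer a real system would use.

```python
import math

def clamp_elevation(obj, layout):
    """Adaptation stage (illustrative): project objects onto the
    horizontal plane when the layout has no height loudspeakers."""
    if not layout.get("has_height", True):
        obj = dict(obj, elevation=0.0)
    return obj

def apply_envelopment_preference(obj, prefs):
    """Adaptation stage (illustrative): scale reverb-type objects by a
    personal envelopment preference (1.0 = unchanged)."""
    if obj.get("type") == "reverb":
        obj = dict(obj, gain=obj["gain"] * prefs.get("envelopment", 1.0))
    return obj

def render_stereo(obj):
    """Stand-in core renderer: constant-power pan between loudspeakers
    at +/-30 degrees azimuth; returns (left_gain, right_gain)."""
    az = max(-30.0, min(30.0, obj["azimuth"]))
    theta = (az + 30.0) / 60.0 * (math.pi / 2.0)  # map to [0, pi/2]
    return (math.cos(theta) * obj["gain"], math.sin(theta) * obj["gain"])

def render_scene(objects, layout, prefs):
    """Run each object through the adaptation chain, then render."""
    stages = [lambda o: clamp_elevation(o, layout),
              lambda o: apply_envelopment_preference(o, prefs)]
    gains = []
    for obj in objects:
        for stage in stages:
            obj = stage(obj)
        gains.append(render_stereo(obj))
    return gains
```

The key design point the paper argues for is this separation: adaptation stages operate only on metadata (positions, gains, semantic labels), so new strategies can be added without touching the renderer itself.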

Item Type: Article
Divisions : Faculty of Arts and Social Sciences > Department of Music and Media
Authors :
Franck, Andreas
Francombe, Jon (j.francombe@surrey.ac.uk)
Woodcock, James
Hughes, Richard
Coleman, Philip (p.d.coleman@surrey.ac.uk)
Menzies-Gow, Robert
Cox, Trevor J.
Jackson, Philip J. B. (P.Jackson@surrey.ac.uk)
Date : 2019
Funders : Engineering and Physical Sciences Research Council (EPSRC)
DOI : 10.17743/jaes.2019.0025
Copyright Disclaimer : © 2019 Audio Engineering Society. This is an Open Access paper, licensed under a Creative Commons Attribution 4.0 International License.
Uncontrolled Keywords : Object-based audio; Spatial audio; Audio rendering
Related URLs :
Additional Information : All software and data are fully available without restriction from the DOI 10.5281/zenodo.3243995.
Depositing User : Clive Harris
Date Deposited : 09 Jul 2019 15:32
Last Modified : 18 Sep 2019 05:10
URI: http://epubs.surrey.ac.uk/id/eprint/852234




© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800