University of Surrey

A System Architecture for Semantically Informed Rendering of Object-Based Audio

Franck, Andreas, Francombe, Jon, Woodcock, James, Hughes, Richard, Coleman, Philip, Menzies, Dylan, Cox, Trevor J., Jackson, Philip J. B. and Fazi, Filippo Maria (2019) A System Architecture for Semantically Informed Rendering of Object-Based Audio. Journal of the Audio Engineering Society, 67 (7/8).

Full text not available from this repository.

Abstract

Object-based audio promises format-agnostic reproduction and extensive personalization of spatial audio content. However, in practical listening scenarios, such as consumer audio, ideal reproduction is typically not possible. To maximize the quality of the listening experience, a different approach is required, for example modification of object metadata to adjust for the reproduction layout or for personalization choices. In this paper we propose a novel system architecture for semantically informed rendering (SIR) that combines object-based audio rendering with high-level processing of object metadata. In many cases, this processing uses novel, advanced metadata describing the objects to optimally adjust the audio scene to the reproduction system or to listener preferences. The proposed system is evaluated with several adaptation strategies, including a semantically motivated downmix to layouts with few loudspeakers, manipulation of perceptual attributes, perceptual reverberation compensation, and orchestration of mobile devices for immersive reproduction. These examples demonstrate how SIR can significantly improve the media experience and provide advanced personalization controls, for example by maintaining smooth object trajectories on systems with few loudspeakers or by providing personalized envelopment levels. An example implementation of the proposed system architecture is described and provided as an open, extensible software framework that combines object-based audio rendering and high-level processing of advanced object metadata.
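The core idea of the architecture, as described in the abstract, is to place high-level metadata processors in front of the audio renderer, so the scene is adapted to the layout or to listener preferences before loudspeaker gains are computed. The sketch below illustrates this pattern in minimal form; all names (`AudioObject`, `clamp_to_layout`, `stereo_pan`, `render`) are illustrative assumptions, not part of the authors' published framework, and the panner is a simplified constant-power stereo law rather than the paper's renderer.

```python
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AudioObject:
    """Minimal object metadata: azimuth in degrees, linear gain."""
    azimuth: float
    gain: float = 1.0

def clamp_to_layout(layout_azimuths):
    """Metadata adaptation stage: pull each object into the angular span
    covered by the available loudspeakers, so a panned trajectory degrades
    smoothly on a sparse layout instead of jumping or vanishing."""
    lo, hi = min(layout_azimuths), max(layout_azimuths)
    def process(obj):
        return replace(obj, azimuth=max(lo, min(hi, obj.azimuth)))
    return process

def stereo_pan(obj, left=-30.0, right=30.0):
    """Constant-power pan between two loudspeakers (simplified sine/cosine law)."""
    frac = (obj.azimuth - left) / (right - left)   # 0 = fully left, 1 = fully right
    frac = max(0.0, min(1.0, frac))
    theta = frac * math.pi / 2
    return (obj.gain * math.cos(theta), obj.gain * math.sin(theta))

def render(objects, processors, panner):
    """Run every metadata processor over the scene, then pan and mix."""
    for p in processors:
        objects = [p(o) for o in objects]
    per_object_gains = [panner(o) for o in objects]
    return [sum(g[ch] for g in per_object_gains) for ch in range(2)]
```

A usage example: `render([AudioObject(azimuth=90.0)], [clamp_to_layout([-30.0, 30.0])], stereo_pan)` first clamps the off-screen object to +30° (the rightmost loudspeaker) and only then computes gains, which is the separation of metadata adaptation from signal rendering that the SIR architecture formalizes.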

Item Type: Article
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing
Authors :
  Franck, Andreas
  Francombe, Jon
  Woodcock, James
  Hughes, Richard
  Coleman, Philip (p.d.coleman@surrey.ac.uk)
  Menzies, Dylan
  Cox, Trevor J.
  Jackson, Philip J. B. (P.Jackson@surrey.ac.uk)
  Fazi, Filippo Maria
Date : 5 June 2019
Funders : EPSRC - Engineering and Physical Sciences Research Council
DOI : 10.17743/jaes.2019.0025
Grant Title : Grant S3A: Future Spatial Audio for an Immersive Listener Experience at Home
Copyright Disclaimer : Copyright 2019 Audio Engineering Society
Related URLs :
Depositing User : Diane Maxfield
Date Deposited : 09 Jul 2019 15:20
Last Modified : 18 Sep 2019 05:10
URI: http://epubs.surrey.ac.uk/id/eprint/852233
