
Weakly Labelled AudioSet Tagging With Attention Neural Networks

Kong, Qiuqiang, Yu, Changsong, Xu, Yong, Iqbal, Turab, Wang, Wenwu and Plumbley, Mark D. (2019) Weakly Labelled AudioSet Tagging With Attention Neural Networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27 (11), pp. 1791-1802.

KongYuXuIWP19-aslp-audioset_accepted_ieee.pdf - Accepted Version Manuscript (3MB)

Abstract

Audio tagging is the task of predicting the presence or absence of sound classes within an audio clip. Previous work on audio tagging focused on relatively small datasets limited to recognising a small number of sound classes. We investigate audio tagging on AudioSet, a dataset of over two million audio clips covering 527 sound classes. AudioSet is weakly labelled: only the presence or absence of sound classes is known for each clip, while their onset and offset times are unknown. To address the weakly labelled audio tagging problem, we propose attention neural networks as a way to attend to the most salient parts of an audio clip. We draw a connection between attention neural networks and multiple instance learning (MIL) methods, and propose decision-level and feature-level attention neural networks for audio tagging. We investigate attention neural networks modelled by different functions, depths and widths. Experiments on AudioSet show that the feature-level attention neural network achieves a state-of-the-art mean average precision (mAP) of 0.369, outperforming the best MIL method (0.317) and Google's deep neural network baseline (0.314). In addition, we find that audio tagging performance on AudioSet embedding features is only weakly correlated with the number of training examples and the label quality of each sound class.
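
As a rough illustration of the decision-level attention pooling described in the abstract (a minimal sketch, not the authors' implementation), the PyTorch snippet below predicts class probabilities for each time segment and pools them with learned, time-normalised attention weights. The 128-dimensional inputs and 527 classes match the AudioSet embedding features and class count; the module and layer names are illustrative.

import torch
import torch.nn as nn

class DecisionLevelAttention(nn.Module):
    """Attention pooling of segment-wise predictions into a clip-level output."""

    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.cla = nn.Linear(in_dim, n_classes)  # segment-wise classifier
        self.att = nn.Linear(in_dim, n_classes)  # segment-wise attention scores

    def forward(self, x):
        # x: (batch, time, in_dim) segment embeddings
        cla = torch.sigmoid(self.cla(x))          # segment predictions in [0, 1]
        att = torch.softmax(self.att(x), dim=1)   # weights normalised over time
        return (att * cla).sum(dim=1)             # clip-level class probabilities

# Hypothetical usage: ten 128-d embedding frames per clip, 527 AudioSet classes.
model = DecisionLevelAttention(in_dim=128, n_classes=527)
clip_probs = model(torch.randn(4, 10, 128))       # shape: (4, 527)

Because the attention weights are learned per class and normalised over time, the model can emphasise the segments in which a sound class is actually active, which is what makes this family of models suited to weakly labelled clips.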

Item Type: Article
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing
Authors :
Kong, Qiuqiang (q.kong@surrey.ac.uk)
Yu, Changsong
Xu, Yong
Iqbal, Turab (t.iqbal@surrey.ac.uk)
Wang, Wenwu (W.Wang@surrey.ac.uk)
Plumbley, Mark D. (m.plumbley@surrey.ac.uk)
Date : November 2019
Funders : EPSRC - Engineering and Physical Sciences Research Council, The China Scholarship Council
DOI : 10.1109/TASLP.2019.2930913
Grant Title : “Making Sense of Sounds”
Copyright Disclaimer : © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication. The final version of record is available at http://dx.doi.org/10.1109/TASLP.2019.2930913
Uncontrolled Keywords : Audio tagging; AudioSet; Attention neural network; Weakly labelled data; Multiple instance learning
Depositing User : Diane Maxfield
Date Deposited : 30 Aug 2019 12:02
Last Modified : 30 Aug 2019 12:02
URI: http://epubs.surrey.ac.uk/id/eprint/852511



