University of Surrey


Sparse Kernel Dictionary Learning

Plumbley, Mark and O’Brien, C (2016) Sparse Kernel Dictionary Learning. In: 11th IMA International Conference on Mathematics in Signal Processing, 2016-12-12 - 2016-12-14, IET Austin Court, Birmingham.

Files:
MISPabstract.pdf - Accepted version manuscript. Available under licence: see the attached licence file. Download (87kB)
okdl.pdf - Accepted version manuscript. Available under licence: see the attached licence file. Download (235kB)
SRI_deposit_agreement.pdf - Licence (PDF). Available under licence: see the attached licence file. Download (33kB)

Abstract

Dictionary Learning (DL) has seen widespread use in signal processing and machine learning. Given a data set, DL seeks a so-called ‘dictionary’ such that the data can be well represented by sparse linear combinations of dictionary elements. The representational power of DL can be extended by kernel mappings, which implicitly map the data to a high-dimensional feature space. In Kernel DL we learn a dictionary in this underlying high-dimensional feature space, which can often model the data more accurately than learning in the original space. Kernel DL is more challenging than the linear case, however, since we no longer have direct access to the dictionary atoms, only to their relationship to the data via the kernel matrix. One strategy is therefore to represent the dictionary as a linear combination of the input data whose coefficients are learned during training [1], relying on the fact that an optimal dictionary lies in the span of the data. A difficulty with Kernel DL is that, given a data set of size N, the full (N × N) kernel matrix must be manipulated at each iteration, and handling such a large dense matrix can be extremely slow for big data sets. Here, we impose an additional sparsity constraint on the coefficients, so that the learned dictionary is a sparse linear combination of the input data. This greatly speeds up learning; moreover, the speed-up grows with the size of the data set and can be tuned via a dictionary-sparsity parameter. The proposed approach thus combines Kernel DL with the ‘double sparse’ DL model [2], in which the learned dictionary is given by a sparse reconstruction over some base dictionary (in this case, the data itself).
We investigate the use of sparse Kernel DL as a feature-learning step for a music transcription task and compare it, in terms of computation time and performance, to another Kernel DL approach based on the K-SVD algorithm [1] (which does not in general lead to sparse dictionaries). Initial experiments show that sparse Kernel DL is significantly faster than the non-sparse Kernel DL approach (a 6× to 8× speed-up, depending on the size of the training data and the sparsity level) while achieving similar performance.
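The representation described in the abstract, in which each dictionary atom is a sparse combination of the training data in feature space (D = Φ(X)A with sparse A), can be sketched as follows. This is an illustrative sketch, not the authors' code: the RBF kernel choice, the simple matching-pursuit sparse coder, and all names are assumptions. The point it demonstrates is that the atoms are never formed explicitly; only the kernel matrix K and the sparse coefficient matrix A are needed, via D^T D = A^T K A and D^T Φ(x) = A^T k_x.

```python
# Hypothetical sketch of sparse Kernel DL quantities -- not the authors' code.
# Dictionary D = Phi(X) @ A lives in feature space; A is sparse.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma=0.5):
    """K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

N, dim, n_atoms = 50, 5, 10
X = rng.standard_normal((N, dim))   # training data
K = rbf_kernel(X, X)                # full (N x N) kernel matrix

# Sparse dictionary coefficients: each atom uses only a few data points,
# which is the 'double sparse' idea applied in feature space.
A = np.zeros((N, n_atoms))
for j in range(n_atoms):
    idx = rng.choice(N, size=3, replace=False)
    A[idx, j] = rng.standard_normal(3)
# Normalise atoms to unit norm in feature space: ||d_j||^2 = a_j^T K a_j.
A /= np.sqrt(np.einsum('ij,ik,kj->j', A, K, A))

def kernel_mp(x, X, A, K, n_nonzero=3, gamma=0.5):
    """Matching pursuit in feature space: approximate Phi(x) by D @ s.

    Only inner products are used: D^T Phi(x) = A^T k_x and D^T D = A^T K A.
    """
    k_x = rbf_kernel(X, x[None, :], gamma)[:, 0]  # k_x[i] = k(X[i], x)
    G = A.T @ K @ A                               # Gram matrix of the atoms
    s = np.zeros(A.shape[1])
    r = A.T @ k_x                                 # correlations with residual
    for _ in range(n_nonzero):
        j = np.argmax(np.abs(r))
        s[j] += r[j]                              # atoms have unit norm
        r -= r[j] * G[:, j]
    return s

s = kernel_mp(X[0], X, A, K)
# Feature-space reconstruction error, again kernel-only:
# ||Phi(x) - D s||^2 = k(x, x) - 2 s^T A^T k_x + s^T (A^T K A) s
k_x = rbf_kernel(X, X[0:1])[:, 0]
err = 1.0 - 2 * s @ (A.T @ k_x) + s @ (A.T @ K @ A) @ s
```

When A is sparse, products such as A.T @ K only touch the kernel rows indexed by the nonzeros of A, which is the source of the speed-up the abstract reports for large data sets.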

Item Type: Conference or Workshop Item (Conference Paper)
Subjects : Electronic Engineering
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering
Authors :
Plumbley, Mark (m.plumbley@surrey.ac.uk)
O’Brien, C
Date : 14 December 2016
Copyright Disclaimer : Copyright 2016 Institute of Mathematics
Contributors :
Institute of Mathematics
Related URLs :
Depositing User : Symplectic Elements
Date Deposited : 08 Mar 2017 12:27
Last Modified : 13 Jul 2017 13:08
URI: http://epubs.surrey.ac.uk/id/eprint/812680


© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800