Discriminative nonnegative dictionary learning using cross-coherence penalties for single channel source separation
Grais, EM and Erdogan, H (2013) Discriminative nonnegative dictionary learning using cross-coherence penalties for single channel source separation
Full text not available from this repository.

Abstract
In this work, we introduce a new discriminative training method for nonnegative dictionary learning. The new method can be used in single channel source separation (SCSS) applications. In SCSS, nonnegative matrix factorization (NMF) is used to learn a dictionary (a set of basis vectors) for each source in the magnitude spectrum domain. The trained dictionaries are then used to decompose the mixed signal and estimate each source. Learning discriminative dictionaries for the source signals can improve separation performance. To make the dictionaries discriminative, we aim to prevent the basis set of one source's dictionary from representing the other sources' signals. We propose to minimize the cross-coherence between the dictionaries of all sources in the mixed signal. We incorporate a simplified cross-coherence penalty into a regularized NMF cost function to simultaneously learn discriminative and reconstructive dictionaries. The new regularized NMF update rules used to discriminatively train the dictionaries are introduced in this work. Experimental results show that discriminative training gives better separation results than conventional NMF. Copyright © 2013 ISCA.
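The abstract describes the approach only at a high level, so the sketch below illustrates one plausible NumPy realization under stated assumptions: a squared-error NMF cost with a cross-coherence penalty of the form lam * ||B1.T @ B2||_F^2 between two dictionaries, multiplicative updates, and Wiener-like masking for reconstruction. The function names, the penalty weight `lam`, and the column-normalization step are illustrative choices and not taken from the paper; the paper's exact regularized update rules may differ.

```python
import numpy as np

EPS = 1e-12  # small constant to avoid division by zero


def train_discriminative_nmf(V1, V2, n_bases=32, lam=0.1, n_iter=200, seed=0):
    """Jointly learn nonnegative dictionaries B1, B2 from the magnitude
    spectrograms V1, V2 (freq x frames) of two training sources, adding a
    cross-coherence penalty lam * ||B1.T @ B2||_F^2 to the squared-error
    NMF cost so the two dictionaries stay mutually incoherent.
    (Illustrative sketch; not the paper's exact update rules.)"""
    rng = np.random.default_rng(seed)
    F = V1.shape[0]
    B1 = rng.random((F, n_bases)) + EPS
    B2 = rng.random((F, n_bases)) + EPS
    W1 = rng.random((n_bases, V1.shape[1])) + EPS
    W2 = rng.random((n_bases, V2.shape[1])) + EPS

    for _ in range(n_iter):
        # Standard multiplicative updates for the activations.
        W1 *= (B1.T @ V1) / (B1.T @ B1 @ W1 + EPS)
        W2 *= (B2.T @ V2) / (B2.T @ B2 @ W2 + EPS)
        # Dictionary updates: the penalty gradient lam * B_other @ B_other.T @ B
        # enters the denominator, shrinking bases that are coherent with the
        # other source's dictionary.
        B1 *= (V1 @ W1.T) / (B1 @ (W1 @ W1.T) + lam * (B2 @ (B2.T @ B1)) + EPS)
        B2 *= (V2 @ W2.T) / (B2 @ (W2 @ W2.T) + lam * (B1 @ (B1.T @ B2)) + EPS)
        # Normalize basis columns so the coherence penalty cannot be reduced
        # by simply shrinking the dictionary scale.
        for B, W in ((B1, W1), (B2, W2)):
            norms = np.linalg.norm(B, axis=0, keepdims=True) + EPS
            B /= norms
            W *= norms.T
    return B1, B2


def separate(V_mix, B1, B2, n_iter=200, seed=0):
    """Decompose a mixture magnitude spectrogram with the fixed, concatenated
    dictionaries and reconstruct each source with a Wiener-like mask."""
    rng = np.random.default_rng(seed)
    B = np.hstack([B1, B2])
    H = rng.random((B.shape[1], V_mix.shape[1])) + EPS
    for _ in range(n_iter):
        H *= (B.T @ V_mix) / (B.T @ B @ H + EPS)
    k = B1.shape[1]
    S1 = B1 @ H[:k]
    S2 = B2 @ H[k:]
    mask1 = S1 / (S1 + S2 + EPS)
    return mask1 * V_mix, (1.0 - mask1) * V_mix
```

Because the penalty gradient is nonnegative, it can be folded into the denominator of the multiplicative dictionary update, which keeps the factors nonnegative while discouraging coherence between the two dictionaries; how the penalty is weighted and simplified in practice is specified in the paper itself.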
| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (UNSPECIFIED) |
| Divisions | Surrey research (other units) |
| Authors | Grais, EM and Erdogan, H |
| Date | 1 January 2013 |
| Depositing User | Symplectic Elements |
| Date Deposited | 17 May 2017 13:54 |
| Last Modified | 23 Jan 2020 18:56 |
| URI | http://epubs.surrey.ac.uk/id/eprint/840736 |