University of Surrey


Sparse analysis model based dictionary learning and signal reconstruction.

Dong, Jing (2016) Sparse analysis model based dictionary learning and signal reconstruction. Doctoral thesis, University of Surrey.

thesis_final.pdf - Version of Record
Restricted to Repository staff only until 29 July 2017.
Available under License Creative Commons Attribution Non-commercial Share Alike.

Download (10MB)
RestrictingAccessThesisForm_signed.pdf
Restricted to Repository staff only until 29 July 2017.
Available under License Creative Commons Attribution Non-commercial Share Alike.

Download (357kB)

Abstract

Sparse representation has been studied extensively over the past decade in a variety of applications, such as denoising, source separation and classification. Earlier efforts focused on the well-known synthesis model, where a signal is decomposed as a linear combination of a few atoms of a dictionary. The analysis model, a counterpart of the synthesis model, received little attention until recent years. It takes a different viewpoint on sparse representation: it assumes that the product of an analysis dictionary and a signal is sparse. Compared with the synthesis model, the analysis model tends to be more expressive, as it can describe a much richer union of subspaces. This thesis focuses on the analysis model and aims to address its two main challenges: analysis dictionary learning (ADL) and signal reconstruction.

In the ADL problem, the dictionary is learned from a set of training samples so that the signals can be represented sparsely under the analysis model, thus offering the potential to fit the signals better than pre-defined dictionaries. In existing ADL algorithms, such as the well-known Analysis K-SVD, the dictionary atoms are updated sequentially. The first part of this thesis presents two novel analysis dictionary learning algorithms that update the atoms simultaneously. Specifically, the Analysis Simultaneous Codeword Optimization (Analysis SimCO) algorithm is proposed by adapting the SimCO algorithm, which was originally developed for the synthesis model. In Analysis SimCO, the dictionary is updated via optimization on manifolds, under $\ell_2$-norm constraints on the dictionary atoms. This framework allows multiple dictionary atoms to be updated simultaneously in each iteration. However, as with existing ADL algorithms, the dictionary learned by Analysis SimCO may contain similar atoms.
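As a concrete illustration of the analysis model described above, the sketch below builds an analysis dictionary with unit $\ell_2$-norm rows and a signal orthogonal to several of those rows, so that the corresponding analysis coefficients are exactly zero (the signal's "cosparsity"). All names and dimensions here are illustrative, not code from the thesis.

```python
import numpy as np

# Minimal sketch of the analysis sparsity model: a signal x is cosparse
# with respect to an analysis dictionary Omega when Omega @ x has many zeros.
rng = np.random.default_rng(0)

p, n = 12, 8          # p analysis atoms (rows of Omega), signal dimension n
Omega = rng.standard_normal((p, n))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)  # unit l2-norm rows

# Construct a signal in the null space of 4 chosen rows, so those
# analysis coefficients vanish exactly.
idx = [0, 1, 2, 3]
_, _, Vt = np.linalg.svd(Omega[idx])  # last rows of Vt span the null space
x = Vt[-1]                            # unit-norm signal orthogonal to Omega[idx]

z = Omega @ x                          # analysis representation of x
cosparsity = int(np.sum(np.abs(z) < 1e-10))  # number of zeros in Omega @ x
print(cosparsity)                      # at least 4 (the four selected rows)
```

The synthesis model would instead write `x` as a combination of a few dictionary columns; here sparsity lives in the transformed domain `Omega @ x`, which is what the ADL algorithms in the thesis exploit.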
To address this issue, Incoherent Analysis SimCO is proposed, which employs a coherence constraint and introduces a decorrelation step to enforce it. The competitive performance of the proposed algorithms, compared with existing ADL methods, is demonstrated in experiments on recovering synthetic dictionaries and removing additive noise from images.

The second part of this thesis studies how to reconstruct signals with learned dictionaries under the analysis model. This is demonstrated on a challenging application: multiplicative noise removal (MNR) in images. Existing sparsity-motivated methods address the MNR problem using pre-defined dictionaries or dictionaries learned under the synthesis model; the potential of analysis dictionary learning for MNR has not been investigated. In this thesis, analysis dictionary learning is applied to MNR, leading to two new algorithms. In the first, a dictionary learned under the analysis model forms a regularization term that preserves image details while removing multiplicative noise. In the second, to further improve the recovery quality of smooth areas in images, a smoothness regularizer is introduced into the reconstruction formulation. This regularizer can be seen as an enhanced Total Variation (TV) term with an additional parameter controlling the level of smoothness. To solve the resulting optimization problem, the Alternating Direction Method of Multipliers (ADMM) is adapted and a relaxation technique is developed to allow variables to be updated flexibly. Experimental results show the superior performance of the proposed algorithms compared with three sparsity- or TV-based algorithms over a range of noise levels.
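The coherence constraint mentioned above can be made concrete with a small sketch. Mutual coherence is the largest absolute inner product between distinct unit-norm atoms, and one pairwise decorrelation step (an INK-SVD-style heuristic, used here purely for illustration; the exact decorrelation step of Incoherent Analysis SimCO may differ) rotates the two most coherent atoms symmetrically apart until their inner product reaches a target value mu0.

```python
import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between distinct unit-norm rows of D."""
    G = np.abs(D @ D.T) - np.eye(len(D))
    return G.max()

def decorrelate_pair(D, mu0):
    """One pairwise decorrelation step (illustrative heuristic): rotate the
    two most coherent atoms within their span so their inner product is mu0."""
    G = np.abs(D @ D.T) - np.eye(len(D))
    i, j = np.unravel_index(np.argmax(G), G.shape)
    a, b = D[i], D[j]
    if a @ b < 0:
        b = -b                           # work with a positive inner product
    u = (a + b) / np.linalg.norm(a + b)  # orthonormal basis of span{a, b}
    v = (a - b) / np.linalg.norm(a - b)
    t = 0.5 * np.arccos(mu0)             # new inner product is cos(2t) = mu0
    D = D.copy()
    D[i] = np.cos(t) * u + np.sin(t) * v
    D[j] = np.cos(t) * u - np.sin(t) * v  # both rows stay unit-norm
    return D

rng = np.random.default_rng(1)
D = rng.standard_normal((6, 4))
D /= np.linalg.norm(D, axis=1, keepdims=True)
D2 = decorrelate_pair(D, mu0=0.5)
print(mutual_coherence(D), mutual_coherence(D2))
```

Repeating such a step over the most coherent pairs, interleaved with the dictionary update, is one way to keep learned atoms from collapsing onto each other; the thesis enforces the constraint inside the manifold-optimization framework of Analysis SimCO.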

Item Type: Thesis (Doctoral)
Subjects : sparse representation, dictionary learning, denoising
Divisions : Theses
Authors : Dong, Jing (Email: dongjing0710@163.com; ORCID: UNSPECIFIED)
Date : 29 July 2016
Funders : EPSRC, MOD
Contributors : Thesis supervisor: Wang, Wenwu (Email: w.wang@surrey.ac.uk; ORCID: UNSPECIFIED)
Depositing User : Jing Dong
Date Deposited : 01 Aug 2016 08:27
Last Modified : 01 Aug 2016 08:27
URI: http://epubs.surrey.ac.uk/id/eprint/811095



