University of Surrey


Image and Video Mining through Online Learning

Gilbert, Andrew and Bowden, Richard (2017) Image and Video Mining through Online Learning. Computer Vision and Image Understanding, 158, pp. 72-84.

Text: image-video-mining.pdf (accepted version manuscript, 10MB)
Restricted to Repository staff only until 2 February 2018.
Available under licence: see the attached licence file.

Text (licence): SRI_deposit_agreement.pdf (33kB)
Available under licence: see the attached licence file.

Abstract

Within the field of image and video recognition, the traditional approach is to split a dataset into fixed training and test partitions. However, labelling the training set is time-consuming, especially as datasets grow in size and complexity. Furthermore, this approach is not applicable to the home user, who wants to group their media intuitively without tirelessly labelling the content. Consequently, we propose a solution similar in nature to an active learning paradigm, where a small subset of media is labelled as semantically belonging to the same class, and machine learning is then used to pull this and other related content together in the feature space. Our interactive approach iteratively clusters classes of images and video. We reformulate it in an online learning framework and demonstrate competitive performance to batch learning approaches using only a fraction of the labelled data. Our approach is based around the concept of an image signature which, unlike a standard bag of words model, can express co-occurrence statistics as well as symbol frequency. We efficiently compute metric distances between signatures despite their inherent high dimensionality, and provide discriminative feature selection that allows common and distinctive elements to be identified from a small set of user-labelled examples. These elements are then accentuated in the image signature to increase similarity between examples and pull correct classes together. By repeating this process in an online learning framework, the accuracy of similarity increases dramatically despite labelling only a few training examples. To demonstrate that the approach is agnostic to media type and features used, we evaluate on three image datasets (15 Scene, Caltech101 and FG-NET), a mixed text and image dataset (ImageTag), a dataset used in active learning (Iris) and three action recognition datasets (UCF11, KTH and Hollywood2). On the UCF11 video dataset, accuracy reaches 86.7% using only 90 labelled examples from a dataset of over 1200 videos, instead of the standard 1122 training videos. The approach is both scalable and efficient: a single iteration over the full UCF11 dataset of around 1200 videos takes approximately 1 minute on a standard desktop machine.
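The pipeline the abstract describes (co-occurrence-aware signatures, discriminative feature selection from a handful of labels, and iterative reweighting to pull a class together) can be illustrated with a minimal sketch. This is not the authors' implementation: the adjacent-pair co-occurrence, the ratio-based weighting and all names (make_signature, discriminative_weights, online_iteration) are illustrative assumptions; the paper's actual criteria may differ.

import numpy as np

def make_signature(symbols, vocab_size):
    # Symbol frequencies plus pairwise co-occurrence counts, so the
    # signature captures more than a plain bag-of-words histogram.
    freq = np.bincount(symbols, minlength=vocab_size).astype(float)
    cooc = np.zeros((vocab_size, vocab_size))
    for a, b in zip(symbols[:-1], symbols[1:]):  # adjacent pairs as co-occurrences (an assumption)
        cooc[a, b] += 1.0
    sig = np.concatenate([freq, cooc.ravel()])
    return sig / max(sig.sum(), 1.0)  # normalise the high-dimensional vector

def discriminative_weights(pos, neg, eps=1e-8):
    # Up-weight dimensions common within the labelled class but rare
    # outside it; a simple ratio test standing in for the paper's
    # feature-selection criterion.
    return (pos.mean(axis=0) + eps) / (neg.mean(axis=0) + eps)

def online_iteration(signatures, pos_idx, neg_idx):
    # One round: learn weights from the few labelled examples, then
    # accentuate those elements in every signature so that members of
    # the same class move closer together in the feature space.
    w = discriminative_weights(signatures[pos_idx], signatures[neg_idx])
    return signatures * w

# Toy usage: six media items over an 8-symbol vocabulary, with two
# labelled positive and two labelled negative examples.
rng = np.random.default_rng(0)
sigs = np.stack([make_signature(rng.integers(0, 8, size=50), 8)
                 for _ in range(6)])
sigs = online_iteration(sigs, pos_idx=[0, 1], neg_idx=[4, 5])
dists = np.linalg.norm(sigs - sigs[0], axis=1)  # metric distances to item 0

Repeating online_iteration with a few new labels each round mirrors the online loop the abstract describes: each pass sharpens the weighting, so similarity within the labelled class improves without ever labelling the full dataset.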

Item Type: Article
Subjects : Electronic Engineering
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing
Authors :
Gilbert, Andrew (A.Gilbert@surrey.ac.uk, ORCID: UNSPECIFIED)
Bowden, Richard (R.Bowden@surrey.ac.uk, ORCID: UNSPECIFIED)
Date : 2 February 2017
Identification Number : 10.1016/j.cviu.2017.02.001
Copyright Disclaimer : © 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Uncontrolled Keywords : Action Recognition, Data Mining, Real-time, Learning, Spatio-temporal, Clustering
Related URLs :
Depositing User : Symplectic Elements
Date Deposited : 01 Feb 2017 16:37
Last Modified : 07 Jul 2017 09:05
URI: http://epubs.surrey.ac.uk/id/eprint/813431
