University of Surrey

The Visual Object Tracking VOT2013 challenge results

Kristan, M, Pflugfelder, R, Leonardis, A, Matas, J, Porikli, F, Cehovin, L, Nebehay, G, Fernandez, G, Vojir, T, Gatt, A, Khajenezhad, A, Salahledin, A, Soltani-Farani, A, Zarezade, A, Petrosino, A, Milton, A, Bozorgtabar, B, Li, B, Chan, CS, Heng, C, Ward, D, Kearney, D, Monekosso, D, Karaimer, HC, Rabiee, HR, Zhu, J, Gao, J, Xiao, J, Zhang, J, Xing, J, Huang, K, Lebeda, K, Cao, L, Maresca, ME, Lim, MK, ELHelw, M, Felsberg, M, Remagnino, P, Bowden, R, Goecke, R, Stolkin, R, Lim, SY, Maher, S, Poullot, S, Wong, S, Satoh, S, Chen, W, Hu, W, Zhang, X, Li, Y and Niu, Z (2013) The Visual Object Tracking VOT2013 challenge results. In: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 98-111.

Text: Kristan_VOT_2013_ICCV_paper.pdf (Submitted version, pre-print). Available under licence: see the attached licence file. Download (403kB)
PDF (licence): SRI_deposit_agreement.pdf. Available under licence: see the attached licence file. Download (33kB)

Abstract

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is the lack of commonly accepted annotated data sets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labelled per-frame by visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).

Item Type: Article
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing
Authors : Kristan, M; Pflugfelder, R; Leonardis, A; Matas, J; Porikli, F; Cehovin, L; Nebehay, G; Fernandez, G; Vojir, T; Gatt, A; Khajenezhad, A; Salahledin, A; Soltani-Farani, A; Zarezade, A; Petrosino, A; Milton, A; Bozorgtabar, B; Li, B; Chan, CS; Heng, C; Ward, D; Kearney, D; Monekosso, D; Karaimer, HC; Rabiee, HR; Zhu, J; Gao, J; Xiao, J; Zhang, J; Xing, J; Huang, K; Lebeda, K; Cao, L; Maresca, ME; Lim, MK; ELHelw, M; Felsberg, M; Remagnino, P; Bowden, R; Goecke, R; Stolkin, R; Lim, SY; Maher, S; Poullot, S; Wong, S; Satoh, S; Chen, W; Hu, W; Zhang, X; Li, Y; Niu, Z (email and ORCID unspecified for all authors)
Date : 1 January 2013
Identification Number (DOI) : 10.1109/ICCVW.2013.20
Uncontrolled Keywords : Science & Technology, Technology, Computer Science, Artificial Intelligence, Engineering, Electrical & Electronic, Computer Science, Engineering, HUMAN MOTION CAPTURE, PROTOCOL, MODEL, FACE
Related URLs :
Additional Information : © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Depositing User : Symplectic Elements
Date Deposited : 17 Nov 2015 18:08
Last Modified : 17 Nov 2015 18:08
URI: http://epubs.surrey.ac.uk/id/eprint/808968



