
Hough-based Tracking of Non-Rigid Objects


Abstract

Online learning has been shown to be a successful approach for tracking previously unknown objects. The major limitation, however, is that most approaches are restricted to a bounding-box representation with a fixed aspect ratio. Thus, they provide only a coarse foreground/background separation and cannot handle highly non-rigid and articulated objects.
In this paper, we present a novel tracking-by-detection approach that overcomes these limitations, based on the Generalized Hough Transform. We extend the idea of Hough Forests to the online domain and couple the center-vote-based detection and back-projection with a rough segmentation based on graph-cuts. This significantly reduces the amount of noisy training samples during online learning and effectively prevents the tracker from drifting. We demonstrate that our method successfully tracks various previously unknown objects, even under heavy non-rigid transformations, partial occlusions, scale changes, and rotations.
Moreover, we compare our tracker to state-of-the-art methods (both bounding-box-based and part-based) and show robust and accurate tracking results on various challenging sequences.
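
As a concrete illustration of the detection step described above, the following minimal sketch (plain NumPy; the function names, uniform vote weights, and interface are assumptions for illustration, not the released C++ implementation) accumulates per-patch center votes and back-projects the patches that support the accumulator maximum:

    import numpy as np

    def hough_vote(positions, offsets, shape):
        # Each foreground patch at (y, x) casts a vote for the object center
        # at (y + dy, x + dx); votes are accumulated in a 2-D map.
        acc = np.zeros(shape)
        for (y, x), (dy, dx) in zip(positions, offsets):
            cy, cx = y + dy, x + dx
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1.0
        return acc

    def back_project(acc, positions, offsets, radius=2):
        # The accumulator maximum gives the detection; patches whose votes
        # land within `radius` of it are the supporting patches that seed
        # the rough (graph-cut based) segmentation.
        cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
        support = [(y, x) for (y, x), (dy, dx) in zip(positions, offsets)
                   if abs(y + dy - cy) <= radius and abs(x + dx - cx) <= radius]
        return (int(cy), int(cx)), support

In the tracker itself, the patch positions and center offsets come from the online Hough Forest evaluated on the current frame, and votes are typically weighted by the forest's foreground confidence rather than counted uniformly as in this sketch.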

This work has been supported by the Austrian FFG project MobiTrick (8258408) under the FIT-IT program.

Tracking Loop

The tracking-loop figure depicts the different stages of our tracking approach (starting from the top-left).
See also: Approach (avi).
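
The segmentation-driven update stage of this loop can be sketched as follows (again an illustrative NumPy snippet with assumed names and a fixed sampling grid, not the released implementation): the rough foreground/background mask from the graph-cut step decides which patches become positive samples, stored together with their offset to the detected object center, and which become negatives, so that few mislabeled samples enter the online forest update.

    import numpy as np

    def collect_training_samples(frame, mask, center, step=4, half=2):
        # `mask` is the rough foreground segmentation of the current frame and
        # `center` the detected object center. Patches on a regular grid are
        # labeled by the mask: foreground patches become positives (keeping
        # their offset to the center as the vote they should cast later),
        # background patches become negatives.
        H, W = frame.shape[:2]
        cy, cx = center
        positives, negatives = [], []
        for y in range(half, H - half, step):
            for x in range(half, W - half, step):
                patch = frame[y - half:y + half + 1, x - half:x + half + 1]
                if mask[y, x]:
                    positives.append((patch, (cy - y, cx - x)))
                else:
                    negatives.append(patch)
        return positives, negatives

Restricting the update to such segmentation-verified samples is what keeps noisy labels out of the online learner and, as stated above, prevents the tracker from drifting.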




Code & Dataset

Code: HoughTrack1.0 (zip).

Collection of the sequences used: Sequences (zip).

Sample Videos

Supplemental Material: Tracking (avi).

Poster Video: Poster Video (mp4).

Related Publications

  1. Martin Godec, Peter M. Roth, and Horst Bischof. Hough-based Tracking of Non-rigid Objects. In Proc. International Conference on Computer Vision (ICCV), 2011. (bib)

Copyright 2010 ICG