Research

Hough-based Tracking of Non-Rigid Objects

Online learning has been shown to be a successful approach for tracking previously unknown objects. The major limitation, however, is that most approaches are restricted to a bounding-box representation with a fixed aspect ratio. Thus, they provide a less accurate foreground/background separation and cannot handle highly non-rigid and articulated objects.
In this paper, we present a novel tracking-by-detection approach based on the Generalized Hough Transform that overcomes these limitations. We extend the idea of Hough Forests to the online domain and couple the center-vote-based detection and back-projection with a rough segmentation based on graph cuts. This significantly reduces the amount of noisy training samples during online learning and effectively prevents the tracker from drifting. We demonstrate that our method successfully tracks various previously unknown objects, even under heavy non-rigid transformations, partial occlusions, scale changes, and rotations.
Moreover, we compare our tracker to state-of-the-art methods (both bounding-box-based and part-based) and show robust and accurate tracking results on various challenging sequences.
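To illustrate the core mechanism, the sketch below shows how center votes can be accumulated into a Hough image, how the votes supporting the detected maximum are back-projected to obtain foreground seeds, and how a rough segmentation is derived from those seeds. This is only a conceptual Python sketch, not the HoughTrack implementation: the votes are assumed to come from some already trained patch model (e.g. an on-line Hough Forest), and OpenCV's GrabCut stands in for the graph-cut segmentation step.

import numpy as np
import cv2

def accumulate_votes(votes, shape):
    # votes: list of (px, py, dx, dy, w) -- patch position, offset to the
    # predicted object center, and vote weight.
    hough = np.zeros(shape, dtype=np.float32)
    for px, py, dx, dy, w in votes:
        x, y = px + dx, py + dy
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            hough[y, x] += w
    # Smooth so that nearby votes support the same maximum.
    return cv2.GaussianBlur(hough, (0, 0), sigmaX=3)

def back_project(votes, center, radius=5):
    # Return the positions of patches whose votes supported the detected
    # center; they serve as foreground seeds for the segmentation.
    cx, cy = center
    return [(px, py) for px, py, dx, dy, w in votes
            if (px + dx - cx) ** 2 + (py + dy - cy) ** 2 <= radius ** 2]

def segment(frame, seeds):
    # Rough segmentation from the seeds; GrabCut plays the role of the
    # graph-cut step separating foreground from background.
    mask = np.full(frame.shape[:2], cv2.GC_PR_BGD, np.uint8)
    for px, py in seeds:
        mask[py, px] = cv2.GC_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)

def track_frame(frame, votes):
    hough = accumulate_votes(votes, frame.shape[:2])
    cy, cx = np.unravel_index(np.argmax(hough), hough.shape)
    seeds = back_project(votes, (cx, cy))
    return (cx, cy), segment(frame, seeds)

The resulting foreground mask can then be used to select clean positive and negative patches for the on-line update, which is what keeps the training samples largely free of background noise.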

For more details see here.


Grid-based Object Detection

The most prominent approach for object detection is the sliding-window technique. In contrast, the idea of classifier grids is to train a separate classifier for each image location. Thus, the complexity of the classification task that has to be handled by a single classifier is dramatically reduced: each classifier only has to discriminate the object-of-interest from the background at one specific location in the image, which, in turn, further reduces the required complexity of the classifier.
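The sketch below illustrates the grid layout: one simple classifier per grid cell, each updated and evaluated only on the image patch at its own fixed position. It is a conceptual sketch only; a linear SGD classifier on raw pixel features stands in for the on-line classifiers actually used, and the grid resolution is an arbitrary choice.

import numpy as np
from sklearn.linear_model import SGDClassifier

class ClassifierGrid:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # One independent classifier per image location (grid cell).
        self.grid = [[SGDClassifier(loss="log_loss") for _ in range(cols)]
                     for _ in range(rows)]

    def _cells(self, image):
        h, w = image.shape[:2]
        ch, cw = h // self.rows, w // self.cols
        for r in range(self.rows):
            for c in range(self.cols):
                patch = image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
                yield r, c, patch.reshape(1, -1).astype(np.float32)

    def update(self, image, labels):
        # labels[r][c] is 1 if the cell contains the object, else 0.
        for r, c, feat in self._cells(image):
            self.grid[r][c].partial_fit(feat, [labels[r][c]], classes=[0, 1])

    def detect(self, image):
        # Each classifier only scores its own location.
        scores = np.zeros((self.rows, self.cols))
        for r, c, feat in self._cells(image):
            scores[r, c] = self.grid[r][c].predict_proba(feat)[0, 1]
        return scores

Since every classifier only ever sees the statistics of its own location, each individual classification problem is far simpler than the one faced by a single sliding-window classifier.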
For more details see here.


Action Recognition

The goal of this project is to develop a human action recognition system suitable for very short sequences. In particular, we estimate Histograms of Oriented Gradients (HOGs) for the current frame as well as for the corresponding dense flow field estimated from two consecutive frames. The descriptors obtained this way are then efficiently represented by the coefficients of a Nonnegative Matrix Factorization (NMF). For classification, we apply either a one-vs-all SVM or an efficient cascaded Linear Discriminant Analysis (CLDA) classifier.
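A rough sketch of this pipeline is given below. It is not the actual system: the HOG and dense-flow routines from OpenCV and the NMF/SVM implementations from scikit-learn merely stand in for the components used in the project, and all parameters are illustrative.

import numpy as np
import cv2
from sklearn.decomposition import NMF
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()  # default 64x128 detection window

def frame_descriptor(prev_gray, gray):
    # Appearance part: HOG of the current frame.
    appearance = hog.compute(cv2.resize(gray, (64, 128))).ravel()
    # Motion part: orientation histogram of the dense flow between two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion, _ = np.histogram(ang, bins=9, range=(0, 2 * np.pi), weights=mag)
    return np.concatenate([appearance, motion])

def train(X, y, n_basis=32):
    # X: non-negative descriptors (n_samples x n_features), y: action labels.
    nmf = NMF(n_components=n_basis, init="nndsvda", max_iter=500)
    H = nmf.fit_transform(X)        # per-sample NMF coefficients
    svm = LinearSVC().fit(H, y)     # one-vs-all linear SVM
    return nmf, svm

def classify(nmf, svm, descriptor):
    h = nmf.transform(descriptor.reshape(1, -1))
    return svm.predict(h)[0]

Because only the current frame and a single flow field are required, a descriptor (and hence a classification) is available after as little as two frames.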

For more details see here.


Person Re-Identification

The goal of this project is to develop an interactive visual search method that efficiently finds a given pedestrian in a large archive of other camera views. A user-selected pedestrian image or sequence is used to obtain initial discriminative features and an initial ranked list of hypothetical matches. A discriminative pedestrian recognition model is then learned in an on-line manner: the user assigns positive and negative labels to the initially retrieved results, and on-line boosting is applied for feature selection. This ensures that the most discriminative features for the current query are selected.
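The interactive loop can be sketched as follows. This is not the actual system: a linear classifier updated via partial_fit stands in for the on-line boosting feature-selection step, and the feature extraction and the user-labelling interface (get_user_labels) are assumed to exist.

import numpy as np
from sklearn.linear_model import SGDClassifier

def rank_gallery(model, gallery_feats):
    # Score every archived pedestrian and return indices, best match first.
    scores = model.decision_function(gallery_feats)
    return np.argsort(-scores)

def interactive_search(query_feat, gallery_feats, get_user_labels, rounds=3):
    model = SGDClassifier(loss="hinge")
    # Initialise with the query as the only positive and random negatives.
    neg = gallery_feats[np.random.choice(len(gallery_feats), 10, replace=False)]
    X0 = np.vstack([query_feat[None, :], neg])
    y0 = np.array([1] + [0] * len(neg))
    model.partial_fit(X0, y0, classes=[0, 1])

    ranking = rank_gallery(model, gallery_feats)
    for _ in range(rounds):
        # The user marks some of the top-ranked results as correct or wrong.
        labelled = get_user_labels(ranking[:20])   # e.g. {gallery index: 0/1}
        if not labelled:
            break
        idx = np.fromiter(labelled.keys(), dtype=int)
        y = np.fromiter(labelled.values(), dtype=int)
        model.partial_fit(gallery_feats[idx], y)   # on-line model update
        ranking = rank_gallery(model, gallery_feats)
    return ranking

Each round of feedback refines the query-specific model, so the ranking adapts to whatever distinguishes the selected pedestrian from the current distractors.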


For more details see here.


On-line Learning from Multiple Cameras

The goal of this project is to train detectors by exploiting the vast amount of unlabeled data provided by the geometric relations of a specific multi-camera setup. Starting from a small number (as small as one!) of positive training samples, we apply a co-training strategy to generate new, highly valuable training samples from the unlabeled data.
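The sketch below illustrates the geometry-guided co-training idea. It is not the actual implementation: the correspondence mapping between the views (e.g. a ground-plane homography), the patch features, and the simple SGD detectors standing in for the boosted detectors are all assumptions.

import numpy as np
from sklearn.linear_model import SGDClassifier

def make_detector(X_init, y_init):
    # Initialise a view-specific detector from the few labelled samples.
    clf = SGDClassifier(loss="log_loss")
    clf.partial_fit(X_init, y_init, classes=[0, 1])
    return clf

def co_train_step(det_src, det_dst, patches_src, patches_dst,
                  correspondence, thresh=0.9):
    # patches_*: dict mapping image location -> feature vector.
    # Confident predictions of the source-view detector label the
    # geometrically corresponding patch in the destination view, which then
    # becomes a new training sample for the destination detector.
    for loc_src, feat_src in patches_src.items():
        p = det_src.predict_proba(feat_src[None, :])[0, 1]
        if p > thresh or p < 1.0 - thresh:      # only confident predictions
            label = int(p > thresh)
            loc_dst = correspondence(loc_src)
            if loc_dst in patches_dst:
                det_dst.partial_fit(patches_dst[loc_dst][None, :], [label])

Running co_train_step in both directions for each frame lets the two views teach each other: confident detections (and confident background) in one camera become new training samples for the detector of the other camera.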
For more details see here.
