
Georg Waltner


E-Mail waltner(at)icg.tugraz.at
Phone +43 316 873-5029
Office Room E3.09 (IE02112)
Inffeldgasse 16, 2nd floor
8010 Graz, Austria
Google Scholar · ResearchGate · DBLP

Short CV

Georg Waltner received an M.Sc. in Telematics from Graz University of Technology in 2014. He is a research assistant at the Institute for Computer Graphics and Vision at Graz University of Technology, where he is pursuing a PhD.

Projects

N-Shot Learning (CVWW'16)

Authors: Waltner, Opitz, Bischof

Abstract.
We propose a model able to learn new object classes with a very limited amount of training samples (i.e. 1 to 5), while requiring near zero runtime cost for learning new object classes. After extracting Convolutional Neural Network (CNN) features, we discriminatively learn embeddings to separate the classes in feature space. The proposed method is especially useful for applications such as dish or logo recognition, where users typically add object classes comprising a wide variety of representations. Another benefit of our method is the low demand for computing power and memory, making it applicable for object classification on embedded devices. We demonstrate on the Food-101 dataset that even one single training example is sufficient to recognize new object classes and considerably improve results over the probabilistic Nearest Class Means (NCM) formulation.
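The key property described above, adding new classes at near-zero runtime cost, follows from the Nearest Class Mean (NCM) formulation: once an embedding is learned offline, registering a class is just averaging its few embedded samples. A minimal sketch of that idea, with a random matrix standing in for the learned embedding and made-up class names:

```python
import numpy as np

# Minimal sketch of the NCM idea behind the paper. The embedding matrix W is
# assumed to be learned discriminatively offline; here a random matrix stands
# in for it, and the "CNN features" are random vectors for illustration.
rng = np.random.default_rng(0)
feat_dim, embed_dim = 64, 16
W = rng.standard_normal((feat_dim, embed_dim))  # stand-in for the learned embedding

class NCMClassifier:
    def __init__(self, W):
        self.W = W
        self.means = {}  # class label -> mean embedded vector

    def add_class(self, label, features):
        # Adding a class costs only one mean over its (few) embedded samples.
        self.means[label] = (features @ self.W).mean(axis=0)

    def predict(self, feature):
        # Assign to the nearest class mean in embedding space.
        z = feature @ self.W
        return min(self.means, key=lambda c: np.linalg.norm(z - self.means[c]))

ncm = NCMClassifier(W)
# One-shot: a single training sample per class is enough to register it.
ncm.add_class("pizza", rng.standard_normal((1, feat_dim)))
ncm.add_class("salad", rng.standard_normal((1, feat_dim)))
print(ncm.predict(rng.standard_normal(feat_dim)))  # one of "pizza"/"salad"
```

Since classification only compares one embedded vector against stored means, memory and compute stay low, which is what makes the approach suitable for embedded devices.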


Fruit Object Recognition (ICIAP'15 Workshop)

Authors: Waltner, Schwarz, Ladstätter, Weber, Luley, Bischof, Lindschinger, Schmid, and Paletta

Abstract.
The prevention of cardiovascular diseases becomes increasingly important, as malnutrition accompanies today's fast-moving society. While most people know the importance of adequate nutrition, information on beneficial foods is often not at hand during daily activities. Decision making on individual dietary management is closely linked to the food shopping decision. Since food shopping often requires fast decision making, due to stressful and crowded situations, the user needs meaningful assistance with clear and rapidly available associations from food items to dietary recommendations. This paper presents first results of the Austrian project MANGO, which develops mobile assistance for instant, situated information access via Augmented Reality (AR) functionality to support the user during everyday grocery shopping. Within a modern diet - the functional eating concept - the user is advised which fruits and vegetables to buy according to his or her individual profile. This specific oxidative stress profile is created through a short in-app survey. Using a built-in image recognition system, the application automatically classifies video-captured food using machine learning and computer vision methodology, such as Random Forest classification and multiple color feature spaces. The user can choose to display additional nutrition information along with alternative proposals. We demonstrate that the application is able to recognize food classes in real time, under real-world shopping conditions, and associates dietary recommendations using situated AR assistance.
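The recognition pipeline mentioned above, color features fed into a Random Forest, can be sketched in a few lines. This toy version uses only RGB histograms on synthetic color patches (the paper combines multiple color spaces on real video frames); class names and patch sizes are made up for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def color_histogram(image, bins=8):
    # Concatenate one per-channel histogram into a single feature vector.
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)
    ])

def make_images(mean_color, n=20):
    # Synthetic stand-ins for video frames: 16x16 RGB patches clustered
    # around a mean color (reddish "apples", yellowish "bananas").
    return np.clip(rng.normal(mean_color, 0.05, size=(n, 16, 16, 3)), 0, 1)

apples = make_images([0.8, 0.1, 0.1])
bananas = make_images([0.9, 0.8, 0.1])
X = np.array([color_histogram(im) for im in np.concatenate([apples, bananas])])
y = ["apple"] * 20 + ["banana"] * 20

# Train the Random Forest on histogram features, then classify a new patch.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
test = color_histogram(make_images([0.8, 0.1, 0.1], n=1)[0])
print(clf.predict([test])[0])
```

Histograms over small patches are cheap to compute and Random Forest inference is fast, which is consistent with the real-time, on-device requirement described in the abstract.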

See also:
  • Paper, Poster, Slides
  • The dataset will be made available. Please check back soon or contact the author (waltner[at]icg.tugraz.at).

Sport Activity Recognition (AAPR'14, DVS'14)

Authors: Waltner, Mauthner and Bischof

Abstract.
Activity recognition in sport is an attractive field for computer vision research. Game, player and team analysis are of great interest, and research topics within this field emerge with the goal of automated analysis. The very specific underlying rules of sports can be used as prior knowledge for the recognition task and present a constrained environment for evaluation. This paper describes the recognition of single-player activities in sport, with special emphasis on volleyball. Starting from a per-frame player-centered activity recognition, we incorporate geometric and contextual information via an activity context descriptor that collects information about all players' activities over a certain timespan relative to the investigated player. The benefit of this context information for single-player activity recognition is evaluated on our new real-life dataset comprising almost 36k annotated frames with 7 activity classes across 6 videos of professional volleyball games.
Incorporating the contextual information improves the average player-centered classification performance on volleyball-specific classes, showing that spatio-temporal context is an important cue for activity recognition.
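The activity context descriptor described above can be sketched as a normalized histogram of the other players' activity labels over a temporal window. The label set, window size, and per-frame data layout below are illustrative assumptions, not the paper's exact encoding:

```python
import numpy as np

# Toy sketch of an activity context descriptor: for one investigated player,
# count the activities of all other players over the last `window` frames and
# normalize the counts. Labels and data layout are made up for illustration.
ACTIVITIES = ["serve", "block", "set", "spike", "stand"]

def context_descriptor(frames, player, window):
    """frames: list of dicts mapping player id -> activity label."""
    counts = np.zeros(len(ACTIVITIES))
    for frame in frames[-window:]:
        for other, activity in frame.items():
            if other != player:
                counts[ACTIVITIES.index(activity)] += 1
    total = counts.sum()
    return counts / total if total else counts

frames = [
    {1: "set", 2: "stand", 3: "stand"},
    {1: "set", 2: "spike", 3: "block"},
]
print(context_descriptor(frames, player=1, window=2))
```

Appending such a descriptor to the per-frame, player-centered features is one simple way spatio-temporal context can enter the classifier.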


Publications

2016

  1. Grid Loss: Detecting Occluded Faces (bib) (supp)
     Michael Opitz, Georg Waltner, Georg Poier, Horst Possegger, and Horst Bischof
     In Proc. European Conference on Computer Vision (ECCV), 2016
  2. BaCoN: Building a Classifier from only N Samples (bib)
     Georg Waltner, Michael Opitz, and Horst Bischof
     In Proc. Computer Vision Winter Workshop (CVWW), 2016

2015

  1. MANGO - Mobile Augmented Reality with Functional Eating Guidance and Food Awareness (bib)
     Georg Waltner, Michael Schwarz, Stefan Ladstätter, Anna Weber, Patrick Luley, Horst Bischof, Meinrad Lindschinger, Irene Schmid, and Lucas Paletta
     In Proc. International Workshop on Multimedia Assisted Dietary Management (MADIMA, in conjunction with ICIAP), 2015
  2. Encoding based Saliency Detection for Videos and Images (bib) (supp)
     Thomas Mauthner, Horst Possegger, Georg Waltner, and Horst Bischof
     In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015

2014

  1. Improved Sport Activity Recognition using Spatio-temporal Context (bib)
     Georg Waltner, Thomas Mauthner, and Horst Bischof
     In Proc. DVS-Conference on Computer Science in Sport (DVS/GSSS), 2014
  2. Indoor Activity Detection and Recognition for Automated Sport Games Analysis (bib)
     Georg Waltner, Thomas Mauthner, and Horst Bischof
     In Proc. Workshop of the Austrian Association for Pattern Recognition (AAPR/OAGM), 2014

  3. Indoor Activity Detection and Recognition for Automated Sport Games Analysis (bib)
     Georg Waltner
     MSc. Thesis, Graz University of Technology, Faculty of Computer Science, 2014


Copyright 2010 ICG
