Running Projects and Fundings

Human Factors Technologies and Services

Human Factors Technologies and Services (FACTS)

Successful markets are populated with products that actually meet user expectations. "Human factors technologies and services" make a key contribution to determining the proper functionality of products, systems, and environment designs: usability analyses, studies on human behaviour, research on motivation, advertising and marketing, and security research are nowadays increasingly applied to the structuring of our habitats - to the design of shop floors, web and product design, interaction design, urban and traffic planning, and guidance systems.

The facial analysis task poses several challenges we need to address:
  1. Robust real-world detection and tracking of faces
  2. Recognition of behavioral patterns and the interpretation of facial actions
Our part in this project includes the development of novel learning algorithms for facial action unit and facial emotion recognition in order to obtain the necessary robustness and flexibility required for real-world usability analyses and studies on human behaviour.


Project partners
Funding

Advanced Learning for Tracking and Detection in Medical Workflow Analysis

Advanced Learning for Tracking and Detection in Medical Workflow Analysis

The focus of this project is to advance object detection and tracking methods by considering a challenging task: automatic surgical work-flow analysis, which means creating a detailed description of the events happening in the operating room during surgeries. This analysis can be used for many applications such as optimizing the work-flow, recovering average work-flows for guiding and evaluating training surgeons, automatic report generation, and ultimately for monitoring in a context-aware operating room.
The project concentrates on tracking and the detection of people and large-scale medical equipment in the operating room, which, in turn, facilitates the global work-flow analysis. For computer vision research, an operating room offers numerous challenges due to harsh conditions concerning illumination, occlusions, and the number and types of objects. It is therefore an ideal test-bed for the development of novel robust algorithms.
The work-flow analysis task poses several challenges we need to address:
  1. Robust people detection and tracking using multiple cameras
  2. Recognition of behavioral patterns and the interpretation of actions
  3. Tracking of different objects (with and without texture)
Our part in this project includes the development of novel on-line learning algorithms for people and object detection and tracking in order to obtain the necessary robustness and flexibility required for medical work-flow analysis.
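The on-line learning idea described above can be sketched in miniature (the feature vectors, matching score, learning rate, and threshold below are all illustrative assumptions, not the project's actual method): a tracker keeps an appearance template and blends in each confident detection, so it adapts to gradual appearance changes without drifting onto clutter.

```python
# Minimal sketch of on-line appearance adaptation for tracking-by-detection.
# Feature vectors, the matching score, and the learning rate are illustrative
# assumptions, not the project's actual method.

def score(template, candidate):
    """Negative squared distance: higher means a better match."""
    return -sum((t - c) ** 2 for t, c in zip(template, candidate))

def track_step(template, candidates, lr=0.2, threshold=-1.0):
    """Pick the best-matching candidate; adapt the template only on
    confident matches to avoid drifting onto background clutter."""
    best = max(candidates, key=lambda c: score(template, c))
    if score(template, best) >= threshold:  # confident detection
        template = [(1 - lr) * t + lr * b for t, b in zip(template, best)]
    return template, best

template = [1.0, 0.0]
frames = [
    [[1.1, 0.1], [5.0, 5.0]],   # target drifts slightly; distractor far away
    [[1.2, 0.2], [4.0, 4.0]],
]
for candidates in frames:
    template, match = track_step(template, candidates)
```

The confidence threshold is the crucial design choice in such self-updating trackers: updating on every frame quickly corrupts the template with background.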


Project partners
Funding
Contact: Horst Possegger



Mobi-Trick

MOBile TRaffic ChecKer

The focus of the project is outdoor mobile computer vision with all of its challenges. Mobile systems need to be compact and energy efficient, and they frequently change locations; therefore, they must be autonomous and perform processing locally. A number of challenges arise from these requirements, for which the project aims to provide solutions:
  1. Being compact, there is little space for a large number of sensors such as laser scanners, radar antennas, and the like. The work in this project will therefore focus on stereo vision, but with two different types of cameras; often a second camera is already available, and stereo information increases detection accuracy.
  2. Each time the system moves, it needs to adapt to the changing situation. This requires adaptive calibration and on-line learning.
  3. Mobile systems often run on batteries, and there is little space for intricate cooling systems. Thus, the system must be designed to be very energy efficient; new approaches for dynamic power management will be explored in the project.
To put the work into context, several applications from the area of traffic surveillance/toll enforcement will be implemented and tested in an application-oriented setting.
A more detailed overview of the project can be found on the project page.

Project partners
Funding
Contact: Martin Godec







Finished Projects

OUTLIER

On-line and Unattended Learning for Implicit Event Recognition

The ever-increasing number of cameras in surveillance systems requires automatic video analysis in order to spot critical situations and alert the monitoring personnel in a timely manner. While most current approaches in this area aim at detecting a large number of specific events across a large set of complex application scenarios, the goal of this project is to go beyond the state of the art by developing novel on-line learning methods that detect unusual situations in a camera-specific scenario. We will exploit the huge amount of data available for a specific camera to reliably learn usual and unusual situations.

In particular the OUTLIER project will carry out basic research in the following areas:


These generic learning algorithms will be applied to the detection of unusual situations in public places and traffic scenarios. Examples are the detection of unusual crowd behavior (upcoming panic, barred emergency exits, or toppled persons), suspicious behavior of pedestrians (e.g., going from one car to another, loitering), vehicles or persons moving in unusual locations, the detection of unusual types of moving objects, and the detection of unusual situations such as accidents, clashes, and collisions. Unlike other approaches, we do not want to model these situations explicitly and individually; instead, we will learn to discriminate usual situations from unusual ones.
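The core idea of learning the usual rather than modeling each unusual event can be sketched as follows (the scalar motion feature and the 3-sigma rule are illustrative assumptions, not the project's method): fit simple statistics of a scene feature from the abundant normal footage of one camera, then flag frames that deviate strongly.

```python
# Minimal sketch of camera-specific "usual vs. unusual" detection: learn the
# mean and spread of a scene feature (e.g. overall motion magnitude) from
# normal footage of one camera, then flag frames that deviate strongly.
# The feature and the 3-sigma threshold are illustrative assumptions.
import statistics

def fit_usual(feature_history):
    """Learn a simple model of 'usual' from observed feature values."""
    return statistics.mean(feature_history), statistics.stdev(feature_history)

def is_unusual(feature, model, k=3.0):
    """Flag values more than k standard deviations from the learned mean."""
    mean, std = model
    return abs(feature - mean) > k * std

# Motion magnitudes observed during normal operation of one camera
normal_motion = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.05, 0.95]
model = fit_usual(normal_motion)
```

Because the model is learned per camera, no event has to be specified in advance; anything the camera has not seen during normal operation becomes a candidate alarm.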

Research partners in the project are JRS, TUG for basic and applied research and Siemens for industrial exploitation of project results.

A more detailed overview of the project can be found on the project page.


Project partners

Funding

Contact: Amir Saffari, Peter M. Roth



KIRAS - MDL

Multimedia Documentation Lab

The potential for integrating multimedia content into the analysis of security-relevant affairs is researched for the first time within the scope of Austrian security research efforts. The project's goal is to harvest audio-visual information from specified open multimedia sources such as TV broadcasts and to allow for integration into existing environments at user sites. The intended use of the system is to allow experts to efficiently generate more realistic and high-quality situation reports in the face of critical situations. Subsequently, these can be employed for communication with the population of Austria and to increase its security and sense of security - target goals of the KIRAS framework. An exemplary prototype will be installed at the Zentraldokumentation of the Austrian Armed Forces. In terms of audio processing, the project builds upon existing technologies of the industrial partner, while visual processing is investigated by ICG as the academic partner and will mainly deal with person/face detection, tracking, and recognition methods.

A more detailed overview of the project can be found on the project page.


Project partners



Funding

Contact: Martin Koestinger, Paul Wohlhart, Peter M. Roth



KIRAS - SECRET

Search for Critical Events in Video Archives

Authorities such as the Ministry of the Interior often need to find certain event or behavior patterns in recordings stored in large video archives. This "forensic" search is computationally extremely expensive and, due to restricted storage permissions, often not even possible. Thus, security-critical events often can neither be prevented nor prosecuted afterwards. To overcome these problems, the aim of the SECRET project is to investigate algorithms, methods, and processes that alleviate the work of security staff in searching for and pursuing events in video archives, and that allow these tasks to be performed more efficiently.

Based on the requirements of the Ministry of the Interior as well as the possibilities of an infrastructure operator, these issues will be examined and a research prototype will be created, in cooperation between AIT and ICG (Graz University of Technology) as research partners and ASE as an industrial partner. Essential research subjects are: (i) detection and segmentation of people, (ii) comparison and retrieval of events across different video streams, and (iii) analysis and learning of behavior patterns. In addition, social-scientific acceptance research will be conducted by the research institute of the Red Cross (FRK). Based on these results, recommendations will be compiled from a social-scientific perspective to optimize usage and minimize potential problems.


Project partners


Funding

Contact: Peter M. Roth





Semi-Supervised Learning for the Analysis of Unstructured Documents


The goal of this project is to develop and evaluate methods for the analysis of textual information. This should be realized using semi-supervised learning methods, which exploit labeled as well as unlabeled data. In particular, existing methods that are already applied in pattern recognition should be adapted so that they can also be applied to textual data. For a practical evaluation, a boosting-based classifier should be compared to SVM and k-NN classifiers, the influence of the amount of labeled/unlabeled data should be studied, and the convergence behavior should be analyzed. Moreover, a fair comparative study between batch and on-line methods will be performed.
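The semi-supervised principle of exploiting unlabeled data alongside labeled data can be sketched with self-training (the nearest-centroid classifier and the confidence margin below are illustrative stand-ins for the boosting methods studied in the project): a classifier fit on the labeled data repeatedly labels its most confident unlabeled examples and re-trains on the enlarged set.

```python
# Minimal sketch of semi-supervised self-training: fit a simple classifier on
# labeled data, then iteratively absorb confidently classified unlabeled
# points and re-train. The nearest-centroid classifier and confidence margin
# are illustrative stand-ins for the project's boosting-based methods.

def centroid(points):
    return [sum(xs) / len(xs) for xs in zip(*points)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def self_train(labeled, unlabeled, margin=0.5):
    """labeled: dict label -> list of points. Moves confidently classified
    unlabeled points into the labeled set until none qualify."""
    unlabeled = list(unlabeled)
    changed = True
    while changed and unlabeled:
        changed = False
        cents = {lab: centroid(pts) for lab, pts in labeled.items()}
        for p in list(unlabeled):
            d = sorted((dist(p, c), lab) for lab, c in cents.items())
            # confident only if clearly closer to one centroid than the next
            if len(d) > 1 and d[1][0] - d[0][0] > margin:
                labeled[d[0][1]].append(p)
                unlabeled.remove(p)
                changed = True
    return labeled

labeled = {"a": [[0.0, 0.0]], "b": [[4.0, 4.0]]}
unlabeled = [[0.5, 0.5], [3.5, 3.5], [2.0, 2.0]]
result = self_train(labeled, unlabeled)
```

Note that the ambiguous point midway between the classes is deliberately left unlabeled; absorbing low-confidence examples is the classic failure mode of self-training.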

Project partners



Duration: 2008 - 2011


Contact: Amir Saffari





Person Re-Identification


The goal of this project is to develop an interactive visual search method that efficiently finds a given pedestrian in a large archive of other camera views. A user-selected pedestrian image or sequence is used to obtain initial discriminative features and an initial ranked list of hypothetical matches. A discriminative pedestrian recognition model is then learned in an on-line manner: the user assigns positive and negative labels to the initially retrieved results, and on-line boosting is used for feature selection. This ensures that the best discriminative features for the current query are selected.
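The feedback loop described above can be sketched as follows (the feature re-weighting rule is an illustrative stand-in for the on-line boosting feature selection used in the project; all data are toy values): user-labeled positives and negatives shift weight toward features that separate the query from the negatives, and the gallery is re-ranked under the new weights.

```python
# Minimal sketch of interactive re-identification: user feedback on retrieved
# results re-weights features so that dimensions which discriminate the query
# from negatives gain influence. The weighting rule is an illustrative
# stand-in for the project's on-line boosting feature selection.

def rank(query, gallery, weights):
    """Rank gallery entries by weighted squared distance to the query."""
    def wdist(x):
        return sum(w * (q - v) ** 2 for w, q, v in zip(weights, query, x))
    return sorted(gallery, key=wdist)

def update_weights(weights, query, positives, negatives, lr=0.5):
    """Increase the weight of features on which negatives differ from the
    query more than positives do (i.e. discriminative features)."""
    new = []
    for i, w in enumerate(weights):
        pos_err = sum((query[i] - p[i]) ** 2 for p in positives) / len(positives)
        neg_err = sum((query[i] - n[i]) ** 2 for n in negatives) / len(negatives)
        new.append(max(1e-6, w + lr * (neg_err - pos_err)))
    return new

query = [1.0, 0.0]                   # feature 0 informative, feature 1 noisy
gallery = [[1.0, 1.2], [0.3, 0.0]]   # true match corrupted in feature 1
weights = [1.0, 1.0]
first = rank(query, gallery, weights)
weights = update_weights(weights, query, positives=[[1.0, 1.2]],
                         negatives=[[0.3, 0.0]])
second = rank(query, gallery, weights)
```

In this toy run the wrong candidate ranks first under uniform weights, and a single round of feedback down-weights the noisy feature enough to bring the true match to the top.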


Project partners



Funding

Contact: Martin Hirzer, Peter M. Roth



Image Processing and Statistical Learning

Image Processing and Statistical Learning

The goal of this project is to study statistical learning methods, in particular boosting and random forests, for computer vision tasks. We are especially interested in on-line learning.
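The on-line ensemble idea can be sketched with on-line bagging in the spirit of Oza and Russell (the threshold-stump base learner and toy data are illustrative assumptions): each incoming example is presented to every base learner k ~ Poisson(1) times, which approximates bootstrap resampling without storing the data stream.

```python
# Minimal sketch of on-line bagging: each incoming example is presented to
# every base learner k ~ Poisson(1) times, approximating bootstrap resampling
# over a data stream. The stump base learner is an illustrative assumption.
import math
import random

class OnlineStump:
    """Stump on one feature: thresholds at the midpoint of running
    per-class means, updated incrementally."""
    def __init__(self, feature):
        self.feature = feature
        self.sums = {0: 0.0, 1: 0.0}
        self.counts = {0: 0, 1: 0}

    def update(self, x, y):
        self.sums[y] += x[self.feature]
        self.counts[y] += 1

    def predict(self, x):
        if min(self.counts.values()) == 0:
            return 0  # untrained: default vote
        m0 = self.sums[0] / self.counts[0]
        m1 = self.sums[1] / self.counts[1]
        above = x[self.feature] > (m0 + m1) / 2
        return 1 if above == (m1 > m0) else 0

def poisson1(rng):
    """Knuth's method for sampling k ~ Poisson(lambda = 1)."""
    threshold, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
ensemble = [OnlineStump(0), OnlineStump(1)]
stream = [([0.0, 0.1], 0), ([0.2, 0.0], 0),
          ([1.0, 0.9], 1), ([0.9, 1.1], 1)] * 5
for x, y in stream:                  # single pass over the stream
    for learner in ensemble:
        for _ in range(poisson1(rng)):
            learner.update(x, y)

def predict(ensemble, x):
    votes = sum(l.predict(x) for l in ensemble)
    return 1 if 2 * votes >= len(ensemble) else 0
```

The Poisson trick is what makes the ensemble truly on-line: no example is ever revisited, yet each learner effectively sees a different bootstrap replicate of the stream.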


Project partners



Contact: Peter M. Roth



EVis

Autonomous Traffic Monitoring by Embedded Vision

The world will witness a tremendous increase in the number of vehicles in the near future. Future traffic monitoring systems will therefore play an important role in improving the throughput and safety of roads. Current monitoring systems capture (usually vision-based) traffic data from a large sensory network; however, they require continuous human supervision, which is extremely expensive. In the EVis research project we investigate the scientific and technological foundations for future autonomous traffic monitoring systems.


Project partners



Funding

Contact: Christian Leistner, Martin Godec



AUTOVISTA

Advanced Unsupervised Monitoring and Visualization of Complex Scenarios

The trend in video surveillance is an ever-increasing number of (digital) cameras for surveying complex scenarios (e.g., crowds). Currently available video surveillance systems cannot cope with this increased complexity; the detection rates are too low and the systems are not reliable enough. This hinders the broad use of automatic surveillance systems. AUTOVISTA proposes to use modern visual computing technologies to advance the state of the art of video surveillance considerably.

In order to cope with the increasing number of cameras, AUTOVISTA will (1) use novel on-line learning techniques to increase the detection rate and decrease the false alarm rate, while the camera adapts in an unsupervised manner to the surveyed scene; besides increased performance, this has the additional advantage that the installation and maintenance effort will be substantially decreased; and (2) exploit novel visualization and interaction techniques to support the human operator. Two complementary visualization modes are proposed; blending smoothly between them allows the operator to maintain coherence. These techniques will enable a single operator to cope simultaneously with a large number of cameras.

AUTOVISTA will tackle the problem of increased people densities and highly cluttered scenes in a novel manner. Instead of relying on single-person detection and tracking (which is not feasible for high-density scenarios), methods will be investigated to handle the crowd as a whole. AUTOVISTA will derive spatio-temporal crowd statistics, describe normal crowd behavior, and use this for unusual event detection.
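The holistic crowd-statistics idea can be sketched as follows (the grid cells, the occupancy feature, and the deviation threshold are all illustrative assumptions): instead of tracking individuals, accumulate per-cell occupancy statistics over a spatial grid during normal operation and flag cells whose current occupancy deviates strongly from the learned norm.

```python
# Minimal sketch of handling the crowd as a whole: learn per-cell occupancy
# statistics over a spatial grid from normal footage, then flag cells that
# deviate strongly. Grid, feature, and threshold are illustrative assumptions.
import statistics

def learn_cell_models(history):
    """history: list of occupancy grids (dict cell -> count) observed during
    normal operation. Returns per-cell (mean, std)."""
    models = {}
    for c in history[0]:
        vals = [grid[c] for grid in history]
        models[c] = (statistics.mean(vals), statistics.pstdev(vals) or 1.0)
    return models

def unusual_cells(grid, models, k=3.0):
    """Cells whose occupancy deviates more than k std devs from the norm."""
    return [c for c, (m, s) in models.items() if abs(grid[c] - m) > k * s]

# Occupancy counts per grid cell during normal crowd flow
history = [
    {"A": 5, "B": 2}, {"A": 6, "B": 1}, {"A": 4, "B": 2}, {"A": 5, "B": 3},
]
models = learn_cell_models(history)
```

Because only aggregate statistics per cell are kept, the approach scales to densities where individual detection and tracking break down.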


Project partners



Funding

Duration: 2007 - 2009


Contact: Peter M. Roth



INT-2: Hi-Moni

Highway Monitoring

The aim of the Highway Monitoring project is to investigate detection and tracking methods based on video and audio sensors, as well as the combination of these two modalities. With the increasing traffic flow on highways, traffic monitoring is becoming more and more important, and existing incident detection systems shall be improved for highway surveillance. Starting from the existing methods and systems, the main goal of the project is to improve and adapt them in order to obtain more robust systems. The overall motivation is to increase traffic safety by decreasing the false alarm rates of detection and tracking algorithms. The project focuses on three scenarios: detection/tracking of wrong-way drivers, detection of traffic jams, and detection of standing vehicles. Social and economic benefits: traffic safety, accident reduction, reduction of fuel consumption, and prediction of traffic jams.

A more detailed overview of the project can be found on the project page.


Project partners



Funding

Contact: Sabine Sternig, Peter M. Roth



FSP/JRP Cognitive Vision


We envision a scenario in which every person will interact in a natural way with artificial devices as an aid in daily life situations such as orientation, search and information retrieval. We refer to this long-term vision as the Personal Assistance (PA) scenario, where a combination of mobile devices and distributed ambient spaces unobtrusively support users by being aware of the present situation and by responding to user requests.

A more detailed overview of the project can be found on the project page.


Project partners



Funding

Duration: 2008 - 2099


Contact: Amir Saffari, Peter M. Roth



Overseas Scholarship Phase II Batch 1


These scholarships are offered to scholars who intend to continue their postgraduate studies abroad in MS-leading-to-PhD programs offered by international universities.

General Objectives of the Program:

Funding

Contact: Inayatullah Khan



Copyright 2010 ICG
