Monocular viewpoint invariant human activity recognition
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2011
Subjects:
Online Access: http://irep.iium.edu.my/43201/ , http://irep.iium.edu.my/43201/1/CIS-RAM_2011.PDF
Summary: One of the grand goals of robotics is to have assistive robots living side-by-side with humans, autonomously assisting them in everyday activities. To interact with humans and assist them, robots must be able to understand and interpret human activities. There is growing interest in the problem of human activity recognition. Despite much progress, most computer vision researchers have narrowed the problem to a fixed camera viewpoint, owing to the inherent difficulty of training their systems across all possible viewpoints. However, since robots and humans are free to move around in the environment, the viewpoint of a robot with respect to a person varies all the time. We therefore attempt to relax the common fixed-viewpoint assumption and present a novel and efficient framework for recognizing and classifying human activities from a monocular video source captured from an arbitrary viewpoint. The proposed framework comprises two stages: human pose recognition and human activity recognition. In the pose recognition stage, an ensemble of pose models performs inference on each video frame. Each pose model estimates the probability that the given frame contains the corresponding pose; over a sequence of frames, each pose model thus produces a time series. In the activity recognition stage, we use a nearest-neighbor classifier, with dynamic time warping as the distance measure, to classify the pose time series. We have built a small-scale proof-of-concept model and performed experiments on three publicly available datasets. The satisfactory experimental results demonstrate the efficacy of our framework and encourage us to develop a full-scale architecture.
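The record does not include code, but the activity-recognition stage described in the summary can be sketched as follows: each frame is represented by the per-pose probabilities produced by the ensemble of pose models, a whole clip becomes a multivariate time series, and a query clip is labelled by its nearest training clip under dynamic time warping. The function names, array shapes, and the Euclidean frame-to-frame distance below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def dtw_distance(a, b):
    """Dynamic time warping distance between two pose-probability time series.

    a: (T1, P) array of per-frame pose probabilities for one clip
    b: (T2, P) array for another clip (lengths T1 and T2 may differ)
    """
    t1, t2 = len(a), len(b)
    # cost[i, j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame distance (assumed Euclidean)
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                 cost[i, j - 1],      # skip a frame of b
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[t1, t2]


def classify_1nn(query, train_series, train_labels):
    """Label a query clip with the activity of its nearest training clip under DTW."""
    distances = [dtw_distance(query, s) for s in train_series]
    return train_labels[int(np.argmin(distances))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: each clip is (num_frames, num_pose_models) of pose probabilities.
    walk = rng.random((40, 8))
    wave = rng.random((55, 8))
    train_series = [walk, wave]
    train_labels = ["walking", "waving"]
    # A perturbed sub-sequence of the "walking" clip should map back to "walking".
    query = walk[5:35] + 0.05 * rng.standard_normal((30, 8))
    print(classify_1nn(query, train_series, train_labels))
```

Because DTW warps the time axis before comparing clips, the classifier tolerates differences in execution speed and clip length, which is why it pairs naturally with the per-frame pose probabilities described above.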