Human Action Recognition System Based On Silhouette

Human action is recognized directly from video sequences. The objective of this work is to recognize various human actions such as running, jumping, and walking. Human action recognition requires some prior knowledge about actions, namely motion estimation and foreground/background estimation. Region of...

Full description

Bibliographic Details
Main Authors: S. Maheswari, P. Arockia Jansi Rani
Format: Text
Language: English
Published: Zenodo 2016
Subjects:
Online Access: https://dx.doi.org/10.5281/zenodo.1125520
https://zenodo.org/record/1125520
id ftdatacite:10.5281/zenodo.1125520
record_format openpolar
institution Open Polar
collection DataCite Metadata Store (German National Library of Science and Technology)
op_collection_id ftdatacite
language English
topic Background subtraction
human silhouette
optical flow
classification
spellingShingle Background subtraction
human silhouette
optical flow
classification
S. Maheswari
P. Arockia Jansi Rani
Human Action Recognition System Based On Silhouette
topic_facet Background subtraction
human silhouette
optical flow
classification
description Human action is recognized directly from video sequences. The objective of this work is to recognize various human actions such as running, jumping, and walking. Human action recognition requires some prior knowledge about actions, namely motion estimation and foreground/background estimation. A region of interest (ROI) is extracted to identify the human in each frame. Then, an optical flow technique is used to extract the motion vectors. Using the extracted features, similarity-measure-based classification is performed to recognize the action. Experiments on the Weizmann database show that the proposed method achieves high accuracy.
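The pipeline the abstract describes — background subtraction to isolate the human silhouette, a feature extracted from it, and similarity-measure classification against stored action templates — can be sketched as follows. This is a minimal illustration, not the paper's implementation: simple thresholded differencing stands in for the background-subtraction stage, a pixel count stands in for the optical-flow features, and all function names and template values are hypothetical.

```python
import numpy as np

def extract_silhouette(frame, background, thresh=25):
    """Background subtraction: mark pixels that differ from the
    background model by more than `thresh` as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def classify_by_similarity(feature, templates):
    """Nearest-template classification: return the action label whose
    stored feature vector is closest in Euclidean distance."""
    return min(templates, key=lambda a: np.linalg.norm(feature - templates[a]))

# Toy demo: a bright "person" block on a dark background.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:6, 3:5] = 200                          # the moving subject
mask = extract_silhouette(frame, background)   # binary silhouette
feature = np.array([mask.sum()], dtype=float)  # crude shape feature

templates = {"walk": np.array([8.0]), "jump": np.array([20.0])}
print(classify_by_similarity(feature, templates))  # → walk
```

In the actual system, the feature vector would come from optical-flow motion vectors (e.g. a Horn–Schunck scheme, as cited in the paper) rather than a raw pixel count, and the templates would be learned from the Weizmann sequences.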
format Text
author S. Maheswari
P. Arockia Jansi Rani
author_facet S. Maheswari
P. Arockia Jansi Rani
author_sort S. Maheswari
title Human Action Recognition System Based On Silhouette
title_short Human Action Recognition System Based On Silhouette
title_full Human Action Recognition System Based On Silhouette
title_fullStr Human Action Recognition System Based On Silhouette
title_full_unstemmed Human Action Recognition System Based On Silhouette
title_sort human action recognition system based on silhouette
publisher Zenodo
publishDate 2016
url https://dx.doi.org/10.5281/zenodo.1125520
https://zenodo.org/record/1125520
op_relation https://dx.doi.org/10.5281/zenodo.1125521
op_rights Open Access
Creative Commons Attribution 4.0
https://creativecommons.org/licenses/by/4.0
info:eu-repo/semantics/openAccess
op_rightsnorm CC-BY
op_doi https://doi.org/10.5281/zenodo.1125520
https://doi.org/10.5281/zenodo.1125521
_version_ 1766062683642134528