Open Access Article

Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique

G. Augusta Kani1 , P. Geetha2 , A. Gomathi3

Section:Research Paper, Product Type: Journal Paper
Volume-06 , Issue-07 , Page no. 1-7, Sep-2018

Online published on Sep 30, 2018

Copyright © G. Augusta Kani, P. Geetha, A. Gomathi . This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: G. Augusta Kani, P. Geetha, A. Gomathi, “Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique,” International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.1-7, 2018.

MLA Style Citation: G. Augusta Kani, P. Geetha, A. Gomathi "Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique." International Journal of Computer Sciences and Engineering 06.07 (2018): 1-7.

APA Style Citation: G. Augusta Kani, P. Geetha, A. Gomathi, (2018). Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique. International Journal of Computer Sciences and Engineering, 06(07), 1-7.

BibTex Style Citation:
@article{Kani_2018,
author = {G. Augusta Kani, P. Geetha, A. Gomathi},
title = {Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {September 2018},
volume = {06},
issue = {07},
month = {9},
year = {2018},
issn = {2347-2693},
pages = {1-7},
url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=457},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
UR - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=457
TI - Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique
T2 - International Journal of Computer Sciences and Engineering
AU - G. Augusta Kani, P. Geetha, A. Gomathi
PY - 2018
DA - 2018/09/30
PB - IJCSE, Indore, INDIA
SP - 1-7
IS - 07
VL - 06
SN - 2347-2693
ER -


Abstract

Human action recognition (HAR) from videos is a significant research focus in the domain of computer vision. Its purpose is to detect and recognize human actions from a sequence of frames. The task faces many difficulties, such as variations in human shape, cluttered backgrounds, moving cameras, illumination conditions, motion, occlusion, and viewpoint changes. Previous work has used either local (hand-crafted) features or deep learned features to recognize actions. In the proposed work, both kinds of features are used for recognition and analysis. The background is first subtracted from the frame sequence using a multi-frame averaging method, and two kinds of feature extraction are then performed. The first is hand-crafted: shape-based features and optical flow features are extracted, and classification is done using a Hidden Markov Model (HMM). The second is deep learned: a Convolutional Neural Network (CNN) extracts features such as lines, edges, color, and texture from the frames at each layer, and classification is done using a Support Vector Machine (SVM). Hand-crafted features achieve good results for human action recognition but fail on large datasets, whereas deep learned features such as CNN features scale to large datasets and yield good recognition results. To improve recognition accuracy, the CNN-based approach is proposed. We compared the CNN and HMM approaches and analyzed the results; CNN achieves better accuracy than HMM.
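The multi-frame averaging background subtraction described in the abstract can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation; the function names and the fixed intensity threshold are assumptions for the example.

```python
import numpy as np

def background_model(frames):
    # Multi-frame averaging: estimate the static background as the
    # per-pixel mean over a sequence of grayscale frames.
    return np.mean(np.stack(frames), axis=0)

def subtract_background(frame, background, threshold=25):
    # Pixels whose absolute difference from the background model
    # exceeds the threshold are marked as foreground (the moving subject).
    diff = np.abs(frame.astype(np.float64) - background)
    return (diff > threshold).astype(np.uint8)

# Toy example: a mostly static sequence with a bright "blob" in one frame.
frames = [np.zeros((4, 4)) for _ in range(10)]
frames[5][1:3, 1:3] = 255           # blob appears briefly in frame 5
bg = background_model(frames)       # blob pixels average to 25.5
mask = subtract_background(frames[5], bg)
print(mask.sum())                   # → 4 foreground pixels
```

Averaging over many frames suppresses transient motion, so a briefly appearing object contributes little to the background model and stands out clearly in the difference image; the resulting binary mask is what the shape-based feature extraction would operate on.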

Key-Words / Index Term

Background Subtraction, Convolutional Neural Network, Canny Edge Detection, Optical Flow, Hidden Markov Model

References

[1] M. Ahmad and Seong-Whan Lee. “HMM-based Human Action Recognition Using Multiview Image Sequences”. IEEE 18th International Conference on Pattern Recognition, vol. 4, pp. 874-879, 2006.
[2] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. “Large-scale Video Classification with Convolutional Neural Networks”. IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2014.
[3] Corinna Cortes and Vladimir Vapnik. “Support-Vector Networks”. Machine Learning, vol. 20, pp. 273–297, 1995.
[4] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie. “Behavior recognition via sparse spatio-temporal features”. IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65–72, 2006.
[5] Figueroa-Angulo J., Savage J., Bribiesca E., Escalante B. and Sucar L. “Compound Hidden Markov Model for Activity Labelling”. International Journal of Intelligence Science, vol. 5, pp. 177-195, 2015.
[6] Fu Jie Huang and Yann LeCun. “Large-scale Learning with SVM and Convolutional Nets for Generic Object Categorization”. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2006.
[7] Imran N. Junejo, Khurrum Nazir Junejo and Zaher Al Aghbari. “Silhouette-based human action recognition using SAX-Shapes”. Springer, vol. 30, pp. 259-269, 2014.
[8] Jie Yang, Jian Cheng and Hanqing Lu. “Human Activity Recognition based on the Blob Features”. IEEE International Conference on Multimedia and Expo, pp. 358-361, 2009.
[9] Limin Wang, Yu Qiao, and Xiaoou Tang. “Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors”. IEEE Conference on Computer Vision and Pattern Recognition, pp. 7–12, 2015.
[10] Maheshkumar H. Kolekar and Deba Prasad Dash. “Hidden Markov Model based human activity recognition using shape and optical flow based features”. IEEE Region 10 Conference (TENCON), pp. 393-396, 2016.
[11] Navid Nourani-Vatani, Paulo V. K. Borges and Jonathan M. Roberts. “A Study of Feature Extraction Algorithms for Optical Flow Tracking”. Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association, 2012.
[12] Palwasha Afsar and Paulo Cortez. “Automatic Human Action Recognition from Video Using Hidden Markov Model”. IEEE 18th International Conference on Computational Science and Engineering, pp. 105-109, 2015.
[13] Sheng Yu, Yun Cheng, Songzhi Su, Guorong Cai, and Shaozi Li. “Stratified pooling based deep convolutional neural networks for human action recognition”. Multimedia Tools and Applications, vol. 76, pp. 13367–13382, 2016.
[14] Xiaojiang Peng, Limin Wang, Xingxing Wang, and Yu Qiao. “Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice”. Elsevier, vol. 150, 2016.
[15] Xin Yuan and Xubo Yang. “A Robust Human Action Recognition System Using Single Camera”. IEEE International Conference on Computational Intelligence and Software Engineering, pp. 1-4, 2009.
[16] Md. Zia Uddin, Nguyen Duc Thanh and Tae-Seong Kim. “Human Activity Recognition via 3-D joint angle features and Hidden Markov models”. Electronics and Telecommunications Research Institute (ETRI) Journal, pp. 713-716, 2010.
[17] Zhenzhong Lan, Shoou-I Yu, Ming Lin, Bhiksha Raj, and Alexander G. Hauptmann. “Local Handcrafted Features Are Convolutional Neural Networks”. International Conference on Learning Representations, pp. 43–56, 2016.
[18] UCF50 Action Recognition Dataset, http://crcv.ucf.edu/data/UCF50.php