Open Access Article

Human Activity Recognition Using LSTM Networks

P. Sharma 1, S. Chaudhary 2, Komal 3

  1. CSE, Amity School of Engineering and Technology, Amity University, Gurugram, India.
  2. CSE, Amity School of Engineering and Technology, Amity University, Gurugram, India.
  3. CSE, Amity School of Engineering and Technology, Amity University, Gurugram, India.

Section: Research Paper, Product Type: Journal Paper
Volume-6 , Issue-3 , Page no. 165-167, Mar-2018

CrossRef-DOI:   https://doi.org/10.26438/ijcse/v6i3.165167

Online published on Mar 30, 2018

Copyright © P. Sharma, S. Chaudhary, Komal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: P. Sharma, S. Chaudhary, Komal, “Human Activity Recognition Using LSTM Networks,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.3, pp.165-167, 2018.

MLA Style Citation: P. Sharma, S. Chaudhary, Komal. "Human Activity Recognition Using LSTM Networks." International Journal of Computer Sciences and Engineering 6.3 (2018): 165-167.

APA Style Citation: P. Sharma, S. Chaudhary, Komal, (2018). Human Activity Recognition Using LSTM Networks. International Journal of Computer Sciences and Engineering, 6(3), 165-167.

BibTex Style Citation:
@article{Sharma_2018,
author = {P. Sharma and S. Chaudhary and Komal},
title = {Human Activity Recognition Using LSTM Networks},
journal = {International Journal of Computer Sciences and Engineering},
volume = {6},
number = {3},
month = {mar},
year = {2018},
issn = {2347-2693},
pages = {165-167},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=1778},
doi = {10.26438/ijcse/v6i3.165167},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY  - JOUR
DO  - 10.26438/ijcse/v6i3.165167
UR  - https://www.ijcseonline.org/full_paper_view.php?paper_id=1778
TI  - Human Activity Recognition Using LSTM Networks
T2  - International Journal of Computer Sciences and Engineering
AU  - Sharma, P.
AU  - Chaudhary, S.
AU  - Komal
PY  - 2018
DA  - 2018/03/30
PB  - IJCSE, Indore, INDIA
SP  - 165
EP  - 167
IS  - 3
VL  - 6
SN  - 2347-2693
ER  -

Abstract

Deep learning has yielded substantial improvements across computer vision and image interpretation tasks. In this paper, a fully automated deep model for human activity recognition is proposed that requires no prior knowledge or hand-crafted features. In the first stage of the proposed method, the model automatically learns the temporal and spatial features needed for recognition. In the second stage, a recurrent memory network (LSTM) classifies the various human actions. The results obtained with the proposed method are compared with state-of-the-art methods, and the outcomes show that it achieves better accuracy than the alternative techniques considered.
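The two-stage pipeline the abstract describes (learned spatio-temporal features over a sensor window, followed by a recurrent memory network that classifies the action) can be sketched as a single-layer LSTM followed by a softmax classifier. The layer sizes, gate layout, and random weights below are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

def lstm_forward(x_seq, W, U, b):
    """Run a single-layer LSTM over a (T, n_in) feature sequence.

    W: (4*n_hidden, n_in), U: (4*n_hidden, n_hidden), b: (4*n_hidden,).
    Gate order in the stacked weights: input, forget, cell, output.
    """
    n_hidden = b.size // 4
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[0 * n_hidden:1 * n_hidden])   # input gate
        f = sigmoid(z[1 * n_hidden:2 * n_hidden])   # forget gate
        g = np.tanh(z[2 * n_hidden:3 * n_hidden])   # candidate cell state
        o = sigmoid(z[3 * n_hidden:4 * n_hidden])   # output gate
        c = f * c + i * g                           # memory-cell update
        h = o * np.tanh(c)                          # hidden state
    return h  # final hidden state summarises the whole window

def classify_window(x_seq, params):
    """Map one sensor window to activity-class probabilities (softmax)."""
    W, U, b, V, bv = params
    h = lstm_forward(x_seq, W, U, b)
    logits = V @ h + bv
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy usage: 128 time steps of 9 inertial channels (as in the UCI HAR
# smartphone dataset of reference [10]), 6 activity classes, random weights.
rng = np.random.default_rng(0)
n_in, n_hidden, n_classes, T = 9, 32, 6, 128
params = (rng.normal(0, 0.1, (4 * n_hidden, n_in)),
          rng.normal(0, 0.1, (4 * n_hidden, n_hidden)),
          np.zeros(4 * n_hidden),
          rng.normal(0, 0.1, (n_classes, n_hidden)),
          np.zeros(n_classes))
probs = classify_window(rng.normal(size=(T, n_in)), params)
```

In a trained system the weights would of course be learned by backpropagation through time rather than drawn at random; the sketch only shows how the recurrent memory carries information across the window before the final classification step.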

Key-Words / Index Term

MLP, LSTM, TDR

References

[1] G. Antonini, M. Bierlaire, M. Weber, "Discrete choice models of pedestrian walking behaviour", Transportation Research Part B: Methodological, Vol.40, Issue.8, pp.667-687, 2006.
[2] J. Azorin-Lopez, M. Saval-Calvo, A. Fuster-Guillo, A. Oliver-Albert, "A predictive model for recognizing human behaviour based on trajectory representation", 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, pp.1494-1501, 2014.
[3] A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet classification with deep convolutional neural networks", Advances in Neural Information Processing Systems, pp.1097-1105, 2012.
[4] A. Graves, A. Mohamed, G. Hinton, "Speech recognition with deep recurrent neural networks", IEEE International Conference on Acoustics, Speech, and Signal Processing, pp.6645-6649, 2013.
[5] D. Weinland, R. Ronfard, E. Boyer, "A survey of vision-based methods for action representation, segmentation and recognition", Computer Vision and Image Understanding, Vol.115, Issue.2, pp.224-241, 2011.
[6] K. Cho, X. Chen, "Classifying and visualizing motion capture sequences using deep neural networks", International Conference on Computer Vision Theory and Applications, pp.122-130, 2014.
[7] G. Lefebvre, S. Berlemont, F. Mamalet, C. Garcia, "BLSTM-RNN based 3D gesture classification", Proceedings of the International Conference on Artificial Neural Networks and Machine Learning, pp.381-388, 2013.
[8] Y. Du, W. Wang, L. Wang, "Hierarchical recurrent neural network for skeleton based action recognition", IEEE Conference on Computer Vision and Pattern Recognition, pp.1110-1118, 2015.
[9] V. Pham, T. Bluche, C. Kermorvant, J. Louradour, "Dropout improves recurrent neural networks for handwriting recognition", International Conference on Frontiers in Handwriting Recognition, pp.285-290, 2014.
[10] D. Anguita, A. Ghio, L. Oneto, X. Parra, J. L. Reyes-Ortiz, "A Public Domain Dataset for Human Activity Recognition Using Smartphones", 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2013), Bruges, Belgium, 24-26 April 2013.