Open Access Article

Entry-Exit event detection from video frames

Vinay Kumar V1, P Nagabhushan2

  1. Department of Studies in Computer Science, University of Mysore, Mysuru, India.
  2. Department of Studies in Computer Science, University of Mysore, Mysuru, India.

Correspondence should be addressed to: vkumar.vinay@ymail.com.

Section: Research Paper, Product Type: Journal Paper
Volume-6, Issue-2, Page no. 112-118, Feb-2018

CrossRef-DOI: https://doi.org/10.26438/ijcse/v6i2.112118

Published online on Feb 28, 2018

Copyright © Vinay Kumar V, P Nagabhushan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

View this paper at Google Scholar | DPI Digital Library

How to Cite this Paper

IEEE Style Citation: Vinay Kumar V, P Nagabhushan, “Entry-Exit event detection from video frames,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.2, pp.112-118, 2018.

MLA Style Citation: Vinay Kumar V, P Nagabhushan. "Entry-Exit event detection from video frames." International Journal of Computer Sciences and Engineering 6.2 (2018): 112-118.

APA Style Citation: Vinay Kumar V, P Nagabhushan (2018). Entry-Exit event detection from video frames. International Journal of Computer Sciences and Engineering, 6(2), 112-118.

BibTeX Style Citation:
@article{V_2018,
  author = {Vinay Kumar V and P Nagabhushan},
  title = {Entry-Exit event detection from video frames},
  journal = {International Journal of Computer Sciences and Engineering},
  issue_date = {February 2018},
  volume = {6},
  number = {2},
  month = feb,
  year = {2018},
  issn = {2347-2693},
  pages = {112-118},
  url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=1709},
  doi = {https://doi.org/10.26438/ijcse/v6i2.112118},
  publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY  - JOUR
DO  - https://doi.org/10.26438/ijcse/v6i2.112118
UR  - https://www.ijcseonline.org/full_paper_view.php?paper_id=1709
TI  - Entry-Exit event detection from video frames
T2  - International Journal of Computer Sciences and Engineering
AU  - Vinay Kumar V
AU  - P Nagabhushan
PY  - 2018
DA  - 2018/02/28
PB  - IJCSE, Indore, INDIA
SP  - 112
EP  - 118
IS  - 2
VL  - 6
SN  - 2347-2693
ER  -


Abstract

Video surveillance has been a ubiquitous aspect of daily life for the past few decades. However, certain places demand individual privacy, such as washrooms, changing rooms, and baby-feeding rooms at airports, where cameras cannot be installed or are restricted, and this raises concerns about public safety and security. The objective of our research is to design and analyze processes and conceptual models to automate Entry-Exit surveillance of people entering or exiting such camera-restricted areas. Toward this objective, the work in this paper detects Entry-Exit events from video frames captured at the entrances of camera-restricted areas by analyzing variations in the RGB color histograms of the frames using histogram distance measures. A few grids in the camera-view scene are selected through continuous learning and extracted to determine the events occurring in the scene, thereby reducing computation time. Temporal analysis of these grids confirms that an event has occurred and classifies it as Entry, Exit, or Miscellaneous. Experiments are conducted on standard data sets such as the SBM datasets, adapted to our scenario, as well as on our own data sets captured in real time under a few assumptions, to test the proposed techniques.
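The abstract describes a two-stage mechanism: per-grid RGB histograms are compared across frames with a histogram distance measure to flag activity, and the flagged grids are then analyzed over time to classify the event. The following minimal Python/OpenCV sketch illustrates only the first stage and is not the authors' implementation: the grid coordinates, the choice of Bhattacharyya distance, the change threshold, the helper names, and the file name entrance.mp4 are illustrative assumptions.

# Grid-wise RGB histogram change detection (illustrative sketch only).
import cv2
import numpy as np

# Hypothetical grid cells (x, y, w, h); in the paper these are selected by continuous learning.
GRIDS = [(0, 120, 80, 80), (80, 120, 80, 80), (160, 120, 80, 80)]
CHANGE_THRESHOLD = 0.5  # assumed histogram-distance threshold for "change"

def rgb_histogram(patch, bins=32):
    # Concatenated per-channel histogram of a BGR patch, normalized to sum to 1.
    hists = [cv2.calcHist([patch], [c], None, [bins], [0, 256]) for c in range(3)]
    hist = np.concatenate(hists).astype(np.float32)
    return hist / (hist.sum() + 1e-9)

def changed_grids(prev_frame, frame):
    # Return indices of grid cells whose histogram distance exceeds the threshold.
    flagged = []
    for i, (x, y, w, h) in enumerate(GRIDS):
        d = cv2.compareHist(rgb_histogram(prev_frame[y:y + h, x:x + w]),
                            rgb_histogram(frame[y:y + h, x:x + w]),
                            cv2.HISTCMP_BHATTACHARYYA)
        if d > CHANGE_THRESHOLD:
            flagged.append(i)
    return flagged

cap = cv2.VideoCapture("entrance.mp4")  # assumed clip recorded at an entrance
ok, prev = cap.read()
frame_idx = 0
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    active = changed_grids(prev, frame)
    if active:
        print("frame %d: activity in grids %s" % (frame_idx, active))
    prev = frame
cap.release()

The temporal order in which the flagged grids change over successive frames is what the paper's temporal analysis (cf. the regression-lines keyword) would then examine to confirm an event and label it as Entry, Exit, or Miscellaneous.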

Key-Words / Index Term

Computer vision, video surveillance, camera prohibited areas, color histograms, regression lines

[31] M. Camplani, L. Maddalena, G. Moyà Alcover, A. Petrosino, L. Salgado, A Benchmarking Framework for Background Subtraction in RGBD videos, in S. Battiato, G. Gallo, G.M. Farinella, M. Leo (Eds), New Trends in Image Analysis and Processing-ICIAP 2017 Workshops, Lecture Notes in Computer Science, Springer, 2017