Open Access Article

Content Based Video Retrieval System Using Video Indexing

Jaimon Jacob, Sudeep Ilayidom, V.P. Devassia

Section: Survey Paper, Product Type: Journal Paper
Volume-7 , Issue-4 , Page no. 478-782, Apr-2019

CrossRef-DOI:   https://doi.org/10.26438/ijcse/v7i4.478782

Online published on Apr 30, 2019

Copyright © Jaimon Jacob, Sudeep Ilayidom, V.P. Devassia. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper

IEEE Style Citation: Jaimon Jacob, Sudeep Ilayidom, V.P. Devassia, “Content Based Video Retrieval System Using Video Indexing,” International Journal of Computer Sciences and Engineering, Vol.7, Issue.4, pp.478-782, 2019.

MLA Style Citation: Jaimon Jacob, Sudeep Ilayidom, V.P. Devassia. "Content Based Video Retrieval System Using Video Indexing." International Journal of Computer Sciences and Engineering 7.4 (2019): 478-782.

APA Style Citation: Jaimon Jacob, Sudeep Ilayidom, V.P. Devassia (2019). Content Based Video Retrieval System Using Video Indexing. International Journal of Computer Sciences and Engineering, 7(4), 478-782.

BibTeX Style Citation:
@article{Jacob_2019,
author = {Jaimon Jacob, Sudeep Ilayidom, V.P. Devassia},
title = {Content Based Video Retrieval System Using Video Indexing},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {April 2019},
volume = {7},
issue = {4},
month = {4},
year = {2019},
issn = {2347-2693},
pages = {478-782},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4061},
doi = {https://doi.org/10.26438/ijcse/v7i4.478782},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v7i4.478782
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=4061
TI - Content Based Video Retrieval System Using Video Indexing
T2 - International Journal of Computer Sciences and Engineering
AU - Jaimon Jacob
AU - Sudeep Ilayidom
AU - V.P. Devassia
PY - 2019
DA - 2019/04/30
PB - IJCSE, Indore, INDIA
SP - 478-782
IS - 4
VL - 7
SN - 2347-2693
ER -


Abstract

Searching for a video on the World Wide Web has grown rapidly in recent years, driven by the explosion of video content on social media channels and networks. At present, video search engines rely on a video's title, description, and thumbnail to identify the right result. In this paper, a novel video searching methodology is proposed that uses video indexing. Video indexing is a technique for preparing an index, based on the content of a video, for easy access to the frames of interest. Each video is stored along with the index created by the video indexing technique. The video searching methodology checks the content of the index attached to each video to confirm that the video matches the search keyword, and its relevance is ranked by the count of the search keyword in the video index. Video captions are generated by a deep learning network model that combines global-local (glocal) attention and context cascading mechanisms, trained on the VIST (Visual Storytelling) dataset. The video index generator uses the Wormhole algorithm, which ensures a minimum worst-case time for searching a key of length L. The video searching methodology also extracts the video clip containing the frames of interest from the original, large source video, so the searcher can download a short clip instead of the entire video from the video storage. This reduces both the bandwidth requirement and the time taken to download videos.
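
The abstract describes the retrieval flow at a high level; the following is a minimal Python sketch of the caption-derived, word-count-based relevance check, assuming per-segment captions have already been produced by a captioning model (the paper pairs a glocal-attention captioner with a Wormhole ordered index, for which an ordinary Python dictionary stands in here). The names IndexedVideo, build_index, and search are hypothetical and not taken from the paper.

from collections import Counter, defaultdict
from dataclasses import dataclass, field

@dataclass
class IndexedVideo:
    # A video stored together with its caption-derived index.
    video_id: str
    segments: list                      # (start_sec, end_sec, caption) tuples
    word_counts: Counter = field(default_factory=Counter)
    postings: dict = field(default_factory=lambda: defaultdict(list))

def build_index(video_id, captioned_segments):
    # Build a word-count index from per-segment captions; the captions are
    # assumed to come from a separately trained captioning model.
    video = IndexedVideo(video_id, captioned_segments)
    for start, end, caption in captioned_segments:
        for word in caption.lower().split():
            video.word_counts[word] += 1
            video.postings[word].append((start, end))
    return video

def search(videos, keyword):
    # Rank videos by how often the keyword occurs in their index and return
    # the matching segments, so only a short clip needs to be downloaded.
    keyword = keyword.lower()
    hits = [(v.word_counts[keyword], v.video_id, v.postings[keyword])
            for v in videos if v.word_counts[keyword] > 0]
    return sorted(hits, key=lambda h: h[0], reverse=True)

if __name__ == "__main__":
    v = build_index("v001", [(0.0, 5.0, "a dog runs on the beach"),
                             (5.0, 10.0, "the dog catches a frisbee")])
    print(search([v], "dog"))   # [(2, 'v001', [(0.0, 5.0), (5.0, 10.0)])]

In the paper, the per-video index would be held in a Wormhole ordered index rather than a plain dictionary, which is what gives the bounded worst-case lookup time for a key of length L mentioned in the abstract; the returned segment boundaries are what allow the system to serve a short clip instead of the full source video.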

Key-Words / Index Term

Video Indexing, Video Searching Methodology, VIST (Visual Storytelling) dataset
