Active Object Detection Model with Deep Neural Network for Object Recognition
R. Kapila, H. Wadhwa
Section: Research Paper, Product Type: Journal Paper
Volume-6, Issue-9, Page no. 265-269, Sep-2018
CrossRef-DOI: https://doi.org/10.26438/ijcse/v6i9.265269
Online published on Sep 30, 2018
Copyright © R. Kapila, H. Wadhwa. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: R. Kapila, H. Wadhwa, “Active Object Detection Model with Deep Neural Network for Object Recognition,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.9, pp.265-269, 2018.
MLA Style Citation: R. Kapila, H. Wadhwa. "Active Object Detection Model with Deep Neural Network for Object Recognition." International Journal of Computer Sciences and Engineering 6.9 (2018): 265-269.
APA Style Citation: R. Kapila, H. Wadhwa (2018). Active Object Detection Model with Deep Neural Network for Object Recognition. International Journal of Computer Sciences and Engineering, 6(9), 265-269.
BibTex Style Citation:
@article{Kapila_2018,
author = {R. Kapila and H. Wadhwa},
title = {Active Object Detection Model with Deep Neural Network for Object Recognition},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {September 2018},
volume = {6},
number = {9},
month = {sep},
year = {2018},
issn = {2347-2693},
pages = {265-269},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=2856},
doi = {https://doi.org/10.26438/ijcse/v6i9.265269},
publisher = {IJCSE, Indore, INDIA},
}
RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v6i9.265269
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=2856
TI - Active Object Detection Model with Deep Neural Network for Object Recognition
T2 - International Journal of Computer Sciences and Engineering
AU - R. Kapila
AU - H. Wadhwa
PY - 2018
DA - 2018/09/30
PB - IJCSE, Indore, INDIA
SP - 265
EP - 269
IS - 9
VL - 6
SN - 2347-2693
ER -
Abstract
Object classification algorithms require a large number of computations and feature transformations, together with normalization and automatic categorization. In this paper, a robust feature descriptor is used within an active object detection method (AODM) together with a probability-based deep neural network (DNN). A multi-category DNN (mDNN) is described that works in iterative phases, which makes it straightforward to handle multi-category datasets. In every iterative phase, the mDNN treats the training data of the main class as the primary class and groups all remaining training data into a secondary class for supervised classification. On the object image dataset, the designed model is capable of handling the variations observed in colour, texture, lighting, image orientation, occlusion, and colour illumination. Several experiments were carried out on the designed model to evaluate the performance of the object identification system. The results are reported in the form of various performance parameters: statistical errors, precision, recall, F1-measure, and overall accuracy. In terms of overall accuracy, the designed model clearly outperforms the existing models; its improvement over existing models based on SURF, FREAK, and similar descriptors is higher than ten percent for all of the evaluated parameters.
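The abstract describes the mDNN's iterative primary/secondary (one-vs-rest) training and the reported metrics only in prose. The sketch below is an illustration of that general scheme, not the authors' implementation: the scikit-learn MLPClassifier used as a stand-in for the DNN, its layer sizes, the toy random data, and the macro-averaged metrics are all assumptions introduced for this example.

# Minimal sketch of an iterative one-vs-rest ("multi-category") scheme in the spirit
# of the mDNN described in the abstract. Illustrative only; not the authors' model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

def train_one_vs_rest(features, labels, classes):
    """Train one binary network per class: the current class is the 'primary'
    class, and all remaining training data form the 'secondary' class."""
    models = {}
    for cls in classes:
        binary_labels = (labels == cls).astype(int)  # primary = 1, secondary = 0
        net = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)  # assumed architecture
        net.fit(features, binary_labels)
        models[cls] = net
    return models

def predict(models, features):
    """Assign each sample to the class whose binary model gives the highest
    probability of belonging to the primary class."""
    classes = list(models.keys())
    scores = np.column_stack([models[c].predict_proba(features)[:, 1] for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 32))        # toy feature descriptors (placeholder data)
    y = rng.integers(0, 3, size=300)      # three object categories
    models = train_one_vs_rest(X[:200], y[:200], classes=[0, 1, 2])
    y_pred = predict(models, X[200:])

    # The abstract reports precision, recall, F1-measure, and overall accuracy.
    print("precision:", precision_score(y[200:], y_pred, average="macro", zero_division=0))
    print("recall:   ", recall_score(y[200:], y_pred, average="macro", zero_division=0))
    print("F1:       ", f1_score(y[200:], y_pred, average="macro", zero_division=0))
    print("accuracy: ", accuracy_score(y[200:], y_pred))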
Key-Words / Index Term
Deep neural network, active object model, object recognition, SIFT, SURF