
The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization

Kshitij Tripathi1 , Rajendra G. Vyas2 , Anil K. Gupta3

  1. Department of Computer Applications, The Maharaja Sayajirao University of Baroda,Vadodara, India.
  2. Department of Mathematics, The Maharaja Sayajirao University of Baroda, Vadodara, India.
  3. Department of Computer Science and Applications, Barkatullah University, Bhopal, India.

Section: Research Paper, Product Type: Journal Paper
Volume-6, Issue-5, Page no. 241-254, May-2018

CrossRef-DOI: https://doi.org/10.26438/ijcse/v6i5.241254

Online published on May 31, 2018

Copyright © Kshitij Tripathi, Rajendra G. Vyas, Anil K. Gupta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

View this paper at: Google Scholar | DPI Digital Library

How to Cite this Paper


IEEE Style Citation: Kshitij Tripathi, Rajendra G. Vyas, Anil K. Gupta, “The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.5, pp.241-254, 2018.

MLA Style Citation: Kshitij Tripathi, Rajendra G. Vyas, Anil K. Gupta. "The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization." International Journal of Computer Sciences and Engineering 6.5 (2018): 241-254.

APA Style Citation: Kshitij Tripathi, Rajendra G. Vyas, Anil K. Gupta, (2018). The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization. International Journal of Computer Sciences and Engineering, 6(5), 241-254.

BibTex Style Citation:
@article{Tripathi_2018,
  author = {Kshitij Tripathi and Rajendra G. Vyas and Anil K. Gupta},
  title = {The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization},
  journal = {International Journal of Computer Sciences and Engineering},
  volume = {6},
  number = {5},
  month = may,
  year = {2018},
  issn = {2347-2693},
  pages = {241-254},
  url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=1969},
  doi = {10.26438/ijcse/v6i5.241254},
  publisher = {IJCSE, Indore, INDIA}
}

RIS Style Citation:
TY - JOUR
DO - 10.26438/ijcse/v6i5.241254
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=1969
TI - The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization
T2 - International Journal of Computer Sciences and Engineering
AU - Tripathi, Kshitij
AU - Vyas, Rajendra G.
AU - Gupta, Anil K.
PY - 2018
DA - 2018/05/31
PB - IJCSE, Indore, INDIA
SP - 241
EP - 254
IS - 5
VL - 6
SN - 2347-2693
ER -


Abstract

Artificial Neural Networks (ANNs) have proved their ability to perform well in data-mining and machine-learning tasks such as classification, pattern recognition, forecasting, and prediction, to name a few. This paper explores a novel approach to the classification of data on four benchmark datasets from the perspective of ANNs and their intricacies. The proposed approach succeeds in overcoming the over-fitting of data that exists in the classification domain. Further, the proposed methodology yields markedly improved and consistent results in comparison with existing techniques in both the ANN and non-ANN domains.
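The paper's full text is not reproduced on this page, but the two ingredients named in the title — N-fold cross validation and weight initialization — can be illustrated. The following is a minimal, hypothetical sketch (not the authors' actual method): a single-layer network with a sigmoid transfer function, small random initial weights, and accuracy averaged over N held-out folds, using NumPy only. All function names and hyperparameters (`lr`, `epochs`, the `[-0.5, 0.5)` initialization range) are illustrative assumptions.

```python
import numpy as np

def n_fold_indices(n_samples, n_folds, seed=0):
    """Shuffle sample indices and split them into n_folds disjoint folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), n_folds)

def init_weights(n_in, n_out, seed=0):
    """Small uniform initialization in [-0.5, 0.5) -- one common heuristic."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-0.5, 0.5, size=(n_in, n_out))

def train_net(X, y, lr=0.1, epochs=200, seed=0):
    """Single-layer net with a sigmoid transfer function, gradient descent."""
    W = init_weights(X.shape[1], 1, seed)
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid transfer function
        grad = p - y.reshape(-1, 1)              # cross-entropy gradient w.r.t. z
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean()
    return W, b

def cross_validate(X, y, n_folds=5):
    """Average held-out accuracy over n_folds train/test splits,
    re-initializing the weights (different seed) for every fold."""
    accs = []
    for k, test_idx in enumerate(n_fold_indices(len(X), n_folds)):
        train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
        W, b = train_net(X[train_idx], y[train_idx], seed=k)
        pred = (X[test_idx] @ W + b).ravel() > 0   # decision boundary z = 0
        accs.append(np.mean(pred == (y[test_idx] > 0.5)))
    return float(np.mean(accs))
```

Because every fold is trained on data disjoint from its test set, the averaged accuracy is an estimate that is far less flattered by over-fitting than a single train-set score — which is the motivation the abstract gives for exhaustive validation.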

Key-Words / Index Term

Classification, Artificial Neural Network, Machine Learning, N-Fold Cross Validation, Transfer function.
