Open Access Article

KNN and Decision Tree Model to Predict Values in Amount of One Pound Table

Stanley Ziweritin¹, Iduma Aka Ibiam², Taiwo Adisa Oyeniran³, Godwin Epiahe Oko⁴

Section: Research Paper, Product Type: Journal Paper
Volume-9, Issue-7, Page no. 16-21, Jul-2021

CrossRef-DOI: https://doi.org/10.26438/ijcse/v9i7.1621

Online published on Jul 31, 2021

Copyright © Stanley Ziweritin, Iduma Aka Ibiam, Taiwo Adisa Oyeniran, Godwin Epiahe Oko. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

View this paper at: Google Scholar | DPI Digital Library

How to Cite this Paper


IEEE Style Citation: Stanley Ziweritin, Iduma Aka Ibiam, Taiwo Adisa Oyeniran, Godwin Epiahe Oko, “KNN and Decision Tree Model to Predict Values in Amount of One Pound Table,” International Journal of Computer Sciences and Engineering, Vol.9, Issue.7, pp.16-21, 2021.

MLA Style Citation: Stanley Ziweritin, Iduma Aka Ibiam, Taiwo Adisa Oyeniran, Godwin Epiahe Oko. "KNN and Decision Tree Model to Predict Values in Amount of One Pound Table." International Journal of Computer Sciences and Engineering 9.7 (2021): 16-21.

APA Style Citation: Stanley Ziweritin, Iduma Aka Ibiam, Taiwo Adisa Oyeniran, Godwin Epiahe Oko (2021). KNN and Decision Tree Model to Predict Values in Amount of One Pound Table. International Journal of Computer Sciences and Engineering, 9(7), 16-21.

BibTex Style Citation:
@article{Ziweritin_2021,
author = {Stanley Ziweritin, Iduma Aka Ibiam, Taiwo Adisa Oyeniran, Godwin Epiahe Oko},
title = {KNN and Decision Tree Model to Predict Values in Amount of One Pound Table},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {7 2021},
volume = {9},
Issue = {7},
month = {7},
year = {2021},
issn = {2347-2693},
pages = {16-21},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=5358},
doi = {https://doi.org/10.26438/ijcse/v9i7.1621},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v9i7.1621
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=5358
TI - KNN and Decision Tree Model to Predict Values in Amount of One Pound Table
T2 - International Journal of Computer Sciences and Engineering
AU - Stanley Ziweritin
AU - Iduma Aka Ibiam
AU - Taiwo Adisa Oyeniran
AU - Godwin Epiahe Oko
PY - 2021
DA - 2021/07/31
PB - IJCSE, Indore, INDIA
SP - 16-21
IS - 7
VL - 9
SN - 2347-2693
ER -

Views: 426 | PDF downloads: 423 | XML downloads: 173

Abstract

Machine learning is one of the fast-growing areas of interest in artificial intelligence, adopted by professionals in every sphere of life, in which algorithms use data to systematically learn patterns and improve from experience. Its increasingly competitive and robust prediction methods are becoming more interesting and popular. This is valuable to investors, surveyors and valuers as an alternative to manually computed payment table values that depend on empirical results. Valuation practice involves tedious and rigorous processes, including aspects of financial analysis, when computing the amount of one pound table values. The aim of this work is to build K-nearest neighbor and decision tree models that predict the numeric values in the amount of one pound table at a given rate of interest and period of years. The models are useful to investors, accountants, data professionals, surveyors and valuers interested in financial analysis and its applications. A cross-validation test was carried out with a predicted R-squared test to detect overfitting and assess how the models generalize to the testing dataset. We introduced noisy data with an exponential curve-smoothing function to reduce the risk of overfitting in predicting the target variable. The K-nearest neighbor and decision tree models were trained and tested, yielding accuracies of 95.76% and 99.86%, respectively.
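The sketch below is a minimal, illustrative pipeline under stated assumptions, not the authors' exact implementation. It tabulates the amount of one pound from the standard compound-interest relation A = (1 + i)^n (the amount to which one pound accumulates over n years at interest rate i), adds small multiplicative noise as a stand-in for the paper's noise-injection step, and scores K-nearest neighbor and decision tree regressors with cross-validated R-squared. The rate and year grids, noise level, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): predicting amount of one
# pound table values with KNN and decision tree regressors.
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Tabulate A = (1 + i)^n over an assumed grid of rates and holding periods.
rates = np.arange(0.01, 0.205, 0.005)   # 1% to 20% in 0.5% steps (assumed)
years = np.arange(1, 51)                # 1 to 50 years (assumed)
i_grid, n_grid = np.meshgrid(rates, years)
X = np.column_stack([i_grid.ravel(), n_grid.ravel()])  # features: (i, n)
y = (1.0 + X[:, 0]) ** X[:, 1]                         # target: (1 + i)^n

# Small multiplicative noise stands in for the paper's noise-injection step.
y_noisy = y * (1.0 + rng.normal(scale=0.01, size=y.shape))

X_train, X_test, y_train, y_test = train_test_split(
    X, y_noisy, test_size=0.2, random_state=0)

# Scale features for KNN so the year column does not dominate distances.
models = [
    ("KNN", make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))),
    ("Decision tree", DecisionTreeRegressor(max_depth=12, random_state=0)),
]

for name, model in models:
    # Cross-validated R^2 on the training split flags overfitting; the
    # held-out test R^2 estimates generalization.
    cv_r2 = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")
    model.fit(X_train, y_train)
    print(f"{name}: mean CV R^2 = {cv_r2.mean():.4f}, "
          f"test R^2 = {model.score(X_test, y_test):.4f}")
```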

Key-Words / Index Term

Artificial intelligence, Decision tree, K-nearest neighbor, Machine learning
