Open Access Article

Automatic Decision of Findings Text in Development Models

S.V. Paulraj¹, L. Jayasimman², N. Sugavaneswaran³

Section: Research Paper, Product Type: Journal Paper
Volume-06, Issue-02, Page no. 393-397, Mar-2018

Online published on Mar 31, 2018

Copyright © S.V. Paulraj, L. Jayasimman, N. Sugavaneswaran. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

View this paper at Google Scholar | DPI Digital Library

How to Cite this Paper


IEEE Style Citation: S.V. Paulraj, L. Jayasimman, N. Sugavaneswaran, “Automatic Decision of Findings Text in Development Models,” International Journal of Computer Sciences and Engineering, Vol.06, Issue.02, pp.393-397, 2018.

MLA Style Citation: S.V. Paulraj, L. Jayasimman, N. Sugavaneswaran. "Automatic Decision of Findings Text in Development Models." International Journal of Computer Sciences and Engineering 06.02 (2018): 393-397.

APA Style Citation: S.V. Paulraj, L. Jayasimman, N. Sugavaneswaran (2018). Automatic Decision of Findings Text in Development Models. International Journal of Computer Sciences and Engineering, 06(02), 393-397.

BibTex Style Citation:
@article{paulraj_2018,
author = {S. V. Paulraj and L. Jayasimman and N. Sugavaneswaran},
title = {Automatic Decision of Findings Text in Development Models},
journal = {International Journal of Computer Sciences and Engineering},
volume = {06},
number = {02},
month = mar,
year = {2018},
issn = {2347-2693},
pages = {393-397},
url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=273},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
UR - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=273
TI - Automatic Decision of Findings Text in Development Models
T2 - International Journal of Computer Sciences and Engineering
AU - S.V. Paulraj
AU - L. Jayasimman
AU - N. Sugavaneswaran
PY - 2018
DA - 2018/03/31
PB - IJCSE, Indore, INDIA
SP - 393-397
IS - 02
VL - 06
SN - 2347-2693
ER -


Abstract

Automated feature selection is essential for text categorization in order to reduce the feature size and to speed up the learning process of classifiers. Distributed engineering tasks are often conducted using process models. In this context, it is necessary that these models do not contain structural or terminological inconsistencies. To this end, many automatic analysis techniques have been proposed to provide quality assurance. While appropriate properties of control flow can be checked in an automated fashion, there is a lack of techniques addressing textual quality. In particular, there is currently no technique available for handling the issue of lexical ambiguity caused by homonyms and synonyms. In this paper, we address this research gap and propose a technique that detects and resolves lexical ambiguities in process models.
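
To make the ambiguity check more concrete, the sketch below shows one way the detection step could be prototyped. It is only an illustration, not the technique proposed in this paper: it assumes WordNet (accessed through NLTK) as the lexical resource, treats the first word of each verb-object activity label as its head word, and merely flags candidate synonyms and homonyms; the resolution step (choosing a preferred term) is left out.

# Illustrative sketch only (not the paper's algorithm). Assumes WordNet via NLTK;
# run nltk.download("wordnet") once before use.
from itertools import combinations

from nltk.corpus import wordnet as wn


def head_word(label):
    # Naive heuristic: treat the first token of a verb-object label as its head word.
    return label.split()[0].lower()


def candidate_synonyms(labels):
    # Pairs of labels whose head words share at least one WordNet synset.
    flagged = []
    for a, b in combinations(labels, 2):
        if set(wn.synsets(head_word(a))) & set(wn.synsets(head_word(b))):
            flagged.append((a, b))
    return flagged


def candidate_homonyms(labels):
    # Head words that carry more than one WordNet sense.
    return sorted({w for w in map(head_word, labels) if len(wn.synsets(w)) > 1})


if __name__ == "__main__":
    activities = ["examine claim", "analyze claim", "file report", "check order"]
    print("candidate synonym pairs:", candidate_synonyms(activities))
    print("candidate homonyms:", candidate_homonyms(activities))

In a full approach, the flagged labels would additionally be disambiguated against the surrounding model context before any term is replaced.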

Key-Words / Index Term

Text categorization, X-Drop Algorithm
