An Efficient First Order Logical Casual Decision Tree in High Dimensional Dataset
S. Preethi 1, C. Rathika 2
- 1,2 Dept. of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, India.
Correspondence should be addressed to: preethisenthil1969@gmail.com.
Section: Research Paper, Product Type: Journal Paper
Volume-6, Issue-2, Page no. 73-78, Feb-2018
CrossRef-DOI: https://doi.org/10.26438/ijcse/v6i2.7378
Online published on Feb 28, 2018
Copyright © S. Preethi, C. Rathika. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: S. Preethi and C. Rathika, “An Efficient First Order Logical Casual Decision Tree in High Dimensional Dataset,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.2, pp.73-78, 2018.
MLA Style Citation: Preethi, S., and C. Rathika. "An Efficient First Order Logical Casual Decision Tree in High Dimensional Dataset." International Journal of Computer Sciences and Engineering 6.2 (2018): 73-78.
APA Style Citation: Preethi, S., & Rathika, C. (2018). An Efficient First Order Logical Casual Decision Tree in High Dimensional Dataset. International Journal of Computer Sciences and Engineering, 6(2), 73-78.
BibTex Style Citation:
@article{_2018,
author = {S. Preethi and C. Rathika},
title = {An Efficient First Order Logical Casual Decision Tree in High Dimensional Dataset},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {Feb 2018},
volume = {6},
issue = {2},
month = {2},
year = {2018},
issn = {2347-2693},
pages = {73-78},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=1703},
doi = {https://doi.org/10.26438/ijcse/v6i2.7378},
publisher = {IJCSE, Indore, INDIA},
}
RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v6i2.7378
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=1703
TI - An Efficient First Order Logical Casual Decision Tree in High Dimensional Dataset
T2 - International Journal of Computer Sciences and Engineering
AU - S. Preethi
AU - C. Rathika
PY - 2018
DA - 2018/02/28
PB - IJCSE, Indore, INDIA
SP - 73
EP - 78
IS - 2
VL - 6
SN - 2347-2693
ER -
Abstract
Uncovering causal relationships in data is a central objective of data analytics. Causal relationships are usually established through designed studies, e.g. randomised controlled trials, which, however, are costly or infeasible to conduct in many cases. This paper presents a new causal decision tree structure, the first-order logical causal decision tree (FOL-CDT). The proposed method follows a well-recognised pruning approach within a causal inference framework and uses a standard statistical test to establish the causal relationship between a predictor variable and the outcome variable. At the same time, by exploiting the advantages of standard decision trees, a FOL-CDT provides a compact graphical representation of the causal relationships, together with a pruning method, and construction of a FOL-CDT is fast owing to the divide-and-conquer strategy employed, making FOL-CDTs practical for representing and discovering causal signals in large data sets.
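The abstract describes building the tree by divide and conquer: at each node a standard statistical test selects a predictor causally associated with the outcome, and branches whose best test is not significant are pruned into leaves. The sketch below illustrates that general scheme only; it is not the paper's FOL-CDT algorithm. The choice of a chi-square test, the 0.05 critical value (3.84), the `min_rows` cutoff, and all function names are assumptions made for illustration.

```python
# Illustrative sketch of a causal-decision-tree style splitter (NOT the
# paper's exact FOL-CDT method). Each node splits on the binary predictor
# with the strongest chi-square association with the binary outcome and
# recurses on the two partitions; nodes with no significant test become
# leaves (pruning).

def chi_square_2x2(xs, ys):
    """Chi-square statistic for two binary variables (no continuity correction)."""
    n = len(xs)
    counts = [[0, 0], [0, 0]]
    for x, y in zip(xs, ys):
        counts[x][y] += 1
    stat = 0.0
    for i in (0, 1):
        for j in (0, 1):
            row = counts[i][0] + counts[i][1]
            col = counts[0][j] + counts[1][j]
            expected = row * col / n
            if expected > 0:
                stat += (counts[i][j] - expected) ** 2 / expected
    return stat

def build_tree(rows, outcome, predictors, threshold=3.84, min_rows=10):
    """Divide-and-conquer construction; 3.84 is the chi-square critical
    value at the 0.05 level with 1 degree of freedom."""
    positives = sum(r[outcome] for r in rows)
    leaf = {"leaf": True, "p": positives / len(rows)}
    if len(rows) < min_rows or not predictors:
        return leaf
    best, best_stat = None, threshold
    for p in predictors:
        stat = chi_square_2x2([r[p] for r in rows], [r[outcome] for r in rows])
        if stat > best_stat:
            best, best_stat = p, stat
    if best is None:  # no significant association: prune to a leaf
        return leaf
    left = [r for r in rows if r[best] == 0]
    right = [r for r in rows if r[best] == 1]
    if not left or not right:
        return leaf
    remaining = [p for p in predictors if p != best]
    return {"leaf": False, "split": best,
            "0": build_tree(left, outcome, remaining, threshold, min_rows),
            "1": build_tree(right, outcome, remaining, threshold, min_rows)}
```

On a toy dataset where the outcome `y` copies predictor `a` and is independent of `b`, `build_tree` splits on `a` and then prunes both branches into leaves, since no further significant association remains.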
Key-Words / Index Term
Data Mining, First-Order Logic, Decision Tree, Pruning, Classification