
Vector Similarity Measure for ASAG

Chandralika Chakraborty, Udit Kr. Chakraborty, Bhairab Sarma

Section: Research Paper, Product Type: Journal Paper
Volume-7 , Issue-4 , Page no. 959-963, Apr-2019

CrossRef-DOI: https://doi.org/10.26438/ijcse/v7i4.959963

Online published on Apr 30, 2019

Copyright © Chandralika Chakraborty, Udit Kr. Chakraborty, Bhairab Sarma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: Chandralika Chakraborty, Udit Kr. Chakraborty, Bhairab Sarma, “Vector Similarity Measure for ASAG,” International Journal of Computer Sciences and Engineering, Vol.7, Issue.4, pp.959-963, 2019.


BibTeX Style Citation:
@article{Chakraborty_2019,
author = {Chandralika Chakraborty and Udit Kr. Chakraborty and Bhairab Sarma},
title = {Vector Similarity Measure for ASAG},
journal = {International Journal of Computer Sciences and Engineering},
volume = {7},
number = {4},
month = apr,
year = {2019},
issn = {2347-2693},
pages = {959-963},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4150},
doi = {10.26438/ijcse/v7i4.959963},
publisher = {IJCSE, Indore, INDIA},
}



Abstract

Automated Short Answer Grading (ASAG) has been an area of active research for quite some time. Several theories and implementations have been proposed, but no stable method suitable for all genres of answers has yet been standardized. The most accurate short answer grading results have been obtained for substantially longer texts, which offer scope for information retrieval. Very short answers, however, suffer on this front and have remained a bottleneck. This paper presents a simple method to evaluate very short answers, using cosine similarity between students' answers and model answers prepared by subject experts. The proposed method is simple, fast, and easy to implement, and it returns scores that correlate fairly well with human-assigned scores.
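The core idea described in the abstract, scoring a student answer by its cosine similarity to an expert model answer, can be sketched with plain bag-of-words term-frequency vectors. The paper's exact preprocessing is not shown on this page, so the whitespace tokenization and lack of stemming or stopword removal below are assumptions, not the authors' pipeline:

```python
import math
from collections import Counter

def cosine_similarity_score(student_answer: str, model_answer: str) -> float:
    """Cosine similarity between two short texts, using simple
    bag-of-words term-frequency vectors (a minimal sketch; a real
    ASAG system may add stemming, stopword removal, etc.)."""
    # Lowercase and split on whitespace to build term-frequency vectors.
    sv = Counter(student_answer.lower().split())
    mv = Counter(model_answer.lower().split())
    # Dot product over the shared vocabulary only; absent terms contribute 0.
    dot = sum(sv[t] * mv[t] for t in sv.keys() & mv.keys())
    # Product of the Euclidean norms of the two vectors.
    norm = math.sqrt(sum(c * c for c in sv.values())) * \
           math.sqrt(sum(c * c for c in mv.values()))
    return dot / norm if norm else 0.0

model = "a stack is a last in first out data structure"
print(round(cosine_similarity_score("stack is last in first out", model), 2))  # → 0.71
```

A score near 1.0 indicates high lexical overlap with the model answer, while 0.0 indicates no shared terms; the resulting value can be scaled to the marks allotted for the question.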

Key-Words / Index Term

ASAG, Student Answer, Model Answer, Cosine Similarity
