Open Access Article

Sign and Voice Translation using Machine Learning and Computer Vision

Nandini 1, Avni Verma 2, Sandeep Kumar 3

  1. Dept. of Computer Science and Technology, Sharda School of Engineering & Technology, Sharda University, Greater Noida, Uttar Pradesh, India.
  2. Dept. of Computer Science and Technology, Sharda School of Engineering & Technology, Sharda University, Greater Noida, Uttar Pradesh, India.
  3. Dept. of Computer Science and Technology, Sharda School of Engineering & Technology, Sharda University, Greater Noida, Uttar Pradesh, India.

Section: Research Paper, Product Type: Journal Paper
Volume-11, Issue-4, Page no. 7-13, Apr-2023

CrossRef-DOI: https://doi.org/10.26438/ijcse/v11i4.713

Online published on Apr 30, 2023

Copyright © Nandini, Avni Verma, Sandeep Kumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

View this paper at   Google Scholar | DPI Digital Library

How to Cite this Paper


IEEE Style Citation: Nandini, Avni Verma, Sandeep Kumar, “Sign and Voice Translation using Machine Learning and Computer Vision,” International Journal of Computer Sciences and Engineering, Vol.11, Issue.4, pp.7-13, 2023.

MLA Style Citation: Nandini, Avni Verma, Sandeep Kumar. "Sign and Voice Translation using Machine Learning and Computer Vision." International Journal of Computer Sciences and Engineering 11.4 (2023): 7-13.

APA Style Citation: Nandini, Avni Verma, Sandeep Kumar, (2023). Sign and Voice Translation using Machine Learning and Computer Vision. International Journal of Computer Sciences and Engineering, 11(4), 7-13.

BibTex Style Citation:
@article{Verma_2023,
author = {Nandini and Avni Verma and Sandeep Kumar},
title = {Sign and Voice Translation using Machine Learning and Computer Vision},
journal = {International Journal of Computer Sciences and Engineering},
volume = {11},
number = {4},
month = apr,
year = {2023},
issn = {2347-2693},
pages = {7-13},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=5550},
doi = {10.26438/ijcse/v11i4.713},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v11i4.713
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=5550
TI - Sign and Voice Translation using Machine Learning and Computer Vision
T2 - International Journal of Computer Sciences and Engineering
AU - Nandini
AU - Verma, Avni
AU - Kumar, Sandeep
PY - 2023
DA - 2023/04/30
PB - IJCSE, Indore, INDIA
SP - 7
EP - 13
IS - 4
VL - 11
SN - 2347-2693
ER -


Abstract

Sign and voice translation is a critical tool for individuals who cannot hear or speak, and for those who speak different languages. Machine learning techniques are increasingly used to improve the accuracy and efficiency of sign and voice translation systems. These systems use machine learning models to analyze and interpret sign language or speech and translate it into written or spoken language. Such models recognize patterns in sign language gestures or in speech and convert them into text or speech output; a model's accuracy depends on the quality of its training data and the complexity of its architecture. Recent advances in machine learning have improved the performance of sign and voice translation systems, enabling them to recognize more complex gestures and accents. Overall, the use of machine learning in sign and voice translation has the potential to improve the accessibility of information and communication for individuals who are deaf or hard of hearing, and for those who speak different languages. However, there is still considerable room for improvement, and ongoing research and development are needed to optimize the performance of these systems.
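As an illustrative sketch only (not the authors' system), the gesture-to-text step described in the abstract can be reduced to two parts: normalize a hand-landmark feature vector, such as one produced by a hand-tracking library like MediaPipe Hands, and classify it against labelled prototypes. The `TEMPLATES` values and the `nearest_gesture` helper below are hypothetical stand-ins for a trained model.

```python
import numpy as np

# Hypothetical gesture "templates": flattened (x, y) hand-landmark
# coordinates acting as class prototypes in place of a trained classifier.
TEMPLATES = {
    "hello": np.array([0.1, 0.9, 0.2, 0.8, 0.3, 0.7]),
    "thanks": np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3]),
}

def normalize(landmarks):
    """Translate landmarks to the first keypoint and scale to unit norm,
    making recognition invariant to hand position and size."""
    pts = np.asarray(landmarks, dtype=float).reshape(-1, 2)
    pts = pts - pts[0]
    norm = np.linalg.norm(pts)
    return (pts / norm).ravel() if norm else pts.ravel()

def nearest_gesture(landmarks):
    """Label a landmark vector with the nearest normalized template."""
    v = normalize(landmarks)
    return min(TEMPLATES, key=lambda g: np.linalg.norm(normalize(TEMPLATES[g]) - v))
```

A real pipeline would replace the nearest-prototype rule with a learned classifier (e.g. a CNN or LSTM over landmark sequences) and feed the recognized labels to a text-to-speech stage; the normalization step is what lets one template match the same gesture made at a different position or scale.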

Key-Words / Index Term

Computer Vision, Recognition of Sign Language, Hand Gesture Recognition, Feature Extraction
