Open Access Article

Real Time Face Driven Speech Animation Using Neural Networks with Expressions

K. Rajasekhar, C. Usharani, A. Mrinalini

Section: Research Paper, Product Type: Journal Paper
Volume-7, Issue-5, Page no. 781-786, May-2019

CrossRef DOI: https://doi.org/10.26438/ijcse/v7i5.781786

Online published on May 31, 2019

Copyright © K. Rajasekhar, C. Usharani, A. Mrinalini. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

View this paper at Google Scholar | DPI Digital Library

How to Cite this Paper


IEEE Style Citation: K. Rajasekhar, C. Usharani, A. Mrinalini, “Real Time Face Driven Speech Animation Using Neural Networks with Expressions,” International Journal of Computer Sciences and Engineering, Vol.7, Issue.5, pp.781-786, 2019.

MLA Style Citation: K. Rajasekhar, C. Usharani, A. Mrinalini. "Real Time Face Driven Speech Animation Using Neural Networks with Expressions." International Journal of Computer Sciences and Engineering 7.5 (2019): 781-786.

APA Style Citation: K. Rajasekhar, C. Usharani, & A. Mrinalini (2019). Real Time Face Driven Speech Animation Using Neural Networks with Expressions. International Journal of Computer Sciences and Engineering, 7(5), 781-786.

BibTex Style Citation:
@article{Rajasekhar_2019,
  author    = {K. Rajasekhar and C. Usharani and A. Mrinalini},
  title     = {Real Time Face Driven Speech Animation Using Neural Networks with Expressions},
  journal   = {International Journal of Computer Sciences and Engineering},
  volume    = {7},
  number    = {5},
  month     = may,
  year      = {2019},
  issn      = {2347-2693},
  pages     = {781-786},
  url       = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4314},
  doi       = {10.26438/ijcse/v7i5.781786},
  publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY  - JOUR
DO  - 10.26438/ijcse/v7i5.781786
UR  - https://www.ijcseonline.org/full_paper_view.php?paper_id=4314
TI  - Real Time Face Driven Speech Animation Using Neural Networks with Expressions
T2  - International Journal of Computer Sciences and Engineering
AU  - Rajasekhar, K.
AU  - Usharani, C.
AU  - Mrinalini, A.
PY  - 2019
DA  - 2019/05/31
PB  - IJCSE, Indore, INDIA
SP  - 781
EP  - 786
IS  - 5
VL  - 7
SN  - 2347-2693
ER  -


Abstract

Artificial intelligence (AI) is the discipline of making machines intelligent, where intelligence means acting with foresight in a given environment. Computer systems trained with intelligent programs are used to understand people's feelings and choices, so AI has become a vital part of human life and is changing it enormously, reshaping domains such as education, health, and safety. In character animation, speech animation is a central and time-consuming task. The existing system uses a simple and effective deep-learning approach to produce natural-looking animation from input speech: a sliding-window predictor that takes a series of phoneme labels and learns an arbitrary nonlinear mapping to mouth motions. Nonverbal gestures are an important part of human communication, and a speech-driven face animation system ought to account for them. In this paper, we use neural networks to realize real-time speech-driven face animation with expressions. An audio-visual training database is collected using a facial motion tracking algorithm based on motion units (MUs), a visual representation of facial deformations. By training a set of neural networks on the collected audio-visual database, we construct a real-time mapping from audio to motion unit parameters (MUPs).
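As an illustration only: the paper does not specify its network architecture, so the phoneme inventory, window width, and MUP dimension below are invented for the sketch. It shows the general shape of a sliding-window predictor, in which each animation frame's motion unit parameters are regressed from a fixed-width window of one-hot phoneme labels centred on that frame.

```python
import numpy as np

# Hypothetical phoneme inventory and MU-parameter dimension; the paper's
# actual categories and tracking setup are not given on this page.
PHONEMES = ["sil", "aa", "b", "m", "uw"]
N_MUP = 7          # number of motion-unit parameters (illustrative)
WIN = 5            # sliding-window width: centre frame +/- 2

def one_hot(label):
    v = np.zeros(len(PHONEMES))
    v[PHONEMES.index(label)] = 1.0
    return v

def sliding_windows(labels, win=WIN):
    """Pad the label sequence with silence and emit one context
    window per frame, flattened into a single feature vector."""
    half = win // 2
    padded = ["sil"] * half + list(labels) + ["sil"] * half
    feats = np.stack([one_hot(p) for p in padded])      # (T+win-1, |P|)
    return np.stack([feats[t:t + win].ravel()           # (T, win*|P|)
                     for t in range(len(labels))])

class WindowToMUP:
    """Minimal two-layer perceptron mapping a phoneme window to MUPs
    (untrained random weights; a real system would fit these to the
    collected audio-visual database)."""
    def __init__(self, d_in, d_hidden=16, d_out=N_MUP, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.W2 = rng.normal(0.0, 0.1, (d_hidden, d_out))

    def __call__(self, X):
        return np.tanh(X @ self.W1) @ self.W2            # (T, N_MUP)

labels = ["sil", "b", "aa", "m", "uw", "sil"]
X = sliding_windows(labels)                              # (6, 25)
mups = WindowToMUP(X.shape[1])(X)                        # (6, 7)
print(X.shape, mups.shape)
```

Because each output frame sees a symmetric context of past and future phonemes, the predictor can capture coarticulation effects while still running with only a small, fixed look-ahead latency.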

Key-Words / Index Term

Artificial intelligence, neural networks, machine learning algorithms, speech animation, phoneme label, MUs
