Volume , Issue , 2011, Pages 131-135

Speech-driven lip motion generation for tele-operated humanoid robots

Author keywords

formant; humanoid robot; lip motion; synchronization; tele-operation

Indexed keywords

LINGUISTICS; SPEECH PROCESSING;

EID: 85133324061     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 22

References (10)
  • 2
    • Tamura, M., Kondo, S., Masuko, T., Kobayashi, T. (1998). “Text-to-visual speech synthesis based on parameter generation from HMM,” in Proc. ICASSP98, pp. 3745-3748.
  • 3
    • Hong, P., Wen, Z., Huang, T. (2002). “Real-time speech-driven face animation with expressions using neural networks,” IEEE Trans. on Neural Networks, vol. 13, no. 4, pp. 916-927, Jul. 2002.
  • 4
    • Beskow, J., Nordenberg, M. (2005). “Data-driven synthesis of expressive visual speech using an MPEG-4 talking head,” Proc. Interspeech2005, pp. 793-796.
  • 5
    • Hofer, G., Yamagishi, J., Shimodaira, H. (2008). “Speech-driven lip motion generation with a trajectory HMM,” Proc. Interspeech 2008, pp. 2314-2317.
  • 6
    • Takacs, G. (2009). “Direct, modular and hybrid audio to visual speech conversion methods - a comparative study,” in Proc. Interspeech09, pp. 2267-2270.
  • 7
    • Hofer, G., Richmond, K. (2010). “Comparison of HMM and TMDN methods for lip synchronization,” Proc. Interspeech 2010, pp. 454-457.
  • 8
    • Zhuang, X., et al. (2010). “A minimum converted trajectory error (MCTE) approach to high quality speech-to-lips conversion,” Proc. Interspeech 2010, pp. 1726-1739.
  • 9
    • Wu, J., et al. (2008). “Statistical correlation analysis between lip contour parameters and formant parameters for Mandarin monophthongs,” Proc. AVSP2008, pp. 121-126.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.