Motion Editing tool for Reproducing Grammatical Elements of Japanese Sign Language Avatar Animation

Not scheduled
20m
Von-Melle-Park 4

Poster

Speakers

Hiroyuki Kaneko (NHK Science & Technology Research Laboratories), Masanori Sano (NHK Science & Technology Research Laboratories), Naoki Nakatani (NHK Science & Technology Research Laboratories), Taro Miyazaki (NHK Science & Technology Research Laboratories)

Description

Avatar animation is one way to produce sign language content. Using motion capture data allows for fluent avatar movements that are closer to those of a human signer. However, the generated animation usually cannot be adjusted for grammatical features such as intonation, signing space, and facial expression to match a specific context, because the motion data is fixed in form at the time it is captured.
We therefore propose a motion editing tool that can reproduce grammatical elements of Japanese Sign Language (JSL) by editing individual motion data. An evaluation experiment shows that editing the speed and blend span of multiple words, corresponding to the delimitation of phrases and clauses, can reduce the error rate in understanding JSL avatar animations.
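
The abstract does not specify how the editing operations are implemented; the following is a minimal sketch, assuming per-word motion clips stored as frame-by-channel arrays, of the two operations it mentions: retiming a clip (speed) and cross-fading between consecutive word clips (blend span). The function names, the linear cross-fade, and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def retime(clip: np.ndarray, speed: float) -> np.ndarray:
    """Resample a motion clip (frames x channels) to play back at `speed`x.

    speed > 1.0 shortens the clip (faster signing); speed < 1.0 lengthens it.
    Linear interpolation between neighbouring frames is used here for brevity;
    real joint rotations would typically need quaternion slerp.
    """
    n_frames = clip.shape[0]
    n_out = max(2, int(round(n_frames / speed)))
    src = np.linspace(0.0, n_frames - 1, n_out)     # fractional source frame indices
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n_frames - 1)
    t = (src - lo)[:, None]
    return (1.0 - t) * clip[lo] + t * clip[hi]

def blend_concat(clip_a: np.ndarray, clip_b: np.ndarray, blend_frames: int) -> np.ndarray:
    """Concatenate two word clips with a linear cross-fade over `blend_frames`.

    A longer blend span smooths the transition inside a phrase; a short or
    zero span leaves a clearer boundary, e.g. between clauses.
    """
    if blend_frames <= 0:
        return np.concatenate([clip_a, clip_b], axis=0)
    blend_frames = min(blend_frames, clip_a.shape[0], clip_b.shape[0])
    w = np.linspace(0.0, 1.0, blend_frames)[:, None]  # fade-in weights for clip_b
    overlap = (1.0 - w) * clip_a[-blend_frames:] + w * clip_b[:blend_frames]
    return np.concatenate([clip_a[:-blend_frames], overlap, clip_b[blend_frames:]], axis=0)

# Hypothetical usage: a long blend span inside a phrase, then a slowed-down
# word and a short blend span to mark a clause boundary.
if __name__ == "__main__":
    word1, word2, word3 = (np.random.rand(60, 75) for _ in range(3))  # dummy 60-frame clips
    phrase = blend_concat(word1, word2, blend_frames=10)
    sentence = blend_concat(phrase, retime(word3, speed=0.8), blend_frames=3)
```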

Keywords

sign language
avatar animation
motion capture
motion editing

Find me @ my poster 4

Author

Tsubasa Uchida (NHK Science & Technology Research Laboratories, Universität Hamburg, Institut für Deutsche Gebärdensprache und Kommunikation Gehörloser)

Co-authors

Hiroyuki Kaneko (NHK Science & Technology Research Laboratories), Masanori Sano (NHK Science & Technology Research Laboratories), Naoki Nakatani (NHK Science & Technology Research Laboratories), Taro Miyazaki (NHK Science & Technology Research Laboratories)

Presentation materials