Speakers
Description
Avatar animation is one way to produce sign language content. Using motion capture data allows for fluent avatar movements closer to those of a human signer. However, the generated animation usually cannot adjust grammatical features such as intonation, signing space, and facial expression to match a specific context, because all motion data is fixed in form at the time it is captured.
Therefore, we propose a motion editing tool that reproduces grammatical elements of Japanese Sign Language (JSL) by editing individual motion data. An evaluation experiment shows that editing the speed and blend span of multiple words, corresponding to the delimitation of phrases and clauses, can reduce the rate of misunderstanding of JSL avatar animations.
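The two edits the abstract mentions, changing a word's playback speed and blending adjacent words over a span of frames, could be sketched as follows. This is a minimal illustration, not the authors' tool: frames are simplified to flat lists of joint values, and the function names and the nearest-frame resampling and linear crossfade strategies are assumptions for the sake of the example.

```python
def change_speed(frames, factor):
    """Resample a motion clip so it plays `factor` times faster
    (factor > 1) or slower (factor < 1), by nearest-frame sampling.
    (Illustrative only; real tools would interpolate poses.)"""
    n = max(1, round(len(frames) / factor))
    return [frames[min(len(frames) - 1, int(i * factor))] for i in range(n)]

def blend(clip_a, clip_b, span):
    """Concatenate two word clips, crossfading the last `span` frames
    of clip_a with the first `span` frames of clip_b."""
    span = min(span, len(clip_a), len(clip_b))
    faded = []
    for i in range(span):
        w = (i + 1) / (span + 1)  # weight ramps from clip_a toward clip_b
        a = clip_a[len(clip_a) - span + i]
        b = clip_b[i]
        faded.append([(1 - w) * x + w * y for x, y in zip(a, b)])
    return clip_a[:len(clip_a) - span] + faded + clip_b[span:]
```

A shorter blend span yields an abrupt transition between signs, while a longer span smooths the boundary, which is one way the delimitation of phrases and clauses could be made visible in the animation.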
Keywords
sign language
avatar animation
motion capture
motion editing
Find me @ my poster 4