Description
To provide subtitles for videos in sign language, the translation has to be aligned with the signing.
Doing this manually is time-consuming, but fully automatic alignment is often not accurate enough. We therefore propose an interactive tool that uses human feedback to optimize the alignment process. For the automatic alignment, we use signs detected by a sign spotter and map them to words in the translation. These mappings serve as landmarks for the alignment algorithm. Users can change the alignment directly, but can also alter, add, or delete landmarks to guide the automatic process, so that a correct alignment is reached after a few feedback loops.
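To make the landmark idea concrete, here is a minimal sketch of one possible landmark-based alignment step in Python. The abstract does not specify the algorithm; this sketch assumes landmarks are (word index, video time) pairs obtained by matching sign spotter detections to translation words, and simply interpolates a timestamp for every word between them. All names are illustrative, not the authors' implementation.

```python
# A minimal sketch, assuming landmarks are (word_index, video_time_in_seconds)
# pairs from matching sign spotter detections to words in the translation.
# Function and parameter names are hypothetical.

def align_words(num_words: int, duration: float,
                landmarks: list[tuple[int, float]]) -> list[float]:
    """Assign a timestamp to every word of the translation by linear
    interpolation between landmarks; the clip start and end act as
    implicit anchors unless a landmark overrides them."""
    anchors = {0: 0.0, num_words - 1: duration}
    anchors.update(dict(landmarks))  # user-edited landmarks take precedence
    points = sorted(anchors.items())
    times = []
    for w in range(num_words):
        # Find the landmark pair enclosing word w and interpolate linearly.
        for (lw, lt), (rw, rt) in zip(points, points[1:]):
            if lw <= w <= rw:
                frac = (w - lw) / (rw - lw)
                times.append(lt + frac * (rt - lt))
                break
        else:
            times.append(points[-1][1])  # w lies beyond the last landmark
    return times

# Example: a 10-word translation over a 20 s clip with two landmarks.
# In the interactive loop, a user would alter, add, or delete landmarks
# and re-run the alignment until the result is correct.
print(align_words(10, 20.0, [(3, 5.0), (7, 14.0)]))
```

In this picture, each round of user feedback only edits the landmark set; the interpolation is re-run cheaply, which is why a correct alignment can be reached after a few feedback loops.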
Keywords
Sign Languages
Machine Learning
Natural Language Processing
Interface Design
Semi-Automation
Find me @ my poster: 2, 4
Author
Maren Brumm
(IDGS Universität Hamburg)
Co-author
Sam Bigeard
(IDGS Universität Hamburg)