Description
The poster presents the cross-disciplinary methods from artificial intelligence and the humanities that we plan to apply in the project MuMokA, which deals with the multimodal modeling of cultural artefacts. We apply digital methods from the areas of digital restoration, natural language processing, computer vision, and affective computing to extract modalities from Walter Kempowski's unfinished multimodal work Ortslinien, and to annotate and automatically interlink them. We also explore generative AI methods to develop concepts for autocompleting the unfinished parts of the work. Additionally, we develop concepts for presenting multimodal cultural artefacts in a digital, immersive environment in a sustainable way that takes into account the FAIR and CARE principles for the storage and representation of digital data.
Keywords
multimodality
cultural artefacts
artificial intelligence
natural language processing
computer vision