Few-shot learning for automated content analysis (FLACA) in the German media debate on arms deliveries to Ukraine

Not scheduled
20m
Von-Melle-Park 4

Poster

Description

The use of pre-trained language models based on transformer neural networks has significantly advanced the field of NLP and offers considerable potential for improving automated content analysis, e.g., in communication science, where their adoption is still limited. In our poster, we highlight the challenges and promises of combining transformer models with parameter-efficient few-shot fine-tuning, which reduces the amount of labeled data required for complex automated annotation tasks. The results indicate that ChatGPT shows a noteworthy zero-shot understanding of our definitions of claims and arguments, while our tailor-made few-shot methods outperform it when trained on a moderate number of human annotations.
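To illustrate the general idea of parameter-efficient few-shot fine-tuning for annotation tasks, the sketch below shows a minimal claim-classification setup. The abstract does not name a specific library or model; this example assumes the SetFit library with a multilingual sentence-transformer backbone, and the example sentences, labels, and hyperparameters are illustrative placeholders, not the authors' actual pipeline.

```python
# Minimal sketch: few-shot fine-tuning of a sentence-transformer classifier
# with SetFit (an assumed library choice, not specified in the abstract).
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training data: a handful of labeled sentences per class.
train_data = Dataset.from_dict({
    "text": [
        "Germany should deliver heavy weapons to Ukraine.",
        "Arms deliveries only prolong the war.",
        "The chancellor visited Kyiv last week.",
    ],
    "label": [1, 1, 0],  # 1 = sentence contains a claim, 0 = no claim
})

# Multilingual backbone chosen here because the debate corpus is German.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=8, num_epochs=1),
    train_dataset=train_data,
)
trainer.train()

# Predict labels for unseen sentences.
preds = model.predict(["Die Bundesregierung muss Panzer liefern."])
print(preds)
```

In this setup only a lightweight classification head and contrastively fine-tuned embeddings are adapted, which is what keeps the labeled-data requirement low compared with full fine-tuning.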

Keywords

natural language processing
large language models
few-shot
annotation
content analysis

Primary authors

Jonas Rieger (Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI))
Mattes Ruckdeschel (Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI))
Kostiantyn Yanchenko (Universität Hamburg)
Gerret von Nordheim (Universität Hamburg)
Katharina Kleinen von Königslöw (Universität Hamburg)
Gregor Wiedemann (Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI))

Presentation materials