Speaker
Maximilian Plazotta
(Universität Regensburg)
Description
Abstract
With the emergence of large language models, the long-studied Text-to-SQL problem has been elevated to new spheres. In this paper, we test how our LLM fine-tuning approach performs on two relational databases (one small, one large) and compare it to a default setting. The results are convincing: using in-context learning boosts performance from a mere 35% (default) to over 85%. Furthermore, we present a detailed architectural framework for such a system, emphasizing its exclusive reliance on open-source components.
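The abstract does not spell out how the in-context-learning prompts are constructed. As a rough illustration only (the schema, exemplar questions, and SQL below are hypothetical, not the authors' actual setup), the following sketch shows how a few-shot Text-to-SQL prompt can be assembled from a database schema and question/SQL exemplars before being sent to an open-source LLM:

```python
# Minimal sketch of an in-context-learning prompt for Text-to-SQL.
# The schema and exemplars are hypothetical illustrations, not the
# prompt used in the paper.

SCHEMA = """CREATE TABLE students (id INT PRIMARY KEY, name TEXT, year INT);
CREATE TABLE grades (student_id INT, course TEXT, grade REAL);"""

EXAMPLES = [
    ("How many students are there?",
     "SELECT COUNT(*) FROM students;"),
    ("What is the average grade in 'Databases'?",
     "SELECT AVG(grade) FROM grades WHERE course = 'Databases';"),
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt: schema, worked examples, new question."""
    parts = ["Given the following SQL schema:", SCHEMA, ""]
    for q, sql in EXAMPLES:
        parts.append(f"Question: {q}\nSQL: {sql}\n")
    parts.append(f"Question: {question}\nSQL:")
    return "\n".join(parts)

if __name__ == "__main__":
    # The completed prompt would be passed to an open-source LLM;
    # the model call itself is omitted here.
    print(build_prompt("List the names of students enrolled in 2023."))
```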
Authors
Maximilian Plazotta
(Universität Regensburg)
Prof. Meike Klettke
(Universität Regensburg)