Monday, 12 January 2026 @ 13:30–14:30 CET
On-site:
University of Vienna
Seminarraum 8 (OG01)
Kolingasse 14–16
1090 Vienna
Online:
https://univienna.zoom.us/j/67032386717?pwd=g8HOG2oRrWK6T5cvmRA7bv17QRzq72.1
Meeting ID: 670 3238 6717
Passcode: 440328
Exploring the Intersection of Large Language Models and Literary Theory: Contextual Semantic Plasticity and Theoretical Alignments
Abstract:
Computational Literary Studies (CLS) aim to complement traditional literary studies with a quantitative approach by leveraging techniques from statistics, machine learning, and natural language processing (NLP). A key question currently being explored in CLS is whether the shift from the pre-train-and-fine-tune paradigm to the prompt-and-predict paradigm aligns with the field's research questions and methodologies.
In my talk, I will present two projects that contribute to this evaluation. The first project examines a critical aspect of classification in CLS: domain-specific conceptual engineering. 'Conceptual engineering' refers to the process of describing, evaluating, and revising scientific concepts to ensure their applicability. In CLS, this process is essential because digital methods require the operationalization of literary concepts, which involves defining them in a way that allows for measurable application. This often necessitates adapting the meaning and scope of concepts during operationalization. Our research investigates whether large language models (LLMs) can effectively support such re-conceptualizations within the prompt-and-predict paradigm.
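To illustrate the prompt-and-predict setting, the following minimal sketch shows how an operationalized concept definition might be applied as a zero-shot classification prompt. The client library, model name, concept definition, and labels are illustrative assumptions, not the project's actual pipeline.

# Sketch: applying an operationalized concept definition in a
# zero-shot, prompt-and-predict classification call.
# Assumptions: the openai client library, the model name, and the invented
# working definition of "reflective passage" are placeholders, not the
# project's actual concepts or prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONCEPT_DEFINITION = (
    "A passage counts as a 'reflective passage' if the narrator suspends the "
    "narration of events to offer a generalizing comment or judgement."
)

def classify_passage(passage: str) -> str:
    """Ask the model to apply the operationalized concept to one passage."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model could be used
        messages=[
            {"role": "system",
             "content": f"You are annotating literary texts. {CONCEPT_DEFINITION} "
                        "Answer with exactly one label: 'reflective' or 'narrative'."},
            {"role": "user", "content": passage},
        ],
        temperature=0.0,  # keep annotation output as deterministic as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_passage("All happy families are alike; each unhappy family "
                           "is unhappy in its own way."))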
The second project focuses on developing and applying a workflow to evaluate how closely the behavior of a specific LLM aligns with that of a competent user of theories of meaning, fiction, and interpretation. To achieve this, we are creating test datasets that simulate the behavior of a user of these theories. These datasets will allow us to assess whether LLMs can replicate this behavior and, if so, to what extent.
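As a rough illustration of such a test dataset, the sketch below pairs each stimulus with the judgement a competent user of a given theory would make and scores a model's answers against those judgements. The schema, field names, and example item are assumptions made for illustration, not the project's actual data.

# Sketch of a test-dataset item and scoring loop for checking whether a
# model's judgements track those of a competent user of a theory of fiction.
# The schema, the example item, and the expected label are illustrative
# assumptions, not the project's actual data.
from dataclasses import dataclass

@dataclass
class TheoryTestItem:
    theory: str          # e.g. a specific theory of fiction
    stimulus: str        # statement or passage the model must judge
    expected_label: str  # judgement a competent user of the theory would give

DATASET = [
    TheoryTestItem(
        theory="possible-worlds theory of fiction",
        stimulus="'Sherlock Holmes lives at 221B Baker Street' is true in the "
                 "world of the Holmes stories.",
        expected_label="accept",
    ),
]

def agreement(model_judgements: dict[str, str]) -> float:
    """Share of items where the model's judgement matches the expected one."""
    hits = sum(
        1 for item in DATASET
        if model_judgements.get(item.stimulus) == item.expected_label
    )
    return hits / len(DATASET)

# Usage: collect model_judgements by prompting an LLM with each stimulus,
# then report agreement(model_judgements) as the alignment score.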
By addressing these questions, the talk aims to shed light on the potential and limitations of LLMs in advancing computational approaches to literary studies.
Bio:
Axel Pichler studied German Studies and Philosophy in Graz and Vienna, completing his PhD in 2009 at the University of Graz (Karl-Franzens-Universität Graz) with a dissertation on Friedrich Nietzsche. From 2014 to early 2021, he worked as a postdoctoral researcher at the Stuttgart Research Centre for Text Studies. In the summer semester of 2021, he served as a guest professor of Digital Humanities in the Cluster of Excellence "Temporal Communities" at Freie Universität Berlin. He subsequently held a postdoctoral position at the Institute for Natural Language Processing at the University of Stuttgart. Since the summer semester of 2025, he has been an assistant professor of Modern German Literature and Digital Literary Studies at the University of Vienna.
His research in the Digital Humanities focuses on interdisciplinary methodological reflection, the operationalization of literary studies concepts for computational text analysis, and the investigation of the opportunities and challenges associated with the use of large language models (LLMs) in Computational Literary Studies (CLS).
