Pichler, Axel and Reiter, Nils (2021). On the Operationalization of Literary-Scientific Concepts in Algorithmic Text Analysis: An Approach to Norbert Altenhofer's Hermeneutic Model Interpretation of Kleist's The Earthquake in Chili. Journal of Literary Theory, 15 (1-2), pp. 1-30. Berlin: Walter de Gruyter. ISSN 1862-8990

Full text not available from this repository.

Abstract

The present article discusses and reflects on possible ways of operationalizing the terminology of traditional literary studies for use in computational literary studies. By operationalization, we mean the development of a method for tracing a (theoretical) term back to text-surface phenomena; this is done explicitly and in a rule-based manner, involving a series of substeps. This procedure is presented in detail using as a concrete example Norbert Altenhofer's model interpretation (Modellinterpretation) of Heinrich von Kleist's The Earthquake in Chile. In the process, we develop a multi-stage operation - reflected upon throughout in terms of its epistemological implications - that is based on a rational-hermeneutic reconstruction of Altenhofer's interpretation, which focuses on mysteriousness (Rätselhaftigkeit), a concept from everyday language. As we go on to demonstrate, one encounters numerous difficulties when trying to operationalize this term, since Altenhofer's use of it is underspecified in a number of ways. Thus, for instance, and contrary to Altenhofer's suggestion, Kleist's sentences containing relativizing or perspectivizing phrases such as "it seemed" or "it was as if" (Altenhofer 2007, 45), when analyzed linguistically, by no means suggest a questioning or challenging of the events narrated, since the unreal quality of those German sentences relates only to the comparison in the subordinate clause, not to the respective main clause. Another indicator central to Altenhofer's ascription of mysteriousness is his concept of a complete facticity (lückenlose Faktizität) which does not seem to leave anything open (Altenhofer 2007, 45). Again, what exactly qualifies facticity as complete is left open, since Kleist's novella does indeed select for portrayal certain phenomena and actions within the narrated world (and not others).
The degree of factuality in Kleist's text may be higher than in other texts, but it is by no means complete. In the context of Altenhofer's interpretation, complete facticity may be taken to mean a narrative mode in which terrible events are reported in conspicuously sober and at times drastic language. Following the critical reconstruction of Altenhofer's use of terminology, the central terms and their relationships to one another are first explicated (in natural language), which already necessitates intensive conceptual work. We do so by implementing a hierarchical understanding of the terms discussed: the definition of one term uses other terms that also need to be defined and operationalized. In accordance with the requirements of computational text analysis, this hierarchy of terms should end in directly measurable terms - i.e., in terms that can be clearly identified on the surface of the text. This, however, raises the question of whether (and, if so, on the basis of which theoretical assumptions) the terminology of literary studies may be traced back in this way to text-surface phenomena. Following the pragmatic as well as theoretical discussion of this complex of questions, we indicate ways in which such definitions may be converted into procedures for manual or automatic recognition. In the case of manual recognition, the paradigm of annotation - as established and methodologically reflected upon in (computational) linguistics - is useful, and a well-controlled annotation process helps to further clarify the terms in question. The primary goal, however, is to establish a recognition rule by which individuals can intersubjectively and reliably identify instances of the term in question in a given text.
While it is true that applying this method to literary studies raises new challenges - such as the question of the validity and reliability of the annotations - these challenges are currently being researched intensively in the field of computational literary studies, which has produced a large and growing body of research to draw on. For computer-aided recognition, we examine two distinct approaches by way of example: 1) Operationalization guided by prior definitions and annotation rules benefits from the fact that each of its steps is transparent, can be validated and interpreted, and that existing tools from computational linguistics can be integrated into the process. In the scenario used here, these would be tools for recognizing and attributing character speech, for coreference resolution, and for the detection of events; all of these, in turn, may be based on machine learning, hand-crafted rules, or dictionaries. 2) In recent years, so-called end-to-end systems have become popular which, with the help of neural networks, infer target terms directly from a numerical representation of the data. These systems achieve superior results in many areas; however, their lack of transparency raises new questions, especially with regard to the interpretation of results. Finally, we discuss options for quality assurance and draw a first conclusion. Since numerous decisions have to be made in the course of operationalization, and since these decisions are, in practice, often justified pragmatically, the question quickly arises as to how good a given operationalization actually is.
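The dictionary- and rule-based pipeline approach mentioned above can be illustrated with a minimal sketch: a matcher for relativizing or perspectivizing phrases of the kind Altenhofer points to ("it seemed", "it was as if"). The phrase list and function name here are invented for illustration; a real operationalization would derive such a lexicon from explicit annotation guidelines.

```python
import re

# Hypothetical, minimal lexicon of German perspectivizing markers
# ("es schien" = it seemed, "als ob" = as if). Illustrative only.
PERSPECTIVIZING_PATTERNS = [
    r"\bes schien\b",
    r"\bals ob\b",
    r"\bes war, als\b",
    r"\bschien es\b",
]

def find_perspectivizing_phrases(text: str) -> list[tuple[int, str]]:
    """Return (offset, matched phrase) pairs for each rule that fires."""
    hits = []
    for pattern in PERSPECTIVIZING_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((match.start(), match.group()))
    return sorted(hits)

sentence = "Es war, als ob das Firmament einstürzte; es schien ihm ein Traum."
print(find_perspectivizing_phrases(sentence))
```

Each step of such a rule-based component is transparent: every hit can be traced back to a specific rule, which is exactly the validation and interpretation advantage the first approach claims over end-to-end systems.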
And since the tools borrowed from computational linguistics (especially so-called inter-annotator agreement) can only partially be transferred to computational literary studies, and since objective standards for the quality of a given implementation are difficult to find, it ultimately falls to the community of researchers and scholars to decide, on the basis of their research standards, which operationalizations they accept. At the same time, operationalization is the central link between computer science and literary studies, as well as a necessary component of a large part of the research done in computational literary studies. The advantage of a conscious, deliberate, and reflective operationalization practice lies not only in the fact that it can be used to achieve reliable quantitative results (or that a certain lack of reliability is at least a known factor); it also lies in its facilitation of interdisciplinary cooperation: in the course of operationalization, concrete sets of data are discussed, as are the methods for analyzing them, which together minimizes the risk of misunderstandings, false friends, and unproductive exchange more generally.
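The inter-annotator agreement mentioned above is typically quantified with chance-corrected coefficients such as Cohen's kappa. A minimal sketch of the two-annotator case follows; the label sequences are invented for illustration (two annotators marking sentences as mysterious "M" or not "N").

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented example: 10 sentences annotated as "mysterious" (M) or not (N).
a = ["M", "M", "N", "N", "M", "N", "N", "M", "N", "N"]
b = ["M", "N", "N", "N", "M", "N", "M", "M", "N", "N"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Because kappa corrects for chance agreement, a value like 0.583 here signals only moderate agreement despite 80% raw overlap - precisely the kind of caveat the article raises when transferring such metrics to literary annotation tasks.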

Item Type: Journal Article
Creators: Pichler, Axel; Reiter, Nils (Email and ORCID unspecified)
URN: urn:nbn:de:hbz:38-573873
DOI: 10.1515/jlt-2021-2008
Journal or Publication Title: J. Lit. Theory
Volume: 15
Number: 1-2
Page Range: pp. 1-30
Date: 2021
Publisher: WALTER DE GRUYTER GMBH
Place of Publication: BERLIN
ISSN: 1862-8990
Language: German
Faculty: Unspecified
Divisions: Unspecified
Subjects: no entry
Uncontrolled Keywords: Literary Theory & Criticism (multiple languages)
URI: http://kups.ub.uni-koeln.de/id/eprint/57387
