CHWP A.3 | Winder, "Reading the text's mind" |
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
T.S. Eliot, "Choruses from 'The Rock'"
All novel, innovative approaches to a field of study go through a period of scrutiny in which their methods and distinctive goals are debated. Computational approaches to interpretation have not been spared such debates, nor should they be. One instance can be found in Serge Lusignan's call for a moratorium on what was in 1985 the staple of computational criticism: concordances, indexes, and frequency lists. His complaint was that critics would soon be better served by on-line, interactive services and that, in any case, "no [critical] question has been fundamentally changed by text processing" (Lusignan 1985: 212).[1] Computers produce data, and a great deal of it, but unfortunately we have no guarantee that the data produced will be pertinent for any given inquiry into the nature of texts.
Lusignan suggested that our efforts would be better spent elsewhere: "Computer-assisted research on texts will make no real progress unless it concentrates on perfecting models of analysis that are specific to the electronic text" (Lusignan 1985: 211). Until such models exist, the status of computer-generated data will be that of a quotation, i.e. something that "only has an emblematic value for a [critical] question that is defined independently in the silent communion between the printed text and its interpreter" (Lusignan 1985: 212).
Though the world of computers has evolved at a considerable pace, computational criticism still seems to be at the same juncture that Lusignan so pessimistically described nearly ten years ago. We still lack a clear statement of what the electronic text means for interpretation. If Lusignan's analysis is accurate, electronic texts may mean very little indeed.
In this paper, I hope to contribute to the development of a conceptual framework for computational criticism by placing one of its central practices, lemmatisation, in the context of C.S. Peirce's general theory of signs. On the practical level, lemmatisation is an essential and distinctive method of computational criticism; Choueka and Lusignan have described it as "one of the most important and crucial steps in many non-trivial text-processing cycles" (Choueka & Lusignan 1985: 147). I will argue that it has a central theoretical position as well, which, if given a systematic treatment, will lead to a more coherent way of understanding what makes computational criticism distinctive among interpretative practices.
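For readers unfamiliar with the operation itself: lemmatisation maps each inflected form attested in a text to its dictionary headword (lemma), so that a concordance can group "lost" and "lose", or "living" and "live", under one entry. The following is a minimal sketch only, assuming a hand-built form-to-lemma table; it is not the procedure described by Choueka and Lusignan, and real systems must disambiguate forms in context.

```python
# Minimal lemmatisation sketch: map inflected forms to headwords (lemmas)
# via a hand-built lookup table. Real systems resolve ambiguity in context
# (e.g. "lost" as verb vs. adjective); this illustration does not.
from collections import defaultdict

LEMMAS = {
    "lost": "lose",
    "living": "live",
    "is": "be",
}

def lemmatise(tokens):
    """Return (form, lemma) pairs; unknown forms fall back to themselves."""
    return [(t, LEMMAS.get(t.lower(), t.lower())) for t in tokens]

line = "Where is the Life we have lost in living"
pairs = lemmatise(line.split())

# Group the attestations under their lemmas, as a concordance keyed by
# headword would: each quotation of a form becomes evidence for its lemma.
index = defaultdict(list)
for form, lemma in pairs:
    index[lemma].append(form)
```

The point of the sketch is that each grouped form is an attestation, a quotation gathered under a headword, which is why lemmatisation places quotations at the centre of the interpretative process rather than at its margins.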
The role of a semiotic framework is simply to organise and develop the terminology we use to describe our interpretative practices. While such meta-interpretative reflection may seem like an awkward, painfully abstract, and useless digression from the real work of interpreting texts, it is unavoidable since, in a very real way, a method of inquiry is substantially a working, growing terminology. The Russian formalists Yuri Shcheglov and Alexander Zholkovsky describe the relation between a theory and its terminology in these terms:
One of the crucial problems in the establishment of a science is the development of a special language in which it formulates the results of its inquiries. This language must be unambiguous (i.e. understood in the same way by all specialists), explicit and formally strict. Only then will it be able to perform its function of accumulating, ordering and storing information. The function of a metalanguage does not consist merely in introducing special terminology and notation, but in providing a conceptual framework capable of adequately reflecting the described object. In other words, a metalanguage, as a means for making statements about the object, is also the means of existence of a scientific theory. [...] An example of a result in chemistry is the periodic table of elements: it is not necessary to consult Mendeleev's works in order to use it, since the table is given in every textbook. Thus the metalanguage makes it possible for a science to keep a strict account of what it has achieved, to make use of the results of previous studies and to reduce the likelihood of repetitious research. (Shcheglov & Zholkovsky 1987: 19-20)
If we are to respond to such negative appraisals as Lusignan's, we must systematise the fundamental procedures and findings of computational criticism. In that way we can avoid repetitious research and forestall misunderstandings about the meaning and value of our methodological constructs. For instance, Lusignan seems to suggest that a quotation has a subordinate role in interpretation. On the contrary, I hope to show that the unique approach to textual meaning practised in computational criticism inevitably makes quotations -- taken as attestations -- central to the interpretative process.
Such terminological clarifications are necessary since most traditional interpretative constructs take on a new meaning in the electronic medium. The following discussion of lemmatisation is thus inevitably but a small step in the more general effort to translate interpretative practices and terminology for the new medium.[2]
[1] Quotations from this text are my translation.
[2] See Winder 1993 for a discussion of the icon, index and symbol dimensions of electronic texts.