Category: Quant Theory

  • What did Stanley Fish count, and when did he start counting it?

We have been observing the reaction to Stanley Fish’s critique of the Digital Humanities with great interest. Here is the full text of our comment, which could only be partially displayed in the New York Times comment window.

    You know you’ve come up in the world if you’re being needled by Stanley Fish in The New York Times. Having done our share of work in the data mines, we believe Fish is right to insist that nothing in a text becomes evidence unless you have an interpretation which makes that evidence count. No amount of digital tabulation will substitute for a coherent, defensible reading.

    As traditionally trained humanities scholars who use computers to study Shakespeare’s genres, we have pointed out repeatedly that nothing in literary studies will be settled by an algorithm or visualization, however seductively colorful. We have also argued that any pattern found through an iterative, computer-assisted analysis is meaningless without a larger interpretive framework in which to view it. It is the job of literary critics and historians to provide those interpretations, something they do by returning to the text and re-reading it with fresh eyes.

    The job of digital tools is to draw our attention to evidence impossible or hard to see during normal reading, prompting us to ask new questions about our texts. This ability to redirect attention and pose new questions is the strong suit of certain kinds of digital humanities research. Indeed, we believe the addition of a digital prosthetic to our insistently human reading complements the skills of close textual analysis that are the staple of literary training. Not everyone in the so-called Digital Humanities community would agree with this position, but we believe the old and new techniques are entirely compatible.

What does it matter why Stanley Fish started minding his ps and bs in Milton? The point is that he has produced a plausible interpretation of Milton’s work based on evidence that fits his larger claim. The fact that an algorithm (“count ps and bs”) has directed his attention to something he hadn’t noticed doesn’t make the resulting pattern gibberish. You bet there are interesting patterns that show up in Milton when you mind his ps and bs. They existed before you counted them, and they exist after. However he found it, Fish has used that patterning to produce an interesting argument about the role of sound in Milton’s prose. And he has the evidence to back this argument up. In the end, he’s doing what most literary critics do in their work: creating an interpretation that builds meaningfully on evidence in the text. Is there really any other way?

    Yours sincerely,

    Jonathan Hope, Strathclyde University

    Michael Witmore, Folger Shakespeare Library

You can view a sample of our work here.

  • Finding the Sherlock in Shakespeare: some ideas about prose genre and linguistic uniqueness

    An unexpected point of linguistic similarity between detective fiction and Shakespearean comedy recently led me to consider some of the theoretical implications of tools like DocuScope, which frequently identify textual similarities that remain invisible in the normal process of reading.

    A Linguistic Approach to Suspense Plot

Playing around with a corpus of prose, we discovered that the linguistic signatures associated with narrative plot are surprisingly distinctive. Principal Component Analysis performed on the linguistic features counted by DocuScope suggested the following relationship between the items in the corpus:

    I interpreted the two strongest axes of differentiation seen in the graph (PC 1 and PC 2) as (1) narrative, and (2) plot. The two poles of the narrative axis are Wuthering Heights (most narrative) and The Communist Manifesto (least narrative). The plot axis is slightly more complicated. But on the narrative side of the spectrum, plot-driven mysteries like “The Speckled Band” and The Canterville Ghost score high on plot, while the least plotted narrative is Samuel Richardson’s Clarissa (9 vols.). For now, I won’t speculate about why Newton’s Optics scores so astronomically high on plot. It is enough that when dealing with narrative, PC 2 predicts plot.
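DocuScope’s categories are proprietary, so the procedure can only be sketched here with an invented document-by-feature count matrix; the text names are from our corpus, but every number below is illustrative, not our actual data. The mechanics of the projection, though, are exactly these: center the counts, decompose, and read off each text’s position on the two strongest axes.

```python
import numpy as np

# Hypothetical counts of three rhetorical features in five texts.
# Rows are texts, columns are features; all values are invented.
texts = ["Wuthering Heights", "Communist Manifesto", "Speckled Band",
         "Clarissa", "Opticks"]
counts = np.array([
    [42.0,  3.0,  9.0],
    [ 2.0, 31.0,  4.0],
    [38.0,  5.0, 27.0],
    [35.0,  4.0,  2.0],
    [ 4.0, 28.0, 30.0],
])

# PCA: center each feature, then take the singular value decomposition.
centered = counts - counts.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project every text onto the two strongest axes of variation.
scores = centered @ Vt[:2].T   # shape: (5 texts, 2 components)

for name, (pc1, pc2) in zip(texts, scores):
    print(f"{name:>20}: PC1 = {pc1:+7.2f}, PC2 = {pc2:+7.2f}")
```

Note that the mathematics stops at the projection. Deciding that PC 1 “means” narrative and PC 2 “means” plot is interpretive work done afterward, by looking at which texts anchor each pole.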

    The fact that something as qualitative and amorphous as plot has a quantitative analogue leads to several questions about the meaning of the data tools like DocuScope turn up.

    Linguistic Plot without Actual Plot

    Because linguistic plot is quantifiable, it allows us to look for passages where plot is present to a relative degree. Given a large enough sample, it is more than likely that some relatively plotted passages will occur in texts that are not plotted in any normal sense. This would at minimum raise questions about how to handle genre boundaries in digital literary research.

    Our relative-emplotment test (done in TextViewer) yielded intuitive results when performed on the dozen or so stories in The Adventures of Sherlock Holmes: the passages exhibiting the strongest examples of linguistic plot generally narrated moments of discovery, and moved the actual plot forward in significant ways. Often, these passages showed Holmes and Watson bursting into locked rooms and finding bodies.
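TextViewer itself cannot be reproduced here, but the shape of the test can be: slide a fixed-size window across a text, score each window by the density of whatever features stand in for “linguistic plot,” and surface the highest-scoring passage. The marker words below are invented stand-ins for DocuScope’s categories, and the sample sentence is my own pastiche, not Conan Doyle’s.

```python
# Hypothetical stand-ins for the features that load on "linguistic plot".
PLOT_MARKERS = {"suddenly", "door", "revealed", "discovered", "body", "burst"}

def most_plotted_window(text: str, size: int = 12) -> tuple[int, float]:
    """Return (start index, score) of the window densest in plot markers."""
    words = text.lower().split()
    best_start, best_score = 0, -1.0
    for start in range(max(1, len(words) - size + 1)):
        window = words[start:start + size]
        score = sum(w.strip(".,!?") in PLOT_MARKERS for w in window) / size
        if score > best_score:
            best_start, best_score = start, score
    return best_start, best_score

sample = ("Holmes lit his pipe and said little for an hour. "
          "Then suddenly he burst through the locked door and "
          "discovered the body, revealed at last to Watson.")
start, score = most_plotted_window(sample)
print(start, round(score, 2))
```

Even on this toy scale, the test behaves as described above: the densest window lands on the moment of discovery, not on the hour of pipe-smoking.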

When we performed the same test on the Shakespeare corpus, something intriguing happened. The passages identified by TextViewer as exhibiting linguistic plot looked very different from the corresponding passages in Sherlock Holmes. There were no dead bodies, no broken-down doors, and no exciting discoveries. Nonetheless, the ‘plotted’ Shakespeare scenes were remarkably consistent with each other. Perhaps most significant in the context of their genre, these scenes had a strong tendency to show characters putting on performances for other characters. And, in a detail that is fascinating even though it is probably a red herring, the ‘plotted’ Shakespeare scenes had an equally strong tendency to involve fairies.

The consistent nature of the ‘plotted’ Shakespeare scenes suggests that the linguistic signatures associated with plot in Sherlock Holmes may have different, but equally specific, effects in other genres. The next step would be to find a meaningful correspondence between the two seemingly disparate literary devices that accompany linguistic plot – detectives bursting into rooms to solve murders, and plays within plays involving fairies. I have some hunches about this. But in many ways the more important question is what is at stake in using DocuScope to identify such unexpected points of overlap.

    Enough measurable links between seemingly unlike texts could suggest an invisible web of cognates, which share an underlying structure despite their different appearances and literary classifications. Accordingly, we might hypothesize that reading involves selective ignorance of semantic similarities that could otherwise lead to the socially deviant perception that A Midsummer Night’s Dream resembles a Sherlock Holmes mystery.

The question, then, is this: if the act of reading consists in part of ignoring unfruitful similarities, then what happens when these similarities nonetheless become apparent to us? Looking back at the corpus graph, we begin to see all sorts of possibilities, many of which would be enough to make us literary outcasts if voiced in the wrong company. Could Newton’s Optics contain the most exciting suspense plot no one has ever noticed? Could Martin Luther be secretly more sentimental than Clarissa?

    Estranging Capacities of Digital Cognates

I have been using the term ‘cognate’ to describe the relationship between linguistically similar but otherwise dissimilar texts. These correspondences will only be meaningful if we can connect them in a plausible way to our readerly understanding of the texts or genres in question. In the case of detective fiction and Shakespearean comedy, this remains to be seen. But our current lack of an explanation does not mean we should feel shy about pursuing the cognates computers direct us to. My analogy is the pop-culture ritual of watching The Wizard of Oz while starting the Pink Floyd album The Dark Side of the Moon on the third roar of the MGM lion. The movie and the record sync up in a plausible pattern, prompting the audience to grasp a connection between the cognate genres of children’s movies and psychedelic rock.

    If digital methods routinely direct our attention to patterns we would never notice in the normal process of reading, then we can expect them to turn up a large number of such cognates. If we want to understand the results these tools are turning up, we should develop a terminology and start thinking about implications – not just for the few correspondences we can explain, but also for the vast number we cannot explain, at least right now.

  • The Ancestral Text

    Rosamond Purcell, "The Book, the Land"

In this post I want to understand the consequences of “massive addressability” for “philosophies of access”–philosophies which assert that all beings exist only as correlates of our own consciousness. The term “philosophy of access” is used by members of the Speculative Realist school: it seems to have been coined largely as a means of rejecting everything the term names. Members of this school dismiss the idea that any speculative analysis of the nature of beings can be replaced by an apparently more basic inquiry into how we access the world, an access obtained either through language or consciousness. The major turn to “access” occurs with Kant, but the move is continued in an explicitly linguistic register by Heidegger, Wittgenstein, Derrida, and a range of post-structuralists.

    One reason for jettisoning the priority of access, according to Ray Brassier, is that it violates “the basic materialist requirement that being, though perfectly intelligible, remain irreducible to thought.” As will become clear below, I am sympathetic to this materialist requirement, and more broadly to the Speculative Realist project of dethroning language as our one and only mode of access to the world. (There are plenty of ways of appreciating the power and complexity of language without making it the wellspring of Being, as some interpreters of Heidegger have insisted.) Our quantitative work with texts adds an unexpected twist to these debates: as objects of massive and variable address, we grasp things about texts in precisely the ways usually reserved for non-linguistic entities. When taken as objects of quantitative description, texts possess qualities that–at some point in the future–could be said to have existed in the present, regardless of our knowledge of them. There is thus a temporal asymmetry surrounding quantitative statements about texts: if one accepts the initial choices about what gets counted, such statements can be “true” now even if they can only be produced and recognized later. Does this asymmetry, then, mean that language itself, “though perfectly intelligible, remain[s] irreducible to thought?” Do iterative methods allow us to satisfy Brassier’s materialist requirement in the realm of language itself?

Let us begin with the question of addressability and access. The research described on this blog involves the creation of digitized corpora of texts and the mathematical description of elements within those corpora. These descriptions obtain at varying degrees of abstraction (nouns describing sensible objects, past forms of verbs with an auxiliary, etc.). If we say that we know something quantitatively about a given corpus, then, we are saying that we know it on the basis of a set of relations among elements that we have provisionally decided to treat as countable unities. Our work is willfully abstract in the sense that, at crucial moments of the analysis, we foreground relations as such, relations that will then be reunited with experience. When I say that objects of the following kind – “Shakespearean texts identified as comedies in the First Folio” – contain more of this type of thing–first and second person singular pronouns–than objects of a different kind (Shakespeare’s tragedies, histories), I am making a claim about a relation between groups and what they contain. These groupings and the types of things that we use to sort them are provisional unities: the circle we draw around a subset of texts in a population could be drawn another way if we had chosen to count other things. And so, we must recognize several reasons why claims about these relations might always be revised.

    Every decision about what to count offers a caricature of the corpus and the modes of access this corpus allows. A caricature is essentially a narrowing of address: it allows us to make contact with an object in some of the ways Graham Harman has described in his work on vicarious causation. One can argue, for example, that the unity “Shakespeare’s Folio comedies” is really a subset of a larger grouping, or that the group can itself be subdivided into smaller groups. Similarly, one might say that the individual plays in a given group aren’t really discrete entities and so cannot be accurately counted in or out of that group. There are certain words that Hamlet may or may not contain, for example, because print variants and multiple sources have made Hamlet a leaky unity. (Accommodating such leaky unities is one of the major challenges of digital text curation.) Finally, I could argue that addressing these texts on the level of grammar–counting first and second person singular pronouns–is just one of many modes of address. Perhaps we will discover that these pronouns are fundamentally linked to semantic patterns that we haven’t yet decided to study, but should. All of these alternatives demonstrate the provisional nature of any decision to count and categorize things: such decisions are interpretive, which is why iterative criticism is not going to put humanities professors out of business. But such counting decisions are not–and this point is crucial–simply another metaphoric reduction of the world. PCA, cluster analysis and the other techniques we use are clearly inhuman in the number of comparisons they are able to make. The detour through mathematics is a detour away from consciousness, even if that detour produces findings that ultimately converge with consciousness (i.e., groupings produced by human reading).

Once the counting decisions are made, our claims to know something in a statistical sense about texts boil down to a claim that a particular set of relations holds among entities in the corpus. Indeed, considered mathematically, the things we call texts, genres, or styles simply are such sets of relations–the mathematical reduction being one of many possible caricatures. But counting is a very interesting caricature: it yields what is there now–a real set of relations–but is nevertheless impossible to contemplate at present. Once claims about texts become mathematical descriptions of relations, such statements possess what the philosopher Quentin Meillassoux calls ancestrality, a quality he associates primarily with statements about the natural world. Criticizing the ascendance of what he calls the Kantian dogma of correlationism—the assumption that everything which can be said “to be” exists only as a correlate of consciousness—Meillassoux argues that the idealist or critical turn in Continental philosophy has impoverished our ability to think about anything that exceeds the correlation between mind and world. This “Great Outdoors,” he goes on to suggest, is a preserve that an explicitly speculative philosophy must now rediscover, one which Meillassoux believes becomes available to us through mathematics. So, for example, Meillassoux would agree with the statement, “the earth existed 4.5 billion years ago,” precisely because it can be formulated mathematically using measured decay rates of radioactive isotopes. The statement itself may be ideal, but the reality it points to is not. What places The Great Outdoors out of doors, then, is its indifference to our existence or presence as an observer. Indeed, for Meillassoux, it is only those things which are “mathematically conceivable” that exceed the post-Kantian idealist correlation. For Meillassoux,

    all those aspects of the object that can be formulated in mathematical terms can be meaningfully conceived as properties of the object in itself.

    Clearly such a statement is a goad for those who place mind or natural language at the center of philosophy. But the statement is also a philosophical rallying cry: be curious about objects or entities that do not reference human correlates! I find this maxim appealing in the wake of the “language is everything” strain of contemporary theory, which is itself a caricature of the work of Wittgenstein, Derrida and others. Such exaggerations have been damaging to those of us working in the humanities, not least because they suggest that our colleagues in the sciences do nothing but work with words. By making language everything–and, not accidentally, making literary studies the gatekeeper of all disciplines–this line of thought amounts to a new kind of species narcissism. Meillassoux and others are finding ways to not talk about language all the time, which seems like a good thing to me.

But would Meillassoux, Harman and other Speculative Realists consider texts to be part of The Great Outdoors? Wouldn’t they have to? After all, statements about groupings in the corpus can be true now even when there is no human being to recognize that truth as a correlate of thought. Precisely because texts are susceptible to address and analysis on a potentially infinite variety of levels, we can be confident that a future scholar will find a way of counting things that turns up a new-but-as-yet-unrecognized grouping. Human reading turned up such a thing when scholars in the late nineteenth century “discovered” the genre of Shakespeare’s Late Romances. (Hope and I have, moreover, re-described these groupings statistically.) As our future mathematical sleuth might do a century from now, nineteenth-century scholars were arguing that Romance was already a real feature of the Shakespearean corpus, albeit one that no one had yet recognized. They had, in effect, picked out a new object by emphasizing a new set of relations among elements in a collection of words. Couldn’t we expect another genre to emerge from this sort of analysis–a Genre X, let’s say–given sufficient time and resources? Would we accept such a genre if derived through iterative means?

I can imagine a day, 100 years from now, when we have different dictionaries that address the text on levels we have not thought to explore at present. What if someone creates a dictionary that allows me to use differences in a word’s linguistic origin (Latinate, Anglo-Saxon, etc.) to relate the contents of one text to another? What if a statistical procedure is developed that allows us to “see” groupings we could recognize today but simply have not developed the mathematics to expose? When you pair the condition of massive addressability with (a) the possibility of new tokenizations (new elements or strata of address) or (b) the possibility that all token counts past and future can be subjected to new mathematical procedures, you arrive at a situation in which something that is arguably true now about a collection of texts can only be known in the future.

    And if something can be true about an object now without itself being a correlate of human consciousness, isn’t that something part of the natural world, the one that is supposed to be excluded from the charmed circle of the correlation? Does this make texts more like objects in nature, or objects in nature more like texts? Either way, The Great Outdoors has become larger.

  • Text: A Massively Addressable Object

Phone Book Dress by Jolis Paons

At the Working Group for Digital Inquiry at Wisconsin, we’ve just begun our first experiment with a new order of magnitude of texts. Hope and I started working with 36 items about 6 years ago when we began to study Shakespeare’s First Folio plays. Last year we expanded to 320 items with the help of Martin Mueller at Northwestern, exploring the field of early modern drama. Now that UW has negotiated a license with the University of Michigan to begin working with the files from the Text Creation Partnership (TCP, which contains over 27,000 items from early modern print), we can up the number again. By January we will have begun our first 1,000-item experiment, spanning items printed in Britain and North America from 1530 to 1809. Robin Valenza and I, along with our colleagues in Computer Science and the Library, will begin working up the data in the spring. Stay tuned for results.

    New experiments provide opportunities for thought that precede the results. What does it mean to collect, tag and store an array of texts at this level of generality? What does it mean to be an “item” or “computational object” within this collection? What is such a collection? In this post, I want to think further about the nature of the text objects and populations of texts we are working with.

What is the distinguishing feature of the digitized text – that ideal object of analysis considered in all of its hypothetical relations with other ideal objects? The question itself goes against the grain of recent materialist criticism, which focuses on the physical existence of books and the practices involved in making and circulating them. Unlike someone buying an early modern book in the bookstalls around St. Paul’s four hundred years ago, we encounter our TCP texts as computational objects. That doesn’t mean that they are immaterial, however. Human labor has transformed them from microfilm facsimiles of real pages into diplomatic-quality digital transcripts, marked up in TEI so that different formatting features can be distinguished. That labor is as real as any other.

What distinguishes this text object from others? I would argue that a text is a text because it is massively addressable at different levels of scale. Addressable here means that one can query a position within the text at a certain level of abstraction. In an earlier post, for example, I argued that a text might be thought of as a vector through a meta-table of all possible words. Why is it possible to think of a text in this fashion? Because a text can be queried at the level of single words and then related to other texts at the same level of abstraction: the table of all possible words could be defined as the aggregate of points of address at a given level of abstraction (the word, as in Google’s new n-gram corpus). Now, we are discussing ideal objects here: addressability implies different levels of abstraction (character, word, phrase, line, etc.) which are stipulative or nominal: such levels are not material properties of texts or Pythagorean ideals; they are, rather, conventions.
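The “vector through a meta-table of all possible words” can be made concrete in a few lines. A minimal sketch, assuming nothing about the real corpus: take the meta-table to be the union vocabulary of a toy collection of two invented scraps of text; each text then becomes a vector of counts over that shared axis of address.

```python
from collections import Counter

# Two toy texts standing in for items in a corpus.
texts = {
    "A": "to be or not to be",
    "B": "not a whit we defy augury",
}

# The meta-table: every word that occurs anywhere in the collection,
# in a fixed order, so that each position is a shared point of address.
vocab = sorted({w for t in texts.values() for w in t.split()})

# Each text is now a vector through that table: its count at every address.
vectors = {name: [Counter(t.split())[w] for w in vocab]
           for name, t in texts.items()}

print(vocab)
print(vectors["A"])  # text A's count at each point of address
```

Because both vectors run through the same table, any mathematical comparison between them (distance, correlation, projection) is automatically a comparison at one stipulated level of abstraction: the word.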

    Here’s the twist. We have physical manifestations of ideal objects (the ideal 1 Henry VI, for example), but these manifestations are only provisional realizations of that ideal. (I am using the word manifestation in the sense advanced in OCLC’s FRBR hierarchy.) The book or physical instance, then, is one of many levels of address. Backing out into a larger population, we might take a genre of works to be the relevant level of address. Or we could talk about individual lines of print; all the nouns in every line; every third character in every third line. All of this variation implies massive flexibility in levels of address. And more provocatively: when we create a digitized population of texts, our modes of address become more and more abstract: all concrete nouns in all the items in the collection, for example, or every item identified as a “History” by Heminges and Condell in the First Folio. Every level is a provisional unity: stable for the purposes of address, but also: stable because it is the object of address. Books are such provisional unities. So are all the proper names in the phone book.
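These varying levels of address are easy to operationalize once the text is stored electronically. A small sketch of three of the modes mentioned above – words, lines, and “every third character in every third line” – run over four lines of Feste’s song from Twelfth Night:

```python
text = """When that I was and a little tiny boy,
With hey, ho, the wind and the rain,
A foolish thing was but a toy,
For the rain it raineth every day."""

# One level of address: the text as a sequence of words.
words = text.split()

# Another level: the text as a sequence of lines.
lines = text.splitlines()

# A third, deliberately arbitrary level: every third character
# in every third line -- odd, but perfectly well defined.
every_third = [line[::3] for line in lines[::3]]

print(len(words), len(lines), len(every_third))
```

The point of the third mode is exactly its arbitrariness: nothing in the physical text privileges it, yet it is as stable an object of address as “line” or “word” once we stipulate it.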

The ontological status of the individual text is the same as that of the population of texts: both are massively addressable, and when they are stored electronically, we are able to act on this flexibility in more immediate ways through iterative searches and comparisons. At first glance, this might seem like a Galilean insight, similar to his discipline-collapsing claim that the laws which apply to the heavens (astronomy) are identical with the ones that apply to the sublunar realm (physics). But it is not.

    Physical texts were already massively addressable before they were ever digitized, and this variation in address was and is registered at the level of the page, chapter, the binding of quires, and the like. When we encounter an index or marginal note in a printed text — for example, a marginal inscription linking a given passage of a text to some other in a different text — we are seeing an act of address. Indeed, the very existence of such notes and indexes implies just this flexibility of address.

    What makes a text a text – its susceptibility to varying levels of address – is a feature of book culture and the flexibility of the textual imagination. We address ourselves to this level, in this work, and think about its relation to some other. “Oh, this passage in Hamlet points to a verse in the Geneva bible,” we say. To have this thought is to dispose relevant elements in the dataset in much the same way a spreadsheet aggregates a text in ways that allow for layered access. A reader is a maker of such a momentary dispositif, and reading might be described as the continual redisposition of levels of address in this manner. We need a phenomenology of these acts, one that would allow us to link quantitative work on a culture’s “built environment” of words to the kinesthetic and imaginative dimensions of life at a given moment.

    A physical text or manifestation is a provisional unity. There exists a potentially infinite array of such unities, some of which are already lost to us in history: what was a relevant level of address for a thirteenth century monk reading a manuscript? Other provisional unities can be operationalized now, as we are doing in our experiment at Wisconsin, gathering 1000 texts and then counting them in different ways. Grammar, as we understand it now, affords us a level of abstraction at which texts can be stabilized: we lemmatize texts algorithmically before modernizing them, and this lemmatization implies provisional unities in the form of grammatical objects of address.
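Lemmatization is itself a tidy example of a provisional unity in miniature. A toy sketch, with a hand-written lemma table standing in for the trained taggers a real pipeline would use; the principle is the same either way: many surface forms, one grammatical object of address.

```python
# A toy, hand-written lemma table. Real pipelines induce this mapping
# algorithmically, but the effect is identical: each entry stipulates
# that several surface forms count as one addressable unity.
LEMMAS = {"loves": "love", "loved": "love", "loving": "love",
          "bodies": "body", "raineth": "rain"}

def lemmatize(tokens):
    """Map each surface form to its provisional grammatical unity."""
    return [LEMMAS.get(t.lower(), t.lower()) for t in tokens]

print(lemmatize(["Loves", "loved", "bodies", "wind"]))
```

The table is stipulative through and through – one could split “loving” off as its own unity tomorrow – which is precisely what makes the resulting counts provisional rather than material properties of the text.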

    One hundred years from now, the available computational objects may be related to one another in new ways. I can only imagine what these are: every fourth word in every fourth document, assuming one could stabilize something like “word length” in any real sense. (The idea of a word is itself an artifact of manuscript culture, one that could be perpetuated in print through the affordances of moveable type.) What makes such thought experiments possible is, once again, the addressability of texts as such. Like a phone book, they aggregate elements and make these elements available in multiple ways. You could even think of such an aggregation as the substance of another aggregation, for example, the phone book dress designed by Jolis Paons above. But unlike a phone-book, the digitized text can be reconfigured almost instantly into various layers of arbitrarily defined abstraction (characters, words, lines, works, genres). The mode of storage or virtualization is precisely what allows the object to be addressed in multiple ways.

Textuality is massive addressability. This condition of texts is realized in various manifestations, supported by different historical practices of reading and printing. The material affordances of a given medium put constraints on such practices: the practice of “discontinuous reading,” for example, develops alongside the fingerable discrete leaves of a codex. But addressability as such – this is a condition rather than a technology, action or event. And its limits cannot be exhausted at a given moment. We cannot, in a Borgesian mood, query all of the possible datasets that will appear in the fullness of time. And we cannot import future query types into the present. But we can and do approximate such future searches when we automate our modes of address in unsupervised multivariate statistical analysis – for example, factor analysis or PCA. We want all the phonebooks. And we can simulate some of them now.