Texts as Objects II: Object Oriented Philosophy. And Criticism?
In the previous post I laid out several questions about the nature of texts, objects and interpretation that arise when we subject texts — for example, the Folio plays of Shakespeare — to statistical analysis. Above is a sketch of two texts, T1 and T2 (forgive the hand-drawn visuals), that exist as documents we might read. This is our point of contact as scholars, and we know where to take it from here. But for machine analysis, these texts are transformed into objects — relational, formalized mathematical entities — which means that they are containers of containers of things. So let’s think this way about texts for a moment.
T1 and T2 are both texts of 1000 words in length. We can think of these texts as a set of tokens drawn from a larger set of tokens that represents the totality of English words at a given moment. (Such a totality is an abstraction, just as Saussure’s langue was an abstraction; let’s leave that aside for now.) Now a mathematically-minded critic might say the following: Table 1 is a topologically flat representation of all possible words in English, arrayed in a two-dimensional matrix. The text T1 is a vector through that table, a needle that carries the “thread” through various squares on the surface, like someone embroidering a quilt. One possible way of describing the text, then, would be to chart its movement through this space, like a series of stitches.
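The needle-and-thread picture can be made concrete with a toy sketch. Everything here is invented for illustration: a tiny alphabetical lexicon stands in for the hypothetical Table, each word gets a cell, and a text becomes the sequence of cells its tokens visit.

```python
import math

# A tiny stand-in for the "totality of English words," arranged alphabetically.
lexicon = sorted(["the", "cat", "sat", "on", "mat", "a", "dog", "ran", "and"])
side = math.ceil(math.sqrt(len(lexicon)))  # width of the square table

# Map each word to its (row, column) cell in the alphabetical table.
cell = {word: divmod(i, side) for i, word in enumerate(lexicon)}

text = "the cat sat on the mat".split()

# The text as a "stitch history": the path its tokens trace through the table.
path = [cell[token] for token in text]
print(path)
```

The printed path is exactly the "vector through the table": a record of where the needle went, in order.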
Generalizations about the syntax and meaning of that continuously threading line would be generalizations about two things: the sequence of stitches and the significance of different regions in the underlying quilt matrix. I have arranged the words alphabetically in this table, which means that a “stitch history” of movements around the table would not be very revealing. But the table could be rendered in many other ways (it could be rendered three- or multi-dimensionally, for example). What if I put all of the verbs in the lower left-hand corner (southwest) of the table and all of the pronouns in the upper right (northeast)? Based on this act of spatial classification, you could then come up with statements like: “I see many threads passing between the northeast and southwest,” a meaningless descriptive statement unless you add: “this is because verbs are here and pronouns are there, and they tend to follow one another in written and spoken English.” So this spatializing approach to textual analysis would require three things: (1) arrangement of the matrix in a meaningful way; (2) description of the movement through the matrix; and (3) analysis of patterns in that movement. Based on (1) you might have something interesting to say about (3), and as the note says, a text is a “vector through a hypothetical Table” and “a theory of rhetoric, grammar, semantics is an attempt to rationalize this vector — as sequence — by regrouping the words in the table by region.” In effect, any mathematical or container-based analysis of a text must ultimately be some kind of mapping of a vector-space (semantic, ideological, grammatical, generic, etc).
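The three steps can be sketched in a few lines. The regions and the word-to-region assignments below are invented, not a real grammar; the point is only the shape of the procedure.

```python
from collections import Counter

# (1) Arrange the matrix meaningfully: pronouns "northeast," verbs "southwest."
# These assignments are illustrative only.
region_of = {
    "she": "NE", "he": "NE", "they": "NE",
    "ran": "SW", "sat": "SW", "laughed": "SW",
}

text = "she ran he sat they laughed".split()

# (2) Describe the movement: the sequence of regions the thread passes through.
movement = [region_of.get(token, "other") for token in text]

# (3) Analyze patterns in that movement: count transitions between regions.
transitions = Counter(zip(movement, movement[1:]))
print(transitions)
```

A large count for the ("NE", "SW") transition is the quantitative form of "many threads pass between the northeast and southwest"; the arrangement in step (1) is what makes that count interpretable.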
Now, Docuscope is itself a built form of this type of container-based analysis, one that eliminates the temporal dimension of “stitching” described above by transforming the hypothetical table into buckets or classes of words and then decanting the text into those buckets. Instead of regional movement, we get inclusion or exclusion of words (strings) from classes of words. The architecture of the classes matters, of course, since only if that architecture is good will we find patterns that we recognize and understand, understanding being the ultimate goal here. (It is also possible to simply look for correlated patterns among documents that might allow someone to find an entire class of objects based on a few tokens they already know — a very small “class” — as Google does; but finding is not criticism.) So what is a text in the eyes of Docuscope, or for that matter, any device that tags documents? One answer is that the text “is” the items circled above M1 and M2: words or sequences of words that have been classed into buckets. At the level of M1 and M2, the text becomes a set of local subsets, each of which contains a number of tokens. Statistical analysis of this partitioned object yields quantitative relations — R1, R2 and R3 — which differentiate one text from another.
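A minimal sketch of this "decanting" operation, in the spirit of Docuscope but not using its real categories (the two buckets below are invented): sequence is discarded entirely, and each token either falls into a class or falls out of the analysis.

```python
from collections import Counter

# Invented classes for illustration; Docuscope's actual architecture
# comprises many more categories, defined very differently.
buckets = {
    "FirstPerson": {"i", "me", "my", "we"},
    "Negation": {"not", "never", "no"},
}

def decant(text):
    """Pour a text into the buckets, keeping only per-class counts."""
    counts = Counter()
    for token in text.lower().split():
        token = token.strip(".,;:!?")
        for name, members in buckets.items():
            if token in members:
                counts[name] += 1
    return counts

print(decant("I never said no, and we did not stay"))
```

Note what is lost: the output records that a text contains three negations and two first-person tokens, but not where they occur or in what order, which is exactly the trade the bucket model makes.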
Now for the philosophical question, the one where object oriented philosophy might be useful: when asked to describe the nature of the statistical entity undergoing analysis here (the data object rendered by Docuscope and then explored within R), do we say that it is simply the local contents (M1, M2) of the containers (T1 and T2)? If I begin by saying that the being of this object is, rather, the structure of these elements in their containers — a better answer, I think — then I probably mean that T1 and T2 are really the sum of all relations that can be posited (R1, R2, R3) among rendered elements (M1, M2). This rather Leibnizian-sounding answer suggests that a text’s existence is ultimately differential: it is the sum of that object’s relations with all other objects. The statistical analysis of texts would be the quantitative description of this totality of relations given a set of classes — classes that we, as humanists, want to debate because they may be the source of any meaning in the result (because a certain kind of meaning or “purpose in pattern” is distributed into the classes).
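The "differential" view can be sketched too. Once decanted, each text is just a vector of class counts, and a relation like R1 is a pairwise comparison of those vectors. The classes and counts below are invented, and cosine similarity stands in for whatever measure one might actually choose in R.

```python
import math

# Two texts reduced to (invented) class counts after bucketing.
t1 = {"FirstPerson": 12, "Negation": 3, "Description": 40}
t2 = {"FirstPerson": 2, "Negation": 9, "Description": 44}

def cosine(a, b):
    """One possible quantitative relation between two texts-as-vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

print(round(cosine(t1, t2), 3))
```

On this model the texts exist, statistically speaking, only in such comparisons: change the classes and every relation changes with them, which is why the architecture of the classes is where the humanist argument lives.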
But here is where I think Harman adds something crucial. If the argument he has been developing in Tool Being, Prince of Networks and elsewhere is correct, then an object of this or any other kind would not be the sum of its relations with other objects, as is the case in Latour’s analysis. To this relational model, Harman opposes the metaphysical integrity of the object over and beyond its relations, an integrity which holds that object together in its “domestic” being over and above its relational “alliances.” In Prince of Networks, he writes:
I hold that there is an absolute distinction between the domestic relations a thing needs to some extent in order to exist [see above, M1, M2] and the external alliances that it does not need [above, R1, R2, R3]. But the actor itself [i.e., object of analysis] cannot be identified with either. An object cannot be exhausted by a set of alliances. But neither is it exhausted by a summary of its pieces, since any genuine object will be an emergent reality over and above its components, oversimplifying those components and able to withstand a certain degree of turbulent change in them. (135)
What I find fascinating and important about Harman’s idea here is that he is providing a rationale for (1) accommodating the kind of container analysis I have outlined above while (2) arguing that this type of analysis is not the end of the story. Now, Harman and the Speculative Realists have been reluctant to discuss what constitutes a text and how language might itself be an object, a reluctance that stems — understandably, I think — from fatigue with the post-Heideggerian “language is everything” trend in Continental philosophy and cultural studies. But language is definitely something, and it is as real as anything else I can think of. So too are our encounters (in the theater, the library, the cinema) with things like genre, style, ideology and pleasure.
Object oriented philosophy should have something to say about texts, since they too provide a particularly good example of why the purely relational criterion for an object’s identity (whether it is a text, a word, a thought, feeling, or piece of wood) is insufficient. As literary critics and theorists, we may have something to add to Harman’s account of the inexhaustibility of an object’s relations and its emergent reality over and above its components. In fact, this is what many of us have been arguing is wrong about the kinds of reductive claims that can be made about texts on the grounds that they yield statistical regularities.
What does it mean for the reality of an object to “simplify” its “components”? Perhaps the process that Harman refers to as simplification is what we as literary critics refer to as interpretation: the contingent coming into being of a portion of an object’s reality — here, a text — through that object’s interrelation with other objects and the subtractive unveiling of its inexhaustible contents. (Whitehead describes this as the process of “objectification.”) Harman would argue that such emergent realities don’t just take hold between texts and readers, but between sunlight and plant leaves or fire and cotton. All objects can be oversimplified, all of them can survive (and resist) some degree of turbulent change.
If objects are really this universal, then the process of “pattern recognition” that I describe as object oriented criticism is really something more involved than the collating of sets and relations among sets. Clearly, if a text is understood as a container of relations, then statistics can model the complexity of that object and its relations — even the immense complexity of a textual object. But that model, like the map of relations above, will always be just an approximation. As Harman insists, the inner reality of the object — itself alluring with the promise of something more — is never fully available, whether that object is a piece of wood or a piece of writing. As literary critics, I think we can find plenty to work with when objects are defined in this way.
This entry was posted in Quant Theory.