Tag: classifiers

  • Now Read This: A Thought Experiment

    Let’s say that we believe we can learn something more about what literary critics call “authorial style” or “genre” by quantitative work. We want to say what that “more” is. We assemble a community of experts, convening a panel of early modernists to identify 10 plays that they feel are comedies based on prevailing definitions (they end in marriage), and 10 they feel are tragedies (a high-born hero falls hard). To test these classifications, we ask a random selection of others in the profession (who were not on the panel) to sort these 20 plays into comedies and tragedies and see how far they diverge from the classifications of our initial panel. That subsequent sorting matches the first one, so we start to treat these labels (comedy/tragedy) as “ground truths” generated by “domain experts.” Now assume that I take a computer program (it doesn’t matter which one) and ask it to count things in these plays and come up with a “recipe” for each genre as identified by our experts. The computer is able to do so, and the recipes make sense to us. (Trivially: comedies are filled with words about love, for example, while tragedies use more words that indicate pain or suffering.)

    A further twist: because we have an unlimited, thought-experiment budget, we decide to put dozens of early modernists into MRI machines and measure the activity in their brains while they are reading any of these 20 plays. After studying the brain activity of these machine-bound early modernists, we realize that there is a distinctive pattern of brain activity that corresponds with what our domain experts have called “comedies” and “tragedies.” When someone reads a comedy, regions A, B, and C become active, whereas when a person reads a tragedy, regions C, D, E, and F become active. These patterns are reliably different and track exactly the generic differences between the plays that our subjects are reading in the MRI machine.
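    The computer’s “recipe” can be made concrete in a few lines. This is a minimal sketch, not the thought experiment’s actual program: the marker vocabularies below are invented stand-ins for whatever features a real program would extract from the 20 expert-labeled plays.

```python
from collections import Counter

# Hypothetical marker vocabularies -- invented stand-ins for the
# features a real program would discover in the expert-labeled plays.
COMEDY_MARKERS = {"love", "marriage", "wit", "jest"}
TRAGEDY_MARKERS = {"death", "grief", "blood", "woe"}

def recipe_score(text):
    """Count comedy-flavored and tragedy-flavored words in a text."""
    words = Counter(text.lower().split())
    comedy = sum(words[w] for w in COMEDY_MARKERS)
    tragedy = sum(words[w] for w in TRAGEDY_MARKERS)
    return comedy, tragedy

def classify(text):
    """Label a text by whichever marker count dominates."""
    comedy, tragedy = recipe_score(text)
    return "comedy" if comedy >= tragedy else "tragedy"
```

    A passage full of “love” and “marriage” scores as a comedy, one full of “blood” and “woe” as a tragedy; the point is only that the recipe is a countable, inspectable object, unlike the expert’s report or the brain scan.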

    So now we have three different ways of identifying – or rather, describing – our genre. The first is by expert report: I ask someone to read a play and she says, “This is a comedy.” If asked why, she can give a range of answers, perhaps connected to plot, perhaps to her feelings while reading the play, or even to a memory: “I learned to call this and other plays like it ‘comedies’ in graduate school.” The second is a description, not necessarily competing, in terms of linguistic patterns: “This play and others like it use the conjunctions ‘if’ and ‘but’ comparatively more frequently than others in the pool, while using ‘and’ less frequently.” The last description is biological: “This play and others like it produce brain activity in the following regions and not in others.” In our perfect thought experiment, we now have three ways of “getting at genre.” They seem to be parallel descriptions, and if they are functionally equivalent, any one of them might just be treated as a “picture” of the other two. What is a brain scan of an early modernist reading comedy? It is a picture of the speech act: “The play I’m reading right now is a comedy.”

    Now the question. The first three acts of a heretofore unknown early modern play are discovered in a Folger manuscript, and we want to say what kind of play it is. We have our choice of either:

    • asking an early modernist to read it and make his or her declaration

    • running a computer program over it and rating it on our comedy/tragedy classifiers

    • having an early modernist read it in an MRI machine and characterizing the play on the basis of brain activity.

    Let’s say, for the sake of argument, that you can only pick one of these approaches. Which one would you pick, and why? If this is a good thought experiment, the “why” part should be challenging.

  • The Time Problem: Rigid Classifiers, Classifier Postmarks


    Here is a thought experiment. Make the following assumptions about a historically diverse collection of texts:

    1) I have classified them according to genre myself, and trust these classifications.

    2) I have classified the items according to time of composition, and I trust these classifications.

    So, my items are both historically and generically diverse, and I want to understand this diversity in a new way.

    The metadata I have now allows me to partition the set. The partition, by decade, items, and genre class (A, B, C), looks like this:

    Decade 1, 100 items: A, 25; B, 50; C, 25

    Decade 2, 100 items: A, 30; B, 40; C, 30

    Decade 3, 100 items: A, 30; B, 30; C, 40

    Decade 4, 100 items: A, 40; B, 40; C, 20

    Each decade is labeled (D1, D2, D3, D4) and each contains 100 items. These items are classed by genre (A, B, C), and the proportions of items belonging to each genre change from one decade to the next. What could we do with this collection partitioned in this way, particularly with respect to changes in time?
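    For concreteness, the partition can be written down as a small in-memory table; the counts are exactly those listed above.

```python
# The four-decade partition described above:
# decade -> genre -> number of items (each decade holds 100 items).
partition = {
    "D1": {"A": 25, "B": 50, "C": 25},
    "D2": {"A": 30, "B": 40, "C": 30},
    "D3": {"A": 30, "B": 30, "C": 40},
    "D4": {"A": 40, "B": 40, "C": 20},
}

def proportion(decade, genre):
    """Share of a decade's items belonging to one genre."""
    counts = partition[decade]
    return counts[genre] / sum(counts.values())
```

    Half of D1 belongs to genre B; only a fifth of D4 belongs to genre C. These shifting proportions are the raw material for everything that follows.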

    I am interested in genre A, so I focus on that: how does A’ness change over time? Or how does what “counts as A” change over time? I derive a classifier (K) for A in the first decade and use it as a distance metric to arrange all items in this decade with respect to A’ness. So my new description allows me to supply the following information about every item: Item 1 participates in A to this degree, and A’ness means “not being B or C in D1.” Let’s call this classifier D1Ka. I can now derive the set of all classifiers with respect to these metadata: D1Ka, D1Kb, D1Kc, D2Ka, D2Kb, etc. And let’s say I also derive classifiers for A, B, and C using the whole dataset. So we add DKa, DKb, DKc. What are these things I have produced and how can they be used to answer interesting questions?

    I live in D1, and am confident I know what belongs to A having seen lots of examples. But I get access to a time travel machine and someone sends me a text written much later in time. It is a visitor from D4, and by my own lights, it looks like another example of A. So, I have projected D1Ka onto an item from D4 and made a judgment. Now we lift the curtain and find that for a person living in D4, the item is not an A but a B. Is my classifier wrong? Is this type of projection illegitimate? I don’t think so. We have learned that classifiers themselves have postmarks, and these postmarks are specific to the population in which they are derived. D1Ka is an *artifact* of the initial partitioning of my data: if there were different proportions of A, B, and C within D1, or different items in each of these categories, the classifier would change.
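    The projection can be made concrete with a toy model. The sketch below uses a nearest-centroid rule as a stand-in for D1Ka and D4Ka; the feature space and the centroid positions are invented to dramatize the drift just described, so that the region D1 assigns to A is the region D4 assigns to B.

```python
import math

# Each item is a 2-d stylistic fingerprint; the centroids are
# hypothetical, chosen so that D1's "A" territory overlaps D4's "B".
CENTROIDS = {
    "D1": {"A": (1.0, 0.0), "B": (0.0, 1.0), "C": (-1.0, -1.0)},
    "D4": {"A": (2.0, 2.0), "B": (1.0, 0.0), "C": (-1.0, -1.0)},
}

def nearest_genre(decade, item):
    """Classify an item using the classifier derived in one decade."""
    centroids = CENTROIDS[decade]
    return min(centroids, key=lambda g: math.dist(item, centroids[g]))

visitor = (1.1, 0.1)  # an item actually produced in D4
print(nearest_genre("D1", visitor))  # "A": D1Ka projected forward in time
print(nearest_genre("D4", visitor))  # "B": D4's own classifier disagrees
```

    Move the centroids (that is, change the proportions or membership of A, B, and C within a decade) and the verdicts change with them: the classifier is an artifact of its decade’s geometry.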

    Experiment two. I live in D4 and I go to a used bookstore, where I find a beautifully preserved copy of an item produced in D1. The title page of this book says, “The Merchant of Venice, a Comedy.” Nonsense, I say. There’s nothing funny about this repellent little play. So an item that counts as an A under D1Ka fails to count as one for someone in D4. Why? Because the classifier D4Ka is rigidly determined by the variety of the later population, and this variety is different from that found in D1. When classifiers are themselves rigidly aligned with their population of origin, they generalize in funny ways.

    Wait, you say. I have another classifier, namely DKa, produced over the entire population, which represents all of the time variation in the dataset of 400 items. Perhaps this is useful for describing how A’ness changes over time? Could I compare D1Ka, D2Ka, D3Ka and D4Ka to one another using DKa as my reference? Perhaps, but you have raised a new question: who, if anyone, ever occupies this long interval of time? What kind of abstraction or artifact is DKa, considering that most people really think 10 years ahead or behind when they classify a book? If we are dealing with 27 decades (as we do in the case of our latest big experiment), we have effectively created a classifier for a time interval that no one could ever occupy. Perhaps there is a very well-read person who has read something from each decade and so has an approximation of this longer perspective: that is the advantage of the durability of print, the capacity of memory, and perhaps the viability of reprinting, which in effect imports some of the variation from an earlier decade into a newer one. When we are working with DKa, everything is effectively written at the same time. Can we use this strange assumption — everything is written at once — to explore the real situation, which is that everything is written at a different time?
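    One crude way to pose this comparison in code: treat each decade’s sense of A as a centroid, let the pooled average of those centroids stand in for DKa, and measure how far each decade sits from that nobody’s-eye view. The centroid values below are hypothetical, chosen only to exhibit drift across the four decades.

```python
import math

# Hypothetical per-decade centroids for genre A, drifting over time.
A_CENTROIDS = {
    "D1": (1.0, 0.0),
    "D2": (1.2, 0.5),
    "D3": (1.5, 1.2),
    "D4": (2.0, 2.0),
}

def pooled_centroid(centroids):
    """A stand-in for DKa: an average over an interval no one occupies."""
    n = len(centroids)
    return (sum(c[0] for c in centroids.values()) / n,
            sum(c[1] for c in centroids.values()) / n)

def drift_from_pool(centroids):
    """Distance of each decade's sense of A from the pooled DKa."""
    pool = pooled_centroid(centroids)
    return {d: round(math.dist(c, pool), 3) for d, c in centroids.items()}
```

    The pooled point belongs to no decade; measuring each DnKa against it is the “everything written at once” assumption made explicit.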

    Another interesting feature of the analysis: this same type of “all written at the same time” reasoning is occurring in our single-decade blocks, since when we create the metadata that allows us to treat a subpopulation of texts as belonging to *a* decade, we once again say they were written simultaneously. We use obvious untruths to get at underlying truths, like an astronomer using the inertial assumption to calculate forces, even though we’ve never seen a body travel in a straight line forever.

    If classifiers are artifacts of an arbitrarily scalable partitioning of the population, and if these partitions can be compared, what is the ideal form of “classifier time travel” to use when thinking about how actual writing is influenced by other writing, and how a writer’s memory of texts produced in the past can be projected forward into new spaces? Is there anything to be learned about genre A by comparing the classifiers that can be produced to describe it over time? If so, whose perspective are we approximating, and what does that implied perspective say about our underlying model of authorship and literary history?

    If classifiers have postmarks, when are they useful in generalizing over — or beyond — a lifetime’s worth of reading?