Category: Counting Other Things

  • What Do People Read During a Revolution?

These two visualizations spark two interesting questions: What do people read during a revolution? What is the connection between what people read and political events? Both images spike dramatically around moments of upheaval in the Western world: the English, American, and French Revolutions, the mid-19th-century Europe-wide overthrow of governments, and World War I, to name just a few. These images are all the more striking because they did not arise from a historical study of warfare or publishing, but from a more workaday task—that of categorizing all books from 1600 to 2010 according to Library of Congress subject headings. (The source of the data was Google's catalog of books as of 2010.) The visualizations were shown in passing during a 2010 meeting at which researchers at Google, where the data had been produced, exchanged ideas with a group of humanities scholars and advocates. When Google's Jon Orwant flashed this image on the screen, the professors in the assembly gasped. Genuinely gasped. We could see in this visualization of data things that had been debated for centuries, but that had never been seen: a connection between the world of print and the world of political action, a link between revolution and reading a certain kind of book.

    We are experienced readers of books, book history, and—we like to think—of book diagrams. Humans invented stream charts well before the age of computing; this style of conveying information is at least 250 years old and draws on sources that are even older.  (See Rosenberg and Grafton, Cartographies of Time, 2010.)  However, the union of technologies—modern cataloging systems, the increasingly systematized concatenation of library catalogs worldwide, and the capacity to render data chronologically in the style of a geological diagram—produces a compact vision of Western print culture hitherto unseen. Simple in execution, the visualization prompts new thinking.

Like any metaphorical or mathematical rendering, the diagram below should be read with care: the strata are normalized, so the spikes do not necessarily indicate a greater number of books published, but rather a shift in the proportion of published books devoted to a given subject. A spike in one layer of the diagram can give the illusion that all strata of the diagram have increased in size, a trick of the eye that the mind needs to combat. The second visualization helps with this by zooming in and thereby singling out the area of greatest mathematical change, but it, too, needs to be viewed critically.
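To make the normalization concrete, here is a minimal sketch of how yearly subject counts could be converted into the proportions a stream chart of this kind plots; the column names and figures are invented for illustration, and pandas is assumed.

```python
import pandas as pd

# Hypothetical input: one row per (year, subject) with a raw count of catalogued books.
counts = pd.DataFrame({
    "year":    [1640, 1640, 1640, 1641, 1641, 1641],
    "subject": ["Old World History", "Theology", "Poetry"] * 2,
    "books":   [120, 300, 80, 180, 310, 85],
})

# Normalize within each year so the strata sum to 1.0; a "spike" is then a shift
# in share, not necessarily an increase in the absolute number of books published.
totals = counts.groupby("year")["books"].transform("sum")
counts["share"] = counts["books"] / totals

print(counts.pivot(index="year", columns="subject", values="share"))
```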

    Now that the caveats have been put to one side, we return to the original questions and then offer a reformulation.

What are people reading during a revolution? Poetry? Books on military technology? Theology? No. If we take the first spike, the years leading up to the English Revolution and the outbreak of civil war in 1642, the answer seems to be "Old World History." The second chronological peak—in the decades around the American (1776) and French (1789) Revolutions—shows the same pattern. In periods that historians would link to major political upheaval, the world of print shows similar disruptions: publishers are offering more history for readers who, perhaps, think of themselves as living through important historical changes.

We should be precise: these data don't indicate that more people are reading history, but that a higher proportion of the books published by presses can be classed by cataloguers as history. There are many follow-up questions one might ask here. Does publication tie strongly to actual reading, or are the two only loosely connected? Are publishers reducing the number of books in other subject areas because of scarcity of resources or some other factor, which would again produce the proportional spikes seen above? Are the cataloguing definitions of what counts as Old World History, or history in general, themselves modeled on the books published during the spike years?

One has to ask questions about the size and representativeness of the dataset, the uniformity of the classifications, and the nature of the spatial plot in order to understand what is going on. And, crucially in this case, one has to have the initial insight—born of a reading knowledge of history itself—that the timing of the spikes is important. But if you've got that kind of knowledge in the room, you might see something you haven't seen before.

     

     

  • The Time Problem: Rigid Classifiers, Classifier Postmarks

     

    Here is a thought experiment. Make the following assumptions about a historically diverse collection of texts:

1) I have classified them according to genre myself, and I trust these classifications.

    2) I have classified the items according to time of composition, and I trust these classifications.

    So, my items are both historically and generically diverse, and I want to understand this diversity in a new way.

The metadata I have now allows me to partition the set. The partition, by decade, item count, and genre class (A, B, C), looks like this:

    Decade 1, 100 items: A, 25; B, 50; C, 25

    Decade 2, 100 items: A, 30; B, 40; C, 30

    Decade 3, 100 items: A, 30; B, 30; C, 40

    Decade 4, 100 items: A, 40; B, 40; C, 20

Each decade is labeled (D1, D2, D3, D4) and each contains 100 items. These items are classed by genre (A, B, C), and the proportions of items belonging to each genre change from one decade to the next. What could we do with this collection partitioned in this way, particularly with respect to changes in time?

I am interested in genre A, so I focus on that: how does A'ness change over time? Or how does what "counts as A" change over time? I derive a classifier (K) for A in the first decade and use it as a distance metric to arrange all items in this decade with respect to A'ness. So my new description allows me to supply the following information about every item: Item 1 participates in A to this degree, and A'ness means "not being B or C in D1." Let's call this classifier D1Ka. I can now derive the set of all classifiers with respect to these metadata: D1Ka, D1Kb, D1Kc, D2Ka, D2Kb, etc. And let's say I also derive a classifier for A using the whole dataset. So we add DKa, DKb, DKc. What are these things I have produced, and how can they be used to answer interesting questions?
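As a concrete sketch of what deriving these decade-bound classifiers might look like, the fragment below fits a one-vs-rest model per decade and one over the whole collection. The random feature vectors, the choice of logistic regression, and scikit-learn itself are assumptions made for illustration, not the procedure the thought experiment prescribes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 100 items per decade, each a 20-dimensional feature
# vector, with genre labels drawn in the proportions given above.
decades = {
    "D1": ["A"] * 25 + ["B"] * 50 + ["C"] * 25,
    "D2": ["A"] * 30 + ["B"] * 40 + ["C"] * 30,
    "D3": ["A"] * 30 + ["B"] * 30 + ["C"] * 40,
    "D4": ["A"] * 40 + ["B"] * 40 + ["C"] * 20,
}
features = {d: rng.normal(size=(len(labels), 20)) for d, labels in decades.items()}

def derive_classifier(X, labels, genre="A"):
    """Fit a one-vs-rest classifier for `genre`; its decision function serves as
    the 'distance metric' used to arrange items with respect to A'ness."""
    y = np.array([1 if g == genre else 0 for g in labels])
    return LogisticRegression(max_iter=1000).fit(X, y)

# Decade-bound classifiers D1Ka, D2Ka, ...: each carries the 'postmark' of its decade.
Ka = {d: derive_classifier(features[d], decades[d]) for d in decades}

# Whole-dataset classifier DKa, trained as if everything were written at once.
X_all = np.vstack(list(features.values()))
y_all = sum(decades.values(), [])
DKa = derive_classifier(X_all, y_all)

# Projecting D1Ka onto D4: score later items with an earlier decade's sense of A'ness.
d4_scores = Ka["D1"].decision_function(features["D4"])
print("D4 items ranked by D1's notion of A'ness:", np.argsort(-d4_scores)[:5])
```

The point of the sketch is only that each DxKa is a function of its own decade's mixture of A, B, and C: change the proportions or the items in D1 and the decision boundary, and with it every cross-decade judgment, moves too.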

I live in D1 and am confident I know what belongs to A, having seen lots of examples. But I get access to a time-travel machine and someone sends me a text written much later in time. It is a visitor from D4, and by my own lights, it looks like another example of A. So, I have projected D1Ka onto an item from D4 and made a judgment. Now we lift the curtain and find that for a person living in D4, the item is not an A but a B. Is my classifier wrong? Is this type of projection illegitimate? I don't think so. We have learned that classifiers themselves have postmarks, and these postmarks are specific to the population in which they were derived. D1Ka is an *artifact* of the initial partitioning of my data: if there were different proportions of A, B, and C within D1, or different items in each of these categories, the classifier would change.

Experiment two. I live in D4 and I go to a used bookstore, where I find a beautifully preserved copy of an item produced in D1. The title page of this book says, "The Merchant of Venice, a Comedy." Nonsense, I say. There's nothing funny about this repellent little play. So D1Ka fails to classify an A for someone in D4. Why? Because the classifier D4Ka is rigidly determined by the variety of the later population, and this variety is different from that found in D1. When classifiers are themselves rigidly aligned with their population of origin, they generalize in funny ways.

Wait, you say. I have another classifier, namely Ka produced over the entire population, which represents all of the time variation in the dataset of 400 items. Perhaps this is useful for describing how A'ness changes over time? Could I compare D1Ka, D2Ka, D3Ka, and D4Ka to one another using DKa as my reference? Perhaps, but you have raised a new question: who, if anyone, ever occupies this long interval of time? What kind of abstraction or artifact is DKa, considering that most people really think only 10 years ahead or behind when they classify a book? If we are dealing with 27 decades (as we do in the case of our latest big experiment), we have effectively created a classifier for a time interval that no one could ever occupy. Perhaps there is a very well-read person who has read something from each decade and so has an approximation of this longer perspective: that is the advantage of the durability of print, the capacity of memory, and perhaps the viability of reprinting, which in effect imports some of the variation from an earlier decade into a newer one. When we are working with DKa, everything is effectively written at the same time. Can we use this strange assumption — everything is written at once — to explore the real situation, which is that everything is written at a different time?
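Continuing the illustrative sketch above, one crude way to probe whether DKa is a useful reference point is to measure how often each decade-bound classifier agrees with it over the full pool of 400 items; again, the data and models are stand-ins, not the experiment itself.

```python
# Continuing the sketch above: how often does each decade-bound classifier agree
# with the whole-corpus classifier DKa across all 400 items?
reference = DKa.predict(X_all)
for d, clf in Ka.items():
    agreement = (clf.predict(X_all) == reference).mean()
    print(f"{d}Ka agrees with DKa on {agreement:.0%} of items")
```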

Another interesting feature of the analysis: this same type of "all written at the same time" reasoning is occurring in our single-decade blocks, since when we create the metadata that allows us to treat a subpopulation of texts as belonging to *a* decade, we once again say they were written simultaneously. We use obvious untruths to get at underlying truths, like an astronomer using the inertial assumption to calculate forces, even though we've never seen a body travel in a straight line forever.

    If classifiers are artifacts of an arbitrarily scalable partitioning of the population, and if these partitions can be compared, what is the ideal form of “classifier time travel” to use when thinking about how actual writing is influenced by other writing, and how a writer’s memory of texts produced in the past can be projected forward into new spaces? Is there anything to be learned about genre A by comparing the classifiers that can be produced to describe it over time? If so, whose perspective are we approximating, and what does that implied perspective say about our underlying model of authorship and literary history?

    If classifiers have postmarks, when are they useful in generalizing over — or beyond — a lifetime’s worth of reading?

     

  • Google Books: Ratio of Inked Space to Blank Space

     

How could we create a proxy measure for the relative luxury of a book, and by extension the social prestige of its contents? One way of getting at this might be to measure the ratio of inked to non-inked space for a given work. While the measure is flawed — verse uses less page space, and illustrations may apply more ink across the page — it is at least a starting point. What if Google Books were to publish the ratio of inked to non-inked space for all of the items it has scanned? We could then see how writing of different types, for example plays or prose fiction, moves into larger print formats such as the folio.
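As a rough illustration of how such a proxy could be computed from a page scan, the sketch below binarizes a grayscale image and reports the share of dark pixels; the threshold, file paths, and use of Pillow are assumptions for the sake of the example, not a description of anything Google Books actually exposes.

```python
import glob

import numpy as np
from PIL import Image

def ink_ratio(path, threshold=128):
    """Rough proxy: the share of a scanned page's pixels darker than `threshold`.
    Real scans would need deskewing, margin cropping, and contrast normalization first."""
    page = np.asarray(Image.open(path).convert("L"))  # 8-bit grayscale
    return float((page < threshold).mean())

# Hypothetical usage: average the ratio over the pages of a single scanned volume.
ratios = [ink_ratio(p) for p in sorted(glob.glob("scans/some_volume/*.png"))]
print(sum(ratios) / len(ratios) if ratios else "no pages found")
```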

  • Shakespeare’s mythic vocabulary – and his invisible grammar

Universities in the UK are under pressure to demonstrate the 'impact' of their research. In many ways, this is fair enough: public taxes account for the vast majority of UK university income, so it is reasonable for the public to expect academics to attempt to communicate with them about their work.

University press offices have become more proactive in seeking out stories to present to the media as a way of raising the profile of institutions. Recently, the Strathclyde press office contacted me after they read one of my papers on Strathclyde's internal research database: they wanted to do a press release to see if any outlets would follow up on the story.

The paper they'd read was a survey article I'd written for an Open University course reader. My article reported recent papers by Hugh Craig and by Ward Elliott & Robert Valenza, which demolish some common myths about Shakespeare's vocabulary (its size and originality – and see Holger Syme on this too) – and went on to argue that Shakespeare's originality might lie in his grammar, rather than in the words he did or did not make up.

Indeed they did want to pick up on the story, though I'd have preferred the resulting article to have been a bit clearer, and not to have run under a headline that was linguistic nonsense. The Huffington Post did a bit better.

One particularly galling aspect of the stories: the articles failed to attribute the work on Shakespeare's vocabulary to Craig or to Elliott and Valenza, so it might have looked as though I was taking credit for other people's work.

    Looking back, I don’t think I explained my ideas very well either to Strathclyde’s press office, or to the Daily Telegraph when they rang – hence the rather confused reports. But I was extremely careful to attribute the work to those who had done it – even to the point of sending my original text to the journalist I talked to, and pointing him to the relevant footnote. I did not expect a news story to contain full academic references of course – but a clearly written story could easily have mentioned the originators of the work.

A minor episode, but it also made me think that there is a fundamental problem with trying to explain complex linguistic issues in the daily press – even if you use Newcastle United's greatest goalscorers to illustrate the statistics. They want a clear story; you want to get the nuances across. Luckily, this blog allows me to make the full text of my article available (click through twice for a PDF):

    Shakespeare and the English Language

     

    Jonathan Hope, Strathclyde University, Glasgow, February 2012

  • Finding the Sherlock in Shakespeare: some ideas about prose genre and linguistic uniqueness

    An unexpected point of linguistic similarity between detective fiction and Shakespearean comedy recently led me to consider some of the theoretical implications of tools like DocuScope, which frequently identify textual similarities that remain invisible in the normal process of reading.

    A Linguistic Approach to Suspense Plot

Playing around with a corpus of prose, we discovered that the linguistic specs associated with narrative plot are surprisingly distinctive. Principal Component Analysis performed on the linguistic features counted by DocuScope suggested the following relationship between the items in the corpus:

    I interpreted the two strongest axes of differentiation seen in the graph (PC 1 and PC 2) as (1) narrative, and (2) plot. The two poles of the narrative axis are Wuthering Heights (most narrative) and The Communist Manifesto (least narrative). The plot axis is slightly more complicated. But on the narrative side of the spectrum, plot-driven mysteries like “The Speckled Band” and The Canterville Ghost score high on plot, while the least plotted narrative is Samuel Richardson’s Clarissa (9 vols.). For now, I won’t speculate about why Newton’s Optics scores so astronomically high on plot. It is enough that when dealing with narrative, PC 2 predicts plot.
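As a minimal sketch of this kind of analysis, the fragment below assumes the DocuScope counts have already been exported as a documents-by-features table; the file name, normalization choices, and use of scikit-learn are illustrative assumptions rather than a record of the original workflow.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical export: rows are texts, columns are DocuScope feature counts.
counts = pd.read_csv("docuscope_counts.csv", index_col="title")

# Convert raw counts to per-text proportions, then standardize each feature
# so that no single category dominates the components.
proportions = counts.div(counts.sum(axis=1), axis=0)
scaler = StandardScaler()
X = scaler.fit_transform(proportions)

# PC1 and PC2 are the two strongest axes of differentiation; in the reading above
# they are interpreted as 'narrative' and 'plot' respectively.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

result = pd.DataFrame(scores, index=counts.index, columns=["PC1", "PC2"])
print(result.sort_values("PC2", ascending=False).head())
```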

    The fact that something as qualitative and amorphous as plot has a quantitative analogue leads to several questions about the meaning of the data tools like DocuScope turn up.

    Linguistic Plot without Actual Plot

Because linguistic plot is quantifiable, it allows us to look for passages where it is relatively concentrated. Given a large enough sample, it is more than likely that some relatively plotted passages will occur in texts that are not plotted in any normal sense. This would at minimum raise questions about how to handle genre boundaries in digital literary research.

    Our relative-emplotment test (done in TextViewer) yielded intuitive results when performed on the dozen or so stories in The Adventures of Sherlock Holmes: the passages exhibiting the strongest examples of linguistic plot generally narrated moments of discovery, and moved the actual plot forward in significant ways. Often, these passages showed Holmes and Watson bursting into locked rooms and finding bodies.
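I do not know TextViewer's internals, but a comparable relative-emplotment pass can be sketched as follows: slide a window over a text, count the same features used above, and project each window onto the plot axis (PC2). The window size and the count_features helper are hypothetical, and the scaler and pca objects are the ones fitted in the earlier sketch.

```python
import numpy as np

def emplotment_scores(tokens, count_features, scaler, pca, window=1000, step=500):
    """Score overlapping passages of `tokens` on the 'plot' axis (PC2).

    `count_features` is a hypothetical helper that returns, for a list of tokens,
    the same feature proportions used to fit `scaler` and `pca` above."""
    scores = []
    for start in range(0, max(1, len(tokens) - window + 1), step):
        passage = tokens[start:start + window]
        vec = np.asarray(count_features(passage)).reshape(1, -1)
        pc = pca.transform(scaler.transform(vec))[0]
        scores.append((start, pc[1]))           # pc[1] is PC2, read here as 'plot'
    # Most strongly 'plotted' passages first, e.g. moments of discovery.
    return sorted(scores, key=lambda s: -s[1])
```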

When we performed the same test on the Shakespeare corpus, something intriguing happened. The passages identified by TextViewer as exhibiting linguistic plot looked very different from the corresponding passages in Sherlock Holmes. There were no dead bodies, no broken-down doors, and no exciting discoveries. Nonetheless, the 'plotted' Shakespeare scenes were remarkably consistent with each other. Perhaps most significant in the context of their genre, these scenes had a strong tendency to show characters putting on performances for other characters. Additionally, in a detail that is fascinating even though it is probably a red herring, the 'plotted' Shakespeare scenes had an equally strong tendency to involve fairies.

The consistent nature of the 'plotted' Shakespeare scenes suggests that the linguistic specs associated with plot in Sherlock Holmes may have different, but equally specific, effects when they occur in other genres. The next step would be to find a meaningful correspondence between the two seemingly disparate literary devices that accompany linguistic plot – detectives bursting into rooms to solve murders, and plays within plays involving fairies. I have some hunches about this. But in many ways the more important question is what is at stake in using DocuScope to identify such unexpected points of overlap.

    Enough measurable links between seemingly unlike texts could suggest an invisible web of cognates, which share an underlying structure despite their different appearances and literary classifications. Accordingly, we might hypothesize that reading involves selective ignorance of semantic similarities that could otherwise lead to the socially deviant perception that A Midsummer Night’s Dream resembles a Sherlock Holmes mystery.

The question, then, is this: if the act of reading consists in part of ignoring unfruitful similarities, then what happens when these similarities nonetheless become apparent to us? Looking back at the corpus graph, we begin to see all sorts of possibilities, many of which would be enough to make us literary outcasts if voiced in the wrong company. Could Newton's Optics contain the most exciting suspense plot no one has ever noticed? Could Martin Luther be secretly more sentimental than Clarissa?

    Estranging Capacities of Digital Cognates

I have been using the term 'cognate' to describe the relationship between linguistically similar but otherwise dissimilar texts. These correspondences will only be meaningful if we can connect them in a plausible way to our readerly understanding of the texts or genres in question. In the case of detective fiction and Shakespearean comedy, this remains to be seen. But our current lack of an explanation does not mean we should feel shy about pursuing the cognates computers direct us to. My analogy is the pop-culture ritual of watching The Wizard of Oz while starting the Pink Floyd album Dark Side of the Moon on the third roar of the MGM lion. The movie and the record sync up in a plausible pattern, prompting the audience to grasp a connection between the cognate genres of children's movies and psychedelic rock.

    If digital methods routinely direct our attention to patterns we would never notice in the normal process of reading, then we can expect them to turn up a large number of such cognates. If we want to understand the results these tools are turning up, we should develop a terminology and start thinking about implications – not just for the few correspondences we can explain, but also for the vast number we cannot explain, at least right now.