Data and Metadata

(Post by Jonathan Hope and Beth Ralston; data preparation by Beth Ralston.)

It is all about the metadata. That and text processing. Currently (July 2015) Visualising English Print (Strathclyde branch) is focussed on producing a hand-curated list of all ‘drama’ texts up to 1700, along with checked, clean metadata. Meanwhile VEP (Wisconsin branch) works on text processing (accessing TCP texts in a suitable format, cleaning up rogue characters, splitting up collected volumes into individual plays, stripping-out speech prefixes and non-spoken text, modernising/regularising).

We are not the only people doing this kind of work on Early Modern drama: Meaghan Brown at The Folger Shakespeare Library is working on a non-Shakespearean corpus, and Martin Mueller has just released the ‘Shakespeare His Contemporaries’ corpus. We’ve been talking to both, and we are very grateful for their help, advice, and generosity with data. In a similar spirit, we are making our on-going metadata collections available – we hope they’ll be of use to people, and that you will let us know of any errors and omissions.

You are welcome to make use of this metadata in any way you like, though please acknowledge the support of Mellon to VEP if you do, and especially the painstaking work of Beth Ralston, who has compared and cross-checked the various sources of information about Early Modern plays.

We hope to be in a position to release tagged texts once we have finalised the make-up of the corpus, and established our processing pipeline. Watch this space.

Many of the issues surrounding the development of usable corpora from EEBO-TCP will be discussed at SAA in 2016 in a special plenary round-table:

SAA session


In preparing these lists of plays and metadata we have made extensive use of Martin Wiggins and Catherine Richardson, British Drama 1533-1642: A Catalogue (Oxford), Alfred Harbage, Annals of English Drama 975-1700, the ESTC, and, most of all, Zach Lesser and Alan Farmer’s DEEP (Database of Early English Playbooks).

Definitions and History 

One of the usefully bracing things about digital work is that it forces you to define your terms precisely – computers are unforgiving of vagueness, so a request for a corpus of ‘all’ Early Modern drama turns out to be no small thing. Of course everyone defines ‘all’, ‘Early Modern’ and ‘drama’ in slightly different ways – and those using these datasets should be aware of our definitions, and of the probability that they will want to make their own.

The current cut-off date for these files is the same as DEEP – 1660 (though one or two post-1660 plays have sneaked in). Before long, we will extend them to 1700.

By ‘drama’ we mean plays, masques, and interludes. Some dialogues and entertainments are included in the full data set, but we have not searched deliberately for them. We have included everything printed as a ‘play’, including closet dramas not intended for performance.

The immediate history of the selection is that we began with a ‘drama’ corpus chosen automatically by Martin Mueller (using XML tags in the TCP texts to identify dramatic genres). Beth Ralston then checked this corpus against the reference sources listed above for omissions, adding a considerable number of texts. This should not be regarded as ‘the’ corpus of Early Modern drama: it is one of many possible versions, and will continue to change as more texts are added to TCP (there are some transcriptions still in the TCP pipeline, and scholars are working on proposals to continue transcription of EEBO texts after TCP funding ends).

It is likely that each new scholar will want to re-curate a drama corpus to fit their research question – VEP is working on tools to allow this to be done easily.

Files and corpora

1    The 554 corpus

This spreadsheet lists only what we regard as the ‘central’ dramatic texts: plays.

Entertainments, masques, interludes, and dialogues are not included. We have also excluded around 35 play transcriptions in TCP which duplicate transcriptions of the same play made from different volumes (usually a collected edition and a stand-alone quarto).

The spreadsheet includes frequency counts for Docuscope LATs, tagged by Ubiquity, which can be visualised using any statistical analysis program (columns W-EE). For a descriptive list of the LATs, see <Docuscope LATs: descriptions>. For a description of all columns in the spreadsheet, see the <READ ME> file.

[In some of their early work, Hope and Witmore used a corpus of 591 plays which included these duplicates.]

554 metadata

README for 554 metadata

Docuscope LATs: descriptions 


2   The 704 corpus

The 704 corpus spreadsheet lists information for the 554 plays included above, and adds other types of dramatic text, such as masques, entertainments, dialogues, and interludes (mainly drawn from DEEP, and with the same date cut-off: 1660). This corpus also includes the 35 duplicate transcriptions excluded from the 554 spreadsheet.

Docuscope frequency counts are only available for items also in the 554 spreadsheet.

704 metadata

README for 704 metadata


3  The master metadata spreadsheet

Our ‘master metadata’ spreadsheet is intended to be as inclusive as possible. The current version has 911 entries, and we have sought to include a listing for every extant, printed ‘dramatic’ work we know about up to 1660 (from DEEP, Harbage, ESTC, and Wiggins). The spreadsheet does not include every edition of every text, but it does include the duplicate texts found in the 704 corpus. (When we extend the cut-off date to 1700, we expect the number of entries in this spreadsheet to exceed 1500.)

This master list includes all the texts in the 704 list (and therefore the 554 list as well). But it also includes:
• plays which are in TCP but which do not appear in the 554 or 704 corpora (i.e. they were missed first time round). These texts have ‘yes’ in the ‘missing from both’ column (M) of the master spreadsheet.
• plays which are absent from TCP at this time (we note possible reasons for this: some are in Latin, some are fragments, and we assume some have yet to be transcribed). These are texts which have ‘yes’ listed in the ‘missing from both’ column (M) of the master spreadsheet, as well as ‘not in tcp’ listed in the ‘tcp’ column (A).

master metadata

README for master metadata


TCP transcriptions

TCP is one of the most important Humanities projects ever undertaken, and scholars should be grateful for the effort and planning that has gone into it, as well as the free release of its data. It is not perfect, however: as well as the issue of texts being absent from TCP, we are also currently dealing with problematic transcriptions on a play-by-play basis. Take Jonson’s 1616 folio (TCP: A04632, ESTC: S112455) for example – it has a very fragmentary transcription, especially during the masques.

page 1

First page of The Irish Masque


In the above image from The Irish Masque, you can see on the right-hand side that the text for this page is not available.

page 2

Second page of The Irish Masque

…However, on the next page the text is there (as far as we can work out, this seems to be due to problems with the original imaging of the book, rather than the transcribers).

Texts with fragmentary transcriptions have been excluded for now, assuming that at some point in the future TCP will re-transcribe them.

As we come across other examples of this, we will add them to this page.


Finding “Distances” Between Shakespeare’s Plays 2: Projecting Distances onto New Bases with PCA

It’s hard to conceive of distance measured in anything other than a straight line. The biplot below, for example, shows the scores of Shakespeare’s plays on the two Docuscope LATs discussed in the previous post, FirstPerson and AbstractConcepts:

Screen Shot 2015-06-22 at 10.01.43 PM

Plotting the items in two dimensions gives the viewer some general sense of the shape of the data. “There are more items here, less there.” But when it comes to thinking about distances between texts, we often measure straight across, favoring either a simple line linking two items or a line that links the perceived centers of groups.

The appeal of the line is strong, perhaps because it is one dimensional. And brutally so. We favor the simple line because we want to see less, not more. Even if we are looking at a biplot, we can narrow distances to one dimension by drawing lines athwart the axes. The red lines linking points above — each the diagonal of a right triangle whose sides are parallel to our axes — will be straight and relatively easy to find. The line is simple, but its meaning is somewhat abstract because it spans two distinct kinds of distance at once.
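As a concrete sketch of that diagonal: the straight-line (Euclidean) distance between two plays on two measurements is the hypotenuse of the right triangle whose sides run parallel to the axes. The scores below are invented placeholders, not the actual Folger figures.

```python
import math

# Hypothetical mean-centered (FirstPerson, AbstractConcepts) scores;
# the real values come from the .csv linked in these posts.
scores = {
    "Henry V": (-1.2, 0.8),
    "Macbeth": (-0.9, 0.6),
    "The Tempest": (0.4, -0.3),
}

def distance(a, b):
    # The diagonal of the right triangle whose sides parallel the axes.
    (x1, y1), (x2, y2) = scores[a], scores[b]
    return math.hypot(x2 - x1, y2 - y1)
```

On these invented numbers, `distance("Henry V", "Macbeth")` spans both kinds of difference at once, which is exactly the abstraction the line introduces.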

Distances between items become slightly less abstract when things are represented in an ordered list. Scanning down the “text_name” column below, we know that items further down have less of the measured feature than those further up. There is a sequence here and, so, an order of sorts:

Screen Shot 2015-07-03 at 9.49.01 AM

If we understand what is being measured, an ordered list can be quite suggestive. This one, for example, tells me that The Comedy of Errors has more FirstPerson tokens than The Tempest. But it also tells me, by virtue of the way it arranges the plays along a single axis, that the more FirstPerson Shakespeare uses in a play, the more likely it is that this play is a comedy. There are statistically precise ways of saying what “more” and “likely” mean in the previous sentence, but you don’t need those measures to appreciate the pattern.

What if I prefer the simplicity of an ordered list, but want nevertheless to work with distances measured in more than one dimension? To get what I want, I will have to find some meaningful way of associating the measurements on these two dimensions and, by virtue of that association, reducing them to a single measurement on a new (invented) variable. I want distances on a line, but I want to derive those distances from more than one type of measurement.

My next task, then, will be to quantify the joint participation of these two variables in patterns found across the corpus. Instead of looking at both of the received measurements (scores on FirstPerson and AbstractConcepts), I want to “project” the information from these two axes onto a new, single axis, extracting relevant information from both. This projection would be a reorientation of the data on a single new axis, a change accomplished by Principal Components Analysis or PCA.

To understand better how PCA works, let’s continue working with the two LATs plotted above. Recall from the previous post that these are the Docuscope scores we obtained from Ubiqu+ity and put into mean deviation form. A .csv file containing those scores can be found here. In what follows, we will be feeding those scores into an Excel spreadsheet and into the open source statistics package “R” using code repurposed from a post on PCA at Cross Validated by Antoni Parellada.

A Humanist Learns PCA: The How and Why

As Hope and I made greater use of unsupervised techniques such as PCA, I wanted a more concrete sense of how it worked. But to arrive at that sense, I had to learn things for which I had no visual intuition. Because I lack formal training in mathematics or statistics, I spent about two years (in all that spare time) learning the ins and outs of linear algebra, as well as some techniques from unsupervised learning. I did this with the help of a good textbook and a course on linear algebra at Khan Academy.

Having learned to do PCA “by hand,” I have decided here to document that process for others wanting to try it for themselves. Over the course of this work, I came to a more intuitive understanding of the key move in PCA, which involves a change of basis via orthogonal projection of the data onto a new axis. I spent many months trying to understand what this means, and am now ready to try to explain or illustrate it to others.

My starting point was an excellent tutorial on PCA by Jonathon Shlens. Shlens shows why PCA is a good answer to a good question. If I believe that my measurements only incompletely capture the underlying dynamics in my corpus, I should be asking what new orthonormal bases I can find to maximize the variance across those initial measurements and, so, provide better grounds for interpretation. If this post is successful, you will finish it knowing (a) why this type of variance-maximizing basis is a useful thing to look for and (b) what this very useful thing looks like.

On the matrix algebra side, PCA can be understood as the projection of the original data onto a new set of orthogonal axes or bases. As documented in the Excel spreadsheet and the tutorial, the procedure is performed on our data matrix, X, where entries are in mean deviation form (spreadsheet item 1). Our task is then to create a 2×2 covariance matrix S for this original 38×2 matrix X (item 2); find the eigenvalues and eigenvectors for this covariance matrix S (item 3); then use this new matrix of orthonormal eigenvectors, P, to accomplish the rotation of X (item 4). This rotation of X gives us our new matrix Y (item 5), which is the linear transformation of X according to the new orthonormal bases contained in P. The individual steps are described in Shlens and reproduced on this spreadsheet in terms that I hope summarize his exposition. (I stand ready to make corrections.)
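The five spreadsheet items can also be sketched in a few lines of Python/NumPy. The matrix below is random stand-in data (the actual 38×2 values are in the linked spreadsheet), and note that an eigen-decomposition routine may flip the sign of an eigenvector, much like the sign changes between implementations mentioned later in this post.

```python
import numpy as np

# Stand-in for the 38x2 data matrix X (rows = plays, columns = two LATs);
# the real values are in the linked Excel spreadsheet.
rng = np.random.default_rng(0)
X = rng.normal(size=(38, 2))
X = X - X.mean(axis=0)                 # item 1: mean deviation form

S = (X.T @ X) / (X.shape[0] - 1)       # item 2: 2x2 covariance matrix
eigvals, P = np.linalg.eigh(S)         # item 3: eigen-decomposition of S
order = np.argsort(eigvals)[::-1]      # put the largest-variance component first
eigvals, P = eigvals[order], P[:, order]
Y = X @ P                              # items 4-5: rotate X onto the new bases
```

The columns of Y are the scores on the principal components; their covariance matrix is diagonal (the components are uncorrelated), with the eigenvalues on the diagonal.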

The Spring Analogy

In addition to exploring the assumptions and procedures involved in PCA, Shlens offers a suggestive concrete frame or “toy example” for thinking about it. PCA can be helpful if you want to identify underlying dynamics that have been both captured and obscured by initial measurements of a system. He stages a physical analogy, proposing the made-up situation in which the true axis of movement of a spring must be inferred from haphazardly positioned cameras A, B and C. (That movement is along the X axis.)

Screen Shot 2015-07-02 at 6.50.53 AM

Shlens notes that “we often do not know which measurements best reflect the dynamics of our system in question. Furthermore, we sometimes record more dimensions than we actually need!” The idea that the axis of greatest variance is also the axis that captures the “underlying dynamics” of the system is an important one, particularly in a situation where measurements are correlated. This condition is called multicollinearity. We encounter it in text analysis all the time.

If one is willing to entertain the thought that (a) language behaves like a spring across a series of documents and (b) that LATs are like cameras that only imperfectly capture those underlying linguistic “movements,” then PCA makes sense as a tool for dimension reduction. Shlens makes this point very clearly on page 7, where he notes that PCA works where it works because “large variances have important dynamics.” We need to spend more time thinking about what this linkage of variances and dynamics means when we’re talking about features of texts. We also need to think more about what it means to treat individual documents as observations within a larger system whose dynamics they are assumed to express.

Getting to the Projections

How might we go about picturing this mathematical process of orthogonal projection? Shlens’s tutorial focuses on matrix manipulation, which means that it does not help us visualize how the transformation matrix P assists in the projection of the original matrix onto the new bases. But we want to arrive at a more geometrically explicit, and so perhaps intuitive, way of understanding the procedure. So let’s use the code I’ve provided for this post to look at the same data we started with. These are the mean-subtracted values of the Docuscope LATs AbstractConcepts and FirstPerson in the Folger Editions of Shakespeare’s plays.

Screen Shot 2015-06-22 at 9.43.18 PM

To get started, you must place the .csv file containing the data above into your R working directory, a directory you can change using the Misc. tab. Paste the entire text of the code in the R prompt window and press enter. Within that window, you will now see several means of calculating the covariance matrix (S) from the initial matrix (X) and then deriving eigenvectors (P) and final scores (Y) using both the automated R functions and “longhand” matrix multiplication. If you’re checking, the results here match those derived from the manual operations documented in the Excel spreadsheet, albeit with an occasional sign change in P. In the Quartz graphics device (a separate window), you will find five different images corresponding to five different views of this data. You can step through these images by keying control and an arrow key at the same time.

The first view is a centered scatterplot of the measurements above on our received or “naive bases,” which are our two Docuscope LATs. These initial axes already give us important information about distances between texts. I repeat the biplot from the top of the post, which shows that according to these bases, Macbeth is the second “closest” play to Henry V (sitting down and to the right of Troilus and Cressida, which is first):

Screen Shot 2015-06-22 at 10.01.43 PM

Now we look at the second image, which adds to the plot above a line that is the eigenvector corresponding to the highest eigenvalue for the covariance matrix S. This is the line that, by definition, maximizes the variance in our two-dimensional data:

Screen Shot 2015-07-02 at 10.52.32 PM

You can see that each point is projected orthogonally onto this new line, which will become the new basis or first principal component once the rotation has occurred. This maximum is calculated by summing the squared distances of each perpendicular intersection point (where gray meets red) from the mean value at the center of the graph. This red line is like the single camera that would “replace,” as it were, the haphazardly placed cameras in Shlens’s toy example. If we agree with the assumptions made by PCA, we infer that this axis represents the main dynamic in the system, a key “angle” from which we can view that dynamic at work.
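The claim that this line maximizes the variance can be checked numerically. Here is a sketch with made-up correlated data: we compute the variance of the orthogonal projections onto the top eigenvector, then sweep through many other candidate unit directions and confirm that none of them does better.

```python
import numpy as np

# Made-up correlated 2-D data standing in for the two LAT scores.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.7], [0.7, 1.0]])
X = X - X.mean(axis=0)

S = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)
v1 = eigvecs[:, -1]                     # eigenvector with the largest eigenvalue

def variance_along(u):
    # Project every point orthogonally onto the unit direction u, then take
    # the sum of squared distances from the center, scaled to a variance.
    t = X @ u
    return (t ** 2).sum() / (len(t) - 1)

# Sweep 180 directions around the half-circle: none beats the top eigenvector.
candidates = [np.array([np.cos(a), np.sin(a)])
              for a in np.linspace(0, np.pi, 180, endpoint=False)]
best_other = max(variance_along(u) for u in candidates)
```

The variance along `v1` equals the largest eigenvalue of S, which is exactly why the eigen-decomposition finds the “red line” without any sweeping.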

The orthonormal assumption makes it easy to plot the next line (black), which is the eigenvector corresponding to our second, lesser eigenvalue. The measured distances along this axis (where gray meets black) represent scores on the second basis or principal component, which by design eliminates correlation with the first. You might think of the variance along this line as the uncorrelated “leftover” from that which was captured along the first new axis. As you can see, intersection points cluster more closely around the mean point in the center of this line than they did around the first:

Screen Shot 2015-07-02 at 11.09.16 PM

Now we perform the change of basis, multiplying the initial matrix X by the transformation matrix P. This projection (using the gray guide lines above) onto the new axis is a rotation of the original data around the origin. For the sake of explication, I highlight the resulting projection along the first component in red, the axis that (as we remember) accounts for the largest amount of variance:

Screen Shot 2015-07-02 at 11.18.41 PM

If we now force all of our dots onto the red line along their perpendicular gray pathways, we eliminate the second dimension (Y axis, or PC2), projecting the data onto a single line, which is the new basis represented by the first principal component.

Screen Shot 2015-07-02 at 11.44.42 PM

We can now create a list of the plays ranked, in descending order, on this first and most principal component. This list of distances represents the reduction of the two initial dimensions to a single one, a reduction motivated by our desire to capture the most variance in a single direction.
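A minimal sketch of that ranked list, using invented PC1 scores (the real values are the first column of the rotated matrix Y):

```python
# Hypothetical PC1 scores for a handful of plays; the actual values come
# from the first column of the rotated matrix Y.
pc1 = {
    "Henry V": 2.4,
    "The Tempest": 1.9,
    "Twelfth Night": -1.4,
    "The Comedy of Errors": -2.1,
}

# Descending order on the new basis: on this invented scale, the plays at
# the bottom of the list would be the "most comic."
ranked = sorted(pc1, key=pc1.get, reverse=True)
```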

How does this projection change the distances among our items? The comparison below shows the measurements, in rank order, of the far ends of our initial two variables (AbstractConcepts and FirstPerson) and of our new variable (PC1). You can see that the plays have been re-ordered and the distances between them changed:

Screen Shot 2015-07-03 at 12.14.04 AM

Our new basis, PC1, looks like it is capturing some dynamic that we might connect to what the creators of the First Folio (1623) labeled as “comedy.” When we look at similar ranked lists for our initial two variables, we see that individually they too seemed to be connected with “comedy,” in the sense that a relative lack of one (AbstractConcepts) and an abundance of the other (FirstPerson) both seem to contribute to a play’s being labelled a comedy. Recall that these two variables showed a negative covariance in the initial analysis, so this finding is unsurprising.

But what PCA has done is combine these two variables into a new one, which is a linear combination of the scores according to weighted coefficients (found in the first eigenvector). If you are low on this new variable, you are likely to be a comedy. We might want to come up with a name for PC1, which represents the combined, re-weighted power of the first two variables. If we call it the “anti-comedy” axis — you can’t be comic if you have a lot of it! — then we’d be aligning the sorting power of this new projection with what literary critics and theorists call “genre.” Remember that aligning these two things is not the same as saying that one is the cause of the other.

With a sufficient sample size, this procedure for reducing dimensions could be performed on a dozen measurements or variables, transforming that naive set of bases into principal components that (a) maximize the variance in the data and, one hopes, (b) call attention to the dynamics expressed in texts conceived as a “system.” If you see PCA performed on three variables rather than two, you should imagine the variance-maximizing projection above repeated with a plane in the three dimensional space:


Add yet another dimension, and you can still find the “hyperplane” which will maximize the variance along a new basis in that multidimensional space. But you will not be able to imagine it.
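The decomposition itself carries over unchanged to any number of variables, even though the picture does not. Here is a sketch with a dozen made-up measurements per play, in which PC1 is still just the top eigenvector of the (now 12×12) covariance matrix:

```python
import numpy as np

# A dozen invented measurements for 38 "plays"; in practice these would
# be a dozen LAT scores per text.
rng = np.random.default_rng(2)
X = rng.normal(size=(38, 12))
X = X - X.mean(axis=0)                  # mean deviation form, as before

eigvals, P = np.linalg.eigh(np.cov(X, rowvar=False))
pc1_scores = X @ P[:, -1]               # eigh sorts ascending: last = largest
```

Each play still gets a single score on the new basis; only the hyperplane doing the projecting has become unimaginable.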

Because principal components are mathematical artifacts — no one begins by measuring an imaginary combination of variables — they must be interpreted. In this admittedly contrived example from Shakespeare, the imaginary projection of our existing data onto the first principal component, PC1, happens to connect meaningfully with one of the sources of variation we already look for in cultural systems: genre. A corpus of many more plays, covering a longer period of time and more authors, could become the basis for still more projections that would call attention to other dynamics we want to study, for example, authorship, period style, social coterie or inter-company theatrical rivalry.

I end by emphasizing the interpretability of principal components because we humanists may be tempted to see them as something other than mathematical artifacts, which is to say, something other than principled creations of the imagination. Given the data and the goal of maximizing variance through projection, many people could come up with the same results that I have produced here. But there will always be a question about what to call the “underlying dynamic” a given principal component is supposed to capture, or even about whether a component corresponds to something meaningful in the data. The ongoing work of interpretation, beginning with the task of naming what a principal component is capturing, is not going to disappear just because we have learned to work with mathematical — as opposed to literary critical — tools and terms.

Axes, Critical Terms, and Motivated Fictions

Let us return to the idea that a mathematical change of basis might call our attention to an underlying dynamic in a “system” of texts. If, per Shlens’s analogy, PCA works by finding the ideal angle from which to view the oscillations of the spring, it does so by finding a better proxy for the underlying phenomenon. PCA doesn’t give you the spring, it gives you a better angle from which to view the spring. There is nothing about the spring analogy or about PCA that contradicts the possibility that the system being analyzed could be much more complicated — could contain many more dynamics. Indeed, there is nothing to stop a dimension reduction technique like PCA from finding dynamics that we will never be able to observe or name.

Part of what the humanities do is cultivate empathy and a lively situational imagination, encouraging us to ask, “What would it be like to be this kind of person in this kind of situation?” That’s often how we find our way into plays, how we discover “where the system’s energy is.” But the humanities is also a field of inquiry. The enterprise advances every time someone refines one of our explanatory concepts and critical terms, terms such as “genre,” “period,” “style,” “reception,” or “mode of production.”

We might think of these critical terms as the humanities equivalent of a mathematical basis on which multidimensional data are projected. Saying that Shakespeare wrote “tragedies” reorients the data and projects a host of small observations on a new “axis,” as it were, an axis that somehow summarizes and so clarifies a much more complex set of comparisons and variances than we could ever state economically. Like geometric axes, critical terms such as “tragedy” bind observations and offer new ways of assessing similarity and difference. They also force us to leave things behind.

The analogy between a mathematical change of basis and the application of critical terms might even help explain what we do to our colleagues in the natural and data sciences. Like someone using a transformation matrix to re-project data, the humanist introduces powerful critical terms in order to shift observation, drawing some of the things we study closer together while pushing others further apart. Such a transformation or change of basis can be accomplished in natural language with the aid of field-structuring analogies or critical examples. Think of the perspective opened up by Clifford Geertz’s notion of “deep play,” or his example of the Balinese cock fight, for example. We are also adept at making comparisons that turn examples into the bases of new critical taxonomies. Consider how the following sentence reorients a humanist problem space: “Hamlet refines certain tragic elements in The Spanish Tragedy and thus becomes a representative example of the genre.”

For centuries, humanists have done these things without the aid of linear algebra, even if matrix multiplication and orthogonal projection now produce parallel results. In each case, the researcher seeks to replace what Shlens calls a “naive basis” with a motivated one, a projection that maps distances in a new and powerful way.

Consider, as a final case study in projection, the famous speech of Shakespeare’s Jacques, who begins his Seven Ages of Man speech with the following orienting move: “All the world’s a stage, / And all the men and women merely players.” With this analogy, Jacques calls attention to a key dynamic of the social system that makes Shakespeare’s profession possible — the fact of pervasive play. Once he has provided that frame, the ordered list of life roles falls neatly into place.

This ability to frame an analogy or find an orienting concept —the world is a stage, comedy is a pastoral retreat, tragedy is a fall from a great height, nature is a book — is something fundamental to humanities thinking, yet it is necessary for all kinds of inquiry. Improvising on a theme from Giambattista Vico, the intellectual historian Hans Blumenberg made this point in his work on foundational analogies that inspire conceptual systems, for example the Stoic theater of the universe or the serene Lucretian spectator looking out on a disaster at sea. In a number of powerful studies — Shipwreck with Spectator, Paradigms for a Metaphorology, Care Crosses the River — Blumenberg shows how analogies such as these come to define entire intellectual systems; they even open those systems to sudden reorientation.

We certainly need to think more about why mathematics might allow us to appreciate unseen dynamics in social systems, and how critical terms in the humanities allow us to communicate more deliberately about our experiences. How startling that two very different kinds of fiction — a formal conceit of calculation and the enabling, partial slant of analogy — help us find our way among the things we study. Perhaps this should not be surprising. As artifacts, texts and other cultural forms are staggeringly complex.

I am confident that humanists will continue to seek alternative views on the complexity of what we study. I am equally confident that our encounters with that complexity will remain partial. By nature, analogies and computational artifacts obscure some things in order to reveal other things: the motivation of each is expressed in such tradeoffs. And if there is no unmotivated view on the data, the true dynamics of the cultural systems we study will always withdraw, somewhat, from the lamplight of our descriptive fictions.



Finding “Distances” Between Shakespeare’s Plays 1

swallows-300x199

In honor of the latest meeting of our NEH-sponsored Folger workshop, Early Modern Digital Agendas, I wanted to start a series of posts about how we find “distances” between texts in quantitative terms, and about what those distances might mean. Why would I argue that two texts are “closer” to one another than they are to a third that lies somewhere else? How do those distances shift when they are measured on different variables? When represented as points in different dataspaces, the distances between texts can shift as variables change — like a murmuration of starlings. So what kind of cloud is a cloud of texts?

This first post begins with some work on the Folger Digital Texts of Shakespeare’s plays, which I’m making available in “stripped” form here. These texts were created by Mike Poston, who developed the encoding scheme for Folger Digital Texts, and who understands well the complexities involved in differentiating between the various encoded elements of a play text.

I’ve said the texts are “stripped.” What does that mean? It means that we have eliminated those words in the Folger Editions that are not spoken by characters. Speech prefixes, paratextual matter, and stage directions are absent from this corpus of Shakespeare plays. There are interesting and important reasons why these portions of the Editions are being set aside in the analyses that follow, and I may comment on that issue at a later date. (In some cases, stripping will even change the “distances” between texts!) For now, though, I want to run through a sequence of analyses using a corpus and tools that are available to as many people as possible. In this case that means text files, a web utility, and in subsequent posts on “dimension reduction,” an Excel spreadsheet alongside some code written for the statistics program R.

The topic of this post, however, is “distance” — a term well worth thinking about as our work moves from corpus curation through the “tagging” of the text and on into analysis. As always, the goal of this work is to do the analysis and then return to these texts with a deepened sense of how they achieve their effects — rhetorically, linguistically, and by engaging aesthetic conventions. It will take more than one post to accomplish this full cycle.

So, we take the zipped corpus of stripped Folger Edition plays and upload it to the online text tagger, Ubiqu+ity. This tagger was created with support from the Mellon Foundation’s Visualizing English Print grant at the University of Wisconsin, in collaboration with the creators of the text tagging program Docuscope at Carnegie Mellon University. Ubiqu+ity will pass a version of Docuscope over the plays, returning a spreadsheet with percentage scores on the different categories or Language Action Types (LATs) that Docuscope can tally. In this case, we upload the stripped texts and request that they be tagged with the earliest version of Docuscope available on the site, version 3.21 from 2012. (This is the version that Hope and I have used for most of our analyses in our published work. There may be some divergences in actual counts, as this is a new implementation of Docuscope for public use. But so far the results seem consistent with our past findings.) We have asked Ubiqu+ity to create a downloadable .csv file with the Docuscope counts, as well as a series of HTML files (see the checked box below) that will allow us to inspect the tagged items in textual form.


Screen Shot 2015-06-22 at 9.02.58 PM

The results can be downloaded here, where you will find a zipped folder containing the .csv file with the Docuscope counts and the HTML files for all the stripped Folger plays. The .csv file will look like the one below, with abbreviated play names arrayed vertically in the first column, then (moving columnwise to the right) various other pieces of metadata (text_key, html_name, and model_path), and finally the Docuscope counts, labelled by LAT. You will also find that a note on curation was fed into the program. I will want to remove this row when doing the analysis.

Screen Shot 2015-06-23 at 8.35.34 AM

For ease of explication, I’m going to pare down these columns to three: the name of the text in column 1, and then the scores that sit further to the right on the spreadsheet for two LATs: AbstractConcepts and FirstPerson. These scores are expressed as a proportion, which is to say, the number of tokens tagged under a given LAT as a fraction of all the included tokens. So now we are looking at something like this:

Screen Shot 2015-06-22 at 9.20.00 PM
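
The proportions in these columns are simple ratios. Here is a minimal sketch, with invented counts standing in for the real Docuscope tallies:

```python
# A LAT score is a proportion: tokens tagged with that LAT divided by all
# included tokens. These counts are invented for illustration.
tagged_abstract = 812   # hypothetical AbstractConcepts tokens in one play
total_tokens = 26000    # hypothetical total included tokens

score = tagged_abstract / total_tokens
print(round(score, 4))  # a small proportion, e.g. 0.0312 here
```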

Before doing any analysis, I will make one further alteration, subtracting the mean value for each column (the “average” score for the LAT) from every score in that column. I do this in order to center the data around the zero point of both axes:

Screen Shot 2015-06-22 at 9.43.18 PM

Now some analysis. Having identified a corpus (Shakespeare’s plays) and curated our texts (stripping, processing), we have counted some agreed-upon features (Docuscope LATs). The features upon which we are basing the analysis are those words or strings of words that Docuscope counts as AbstractConcepts and FirstPerson tokens.
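
The mean-centering step described above can be sketched in a few lines of Python; the scores here are invented stand-ins for the real spreadsheet values:

```python
# Subtract each column's mean from every score in that column, so the
# data is centered on zero. Values are invented for illustration.
scores = {
    "AbstractConcepts": [0.031, 0.024, 0.019],
    "FirstPerson": [0.042, 0.055, 0.061],
}

centered = {}
for lat, values in scores.items():
    mean = sum(values) / len(values)            # the "average" score for the LAT
    centered[lat] = [v - mean for v in values]  # subtract it from every score

# Each centered column now sums to (approximately) zero.
```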

It’s important to note that at any point in this process we could have made different choices, and that these choices would have led to different results. The choice of what to count is a vitally important one, so we ought to give some thought to what Docuscope counted as FirstPerson and AbstractConcepts. To get to know these LATs better — to understand what exactly has been assigned these two tags — we can open one of the HTML files of the plays and “select” that category on the right-hand side of the page, scrolling through the document to see what was tagged. Below is the opening scene of Henry V, so tagged:

Screen Shot 2015-06-23 at 8.28.10 AM


Before doing the analysis, we will want to explore the features we have been counting by opening up different play files and turning different LATs “on and off” on the left-hand side of the HTML page. This is how we get to know what is being counted in the columns of the .csv file.

I look, then, at some of our texts and the features that Ubiqu+ity tagged within them. I will be more or less unsatisfied with some of these choices, of course. (Look at “i’ th’ receiving earth”!) Because words are tagged according to inflexible rules, I will disagree with some of the things that are being included in the different categories. That’s life. Perhaps there’s some consolation in the fact that the choices I disagree with are, in the case of Docuscope, (a) relatively infrequent and (b) implemented consistently across all of the texts (wrong in the same way across all types of document). If I really disagree, I have the option of creating my own text tagger. In practice, Hope and I have found it easier to continue to use Docuscope, since we do not want to build into the tagging scheme the very things we may be interested in finding. It’s a good thing that Docuscope remains a little bit alien to us, and to everyone else who uses it.

Now to the question of distance.

Screen Shot 2015-06-22 at 10.01.43 PM
When we look at the biplot above, generated in R from the mean-adjusted data above, we notice a general shape to the data. We could use statistics to describe the trend — there is a negative covariance between FirstPerson and AbstractConcept LATs — but we can already see that as FirstPerson tokens increase, the proportion of AbstractConcept tokens tends to decrease. The trend is a rough one, but there is the suggestion of a diagonal line running from the upper left hand side of the graph toward the lower right.
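
The negative covariance can be checked directly. Here is a sketch with invented, already mean-centered scores (not the actual Docuscope values):

```python
# The sign of the covariance summarizes the diagonal trend in the biplot:
# negative means that as one LAT rises, the other tends to fall.
# These mean-centered values are invented for illustration.
first_person = [0.010, 0.004, -0.003, -0.011]
abstract =     [-0.008, -0.002, 0.003, 0.007]

n = len(first_person)
# Sample covariance: average product of paired deviations (columns are
# already centered, so no means need subtracting here).
covariance = sum(x * y for x, y in zip(first_person, abstract)) / (n - 1)
print(covariance < 0)
```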

What does “distance” mean in this space? It depends on a few things. First, it depends on how the data is centered. Here we have centered the data by subtracting the column means from each entry. Our choice of a scale on either axis will also affect apparent distances, as will our choice of the units represented on the axes. (One can tick off standard deviations around the mean, for example, rather than the original units, which we have not done). These contingencies point up an important fact: distance is only meaningful because the space is itself meaningful — because we can give a precise account of what it means to move an item up or down either of these two axes.

Just as important: distances in this space are a caricature of the linguistic complexity of these plays. We have strategically reduced that complexity in order to simplify a set of comparisons. Under these constraints, it is meaningful to say that Henry V is “closer” to Macbeth than it is to Comedy of Errors. In the image above, you can compare these distances between the labelled texts. The first two plays, connected by the red line, are “closer” given the definitions of what is being measured and how those measured differences are represented in a visual field.
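
“Closer” here is ordinary Euclidean distance in the two-LAT space. A sketch with invented centered coordinates (not the actual scores for these plays):

```python
import math

# Hypothetical mean-centered (AbstractConcepts, FirstPerson) coordinates --
# illustrative only, not the real Docuscope values.
plays = {
    "Henry V": (0.008, -0.006),
    "Macbeth": (0.005, -0.002),
    "Comedy of Errors": (-0.007, 0.009),
}

def distance(a, b):
    """Euclidean distance between two plays in this 2-D LAT space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(plays[a], plays[b])))

# With these made-up numbers, Henry V sits nearer Macbeth than Comedy of Errors.
print(distance("Henry V", "Macbeth") < distance("Henry V", "Comedy of Errors"))
```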

When we plot the data in a two-dimensional biplot, we can “see” closeness according to these two dimensions. But if you recall the initial .csv file returned by Ubiqu+ity, you know that there can be many more columns — and so, many more dimensions — that can be used to plot distances.

Screen Shot 2015-06-23 at 8.56.23 AM

What if we had scattered all 38 of our points (our plays) in a space that had more than the two dimensions shown in the biplot above? We could have done so in three dimensions — plotting three columns instead of two — but once we arrive at four dimensions we are beyond the capacity for simple visualization. Yet there may be a similar co-patterning (covariance) among LATs in these higher-dimensional spaces, analogous to the ones we can “see” in two dimensions. What if, for example, the frequency of Anger decreases alongside that of AbstractConcepts just when FirstPerson instances increase? How should we understand the meaning of comparatives such as “closer together” and “further apart” in such multidimensional spaces? For that, we need techniques of dimension reduction.
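
The same distance formula extends to any number of LAT columns: with k columns, each play is a point in k dimensions. A sketch with invented four-column values:

```python
import math

# Euclidean distance generalizes beyond two dimensions: square the
# difference in every column, sum, and take the square root.
# The numbers are invented for illustration.
henry_v = [0.008, -0.006, 0.002, -0.001]  # e.g. AbstractConcepts, FirstPerson, Anger, ...
macbeth = [0.005, -0.002, 0.004, 0.000]

dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(henry_v, macbeth)))
print(dist > 0)
```

We can no longer draw this space, but the number it produces has exactly the same meaning as the distances we could see in the biplot.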

In the next post, I will describe my own attempts to understand a common technique for dimension reduction known as Principal Component Analysis. It took me about two years to figure that out, however imperfectly, and I want to pass along what I learned in case others are curious. But it is important to understand that these more complex techniques are just extensions of something we can imagine in simpler terms. And it is important to remember that there are very simple ways of visualizing distance — for example, an ordered list. We assessed distance visually in the biplot above, a distance that was measured according to two variables or dimensions. But we could just as easily have used only one dimension, say, AbstractConcepts. Here is the list of Shakespeare’s plays, in descending order, with respect to scores on AbstractConcepts:

Screen Shot 2015-06-23 at 9.04.56 AM

Even if we use only one dimension here, we can see once again that Henry V is “closer” to Macbeth than it is to Comedy of Errors. We could even remove the scores and simply use an ordinal sequence: this play, then this one, then this one. There would still be information about “distances” in this very simple, one-dimensional representation of the data.
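
An ordered list like the one above amounts to a one-line sort; the scores below are invented:

```python
# Ranking plays on a single LAT is a one-dimensional view of distance.
# Scores are invented for illustration.
abstract_scores = {
    "Henry V": 0.034,
    "Macbeth": 0.031,
    "Comedy of Errors": 0.019,
}

# Sort play names by their score, highest first.
ranked = sorted(abstract_scores, key=abstract_scores.get, reverse=True)
print(ranked)
```

Even stripped of the numbers, the ordering itself tells us that Henry V and Macbeth are adjacent while Comedy of Errors sits further off.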

Now we ask ourselves: which way of representing the distances between these texts is better? Well, it depends on what you are trying to understand, since distances — whether in one, two, or many more dimensions — are only distances according to the variables or features (LATs) that have been measured. In the next post, I’ll try to explain how the thinking above helped me understand what is happening in a more complicated form of dimension reduction called Principal Component Analysis. I’ll use the same mean-adjusted data for FirstPerson and AbstractConcepts discussed here, providing the R code and spreadsheets so that others can follow along. The starting point for my understanding of PCA is an excellent tutorial by Jonathon Shlens, which will be the underlying basis for the discussion.



Posted in Shakespeare, Visualizing English Print (VEP)

Now Read This: A Thought Experiment

MRI

Let’s say that we believe we can learn something more about what literary critics call “authorial style” or “genre” by quantitative work. We want to say what that “more” is. We assemble a community of experts, convening a panel of early modernists to identify 10 plays that they feel are comedies based on prevailing definitions (they end in marriage), and 10 they feel are tragedies (a high-born hero falls hard). To test these classifications, we ask a random selection of others in the profession (who were not on the panel) to sort these 20 plays into comedies and tragedies and see how far they diverge from the classifications of our initial panel. That subsequent sorting matches the first one, so we start to treat these labels (comedy/tragedy) as “ground truths” generated by “domain experts.” Now assume that I take a computer program (it doesn’t matter which one) and ask it to count things in these plays and come up with a “recipe” for each genre as identified by our experts. The computer is able to do so, and the recipes make sense to us. (Trivially: comedies are filled with words about love, for example, while tragedies use more words that indicate pain or suffering.) A further twist: because we have an unlimited, thought-experiment budget, we decide to put dozens of early modernists into MRI machines and measure the activity in their brains while they are reading any of these 20 plays. After studying the brain activity of these machine-bound early modernists, we realize that there is a distinctive pattern of brain activity that corresponds with what our domain experts have called “comedies” and “tragedies.” When someone reads a comedy, regions A, B, and C become active, whereas when a person reads tragedies, regions C, D, E, and F become active. These patterns are reliably different and track exactly the generic differences between the plays our subjects are reading in the MRI machine.

So now we have three different ways of identifying – or rather, describing – our genre. The first is by expert report: I ask someone to read a play and she says, “This is a comedy.” If asked why, she can give a range of answers, perhaps connected to plot, perhaps to her feelings while reading the play, or even to a memory: “I learned to call this and other plays like it ‘comedies’ in graduate school.” The second is a description, not necessarily competing, in terms of linguistic patterns: “This play and others like it use the conjunction ‘if’ and ‘but’ comparatively more frequently than others in the pool, while using ‘and’ less frequently.” The last description is biological: “This play and others like it produce brain activity in the following regions and not in others.” In our perfect thought experiment, we now have three ways of “getting at genre.” They seem to be parallel descriptions, and if they are functionally equivalent, any one of them might just be treated as a “picture” of the other two. What is a brain scan of an early modernist reading comedy? It is a picture of the speech act: “The play I’m reading right now is a comedy.”

Now the question. The first three acts of a heretofore unknown early modern play are discovered in a Folger manuscript, and we want to say what kind of play it is. We have our choice of either:

• asking an early modernist to read it and make his or her declaration

• running a computer program over it and rating it on our comedy/tragedy classifiers

• having an early modernist read it in an MRI machine and characterizing the play on the basis of brain activity.

Let’s say, for the sake of argument, that you can only pick one of these approaches. Which one would you pick, and why? If this is a good thought experiment, the “why” part should be challenging.

Posted in Quant Theory

Mapping the ‘Whole’ of Early Modern Drama

We’re currently working with two versions of our drama corpus: the earlier version contains 704 texts, while the later one has 554. The main distinction is that the later corpus has a four-way genre split (tragedy, comedy, tragicomedy, and history), while the earlier corpus also includes non-dramatic texts like dialogues, entertainments, interludes, and masques. Recently we’ve been doing PCA experiments with the 704 corpus to see what general patterns emerge, and to see how the non-dramatic genres pattern in the data. The following are a few of the PCA visualisations generated from this corpus, which provide a general overview of the data. We produced the diagrams here using JMP. The spreadsheets of the 704 and 554 corpora are included below as Excel files – please note we are still working on the metadata.

704 corpus

554 corpus


Overview (click to enlarge images):

overall PCA space copy

This is the complete data set visualised in PCA space. All 704 plays are included, but LATs with frequent zero values have been excluded.
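
For readers who want the mechanics rather than the JMP output, here is a minimal sketch of the projection behind plots like this one, using NumPy and invented LAT proportions (the real matrix has 704 rows and many LAT columns):

```python
import numpy as np

# Rows are texts, columns are LAT proportions (numbers invented).
X = np.array([
    [0.031, 0.042, 0.011],
    [0.024, 0.055, 0.009],
    [0.019, 0.061, 0.014],
    [0.035, 0.038, 0.008],
])

Xc = X - X.mean(axis=0)            # center each LAT column on zero
cov = np.cov(Xc, rowvar=False)     # LAT-by-LAT covariance matrix
vals, vecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order
order = np.argsort(vals)[::-1]     # reorder: largest variance first
pc_scores = Xc @ vecs[:, order[:2]]  # coordinates of each text in the 2-D PCA space

print(pc_scores.shape)  # one 2-D point per text
```

Each row of `pc_scores` is one dot in a plot like the one above: the two axes are the directions along which the texts vary most.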



If we highlight the genres, it looks like this:

all genres copy

Comedies = red

Dialogues = green

Entertainments = blue

Histories = orange

Interludes = blue-green

Masques = dark purple

Non-dramatics = mustard

Tragicomedies = dark turquoise

Tragedies = pink-purple


If we tease this out even more – hiding, but not excluding, the non-dramatic genres – there is a clear diagonal divide between tragedies (red) and comedies (blue):

[Michael Witmore, Jonathan Hope, and Michael Gleicher, forthcoming, ‘Digital Approaches to the Language of Shakespearean Tragedy’, in Michael Neill and David Schalkwyk, eds, The Oxford Handbook of Shakespearean Tragedy (Oxford)]

TR CO split copy

With tragicomedies (green) and histories (purple) falling in the middle:

TR CO TC HI split copy

It seems that tragedies and comedies are characterised by sets of opposing LATs. The LATs associated with comedy are those capturing highly oral language behaviour, while those associated with tragedy capture negative language and psychological states. Tragicomedies and histories – although we have yet to investigate them in detail – seem to occupy an intermediate space. If we unhide the non-dramatic genres, we can see how they pattern in comparison.

In spite of their name, dialogues are not made up of rapid exchanges (e.g. Oral Cues, Direct Address, First Person, etc., the LATs which make up the comedic side of the PCA space) but instead contain lengthy monologues, which might explain why they fall mostly on the side of the tragedies:

DI copy

Entertainments do not seem to be linguistically similar to each other:

EN copy

Interludes, on the other hand, seem to occupy a more tightly defined linguistic space:

IN copy

Masques are pulled towards the left of the PCA space:

MA copy



Docuscope was designed to identify genre, rather than authorship, so perhaps we should not be surprised that authorship comes through less clearly than genre in these initial trials. We should also bear in mind that there are only 9 genres in the corpus, compared to approximately 200 authors.

This, for example, shows only the tragedies – all other genres are hidden – and each author is represented by a different colour:

TR authorship copy

We get a clearer picture when considering a smaller group in relation to the whole – for example, one author compared to all the others. Take Seneca, for example – demonstrated by the purple squares:

TR Seneca copy

From this we can deduce that Seneca’s tragedies are linguistically similar, as they are grouped tightly together.



The same applies when looking at date of writing across the corpus, with approximately 100 dates to consider.

This can be visualised on a continuous scale, e.g. the lighter the dot, the earlier the play; the darker the dot, the later the play. While this has a nice ‘heat map’ effect, it is difficult to interpret:

date continuous scale copy

If we narrow this down to three groups of dates – early (red), central (yellow), and late (maroon) – it becomes a little easier to read. As with the Seneca example, the fewer factors there are to consider, the clearer the visualisations become:

early central late split copy

Posted in Early Modern Drama, Shakespeare, Visualizing English Print (VEP)