Responses for 10/25

This week’s readings revolved around Franco Moretti’s Graphs, Maps, Trees. In this series of articles (which were also published as a book), Moretti argues for a comparative literature based on “distant reading.” For Moretti, this amounts to studying large amounts of data on and about texts, as opposed to the traditional “close” reading of a handful of canonical texts.

In these articles Moretti aims to demonstrate how different techniques for visualizing this data could lead the study of literature in new directions and raise new questions. As an example, he graphs the rise and fall of different genres of British novels from 1700 to 1840, which leads him to an exploration of the reasons for these cycles.
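To make the idea of “distant reading” concrete, here is a minimal sketch of what this kind of quantification might look like in practice: counting new titles per genre per decade from a bibliography and plotting the resulting rise-and-fall curves. This is purely illustrative; the records and genre labels below are placeholders, not Moretti’s actual data or method.

```python
# Illustrative sketch of distant reading as counting and plotting,
# not a reconstruction of Moretti's actual dataset.
from collections import Counter
import matplotlib.pyplot as plt

# Hypothetical records: (publication year, genre) for a bibliography of novels.
# These values are placeholders for demonstration only.
records = [
    (1740, "epistolary"), (1750, "epistolary"), (1760, "epistolary"),
    (1790, "gothic"), (1795, "gothic"), (1800, "gothic"),
    (1815, "historical"), (1820, "historical"), (1830, "historical"),
]

def decade(year):
    """Round a year down to the start of its decade."""
    return (year // 10) * 10

# Count new titles per (genre, decade).
counts = Counter((genre, decade(year)) for year, genre in records)

genres = sorted({genre for _, genre in records})
decades = sorted({decade(year) for year, _ in records})

# One rise-and-fall curve per genre.
for genre in genres:
    series = [counts.get((genre, d), 0) for d in decades]
    plt.plot(decades, series, marker="o", label=genre)

plt.xlabel("Decade")
plt.ylabel("New titles (placeholder counts)")
plt.title("Rise and fall of genres (illustrative sketch)")
plt.legend()
plt.show()
```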

He seems rather ambivalent about where this takes him, however. His explanation, rooted in generational turnover, seems somewhat plausible, but he admits that he himself is not convinced. He then pulls back a bit and claims that “what is happening is the oscillation.” I wasn’t too convinced of his explanations either. The novel, as a cultural artifact, is situated in a nearly impenetrable web of cultural contexts, shifting fads, politics, economics, and so on, and I imagine many explanations might sound plausible. For example, in his book “The Railway Journey,” Wolfgang Schivelbusch claims that the drastic increase in the sales of novels in Europe in the mid-nineteenth century was a result of train travel: travelers were bored and so began reading on the journey. Part of his evidence for this claim is the drastic increase in the number of booksellers operating at train stations. This seems plausible enough to me (I first read this argument while on an airplane), but Moretti does not address it at all.

This is not to say that I think one scholar is “correct”; rather, I want to emphasize one of the hazards of distant reading. Numerical data can certainly be illuminating, but I am skeptical of how much truth can be found in the data alone.

In the next article Moretti discusses the use of literary maps, that is, maps of the spaces appearing in novels. I wasn’t convinced of the value of this as a method of interpretation, though I admit his argument was very hard to follow, as I was not familiar with the works he was addressing. It also reminded me of a paper I wrote as an undergraduate English student, in which I argued that compass directions in “The Joy Luck Club” were embedded in myths concerning fortune: good things typically happened in the north, bad in the south, and so on. As I recall I received a B- on that paper, as the professor thought the argument was decent but ultimately pointless.

Lastly, Moretti discusses morphology (not in the linguistic sense) by way of tree diagrams. Again, I thought this was an interesting exercise better suited to asking questions than answering them. But I do think the work is valuable, as Moretti himself notes that too often academics seek to answer questions they already know the answer to, and are unwilling to tackle questions with no obvious answer. The difficulty with this line of work is that interpreting the data is very problematic, and probably best tackled by a group of scholars from different backgrounds working together. I can easily imagine that an edited collection on the rise and fall of genres, with contributors from a wide variety of backgrounds, would be very interesting.

I found Burke’s critique of Moretti regarding how and what information lands in such archival sources to be spot-on, if slightly low-hanging fruit. It needs to be made, however, especially since Moretti claims that “Quantitative research provides a type of data which is ideally independent of interpretations […]” When I read this I thought, much as Burke did, that the methods by which data is gathered, and which data is gathered, are themselves interpretive acts regarding value, legitimacy, and purpose.

Batuman’s review of Moretti’s work was, in typical literary-scholar fashion, interesting but a bit too self-indulgent. The one point she made that I did appreciate concerned the temptation and problem of overly abstracted models. In discussing Propp, she questions why his fairytale framework was not merely “a simple sequence of ‘lack,’ ‘obstacles,’ and ‘acquisition.’” Obviously the reason is twofold: at this level of abstraction the framework becomes nearly meaningless, and furthermore, it is no longer a good sell.

Lastly, it was interesting to look at the various data visualization projects in light of Moretti’s work. I can certainly see the potential of a tool like the Time Magazine Corpus, with a few caveats. First, it is in desperate need of some usability improvements. The system is pretty incomprehensible, or at least enough so to prevent any kind of interesting casual inquiry. Second, the results it returns are often small snippets of text, and seeing the full article they were taken from requires a paid subscription, which lessens the value of the tool for all but the most invested. Lastly, it is subject to the same critique Burke and I share of Moretti: while this is a vast database, it is also an extremely narrow one. In using it I wondered how many different people have actually written for Time Magazine in the past ~90 years, and how diverse their backgrounds were. In other words, the Corpus does not tell us about American culture so much as it tells us about Time Magazine’s view and presentation of American culture.

The Smithsonian’s “History Wired” was also a bit baffling, but gradually revealed itself to be a very complex way to categorize a small, oddly eclectic mix of artifacts the museum holds. After I played with it for a while, it began to strike me more as a “proof of concept” than a particularly useful tool. I think that if the underlying data set—the items in the museum—were much larger, it could lead to some interesting visualizations, but it would still suffer from being mostly a complicated means of taxonomy.

With regard to Facebook’s Gross National Happiness, I could not get it to function properly. However, on principle I find it suspect, as it is in Facebook’s interest to make its users appear happy.

The “Many Eyes” project was another abject failure in usability, both in its creation tools and in the resulting projects. (How do you know whether a given type of visualization will work, or be appropriate, when the nature of the data is unclear?) I looked through a number of visualizations and often found them a bit mystifying, although I had better luck sticking with traditional graphs. This kind of tool does seem very powerful, however, especially since users are able to upload their own data. Unlike the other projects we looked at this week, I can see myself returning to this one and investing the time necessary to understand it.

Overall, what I took from this week’s readings is the potential for these kinds of data visualizations to raise new questions. I am not sure we have good methods for answering them right now, but that in itself is exciting.

 
