My final project for this course is on the representation of the Baltimore and Ohio in board games, particularly 1830: Railways and Robber Barons. The site is at http://traingames.omkea.net
This week’s readings were an interesting selection of work dealing with open access, copyright and archiving in the digital age.
I was particularly interested in the juxtaposition of Willinsky’s The Access Principle and Lessig’s Free Culture, though I must admit I came into these readings already in agreement with much of what both authors said. It is interesting though to consider how many scholars are pushing for a greater degree of access to their work. Willinsky attributes this drive to a variety of factors including prestige and citations, but also the fact that subscription journals have been creating knowledge ‘haves’ and ‘have-nots,’ which is particularly a problem in developing countries. The impression I got from his writing is that scholars are concerned with both of these factors in that there is a desire to fix the latter problem but also a need to do so while being mindful of the former.
The desire for one’s work to be read, and copied by way of citation, is of course fundamental to academia and knowledge production. In reading Lessig’s chapter I found myself wondering how it came about that in the academy we worry about plagiarism a great deal, but seem mostly content to rely on internal methods to deal with it. In my experience copyright rarely factors in, or at least is much less of a concern than plagiarism. The differing “norms,” to use Lessig’s term, between the academic community and culture at large are interesting, and while I would like to think they are somewhat attributable to academic high-mindedness, the cynic in me thinks it’s because there is simply less money in journal articles than in Disney films.
Similarly, the Rosenzweig piece was interesting but covered largely familiar ground. Coming from a department that was very much doing digital humanities, I already knew many of the ideas presented this week, so I wanted to take a moment to talk about some direct experience I have.
In 2011 my adviser and I published an article in the online journal Game Studies, which was one of the very first academic journals devoted to the humanistic study of games. The journal has always been freely available online and collects no authors’ fees. It is supported by The Swedish Research Council, The Joint Committee for Nordic Research Councils for the Humanities and the Social Sciences, the IT University of Copenhagen, and Lund University. The journal is obviously run on a pretty low budget: when our article was published they had nobody on staff who was technically minded. I ended up providing HTML support to the lone staff member for the formatting of our tables, and you can see from how bad they look that it did not go well.
That said, I’m quite happy that the field’s flagship journal is open access. I would much rather have to offer a little technical help than to have to pay fees, or know that my work is locked behind a paywall. This is also my most-cited work, though at 5 citations that is not exactly Earth-shattering.
This article also illustrates some of the problems with archiving digital data that Rosenzweig was discussing. The article comes from a study of Faunasphere, an online casual game that differed from most games in many respects. The game was only playable online, and shortly after this article was published the game was shut down and its accompanying forums closed. If you click on any of the 10 or so forum posts we cite in the article the links will redirect you to the homepage of the game’s developer. They used to redirect to a message explaining that the game had been shut down, but now there is no indication whatsoever that Faunasphere existed.
For game scholars the preservation of our material is exceedingly problematic. We are often working with commercial products, like Faunasphere, that are extremely resistant to archiving and preservation. The source code for the game presumably lives on somewhere at Big Fish Games, but the release of code by a game company to the public, even when the game is no longer available, is almost entirely unheard of (it does happen, just very rarely). Video and computer games are also extremely vulnerable to platforms and hardware becoming obsolete, and many scholars use emulation to illegally play old or unavailable games for research.
To return to my article, it also demonstrates the problem of citing Web sources in a Web-based article: they can vanish. Nobody can verify that what we wrote about the Faunasphere forums is true. So here is a scholarly work about something that does not exist using nonexistent evidence.
I’m not really sure what the solution is here. In this particular case we were not claiming anything controversial, and my coauthor is extremely well-regarded in the field, so I can’t imagine any negative fallout coming from this. As a researcher, though, I find it disappointing.
Lastly, I wanted to mention a thought I had regarding the Korsakow projects. As I was watching them I found myself enjoying them much more than the other digital projects we have looked at this semester. It occurred to me that the reason for this is that they were not created to present, provide, or deliver information, but were much closer to art projects and documentaries. In this context I was much more willing to engage with their nonlinearity and experimentation, and I realized it was because I was much less concerned with missing something. Unfortunately I think this is probably a bad sign for digital projects.
(I am by necessity writing this before Andrew and I have had a chance to plan the discussion questions for class, so I will come back to add them later in the week.)
In reading for this week’s class, I was struck by the difference between the ideal informational map and the reality of the practice today.
In writing on Minard’s map of Napoleon’s ill-fated invasion of Russia in 1812, Edward Tufte identifies six principles of (presumably good) analytical design:
1. Show comparisons, contrasts and differences.
2. Show causality.
3. Integrate 3 or more variables.
4. Integrate multiple kinds of evidence.
5. Thoroughly describe the evidence.
6. Content counts most of all.
For Tufte, Minard’s map embodies all of these principles, and it is clear from his and Corbett’s writing that Minard’s map is considered exemplary by people interested in these kinds of things. After looking through the various projects assigned for this week, I’m tempted to conclude that Minard’s is still the best. I found it very intuitive and easy to understand, and it makes its point in a very dramatic way.
I found that nearly all of the digital projects suffered from three shortcomings. First, entering into them without a predetermined goal, and without much background knowledge, created a huge barrier to finding things of interest. Sure I can look at pictures of buildings around Philadelphia (note that the homepage of PhilaPlace doesn’t actually tell you it’s about Philadelphia), or superimpose different maps of Rome on each other, but to what end?
The second shortcoming is in usability. After engaging with these various tools what I inevitably learned was how to use them, and far less about the content contained therein. Many of White’s railroad projects are good examples of this. I now know how to compare freight import/exports at various stations, but what that is supposed to tell me eludes me, even after reading the descriptions. “Putting Harlem on the Map” also has weird issues with the things that can be plotted or searched for. For example, searching for billiards games (a category designed into the system) happening at any time and in any place yields 0 results. So why is that category there?
This relates to the first problem: these tools are clearly for specialists already familiar with the topic at hand.
The last shortcoming is the lack of Tufte’s second principle, causality. I may be lacking the background to spot it, but in looking through the digital projects I never saw a suggestion as to what was causing the phenomena I was engaging with. Plotting domestic abuse in Harlem doesn’t tell me much about why it happened. If anything causality was buried in the textual descriptions, not displayed visually.
Ultimately I am forced to conclude that these tools are more valuable to the people making them, who already have the requisite specialized knowledge to draw meaningful conclusions from them, than they are to a generic end-user such as myself. However, I am willing to be convinced otherwise, and this is one of the questions I would like to address on Thursday.
I did think that White’s discussion about the Digital History Project was very interesting, particularly his discussion of Lefebvre’s three kinds of space. I was particularly interested in the idea of relational space, which is the conceptualization of space in relative terms. As an example White talks about how places can seem closer together or further apart depending on travel time. He followed this idea up in one of his digital projects that displayed space as a function of freight rates, which like many of the projects was interesting but hard to use and conceptualize.
White also makes a point that relates to my complaint about the need for specialization to understand these projects:
“…visualization and spatial history are not about producing illustrations or maps to communicate things that you have discovered by other means. It is a means of doing research; it generates questions that might otherwise go unasked, it reveals historical relations that might otherwise go unnoticed, and it undermines, or substantiates, stories upon which we build our own versions of the past.”
He himself admits that these maps are not about communication, yet many of the projects we looked at do purport to communicate something about a given place, particularly PhilaPlace and the Euclid Corridor History Project. I don’t have a problem with constructing these things as part of your research practice, but often they are presented as if they are for public consumption. I think digital historians need to think about why they are making what they are making and for whom they are making it. I would argue that it is fine to make something for your own use, but it is important to be honest about that fact. And perhaps some more attention needs to be paid to Minard’s map, which to a lay person is much more useful, interesting and meaningful than any of the digital projects.
Update: Questions to be discussed today.
Were there any projects you particularly liked or disliked?
What do you think about the effort required to learn how these things work? Is it worth it?
What about the relationship between visualizations and their sources? Do visualizations discourage verification?
Are these just interactive ways of engaging with someone else’s ideas?
Are they primary or secondary sources?
Are these tools more useful for their creators or other users?
Should maps like these be considered research tools or conclusions?
How much does context matter?
Is causation really necessary? Seems hard to present.
Is it worth trying to draw visitors?
Are we putting them on a map to help explain things, or just doing it for the sake of doing it?
What is something we can only learn by mapping it?
Is the map an end?
What do we gain by making it digital?
What kinds of meanings can be derived from spatial relationships?
Who is making these and why?
This week’s readings revolved around Franco Moretti’s Graphs, Maps, Trees. In this series of articles (which also exist as a book), Moretti argues for a comparative literature based on “distant reading.” For Moretti, this amounts to studying large chunks of data on and about texts, as opposed to the traditional “close” reading of a handful of canonical texts.
In these articles Moretti aims to demonstrate how different visualization techniques of this data could lead the study of literature in new directions and raise new questions. As an example he graphs the rise and fall of different genres of British novels from 1700 to 1840, which leads him to an exploration of the reasons for these cycles.
He seems rather ambivalent about where this takes him, however. His explanation rooted in generational advancement seems somewhat plausible, but he admits that he himself is not convinced. He further pulls back a bit and claims that “what is happening is the oscillation.” I myself wasn’t too convinced of his explanations, either. The novel, as a cultural artifact, is situated in a nearly impenetrable web of cultural contexts, shifting fads, politics, economics, and so on, and I imagine many explanations might sound plausible. For example, in his book “The Railway Journey,” Wolfgang Schivelbusch claims that the drastic increase in the sales of novels in Europe in the mid-nineteenth century was a result of train travel: travelers were bored and so began reading on the journey. Part of his evidence for this claim is the drastic increase in the number of booksellers operating at train stations. This seems plausible enough to me (I first read this argument while on an airplane), but Moretti does not address it at all.
This is not to say that I think one scholar is “correct,” but rather I want to emphasize one of the hazards of distant reading. Numerical data can certainly be illuminating, but I doubt the amount of truth to be found in the data alone.
In the next article Moretti discussed the use of literary maps, that is, maps of spaces appearing in novels. I wasn’t convinced of the value of this as a method of interpretation, though I admit it was very hard to follow his argument, as I was not familiar with the works he was addressing. It also reminded me of a paper I wrote as an undergraduate English student, wherein I argued that compass directions in “The Joy Luck Club” were embedded in myths concerning fortune: good things typically happened in the north, bad in the south, and so on. As I recall I received a B- on that paper, as the professor thought the argument was decent but ultimately pointless.
Lastly, Moretti discusses morphology (not in the linguistic sense) by way of tree diagrams. Again, I thought this was an interesting exercise better suited to asking questions than answering them. But I do think the work valuable, as Moretti himself notes that too often academics seek to answer questions they already know the answer to, and are unwilling to tackle questions with no obvious answer. I think the difficulty with this line of work is that interpreting the data is very problematic, and probably best tackled by a group of scholars from different backgrounds working together. I can easily imagine that an edited collection on the topic of the rise and fall of genre, with contributors from a wide variety of backgrounds, would be very interesting.
I found Burke’s critique of Moretti regarding how and what information lands in such archival sources to be spot-on, if slightly low-hanging fruit. It needs to be made, however, especially since Moretti claims that “Quantitative research provides a type of data which is ideally independent of interpretations […]” When I read this I thought, much as Burke did, that the methods by which data is gathered, and which data is gathered, are themselves interpretive acts regarding value, legitimacy and purpose.
Batuman’s review of Moretti’s work was, in typical literary scholar fashion, interesting but a bit too self-indulgent. The one point she made that I did appreciate was regarding the temptation and problem of overly-abstracted models. In discussing Propp she questions why his fairytale framework was not merely “a simple sequence of ‘lack,’ ‘obstacles,’ and ‘acquisition.’” Obviously the reason is twofold: at this level of abstraction the framework becomes nearly meaningless, and furthermore, it is no longer a good sell.
Lastly, it was interesting to look at the various data visualization projects in light of Moretti’s work. I can certainly see the potential of a tool like the Time Magazine Corpus, with a few caveats. First, it is in desperate need of some usability improvements. The system is pretty incomprehensible, or at least enough so to prevent any kind of interesting casual inquiry. Second, the results it returns are often in small paragraphs, and to go see the article they were taken from requires you to be a paid subscriber, which lessens the value of the tool for all but the most invested. Lastly, it is subject to the same critique Burke and I share of Moretti: while this is a vast database, it’s also an extremely narrow one. In using it I wondered how many different people have actually written for Time Magazine in the past ~90 years, and how diverse their backgrounds were. In other words, the Corpus does not tell us about American culture as much as it tells us about Time Magazine’s view and presentation of American culture.
The Smithsonian’s “History Wired” was also a bit baffling, but gradually revealed itself to be a very complex way to categorize a small, oddly eclectic mix of artifacts the museum has. After playing with it for a while it began to strike me more as a “proof of concept” than a particularly useful tool. I think that if the underlying data set—items in the museum—was much larger it could lead to some interesting visualizations, but it would still suffer from mostly being a complicated means of taxonomy.
With regards to Facebook’s Gross National Happiness, I could not get it to function properly. However on principle I find it suspect, as it is in Facebook’s interest to make its users appear happy.
The “Many Eyes” project was another abject failure in usability, both in the creation tools and in the resulting projects. (How do you know whether a type of visualization will work, or be appropriate, when the type of data is unclear?) I looked through a bunch of visualizations and often found them a bit mystifying, although I did have better luck by sticking with traditional graphs. This kind of tool does seem very powerful, however, especially as users are able to upload their own data. Unlike the other projects we looked at for this week, I can see myself returning to this one and investing the necessary time to come to understand it.
Overall what I took from this week’s reading is the potential for these kinds of data visualizations to raise new questions. I am not sure we have good methods for answering them right now, but that in itself is exciting.
In his chapter “Simulation, History and Computer Games,” media scholar William Uricchio posits a continuum on which “historical games” can be placed: at one end are games addressing specific historical events, and on the other are games that model historical processes (328, 2005). Games of the former type often include war games seeking to recreate specific battles or campaigns. While these naturally include a certain amount of “what if?” (a game wherein players simply move through pre-determined steps would hardly be a game), the emphasis is on accuracy and simulation. Uricchio notes that in these games “Play emerges in the space between the constraint of detail and the exhilaration of improvisation” (330, 2005). At the other end of the spectrum are the process-type games, the best known example being Sid Meier’s Civilization (MPS Labs 1990). These games allow a freer engagement with the past in exchange for a deemphasis on particular referents: “Rather than a what if simulation with a known case study as the referent, nonspecific simulations provoke a wider range of interrogations, encouraging a more abstract, theoretical engagement of historical processes” (Uricchio 330, 2005). These games are more concerned with general processes, principles and ideologies than specific events.
The research question driving this project is how board and computer games have represented the founding and early years of the Baltimore and Ohio Railroad. This question will be addressed by applying Uricchio’s ‘process-event’ spectrum to a selection of games, and in so doing both expanding upon and complicating the spectrum. This will be done by taking into account not only each game’s rules, mechanics and art, but also multimedia elements such as audio and video (in the case of computer games) and paratextual elements (Genette 1997) such as rule books, manuals, the game box, and so on. In doing so this project will show how the “historicalness” of a game is built up through a combination of rules, fictional and thematic elements (Juul 2005), and other multimedia aspects.
Adding another layer, this project will also be mindful of the chronology of the games studied. The history of games themselves as cultural artifacts is not well studied or understood. This project will contribute to our knowledge of game history by analyzing how these games have built on design conventions and act in conversation with each other. Train games are a particularly rich genre for this project because they are one of the few game genres popular in both analog and digital media.
This project will be well-suited to digital scholarship because of the nature of the sources considered. Simply put, games in any medium are difficult to depict in text. The project will be built on Omeka with a separate page dedicated to each game. Each page will integrate multimedia elements to enrich the analysis and reduce written description. In the case of board games the pages will include scans and photographs of relevant game materials, such as cards, boards, boxes and so on. Videos or sequences of still images will be used to depict rules or complex actions. Further, links will be provided to the game’s entry on BoardGameGeek, a crowd-sourced board game database. This will allow readers to find more information about each game. Similar strategies will be used on the pages for the video games discussed. Recorded video will be particularly valuable in showing the reader how each game functions and in illustrating various multimedia aspects.
The sources I will be using for this project include a selection of board games, video games, and books of railroad history. Example board games I will be using include, but are not limited to, Age of Steam (Wallace 2009), 1830: Railways and Robber Barons (Tresham 2011), and Baltimore and Ohio (Robbins 2009). Age of Steam is a process-driven game in which players build track and deliver goods in the Eastern United States. The game’s supply-and-demand model promotes track construction and economic development. 1830: Railways and Robber Barons is an event-driven game of track building but also stock market manipulation; players take on the role of nefarious robber barons seeking to drain publicly-held companies for personal profit. Baltimore and Ohio is perhaps more event-driven than either, as players are more bound by historical detail with respect to track construction. I will further show how the design of these board games as multiplayer competitive experiences in itself problematizes the event-process spectrum.
In terms of video games I will be analyzing artifacts such as Railroad Tycoon II Platinum (PopTop Software 1998) (RRT2) and Sid Meier’s Railroads! (Firaxis Games 2007) (SMR). RRT2 is similar to 1830 in that players act as the president of one or more railroad companies, but it uses the power of digital media not only to be more specific in terms of historical detail, but also to incorporate many multimedia elements that lend an air of “historicalness” to the game experience. For instance, when a player builds track a short, grainy film clip of workers hammering railroad spikes is played. SMR, while coming after RRT2, is simpler and has a stranger relationship to history: players may purchase historically-grounded locomotives, while delivering and manufacturing automobiles in the 1860s. RRT2 seems to be more event-driven and SMR more process-driven, but the multimedia nature of both games complicates the spectrum.
Works of railroad history such as Stover’s History of the Baltimore and Ohio Railroad (1987) and Wolmar’s Blood, Iron and Gold (2010) will naturally be a key component of this project. These histories (and others) frequently provide a great amount of detail about the laying of track: where a company started, how far it got and when is often a topic of great interest to railroad historians. Not surprisingly, then, all of the games mentioned above feature track laying as a “core mechanic” (Salen & Zimmerman 2004); that is, the building of track is central to the play of the game. How track is laid in a game is thus a key mechanism for that game to represent history.
Lastly, the project will make use of hypertext to enable readers to browse from subject to subject. Each game analysis will be divided into common sections that will link to each other. This would allow users to navigate by theme, such as track building, maps, etc., instead of by game.
All of the materials for this project will be culled from Concordia University’s library system and my own personal collections.
Games
Firaxis Games. 2007. Sid Meier’s Railroads! Steam Edition. 2K Games. PC Game.
Moon, Alan. 2004. Ticket to Ride. Days of Wonder. Board Game.
MPS Labs. 1990. Sid Meier’s Civilization. MicroProse. PC Game.
PopTop Software. 1998. Railroad Tycoon II Platinum. Gathering. PC Game.
PopTop Software. 2003. Railroad Tycoon 3. Gathering. PC Game.
Railsimulator.com. 2011. Railworks 3: Train Simulator 2012. Railsimulator.com. PC Game.
Robbins, Eddie. 2009. Baltimore and Ohio. Eagle Games. Board Game.
Tresham, Francis. 2011. 1830: Railways and Robber Barons. 2nd Edition, Mayfair Games. First Edition Avalon Hill, 1986. Board Game.
Wallace, Martin. 2009. Age of Steam. 3rd Edition, Eagle Games. First Edition Warfrog Games and Winsome Games, 2002. Board Game.
Wallace, Martin. 2009. Steam: Rails to Riches. Mayfair Games. Board game.
Wu, Harry. 2009. Chicago Express. Queen Games. Board Game.
Books and Journal Articles
Bogost, Ian. 2006. Unit Operations. Cambridge: The MIT Press.
Fogu, Claudio. 2009. “Digitalizing Historical Consciousness.” History and Theory, Theme Issue 47 (May 2009): 103-121.
Frasca, Gonzalo. 2003. “Simulation versus Narrative: An Introduction to Ludology.” In The Video Game Theory Reader. Edited by Mark J. P. Wolf and Bernard Perron, 221-235. New York: Routledge.
Genette, Gerard. 1997. Paratexts: Thresholds of Interpretation. Cambridge: Cambridge University Press.
Ghys, Tuur. 2012. “Technology Trees: Freedom and Determinism in Historical Strategy Games.” Game Studies vol 12, issue 1. http://gamestudies.org/1201/articles/tuur_ghys
Juul, Jesper. 2005. Half-Real: Video Games Between Real Rules and Fictional Worlds. Cambridge: The MIT Press.
Salen, Katie and Zimmerman, Eric. 2004. Rules of Play: Game Design Fundamentals. Cambridge: The MIT Press.
Schivelbusch, Wolfgang. 1986. The Railway Journey. 2nd English edition. Berkeley and Los Angeles: The University of California Press.
Squire, Kurt. 2004. Replaying History: Learning World History Through Playing Civilization III. Doctoral Thesis, Indiana University.
Stover, John F. 1987. History of the Baltimore and Ohio Railroad. West Lafayette: Purdue University Press.
Uricchio, William. 2005. “Simulation, History and Computer Games.” In Handbook of Computer Game Studies, edited by Joost Raessens and Jeffrey Goldstein, 327-338. Cambridge: MIT Press.
Wolmar, Christian. 2010. Blood, Iron, and Gold. New York: Public Affairs.
Woods, Stewart. 2012. Eurogames: The Design, Culture and Play of Modern European Board Games. Jefferson: MacFarland & Company, Inc. Publishers.
Digital Sources and Reference Projects
Rudin, Ronald. Remembering Acadie. http://rememberingacadie.concordia.ca/
Thomas, William III and Ayers, Edward. 2003. “The Differences Slavery Made: A Close Analysis of Two American Communities.” American Historical Review, December 2003. http://www2.vcdh.virginia.edu/AHR/
This week I want to respond primarily to some of the comments in Cecire’s introduction to the Winter 2011 issue of the Journal of Digital Humanities regarding the role of theory in the digital humanities.
Cecire quotes Bauer in saying that “the database is the theory.” I wanted to better understand the context behind this phrase (is she asserting or noting that it has been asserted?) so I went to the source to find out, and it turns out this is something someone else said at a conference in defense of Bauer. So: it was an assertion.
In fact Bauer’s piece (it turns out it was originally a blog post) is a counter-argument to the argument I was intending to make: that many digital humanities projects seem to lack a theoretical component. To me, theory is what separates journalism from academic work; the ethnographer or sociologist working without theory is just reporting what they find.
I am glad I read Bauer’s post, though, as it has helped me to clarify my objection: many digital humanities projects bury, hide, or obfuscate the theories driving them (I am giving their makers the benefit of the doubt and assuming Bauer is right in that the theory drives the creation). For example, what theory drove the Making Memories project? In reading the report I found very little theorization beyond “we thought these activities would achieve what we wanted them to.” The same could be said of The Real Face of White Australia and the Bracero History Archive. I do not doubt their creators had some theoretical framework in mind, but I have not been able to find it.
The same could be said about argumentation, which we have already identified as a problem in these kinds of works: are these works making a meaningful argument? Of course one can say that the decisions behind what to include and exclude in a database are a form of argument, but I submit it is a weak one.
(I do think that creating a database is a useful and meaningful end in itself, but I think it unwise to equate that creation with a theoretical contribution.)
I find this lack somewhat frustrating, because this is something we deal with in game studies all the time, and I think we as a field have a pretty good solution: make a game, either to test a theory or as design research, and write a paper about it. By having a place to explain the theories and goals behind the game one can make a theoretically-informed argument in the traditional sense and in the game itself. This pairing has proven quite successful and is a common type of conference paper.
Speaking of design research, I keep expecting to see it come up in the articles we have been reading. Essentially the idea is that one can test an idea or theory through building something, that is, the process of creation is itself a form of research and arguably more important than the finished project. I would love to read some post-mortems or articles from people making DH projects about their process. Why did they build it the way they did? What were the theories that informed them? In the case of nearly all the projects we have looked at so far, I think such a post-project analysis would be just as useful, if not more so, than the project itself.
For this week’s post I’m going to take a cue from Foucault and look at the evolution of a single term: security. In his chapter “Governmentality,” Foucault examines the evolution of the idea of the art of government, focusing his discussion on Machiavelli’s The Prince and the responses it generated. For Foucault, “modern” government (post 15th century in the West, generally) is intimately tied to the government’s ability to enforce security. He writes:
“Accordingly, we need to see things not in terms of the replacement of a society of sovereignty by a disciplinary society and the subsequent replacement of a disciplinary society by a society of government; in reality one has a triangle, sovereignty-discipline-government, which has as its primary target the population and as its essential mechanism the apparatuses of security” (104).
The government’s ability to function, by acting on the population, depends on these “apparatuses of security,” which include the modern police force. Thus this is “security” in the classical sense of the term, perhaps best thought of as “protection from harm.”
However, the other readings from this week take up the notion of security in the modern age of networked computing. In this sense “security” often equates to privacy, as if anonymity itself were a protection. Gabriella Coleman’s article “Our Weirdness is Free” shows how the decentralized hacktivist group Anonymous sees the concealment of information about itself as a means of security; and naturally, the exposure of information about others as a weapon. Interestingly, Coleman does not mention any instances of Anonymous acting upon freed information: mere exposure is the goal, with the assumption being this will be bad for the target.
(While not on the topic of security, Coleman’s piece also raised another interesting question when juxtaposed to Foucault: as Foucault addresses “governance” as a concept applicable to the self, the family, religious life, and many other systems, it makes sense to ask what kind of governance Anonymous subjects itself to. I am tempted to speak of postmodern governance here, but half in jest.)
Lynch’s article “’Pls Call, Love, Your Wife’: the online response to WikiLeaks’ 9/11 pager messages,” addresses WikiLeaks’ release of hundreds of thousands of pager messages sent in and around New York City on September 11, 2001. Lynch was specifically interested in how the “public,” represented by people engaged in discussing the messages over the Internet, reacted. There was enormous concern over the amount of personal information released this way; to me, however, this speaks to the enormous disconnect most people have about the severity of personal information being available online, and their unwillingness to do anything about it. A few months ago a small program called Firesheep made headlines when it was released. It allowed any user to sniff traffic on a wireless network, and in particular facilitated stealing log-in credentials. Overnight it became trivially easy to go to a coffee shop and log in to the Facebook and Twitter accounts of everyone there with you. Most websites responded soon thereafter by enabling HTTPS logins, and the EFF released a browser plugin called HTTPS Everywhere, which tries for a secure connection at every website you visit.
That said, I fear most people don’t know about these tools and freely post an enormous amount of identifiable content online. Andrew Tolan’s “D.I.Y. In the Sky” article was an extreme example of how a person can unconsciously leave small amounts of information around the web, and how far that information can be taken. Given this, the debate about the 9/11 messages raises another interesting question: what is to be done with the digital detritus that will soon be simply everywhere? Should such information be part of the historical record? If a person posts material publicly available on the Web, but did not know it would be so, should such information be fair game for scholars? What are the ethics here?
Lastly, I want to close on a prescient observation of Foucault’s:
“…the managing of a population not only concerns the collective mass of phenomena, the level of its aggregate effects, it also implies the management of population in its depths and details” (102).
As this week’s readings show, such management in the “depths and details” of the population is already happening.
What does it mean to write history in the digital medium? How can and should historians go about publishing their work digitally?
The readings this week look at these questions from two different angles: the creation of digital “journal articles,” and the workings of Wikipedia.
Addressing the first are Thomas’ terribly titled (because it so plainly demonstrates the humanist need to have a colon in the title) “Writing a Digital History Journal Article from Scratch: An Account,” and the guidelines for the AHR prize for best digital article. These writings both drive at the same question: how can one use the digital to ask a novel question? From the AHR guidelines:
“The AHR also seeks to promote scholarship that leverages digital tools and modalities to ask new questions about the past, and to enable new interpretations of the past, rather than merely adorning a presentation with multimedia features or materials.”
This is certainly a provocative challenge, but interestingly Thomas’ article hints at a further difficulty: given how ingrained the standard modes of historical inquiry are, could a professional readership realize that a different kind of question was being asked at all? Several times Thomas noted that readers had difficulty tracing an argument in his work, and he concluded by noting that “Until more digital scholarship is created, we will have few conventions or answers about ‘reading’ in the digital medium. Clearly, we will need to evolve the digital environment as we inhabit it.” While the best forms of digital history aspire to ask new questions, the audience must be able to find and understand these questions as well. Of course the only way to do this is to produce more and novel works of digital historical scholarship, and I was surprised to find a lack of discussion of design research this week: that is, the practice of designing and making something as an act of research. This is a common method in game studies, and I wonder to what extent it has been applied in history.
On the other hand were the pieces addressing Wikipedia. I particularly enjoyed Rosenzweig’s “Can History be Open Source? Wikipedia and the Future of the Past,” even though I found myself anticipating many of his points. I was particularly intrigued by his observation that parts of Wikipedia can be downloaded and subjected to computation. In other words, one could search for instances of words or phrases, or look for places where these occur in the same article, and so on. This seems to be a means of fulfilling the AHR’s requirement of asking a novel question. Having access to the enormity of Wikipedia and being able to sort that data certainly has promise.
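The kind of computation Rosenzweig describes can be sketched very simply. Below is a minimal, hypothetical example assuming a downloaded sample of article texts held in memory; the titles and texts are invented stand-ins for a real dump, and `articles_containing` is a helper name of my own devising.

```python
# Hypothetical sketch: searching a downloaded sample of Wikipedia
# articles for phrases, and finding articles where two phrases
# co-occur. The article texts here are invented stand-ins for a dump.
articles = {
    "Baltimore and Ohio Railroad": "The B&O was an early American railroad ...",
    "1830 (board game)": "The board game simulates railroad stock speculation ...",
    "Monopoly (game)": "A classic board game about real estate trading ...",
}

def articles_containing(phrase, corpus):
    """Return the titles of articles whose text contains the phrase."""
    phrase = phrase.lower()
    return {title for title, text in corpus.items() if phrase in text.lower()}

# Articles mentioning both "railroad" and "board game"
both = articles_containing("railroad", articles) & articles_containing("board game", articles)
print(sorted(both))  # → ['1830 (board game)']
```

Scaled up to a full dump, the same set-intersection idea lets one ask where topics co-occur across millions of articles.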
Indeed, Rosenzweig’s article could itself be claimed to be asking such a novel question, in that he is relying heavily on the fact that Wikipedia tracks the changes made to articles, the fact that users log in with unique identifiers, and so on. While he focused largely on the fact that Wikipedia users are obsessed with facts (I was reminded of Umberto Eco’s claim that we make lists because we don’t want to die), I think the more useful conclusion to draw is that Wikipedia often tells us as much about who is editing it as about anything else. Jon Udell’s video of the evolution of the “Heavy Metal Umlaut” Wikipedia article was a nice demonstration of this: the inordinate amount of detail paid to the brief note on Spinal Tap shows that film’s importance to the people editing the article. It is entirely possible that there are other instances of placing an umlaut over an unlikely letter in other cultural artifacts (Western or otherwise), but the editors were unaware of them.
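The revision metadata Rosenzweig leans on invites exactly this kind of question about the editors themselves. A minimal sketch, assuming revision records reduced to (article, editor) pairs; the data and the `edits_by_user` helper are invented for illustration.

```python
# Hypothetical sketch: given revision metadata of the sort Wikipedia
# records (article title, logged-in editor), tally which editors
# dominate an article's history. All data here is invented.
from collections import Counter

revisions = [
    ("Heavy metal umlaut", "UserA"),
    ("Heavy metal umlaut", "UserB"),
    ("Heavy metal umlaut", "UserA"),
    ("Heavy metal umlaut", "UserA"),
    ("Spinal Tap (band)", "UserB"),
]

def edits_by_user(revs, article):
    """Count edits per editor for a given article."""
    return Counter(user for art, user in revs if art == article)

counts = edits_by_user(revisions, "Heavy metal umlaut")
print(counts.most_common(1))  # → [('UserA', 3)]
```

A tally like this makes the point concrete: the edit history is as much a record of its editors’ preoccupations as of the topic itself.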
I also wanted to express my ambivalence about the extent to which Udell’s work could be considered scholarly. It is certainly an invaluable piece of data, but I find myself echoing the critic of Thomas who asked, “where is the argument?” In this instance it seems the digital has created a new source more than a new work of scholarship.
Townsend’s brief article on the adoption of technology by historians was somewhat interesting but not entirely surprising; the breakdown was not unexpected. I did empathize with the subjects who pointed to the learning curve of new technology as a deterrent. I am pretty technologically savvy, and for me the time cost of learning a new tool is rarely worth the benefits of the tool. I quickly gave up on Zotero, but I quickly fell in love with Scrivener, a word processor that in some ways has far fewer features than Microsoft Word, but has powerful metadata tools that make notetaking and brainstorming much easier.
Lastly I wanted to mention O’Malley’s post on history and video games. I agreed with much of what he had to say, but I wanted to note that a lot of the questions he raises are why I am taking this class. He claims that video games “mistake imitating for being,” which is a fair point, but investigating how imitating happens is an important line of research. What does it mean for a game to be historical? My theory is that part of the answer lies in how the game represents history.
I also want to address his criticism of Call of Duty. He writes:
“I’m the first to agree that this isn’t really history. It’s just a bunch of fantasy, and no amount of “accuracy” in recreating, say, uniform details or weapons trajectory would make it history. Nobody makes a video game about the quartermaster division, but armies win and lose on logistics and supply, and politics and diplomacy, and the work people do on the homefront. So I’d declare Call of Duty bad history, incomplete, or history poorly taught.”
From a game design standpoint, the vision of a “good” history game that O’Malley is looking for is probably impossible. Even the most dedicated gamers can only work with a system that is so complex before they will give up in frustration, or be unable to learn enough to be able to play. The myriad interlocking systems O’Malley refers to are managed by countless people; there is no way one player could grasp all of that complexity in full. Games can never fully re-create something, either due to technical limitations (computational power, memory, table size) or human limitations (we are only so smart). Thus even the most advanced simulation must leave something out. A perfect re-creation of a thing is just that thing again.
Of course “leaving something out” is what procedural rhetoric and the simulation gap are all about. As Bogost claimed in last week’s readings, games can make use of procedural rhetoric to make arguments. And Thomas’ critics were looking for the argument in his work of digital scholarship. It seems to me that games are ideal candidates for historical scholarship. But who knows what the tenure committee will say.
How do new media artifacts create meaning? Or alternatively, how do the people interacting with them ascribe meaning to them?
In “Persuasive Games: The Proceduralist Style” Bogost applies his theories of procedural rhetoric to several “art games.” His general theory is that these game designer / artists use game rules and mechanics as a means of expression, and that these supersede other semiotic aspects of the game (art, text, sound and so on). As a fellow game scholar, however, I tend to disagree with him on this point. I currently have a journal article out for review on this topic, but I will summarize the argument here.
Bogost’s piece alludes to a term he coined in his 2006 book Unit Operations, namely, the “simulation gap,” which I have since expanded upon in my own work. The simulation gap posits that users interpret a simulation (loosely defined) by comparing it to their experience with that which is being simulated. This includes how the simulation functions, as well as which elements of the source it includes and which it excludes (I will return to this second aspect later in this post). For Bogost the rules of the simulation are paramount:
“In these games, expression is found primarily in the player’s experience as it results from interaction with the game’s mechanics and dynamics, and less so (in some cases almost not at all) in their visual, aural, and textual aspects.”
However, when we look at his examples, it becomes apparent that procedural rhetoric and the simulation gap only function when we understand the semiotics of the system. Jason Rohrer’s Passage only makes sense to a player (assuming they haven’t read his artist’s statement) because they can identify the humans as humans, the treasure chests as treasure chests, and so on. Thus how those objects interact with each other has a certain meaning. If you strip the semiotic layer out and leave only the bare rules, there is no way the system would remain identifiable. The rules are certainly important to the expression, but the expression only functions because we know what the objects in the game are. Thus expression is not primarily in one or the other.
On the subject of meaning and expression, Manovich writes that “Multimedia works that have “cultural” content appear to particularly favor the database form” (219). What precisely he means by “favor” here is anybody’s guess, but the examples he is drawing on definitely sound similar to the kinds of projects the journal Vectors publishes: heavy on content, light on interaction. However, Bogost’s examples and my discussion of meaning in games show how expression, and thereby “cultural content,” can be found in the algorithmic aspect of digital media (to say nothing of the demoscene).
Overall I found Manovich’s work hard to follow and a bit too hyperbolic for his own good. In his discussion in chapter 5 he essentially argues that everything happening in a computer is data (“database form”), then divides digital artifacts into those falling more towards database and those more towards algorithm, and then comes back around to say it’s all database anyway.
As a structural theorist I think he has some good observations and does a nice job clarifying things that would be helpful for the less computationally literate, but I felt his argumentation was weak. For example, at one point he claims that all video games are narrative, because in some sense a narrative is constructed as a player attempts to reach his or her goal. He later argues that (1) digital media is all essentially a database, and (2) narration and databases are at odds. The inconsistency, brought about by his sweeping claims, is problematic.
As another example, he writes: “The more complex the data structure of a computer program, the simpler the algorithm needs to be, and vice versa” (223). This is simply not true; the former is more of a best practice than a necessity. The vice versa is also problematic: the first thing you learn to do in any programming class is to use a simple algorithm to manipulate a simple data structure, often by performing an arithmetical operation on a variable. This oversight is another reason I believe anyone working with digital media should have a basic understanding of programming.
To return to the simulation gap (as described above), I brought it up because I was thinking of it when reading Morris’s “Photography as a Weapon.” His discussion of all photography as being staged to a certain degree is similar to the simulation gap in that both are about inclusion and exclusion: what someone chooses to leave in the frame and to leave out is in itself a moment of artistic expression, be it intentional or not. Curiously, Morris is also very interested in the “truth” of a photograph, or at least interested in our interest in it, whereas Bogost openly embraces the subjectivity of games. There is certainly no association between truth and games analogous to our (diminishing) expectation of truth in photography.
Inclusion and exclusion was also an obvious theme of his series on the Fenton photographs, which was very enjoyable reading. I also appreciated the reminder not to overly fetishize technology and technical tools: while the person who solved the problem was clearly comparing the photographs on a computer, the solution could have been found with a magnifying glass and some patience. Not to mention that the more process-intensive solutions fell through.
Lastly I read through this digital humanities manifesto and it struck me as being slightly too utopian. I also appreciate the irony of academics in the humanities, who almost always use Macs, calling for openness.
This week’s readings covered two broad topics concerning digital history, and so I will respond to each in turn.
First and most interesting was the piece in the Journal of American History (link), wherein various scholars who work on and with digital history discussed the field. There was considerable confusion surrounding what exactly constitutes “digital history.” On the one hand, digital history can be framed as the history of the digital: a history of technology. There is also digital History, the idea of “doing History on or with the digital,” whatever that might mean. Kirsten Sword summed up this dichotomy nicely:
“Under what circumstances do you find yourselves thinking of digital history as a field and when is it a method accessible to all interested historians? … How do we negotiate the line between digital history as a field requiring specific, advanced technical expertise, and a method about which all historians need some knowledge?”
Certainly “doing history digitally” and “doing the history of the digital” can be very different things: an oral history of Apple Computer sounds like something that somebody somewhere has attempted, for example. But I would like to argue that “digital history” as discussed in the article is neither a field nor a method, but a moment at which historians (and humanists generally) have been forced to confront the ubiquity of computing and consider its potential. In reading the piece I found myself anticipating Frisch’s comment:
“I’m skeptical of the lasting value of ‘digital history’ as a term—it either will end up meaning too much or too little and pretty soon will be so inescapable (in twenty years, will anyone do professional work in history without involving what we’re talking about?) as to provide little purchase on anything specific enough for a course, workshop, or blog.”
In the near future history will simply be done about, within and around digital technology, and the idea of “digital history” will seem anachronistic. One can easily imagine similar debates to those presented happening at the time of the printing press’s invention: what did it mean for History that books could be printed and distributed on a massive scale? Just as today “print history” sounds strange, I believe “digital history” inevitably will as well.
I think that Kirschenbaum’s piece on the Digital Humanities was somewhat bland in that it was mostly an overview, hence far less provocative than the JAH article. I did enjoy the fact that the author ended on a question—a common blog tactic to encourage comments.
Moving on, the rest of the readings clustered around the uses, strengths and weaknesses of services falling broadly into the “Web 2.0” category (blogs, RSS, social media and the like). Both Kaufman and Cohen presented a somewhat utopian view of academic blogging, trotting out old arguments about accessibility, audience, and conversation. Having participated in lab blogs, and run a few of my own, I wanted to add my perspective. In my experience the Web encourages fast skim reading, which is at odds with the academic need for clarity and rigor. In order to be successful, academic blog posts have to be like miniature articles, complete with citations and the like. Simply rattling off thoughts does not work, as your readers (more than likely colleagues) expect more. I have never found the effort of writing a good blog post to be worth it, as that effort is better spent on actual articles.
Twitter has proven to be far more useful for networking and conversation than blogs. Having used it regularly for upwards of four years, I have met many interesting people through Twitter and have had some very productive debates. The short form forces you to be clear, but also lowers expectations of rigor. Of course different people have different habits, and more than once I have found an interesting scholar or game designer on Twitter who never posts anything useful or interesting at all. I would always recommend using Twitter over blogging, as the work-to-reward ratio is far better on the former.