This week’s readings covered two broad topics concerning digital history, and so I will respond to each in turn.
First and most interesting was the piece in the Journal of American History (link), in which various scholars who work on and with digital history discussed the state of the field. There was considerable confusion surrounding what exactly constitutes “digital history.” On the one hand, digital history can be framed as the history of the digital: a history of technology. On the other, there is digital History, the idea of “doing History on or with the digital,” whatever that might mean. Kirsten Sword summed up this dichotomy nicely:
“Under what circumstances do you find yourselves thinking of digital history as a field and when is it a method accessible to all interested historians? … How do we negotiate the line between digital history as a field requiring specific, advanced technical expertise, and a method about which all historians need some knowledge?”
Certainly “doing history digitally” and “doing the history of the digital” can be very different things: an oral history of Apple Computer, for example, is surely something that somebody somewhere has attempted. But I would like to argue that “digital history” as discussed in the article is neither a field nor a method, but a moment at which historians (and humanists generally) have been forced to confront the ubiquity of computing and consider its potential. In reading the piece I found myself anticipating Frisch’s comment:
“I’m skeptical of the lasting value of ‘digital history’ as a term—it either will end up meaning too much or too little and pretty soon will be so inescapable (in twenty years, will anyone do professional work in history without involving what we’re talking about?) as to provide little purchase on anything specific enough for a course, workshop, or blog.”
In the near future, history will simply be done about, within, and around digital technology, and the idea of “digital history” will seem anachronistic. One can easily imagine debates similar to those presented here taking place at the time of the printing press’s invention: what did it mean for History that books could be printed and distributed on a massive scale? Just as “print history” sounds strange today, I believe “digital history” inevitably will as well.
Kirschenbaum’s piece on the Digital Humanities was somewhat bland by comparison: mostly an overview, and hence far less provocative than the JAH article. I did enjoy the fact that the author ended on a question, a common blog tactic for encouraging comments.
Moving on, the rest of the readings clustered around the uses, strengths, and weaknesses of services falling broadly into the “Web 2.0” category (blogs, RSS, social media, and the like). Both Kaufman and Cohen presented a somewhat utopian view of academic blogging, trotting out familiar arguments about accessibility, audience, and conversation. Having participated in lab blogs and run a few of my own, I wanted to add my perspective. In my experience the Web encourages fast, skim reading, which is at odds with the academic need for clarity and rigor. To be successful, academic blog posts have to be miniature articles, complete with citations and the like. Simply rattling off thoughts does not work, as your readers (more than likely colleagues) expect more. I have never found the effort of writing a good blog post to be worth it; that effort is better spent on actual articles.
Twitter has proven far more useful for networking and conversation than blogging. Having used it regularly for upwards of four years, I have met many interesting people and had some very productive debates there. The short form forces you to be clear, but it also lowers expectations of rigor. Of course, different people have different habits, and more than once I have found an otherwise interesting scholar or game designer whose feed offers nothing useful or interesting at all. I would always recommend Twitter over blogging, as the work-to-reward ratio is far better on the former.