Text analysis using the Old Bailey API & Annotated Books Online

The Old Bailey Online provides digitised proceedings of the Old Bailey from 1674 to 1913. It offers a general search function, but using the open API allows the user to query the results in a more specific way, “undrilling” to modify a query, or breaking the query down into further subcategories. Using the API also allows results to be exported to the online reference management software Zotero and to Voyant for further visualisation.

For my search, I used the keyword “Camberwell” (where I live), with gender of the defendant set to “female”, and punishment category set to “Death”. This returned 8 (highly interesting!) results.
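
Out of curiosity, here is roughly what that query looks like when sent to the API directly. This is a minimal sketch in Python; the endpoint and the term names (trialtext_camberwell, defgen_female, puncat_death) follow the pattern shown in the API demonstrator, but I haven’t verified them all, so treat them as assumptions and check the documentation before relying on them.

```python
# A minimal sketch of querying the Old Bailey API from Python. The term
# names below follow the pattern shown in the API demonstrator, but they
# are assumptions: check them against the live documentation.
import json
import urllib.request

BASE = "http://www.oldbaileyonline.org/obapi/ob"

# My search: keyword "Camberwell", female defendant, punishment "death",
# broken down by offence category (one way of "undrilling" the results).
query = (
    "?term0=trialtext_camberwell"
    "&term1=defgen_female"
    "&term2=puncat_death"
    "&breakdown=offcat&start=0&count=10"
)

with urllib.request.urlopen(BASE + query) as response:
    results = json.load(response)

# The response includes a total count and a list of matching trial IDs
# (field names as I remember them from the demonstrator).
print("Total trials found:", results.get("total"))
print("Trial IDs:", results.get("hits"))
```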

[Screenshot: segment of my search results from Old Bailey Online]

I exported these texts to Voyant, and the resulting word cloud looked like this:

[Word cloud: Old Bailey results visualised in Voyant]

The prominent words, “child”, “mr”, “mrs”, “death”, “house”, “room”, “seen”, “know”, “said”, paint an eerie picture of domestic mishap, which would definitely be a good starting point if you were looking for inspiration for a Victorian murder mystery. Aside from that, the word cloud doesn’t give you the kind of information you’d expect a researcher to be looking for while using this tool, i.e. you don’t get any kind of picture of what kinds of crimes these women committed or the kinds of evidence presented at court. This does seem to be one of those situations Jacob Harris mentions in his blog post at Nieman Lab, wherein the use of the word cloud doesn’t provide much in the way of insight.
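
For anyone wondering what Voyant is doing under the hood, a word cloud is essentially just a word-frequency count with common stop words removed and words sized by frequency. Here’s a rough sketch of that idea (the sample sentence and stop-word list are invented for illustration), which also shows exactly why the context gets lost:

```python
# A rough sketch of what a word cloud boils down to: word frequencies
# with all context stripped away, which is why it can't tell you what
# crimes were committed or what evidence was presented in court.
from collections import Counter
import re

# Invented sample text standing in for the exported trial proceedings.
trial_text = "The child was seen in the house. Mrs Smith said she knew the child."
stop_words = {"the", "was", "in", "she", "a", "of", "and", "to"}

words = re.findall(r"[a-z]+", trial_text.lower())
frequencies = Counter(w for w in words if w not in stop_words)
print(frequencies.most_common(5))  # the raw material of a word cloud
```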

I was interested to read that these court proceedings were digitised through a process of text rekeying. Earlier texts were manually typed twice by two different typists, the transcripts were compared by a computer, and the differences were edited manually. Later texts were keyed once, with the second version created using OCR software, and the texts were once again compared and manually corrected. In my place of work I use OCR on PDFs uploaded to Moodle in order to make them accessible for visually impaired students so that they can use text-to-speech software. This is a time-consuming process, especially if the original text is old and the print quality not very good (we have students studying Olde English and Witchcraft, and the OCR software really doesn’t like their texts). In some ways it was pleasing to know that the technology to make this task easy just *isn’t* out there at the moment, as demonstrated by the laborious processes performed by the people behind the Old Bailey Online. I am glad to know that in my place of work we aren’t just wasting our time with all our manual editing; at present, it seems this is the only way!
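
To make the double-rekeying idea concrete, here is a toy sketch of the comparison step: two independently keyed transcripts are checked against each other automatically, and only the disagreements are flagged for a human editor (the sample lines, including the keying error, are invented):

```python
# A toy illustration of double rekeying: two typists' transcripts are
# compared automatically, and only disagreements go to a human editor.
import difflib

typist_a = ["The prisoner was indicted for wilful murder.",
            "She pleaded not guilty."]
typist_b = ["The prisoner was indicted for wilful murder.",
            "She pleaded not gulity."]  # a keying error to be caught

for line_a, line_b in zip(typist_a, typist_b):
    if line_a != line_b:
        # Show the editor exactly where the two versions disagree.
        print("\n".join(difflib.ndiff([line_a], [line_b])))
```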

Later in the DITA lab, I looked at Universiteit Utrecht’s Digital Humanities Lab, specifically at their text mining research projects, and chose to explore the project Annotated Books Online. This project digitises early modern books with handwritten annotations, marking the text up in order to separate out the annotations themselves for closer inspection. Annotations can be highlighted in different colours and have transcriptions added to them. Well, that was the theory anyway. The first time I used ABO I could highlight the annotations and get them to change colour; however, I haven’t been able to since, for some reason.

[Screenshot: Annotated Books Online]

This research project really appealed to me; I have always found marginalia interesting, and I like that the present-day reader can, in a sense, “interact” with the annotator of the past by “doing stuff” with their scribblings in the margin. Considering these texts are quite old and no doubt delicate, it’s a treat to be able to manipulate them in this way (well, it would be if I could get the annotation features to work for me again!).

Learning to love the digital in order to understand the world

I was fumbling around for a way into a blog post this week, and was inspired by my classmate Judith’s entry, “If it’s boring, it’s important”, which made me laugh as it’s so painfully true.*

That said, I am loving the way that DITA is being taught; it puts a whole new spin on things that have otherwise never interested me, and I am certainly not finding it boring. In fact it has me seriously questioning the ways in which I have ever been taught about digital technology in the past! Ernesto’s slideshow for our last lecture on “Archiving, Understanding and Visualising Twitter data” is a good example of this “angle”, ending with a cartoon by Randall Munroe, creator of xkcd.com, in which two stick figures in a dark and empty landscape full of possibilities say “Let’s find out”. Ultimately, all of this is about being curious and having questions, and using information, such as that from Twitter, to find out about the world we live in. When information technology is looked at in this way, it is much less daunting.

Having my mind opened up in this way is leading to some important realisations. For example, it now seems clear to me that leaving Twitter data out of the equation when analysing modern communication networks, the topics which groups of people care about, and the way current events unfold which will one day be of historical importance, is bordering on irresponsible. As Ernesto Priego says in his blog post ‘Twitter as public evidence and the ethics of Twitter research’, “these days what’s unethical is not to use Twitter as a research tool”. Indeed, the Library of Congress signed an agreement with Twitter in 2010, which gave it access to an archive of public tweets from 2006 to 2010, and Twitter continues to provide the Library with access to public tweets to this day. On the Library of Congress website, it is explained that the reason for this is that the Library’s core mission is to “collect the story of America”, demonstrating the importance of social media-as-document, and consequently its role in the way in which we understand our world. As Lyn Robinson states in ‘The future of documents’, networked technology is only going to become more pervasive, and as such social media will not be going away any time soon. Furthermore, the role Twitter and other social media play in political protest and world-changing events is the subject of much recent debate in the media, and even when the position taken is that it’s actually not very important, as in Laurie Penny’s article for the New Statesman, ‘Revolts don’t have to be Tweeted’, Twitter et al. are still central to the discussion. Either way, social media can’t be ignored.

That said, I had never used Twitter until I needed to for #citylis, and therefore I remain sceptical about privileging it (though I do understand that it is the public nature of Twitter which lends itself to study in the context of a classroom). Adding to my scepticism is my experience working in a public youth library in New Zealand. It was the early 2000s, and Bebo and MySpace were new on the scene. Interestingly, many of my peers who lived in the central city and were interested in punk music took to using MySpace, while almost all of the kids who used the library (which was based in an economically underprivileged suburb), and who generally listened to hip-hop and R&B, used Bebo. These different groups were having conversations and building communities that didn’t seem to touch each other, though they were all living in the same city. And to this day I don’t quite understand why one social media platform would be chosen over the other on the basis of socio-cultural/economic factors, given that both were free.

I am therefore finding myself quite drawn to literature which points out the biases that occur in analyses of Twitter data, particularly when these analyses are used by the media to explain social and historical events.

In ‘Assessing the bias in samples of online networks’, González-Bailón et al. describe their use of the Twitter search API (application programming interface) and the Twitter streaming API, with various filters, to compare what kinds of data each brings up about the Spanish ‘indignados’ protests in 2012. Perhaps unsurprisingly, they found that smaller samples don’t reveal the diverse array of peripheral activity and conversations that was going on, and that their data from the filtered search API were biased towards a few central tweets and users. Unfortunately, unless you are the Library of Congress or some other big organisation which can pay for archives of “all” the tweets, you will be limited to smaller samples, and will therefore get a skewed picture of communication networks.
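
For what it’s worth, the two collection routes the paper compares look something like this in code. This is a sketch using the tweepy library roughly as it stood around 2014 (tweepy 3.x; the class and method names have changed in later versions), with placeholder credentials and the indignados hashtag #15M as an example filter:

```python
# A sketch of the two Twitter collection routes compared in the paper,
# using tweepy roughly as it looked in 2014 (tweepy 3.x). Credentials
# are placeholders; #15M is just an example filter.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Route 1: the search API, retrospective and sampled, which the paper
# found biased towards central, highly visible users and tweets.
for tweet in tweepy.Cursor(api.search, q="#15M").items(100):
    print(tweet.user.screen_name, tweet.text[:60])

# Route 2: the streaming API, collecting matching tweets in real time,
# which captured more of the peripheral conversation.
class PrintListener(tweepy.StreamListener):
    def on_status(self, status):
        print(status.user.screen_name, status.text[:60])

stream = tweepy.Stream(auth=api.auth, listener=PrintListener())
stream.filter(track=["#15M"])
```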

But this brings me back full circle to things that are, if not boring, at least seemingly impenetrable at first glance being the most important. As researchers, librarians and information specialists, we need to be able to understand things such as how APIs work and their inherent limitations in order to best assess the data we collect. I am also interested in thinking about why certain people use Twitter and others don’t, and why some groups were using Bebo and others MySpace in the mid-2000s. How does this affect data visualisation? You only have to be a New Zealander looking at the picture of the world-as-connected-by-Facebook on Facebook’s login page, and see your country left out of it, to realise the limitations of Big Data, and to remember that there is always another story going on beyond the one gleaned from the algorithms.

*Entertaining aside: on Judith’s blog I shared a link to an article by Charlie Brooker for the Guardian, “What is Drip and how, precisely, will it help the government ruin your life?”, about the Data Retention and Investigatory Powers bill, which Brooker describes as “the most tedious outrage ever”. This is how They will get us in the end: by boring us to death with the things that matter the most.

Understanding APIs with the help of WhatsApp

Getting to grips with the concept of an “API”, particularly in contrast to a web service, took me quite some time. I couldn’t initially figure out what the difference was, until I WhatsApped a friend of mine in New Zealand who is a programming whiz genius-type person. She informed me, succinctly, in a text message, that a web service is a type of API, but that APIs themselves are not web specific. She went on to give the example of an API exposed by, say, the kernel of the open-source operating system Linux, which allows developers to write desktop applications for it. As she said, APIs can be accessed on the same machine, rather than over a network, unlike web services, which are almost always accessed over HTTP (K. Graham, personal communication, October 26th, 2014). Hooray for WhatsApp allowing my friends back home to explain the intricacies of DITA to me 🙂
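
Her distinction clicked for me when I put it into code. In the sketch below, the first call uses an API exposed by the operating system on my own machine, with no network involved, while the second calls a web service over HTTP (httpbin.org is a public testing service; any URL would do):

```python
# A minimal sketch contrasting a local API call with a web-service call.
import os
import urllib.request

# Local API: a Python wrapper around the operating system's interface.
# No network involved; the "service" is the kernel on this same machine.
print("My process ID:", os.getpid())

# Web service: an API accessed over HTTP, across a network.
# (httpbin.org is a public test service; any JSON-returning URL would do.)
with urllib.request.urlopen("https://httpbin.org/get") as response:
    print("Response from a web service:", response.read()[:80])
```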

I had a quick look to see what kinds of APIs exist for WhatsApp. This link explains a couple of ways WhatsApp can be integrated into various apps: http://www.whatsapp.com/faq/en/iphone/23559013. On the iPhone, WhatsApp supports Apple’s Document Interaction API, which is what allows multimedia created by other apps to be shared on WhatsApp.
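
The same FAQ documents a custom URL scheme, whatsapp://send, which is itself a tiny API: any app that opens such a URL hands its text over to WhatsApp. Here is a sketch in Python just to show the shape of the URL (on an actual iPhone it would be another app, not a script, opening it):

```python
# A sketch of WhatsApp's custom URL scheme, as documented in the FAQ
# linked above. Building the URL in Python just to show its shape; on a
# phone, another app would open this URL to hand the text to WhatsApp.
from urllib.parse import quote

message = "Thanks for explaining APIs to me!"
url = "whatsapp://send?text=" + quote(message)
print(url)  # whatsapp://send?text=Thanks%20for%20explaining%20APIs%20to%20me%21
```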

Part of my difficulty in getting my head around what APIs and web services were is that they are so ubiquitous: we use them, or the mashups of data or services they provide, every day, when we read tweets embedded in the Guardian or copy and share media or text between applications.

For a bit of embedding practice, as well as a tie-in with my WhatsApp revelation, here is a talk by Toby Shapshak on the role of the mobile phone in Africa.