Semantic Web and the potential for opening up accessibility

I work with visually impaired library users every day in my job as Library Access Support co-ordinator. The benefits of the development of the “semantic web” for these library users are immediately obvious (one thing I have learned in this job is that technology with assistive aspects benefits all of us, whether we consider ourselves disabled or not, hence the success of the iPhone and, naturally, the drive towards the semantic web).

Throughout the 10 sessions of DITA, in the back of my mind I have been applying the ideas to library users of my past (youth) and my present (disabled and dyslexic university students), which helps ground the theory in practice for me. Right at the beginning we learnt about Information Architecture, thinking about the importance of structuring web resources well, and now at the end we are investigating the semantic web, which involves the Text Encoding Initiative (which seeks to make documents machine-understandable) and the Resource Description Framework (which provides metadata for digital resources). Who better to judge the efficacy of these concepts and approaches than those who, in navigating digital resources, rely solely on software that is entirely dependent on the hierarchies of a webpage being meaningfully structured, or a document being correctly tagged? Think about the ‘skill’ we are taught to develop of quickly scanning a document to decide on its usefulness for our research. For someone who cannot physically “see” the text, imagine the benefits of text analysis tools and topic modelling to quickly pull out the salient concepts of a document.

Reading further into the literature on the semantic web, however, I kept getting snagged on the discussions around the creation of ontologies and taxonomies (which will not surprise any readers of this blog).

Take, for example, the Comic Book Markup Language (CBML). As an ex-comic book shop owner I was fascinated to see this exists. Our store, Cherry Bomb Comics (RIP), specifically sold only those graphic novels made by women, LGBT creators and people of colour, as well as local New Zealand creators (our blunt-instrument way of rectifying the imbalance in the comics world). I remember trawling through distributors’ catalogues hoping to catch sight of a few keywords we had employed to identify the stock we wanted to hold. If all things comics were marked up with CBML (they’d have to be digitised first, though I imagine most comics are born digital these days) and publishers made them available for text analysis, what a far more accurate way of identifying what we needed. But… as well as removing the serendipity of browsing through catalogues, who decides how to interpret (and then subsequently mark up) a comic, or any image for that matter? The artist/author? The publisher? The person they have hired to do the marking up?
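Setting those interpretive questions aside for a moment, the practical appeal is easy to sketch. Here is a rough Python illustration of how a marked-up catalogue could be filtered by creator metadata rather than by hopeful keyword-trawling; the element and attribute names are entirely invented for the example, not actual CBML.

```python
import xml.etree.ElementTree as ET

# Illustrative markup only -- the tags and attributes below are invented,
# not real CBML, just a sketch of the idea.
catalogue = """
<catalogue>
  <comic title="Example Title One">
    <creator role="writer" gender="female" country="NZ">A. Writer</creator>
    <creator role="artist" gender="male" country="US">B. Artist</creator>
  </comic>
  <comic title="Example Title Two">
    <creator role="writer" gender="male" country="US">C. Writer</creator>
  </comic>
</catalogue>
"""

root = ET.fromstring(catalogue)

# Pull out every comic with at least one female or New Zealand creator.
for comic in root.findall("comic"):
    creators = comic.findall("creator")
    if any(c.get("gender") == "female" or c.get("country") == "NZ" for c in creators):
        print(comic.get("title"))
```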

At my place of work, we came across similar philosophical difficulties when we OCRed texts from art books for VI students. Initially we tried to describe the artwork depicted (which the blind student would obviously not be able to see), but it quickly became apparent that this was inappropriate: we were describing things inconsistently and subjectively, effectively “telling” the VI student what an image represents.

The concept of the “semantic web” being about creating knowledge structures is as exciting as it is open to abuse of power and privilege.  What does the web get to “know”?  Whose knowledge?

I wanted to investigate the possibilities for Web 3.0 technologies to aid accessibility, and located this study by Kouroupetroglou et al. on the “Web for All” site, which looks at using semantic web frameworks to create applications to assist visually impaired users. Conducted in 2006, it’s rather old now in terms of digital technology, but I was interested in their focus on the extensibility that comes with using OWL (the Web Ontology Language used to define the ontologies that sit behind RDF data), and in their view that this openness to addition and change leads to increased opportunity for co-operation amongst different groups with expertise in different areas of digital accessibility. The final paragraph of the study sums up the possibilities opened up by using semantic web technologies in a way that I think implicitly addresses the need to be aware of “whose knowledge?”: “Our community is not tightly connected to the web authoring society, which is quite large and difficult to educate in accessibility issues. However, it can work independently upon the products of the web authoring society.”
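To get a feel for what that extensibility means in practice, here is a tiny sketch using Python’s rdflib. The vocabulary is entirely invented for illustration; the point is simply that one group can bolt a new class onto another group’s ontology without breaking anything that already uses it.

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

# An invented namespace for an imaginary accessibility vocabulary.
ACC = Namespace("http://example.org/accessibility#")

g = Graph()
g.bind("acc", ACC)

# A small "core" ontology: a class for assistive adaptations.
g.add((ACC.Adaptation, RDF.type, OWL.Class))
g.add((ACC.hasTextAlternative, RDF.type, OWL.DatatypeProperty))
g.add((ACC.hasTextAlternative, RDFS.domain, ACC.Adaptation))

# A different group with screen-reader expertise can later extend it
# with their own subclass, without touching the core definitions.
g.add((ACC.ScreenReaderAdaptation, RDF.type, OWL.Class))
g.add((ACC.ScreenReaderAdaptation, RDFS.subClassOf, ACC.Adaptation))

print(g.serialize(format="turtle"))
```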

Reference: Kouroupetroglou, C., Salampasis, M. & Manitsaris, A. (2006). A Semantic-Web based Framework for Developing Applications to Improve Accessibility in the WWW. Retrieved from http://www.w4a.info/2006/prog/15-kouroupetroglou.pdf


Text analysis using the Old Bailey API & Annotated Books Online

The Old Bailey Online provides digitised proceedings of the Old Bailey from 1674 to 1913. It offers a general search function; however, using the open API allows the user to query the results in a more specific way, “undrilling” to modify a query or breaking the query down into further subcategories. Using the API also allows results to be exported to the online reference management software Zotero, and to Voyant for further visualisation.

For my search, I used the keyword “Camberwell” (where I live), with gender of the defendant set to “female”, and punishment category set to “Death”. This returned 8 (highly interesting!) results.
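Out of curiosity, here is roughly what that query looks like when written as code, using Python’s requests library. The endpoint and parameter names below are placeholders written from memory rather than the API’s documented interface, so treat it as the general shape of an API call, not a recipe.

```python
import requests

# Placeholder endpoint and parameter names -- illustrative only,
# not the documented Old Bailey API interface.
BASE_URL = "https://www.oldbaileyonline.org/obapi/ob"

params = {
    "text": "Camberwell",       # keyword search
    "term0": "defgen_female",   # defendant gender: female
    "term1": "puncat_death",    # punishment category: death
    "count": 10,                # maximum number of results to return
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()
print(response.json())
```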

[Screenshot: segment of the Old Bailey Online search results]

I exported these texts to Voyant, and the resulting word cloud looked like this:

[Word cloud: Old Bailey results in Voyant]

The prominent words, “child”, “mr”, “mrs”, “death”, “house”, “room”, “seen”, “know” and “said”, paint an eerie picture of domestic mishap, which would definitely be a good starting point if you were looking for inspiration for a Victorian murder mystery. Aside from that, the word cloud doesn’t give you the kind of information you’d expect a researcher to be looking for while using this tool, i.e. you don’t get any kind of picture of what kinds of crimes these women committed or the kinds of evidence presented at court. This does seem to be one of those situations Jacob Harris mentions in his blog post at Nieman Lab wherein the use of the word cloud doesn’t provide much in the way of insight.

I was interested to read that these court proceedings were digitised through a process of text rekeying. Earlier texts were manually typed twice by two different typists, and then the transcripts were compared by a computer, with editing performed manually. Later texts were keyed once, with the second version being created using OCR software, and the texts once again compared and manually corrected. In my place of work I use OCR on PDFs uploaded to Moodle in order to make them accessible for visually impaired students so that they can use text-to-speech software. This is a time-consuming process, especially if the original text is old and the print quality not very good (we have students studying Olde English and Witchcraft, and the OCR software really doesn’t like their texts). In some ways it was pleasing to know that there just *isn’t* the technology out there at the moment to make this task easy, as demonstrated by the laborious processes performed by the people behind the Old Bailey Online. I am glad to know that in my place of work we aren’t just wasting our time with all our manual editing; at present it seems this is the only way!
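The double-keying comparison is easy to imagine as a simple script: key the text twice, diff the two versions, and send only the disagreements to a human editor. A minimal sketch of the idea (my own illustration, not the Old Bailey project’s actual workflow):

```python
import difflib

# Two independently keyed (or one keyed, one OCRed) versions of the same passage.
version_a = "The prisoner was indicted for the wilful murder of her child.".split()
version_b = "The prisoner was indicted for the wilfull murder of her child.".split()

# Flag every point where the two transcripts disagree, for manual correction.
matcher = difflib.SequenceMatcher(None, version_a, version_b)
for tag, a_start, a_end, b_start, b_end in matcher.get_opcodes():
    if tag != "equal":
        print(f"{tag}: {version_a[a_start:a_end]} <-> {version_b[b_start:b_end]}")
```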

Later in the DITA lab, I looked at Universiteit Utrecht’s Digital Humanities Lab, specifically at their text mining research projects, and chose to explore the project Annotated Books Online. This project digitises early modern books with handwritten annotations, marking the text up in order to separate out the annotations themselves for closer inspection. Annotations can be highlighted with different colours, and have transcriptions added to them. Well, that was the theory anyway. The first time I used ABO I could highlight the annotations and get them to change colours; however, I haven’t been able to since, for some reason.

[Screenshot: Annotated Books Online]

This research project really appealed to me. I have always found marginalia interesting, and I like that the present-day reader can, in a sense, “interact” with the annotator of the past by “doing stuff” with their scribblings in the margin. Considering these texts are quite old and no doubt delicate, it’s a treat to be able to manipulate them in this way (well, it would be if I could get the annotation features to work for me again!).

Word clouds: “mullets of the internet”? What would Tupac say?

The description of word clouds employed by Jeffrey Zeldman as the “mullets of the internet” made me laugh. I’ve never found them particularly attractive to look at. That said, using tools like Wordle, Many Eyes and Voyant was fun and, like the Altmetric doughnuts, made the data in the otherwise eye-strainingly dull Excel spreadsheets much easier to get my head around, though I’m not sure how useful they are beyond getting a very general picture of a situation.

Still, we used data collected from our altmetrics work in the last DITA lab, and a few things were revealed to me. Firstly, using Altmetric I performed a keyword search for “Aotearoa”, as I mentioned in my previous blogpost. When I looked at the results produced by Altmetric, it seemed that some of the journal articles/blog posts/Tweets etc. it gathered did not contain the word Aotearoa, and it felt like the results were a bit random. However, using Voyant on the titles from the Altmetric data exported to Excel produced the following word cloud:

[Word cloud: Aotearoa Altmetric titles in Voyant]

The word “Aotearoa” (as well as “Zealand”) shows very prominently, which led me to realise that I probably dismissed my Altmetric results too quickly; on further inspection they were more relevant than I thought – and the word cloud more than just a colourful mullet! (And yes, I did forget to apply the “stop words” list, which is why “and” and “of” appear so frequently – oops.)
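The stop-word slip is easy enough to avoid if you build the frequency list yourself; a quick sketch of the idea, with a tiny hand-rolled stop-word list and made-up example titles standing in for the exported data:

```python
import re
from collections import Counter

# Made-up example titles standing in for the exported Altmetric data.
titles = [
    "Health outcomes in Aotearoa New Zealand",
    "The ecology of Aotearoa and its coastal waters",
]

# A tiny hand-rolled stop-word list -- real tools ship much longer ones.
stop_words = {"the", "and", "of", "in", "its", "a"}

words = []
for title in titles:
    words.extend(w for w in re.findall(r"[a-z']+", title.lower()) if w not in stop_words)

# Frequencies with the function words stripped out, so "aotearoa" isn't drowned out.
print(Counter(words).most_common(10))
```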

I also gathered Altmetric data using the keyword “Bicycle” and exported these to Voyant. This screenshot shows the kinds of information Voyant pulled out for me:

[Screenshot: Bicycle results in Voyant]

One of the most useful features is being able to select a word from the corpus, in this case “helmets”, and on the bottom right of the screen the instances of this word being used are shown, surrounded by the context of the sentence (which can be expanded). This is useful if the word the researcher is looking for is more ambiguous than “helmets” or “Aotearoa”, and could perhaps be mentioned in a context irrelevant to the thing being studied. This more granular way of looking at the data ensures that the researcher is getting an accurate picture of how the words are being used in the text, with minimal effort.
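That keyword-in-context view is straightforward to reproduce as a quick sanity check on your own data. A minimal sketch of the idea (my own, not how Voyant does it internally), using a made-up sentence:

```python
def keyword_in_context(text, keyword, window=4):
    """Print each occurrence of a keyword with a few words of context either side."""
    tokens = text.split()
    for i, token in enumerate(tokens):
        if token.lower().strip(".,;:!?\"'") == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            print(f"... {left} [{token}] {right} ...")

# Made-up sentence, just to demonstrate the output.
sample = ("Cyclists who wore helmets reported fewer head injuries, "
          "although helmets were not compulsory in every city studied.")
keyword_in_context(sample, "helmets")
```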

I still can’t say I am convinced by the usefulness of the word cloud, or even 100% sold on text analysis when looked at in this quantitative way. I did my undergraduate degree in English literature, so I guess Franco Moretti’s concept of distant reading, which employs graphical and quantitative visualisations of a text, is a new one to me (though it would have been REALLY helpful when writing those essays on Victorian literature!). But I was interested in Julie Meloni’s blogpost at the Chronicle of Higher Education regarding the use of word clouds for engaging students. I used to work in a youth library, and many of the teenagers I worked with were very interested in poetry and expressive language. “The Rose that Grew from Concrete” by Tupac Shakur was (perhaps unsurprisingly) one of the most popular books in the library. In a bid to get the kids to engage with how poems are written, I photocopied some of Tupac’s poetry and whited out some of the more visceral words. The kids then had to imagine/guess/decide what words should be used where the spaces were. I just used Voyant on poems from “The Rose that Grew from Concrete”, and I think that this would’ve been a hit amongst all those emo teenagers at the library:

[Word cloud: poems from “The Rose that Grew from Concrete” in Voyant]
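If I were running that exercise today I could do the whiting-out digitally as well; a quick sketch of the idea (using a stand-in line rather than Tupac’s actual text):

```python
import re

def make_cloze(line, words_to_hide):
    """Replace the chosen words with blanks so readers can guess them back."""
    for word in words_to_hide:
        line = re.sub(rf"\b{re.escape(word)}\b", "_" * len(word), line, flags=re.IGNORECASE)
    return line

# Stand-in line, not a real quotation.
print(make_cloze("the rose that grew from a crack in the concrete", ["rose", "concrete"]))
```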