Thoughts about open data and the future of librarianship

These are the most common words I have used on this blog since I began writing it back at the beginning of October. Looking at this representation exported from Voyant Tools, I feel I must have been on the right track. It was actually even more interesting, from a writerly point of view, to leave a few of the stop words in, as what resulted gave me an indication of boring words that I tend to overuse. “Particularly” seems to be one of them… I might have to give this some thought before I hand in my final essay!
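Voyant does all of this in the browser, but the same sanity check can be run locally in a few lines. This is just a minimal sketch, assuming a plain-text export of the blog posts and a tiny, illustrative stop-word list (the file name and the list are made up for the example; Voyant applies a much longer list by default):

```python
import re
from collections import Counter

# Hypothetical input: a plain-text export of the blog posts so far.
with open("blog_posts.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

# A tiny illustrative stop-word list -- nowhere near Voyant's full list.
stop_words = {"the", "and", "of", "to", "a", "in", "that", "is", "it", "i", "have"}

print(Counter(w for w in words if w not in stop_words).most_common(10))  # the "right track" words
print(Counter(words).most_common(20))  # stop words left in: where fillers like "particularly" show up
```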

In the final lab of DITA we attempted to obtain Twitter metrics reports. However, given that I have only tweeted a handful of times since the course began, my results were singularly uninteresting, and I couldn’t get the program to work properly, so I won’t publish them here. This is not to say I haven’t been using Twitter during this time. As well as following up my classmates’ links and suggestions, I have used it to track the protests following the Ferguson verdict, read feedback and comments from students using the library I work in, find out the details of various incidents I have passed while cycling to work, and discover what some of my favourite bands are doing. All of this data I have generated and accessed covers vast swathes of my life, and it has made me realise how useful open access to data, via APIs and beyond, can be for people developing apps to help us get on with our lives. It also scares me a bit, when you look at the ways that companies such as Uber are using data to invade people’s privacy.

The move towards open data generated from research has been prominent in the university in which I work – well, talk of open data has been prominent; whether the university eventually sets up a data repository alongside the institutional repository we currently have remains to be seen. Increasingly it is being recognised that researchers making their raw data available will, as the Open Data Initiative says, contribute “economic, environmental, and social value” to society. If research is publicly funded, it stands to reason that the public should have access to the results. And as I mentioned before, the value of being able to use this kind of data and mash up different applications really can’t be overstated, considering the kinds of things people are creating, such as this woman’s mission to make it easy for people to locate public toilets in Denmark. Someone needs to do that for London!
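In principle, the London version is a small scripting job once someone publishes the underlying dataset. A rough sketch of the idea, assuming a hypothetical open-data CSV of facilities with name, lat and lon columns (the file and column names are invented purely for illustration):

```python
import csv
import math

def nearest_toilet(lat, lon, path="public_toilets.csv"):
    """Return the row for the closest facility in a hypothetical open-data CSV."""
    def km_away(row):
        # Equirectangular approximation: plenty accurate at city scale.
        dlat = math.radians(float(row["lat"]) - lat)
        dlon = math.radians(float(row["lon"]) - lon) * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon) * 6371
    with open(path, encoding="utf-8", newline="") as f:
        return min(csv.DictReader(f), key=km_away)

# e.g. nearest_toilet(51.528, -0.102)  # coordinates somewhere near the university
```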

What has been interesting, and slightly uncomfortable, for me throughout the last ten weeks of DITA is that, while on the one hand I can definitely see the need for librarians and information specialists to get a handle on these kinds of technologies, on the other hand it seems to run parallel to what companies and corporations (such as Uber) are doing. The difference, I guess, is that we’re not (necessarily) in it to make money off people, but much of it does feel a bit like we’re learning business analysts’ tools. In fact, a friend of mine recently got a job working with “big data”; her company does market research and the like for various large companies. We’ve been able to share a lot of knowledge in the last few weeks, and while I realise it is reactionary (and probably a bit technophobic) of me to feel uncomfortable, there is a bit of “I am training to be a librarian, after all, not to help car companies sell cars!” about it.

But I think this will be the (future) role of librarians: to help the public gain and retain control of their own information and understand what is being done with their data, to help them navigate copyright limitations and, in an academic context, to promote useful data analysis tools to students. To that end I am pleased to have been given these leads to follow up, and I look forward to integrating them into my work within the library.

Using Altmetrics to measure societal impact

The value of using alternative metrics to collect evidence of impact for a scholarly work appeals to me, because it acknowledges that the “general public” are thinking and reading too, not just academics. I am involved with lots of communities that I guess you’d call grassroots: activist and music communities particularly, which are not connected to universities but are often political in nature (feminist, postcolonial and queer theory contributing significantly). The open and cheap dissemination of information and ideas is important to these networks. Zines have long played a big part in this, often introducing the theories of seminal thinkers (see, for example, Judy!, a tongue-in-cheek zine about Judith Butler, recently digitised by QZAP – the Queer Zine Archive Project), but the internet has by and large taken over (though zines live on!), and these same communities continue to share theory on Tumblr, Twitter, Facebook and so on. Gathering evidence that scholarly works are being discussed outside the ivory tower is, I think, not only gratifying for the scholar, but also provides an important channel of feedback, as academics are able to see the context in which their work is being used.

Working in an academic library, I hear a lot about the Research Excellence Framework (REF), which is basically how funding decisions are made for research in universities. As the REF home page puts it: “The assessment provides accountability for public investment in research and produces evidence of the benefits of this investment” – which I guess is a fancier and more money-focused way of saying what I said above. Clearly, altmetrics will play a significant part in demonstrating the worth of areas of research, particularly with the shift to Open Access that is also being hustled along by the REF.

As we discussed in our DITA lecture, altmetrics cannot be relied upon for the whole picture. Tools such as Altmetric rely on documents having DOIs, and, owing to the ever-shifting nature of social media, results are not stable; they will only ever provide a snapshot of a moment in time. Five minutes later, things could be different. Not only that, but the way we share things on social media can often be flippant and superficial: just because I share a link doesn’t mean I’ve really read it. However, as Ernesto Priego points out on the Altmetric blog, using altmetrics often (but not always) means you can pinpoint data such as the geolocation of the person sharing the link, which can give added weight to the significance of the share.
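The DOI dependence (and the snapshot problem) is easy to see if you query Altmetric directly rather than through Explorer. A minimal sketch, assuming the free details endpoint and field names as I understand them to be documented; both may change, and an article without a DOI simply returns nothing:

```python
import json
import urllib.error
import urllib.request

def altmetric_snapshot(doi):
    """Fetch current attention counts for one DOI from Altmetric's public details endpoint."""
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        return None  # unknown DOI (or no DOI at all): no doughnut
    # Whatever comes back is only a snapshot; five minutes later the numbers may differ.
    return {key: data.get(key) for key in ("score", "cited_by_tweeters_count", "cited_by_posts_count")}

# print(altmetric_snapshot("10.1000/example-doi"))  # replace with a real DOI
```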

Using Altmetric Explorer last week was an interesting experience. I was a bit frustrated that the keyword search didn’t seem particularly accurate: for example, I wanted to search for mentions of “Aotearoa”, the Maori word for New Zealand, as I thought it would cut out the chance of picking up articles about “Zealand” in Denmark. However, despite the uniqueness of the name, some of the articles returned did not contain the word, or indeed have anything to do with NZ at all. I couldn’t get to the bottom of this. I also noticed that mostly science-based journals were being discovered, but I guess this is probably due to these journals having a higher proportion of DOIs than journals in the humanities and literature, which is where I was more likely to find the topics I was interested in. One thing I wondered about was whether there was any correlation between how “populist” the article topic was and what kind of social media was used to share it; perhaps including Pinterest and Tumblr in my search scope would reveal something rather different than if I stuck to news sites and blogs. However, it was difficult for me to judge that from the results I received (partly because the notion of what’s “populist” is subjective, I think).

Looking at the Altmetric “doughnuts” was far more pleasing, and easier to take in at a glance, than using Excel spreadsheets, and I will definitely be going back to this tool; hopefully I will be able to get more out of it with practice.

Learning to love the digital in order to understand the world

I was fumbling around for a way into a blog post this week, and was inspired by my classmate Judith’s entry, “If it’s boring, it’s important”, which made me laugh as it’s so painfully true.*

That said, I am loving the way DITA is being taught; it puts a whole new spin on things that have otherwise never interested me, and I am certainly not finding it boring. In fact it has me seriously questioning the ways in which I have been taught about digital technology in the past! Ernesto’s slideshow for our last lecture, on “Archiving, Understanding and Visualising Twitter data”, is a good example of this “angle”, ending with a cartoon by Randall Munroe, creator of xkcd.com, in which two stick figures in a dark, empty landscape full of possibilities say “Let’s find out”. Ultimately, all of this is about being curious, having questions, and using information, such as that from Twitter, to find out about the world we live in. When information technology is looked at in this way, it is much less daunting.

Having my mind opened up in this way is leading to some important realisations. For example, it now seems clear to me that leaving Twitter data out of the equation when analysing modern communication networks, the topics groups of people care about, and the way current events of future historical importance unfold, is bordering on irresponsible. As Ernesto Priego says in his blogpost ‘Twitter as public evidence and the ethics of Twitter research’, “these days what’s unethical is not to use Twitter as a research tool”. Indeed, the Library of Congress signed an agreement with Twitter in 2010 which gave it access to an archive of public tweets from 2006 to 2010, and Twitter continues to provide the Library with access to public tweets to this day. The Library of Congress website explains that the reason for this is the Library’s core mission to “collect the story of America”, demonstrating the importance of social media as document, and consequently its place in the way we understand our world. As Lyn Robinson states in ‘The future of documents’, networked technology is only going to become more pervasive, and as such social media will not be going away any time soon. Furthermore, the role Twitter and other social media play in political protest and world-changing events has been the subject of much recent debate in the media, and even when the position taken is that they are actually not very important, as in this article by Laurie Penny for the New Statesman, ‘Revolts don’t have to be Tweeted’, Twitter et al. are still central to the discussion. Either way, social media can’t be ignored.

That said, I had never used Twitter until I needed to for #citylis, so I remain skeptical about privileging it (though I do understand that it is the public nature of Twitter which lends itself to study in a classroom context). Adding to my skepticism is my experience working in a public youth library in New Zealand. It was the early 2000s, and Bebo and MySpace were new on the scene. Interestingly, many of my peers who lived in the central city and were interested in punk music took to MySpace, while almost all of the kids who used the library (which was based in an economically underprivileged suburb), and who generally listened to hip-hop and R&B, used Bebo. These different groups were having conversations and building communities that didn’t seem to touch each other, though they were all living in the same city. And to this day I don’t quite understand why one platform would be chosen over the other on the basis of socio-cultural and economic factors, given that both were free.

I am therefore finding myself quite drawn to literature which points out the biases that occur in analyses of Twitter data, particularly when the media use these analyses to explain social and historical events.

In ‘Assessing the bias in samples of online networks’, Gonzalez-Bailon et al. describe using the Twitter search API (application programming interface) and the Twitter streaming API with various filters to compare what kind of data each brings up about the Spanish ‘indignados’ protests in 2012. Perhaps unsurprisingly, they found that smaller samples don’t reveal the diverse array of peripheral activity and conversation that was going on, and that their data from the filtered search API was biased towards the most central tweets and users. Unfortunately, unless you are the Library of Congress or some other big organisation that can pay for archives of “all” the tweets, you will be limited to smaller samples, and will therefore get a skewed picture of communication networks.
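Their comparison essentially amounts to collecting the same event through two different pipes and measuring what the smaller pipe misses. A toy version of that check, assuming you already have two samples of the same protest hashtag (each tweet reduced to a dict with hypothetical ‘user’ and ‘mentions’ keys, however they were collected):

```python
from collections import Counter

def coverage_report(search_sample, stream_sample):
    """Compare a filtered search-API sample against a larger streaming sample of the same event."""
    search_users = {t["user"] for t in search_sample}
    stream_users = {t["user"] for t in stream_sample}

    # Peripheral voices: accounts the bigger sample caught but the search sample never saw.
    missing = stream_users - search_users

    # Crude centrality proxy: how often each account is mentioned in the larger sample.
    mention_counts = Counter(m for t in stream_sample for m in t["mentions"])
    central = {user for user, _ in mention_counts.most_common(50)}

    return {
        "users_only_in_stream": len(missing),
        "share_of_central_users_in_search": len(central & search_users) / max(len(central), 1),
    }

# Usage: coverage_report(tweets_from_search_api, tweets_from_stream_api)
```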

But this brings me back full circle to things that are, if not boring, at least seemingly impenetrable at first glance, being the most important. As researchers, librarians and information specialists, we need to understand things such as how APIs work and their inherent limitations in order to best assess the data we collect. I am also interested in thinking about why certain people use Twitter and others don’t, and why some groups were using Bebo and others MySpace in the mid-2000s. How does this affect data visualisation? You only have to be a New Zealander, look at the picture of the world-as-connected-by-Facebook on Facebook’s login page and see your country left out of it, to realise the limitations of Big Data, and to remember there is always another story going on beyond the one gleaned from the algorithms.

*Entertaining aside: on Judith’s blog I shared this link to an article by Charlie Brooker for the Guardian, “What is Drip and how, precisely, will it help the government ruin your life?”, about the Data Retention and Investigatory Powers bill, which Brooker describes as “the most tedious outrage ever”. This is how They will get us in the end: by boring us to death with the things that matter most.