KB Research

Research at the National Library of the Netherlands

Author: Lieke Ploeger

Re:publica 2013: In/side/out

From 6-8 May the Re:publica conference was held in Berlin – one of the largest international conferences in the field of blogging, social media and our society in the digital age. The numbers alone are impressive: over 5,000 attendees, 300 speakers, 3 TB of video footage covering 95 hours of sessions, talks and panel discussions, and around 27,600 social media activities. It was hard to keep up: even before the conference started, the Twitter stream of #rp13 exploded with tweets!

Photos by Gregor Fischer (https://www.flickr.com/photos/re-publica/)

But it was not only the size of the conference that was impressive: there was also a large number of interesting speakers and talks. Unfortunately my German was not sufficient to follow every detail, so I mainly attended the sessions in English. This did make the choice easier, as there were often around 10 parallel sessions and workshops. For most of the presentations a video is online – a good overview of all sessions is available at http://michaelkreil.github.io/republicavideos/.

Several talks focused on the differences in internet access and internet censorship worldwide. During the session ‘403 Forbidden: A Hands-on Experience of the Iranian Internet’ you could gain more insight into how the Iranian internet is censored by participating in a quiz. Several websites and newspaper images were shown, and you had to guess which sites were blocked and which photos had been manipulated. The results were often surprising (for example, the sites of the KKK and Absolut Vodka are blocked, but the sites of Smirnoff Vodka and the American Nazi Party are not), showing how internet filtering in Iran is both arbitrary and targeted, designed to make you feel insecure online.

#403forbidden: Iranian photo manipulation of Michelle Obama’s dress

Another interesting talk in this area was ‘Internet Geographies: Data Shadows and Digital Divisions of Labour’ (unfortunately the video of this talk is not online yet). Mark Graham of the Oxford Internet Institute showed how, despite the initial hope that the internet would offer everyone around the world the opportunity to share their knowledge, knowledge production is strongly concentrated in certain regions. For example, Europe has just over 10% of the world’s population but produces nearly 60% of all Wikipedia articles, and there are more Wikipedia articles written about Antarctica than about any country in South America or Africa. He illustrated these numbers with great visualisations, most of which you can find through this blog post: http://www.zerogeography.net/2011/09/geographies-of-worlds-knowledge.html

In ‘Investigation 2.0’ Stephanie Hankey and Marek Tuszynski of Tactical Tech discussed ways in which individual hackers and people working with data visualisation now work together with journalists to make underexposed data visible and present it in a way that gives us more insight into complex social and political issues. A good example of this is the visualisation of drone strikes in Pakistan since 2004: http://drones.pitchinteractive.com.

Data visualisation can also be done through food, an approach the three women behind the cool website http://bindersfullofburgers.tumblr.com call ‘Data cuisine’ (or: how to get juicy data from spreadsheets). They are working on visualising data that is usually thought of as boring and dry, such as election results, in creative and appealing ways. In Binders full of Burgers they displayed the 2012 US election results with burgers and fries, while they used waffles, candy and fruit to show the local Berlin elections (http://wahlwaffeln.tumblr.com/). They also made their own talk more appealing by handing out cookies for audience participation :)

Then there was a good overview presentation by Joris Pekel of the Open Knowledge Foundation on Open Data & Culture – Creating the Cultural Commons. After presenting some impressive numbers on the records, media files and metadata available online from Europeana, DPLA and Wikimedia Commons, he focused on the possibilities of enabling the public to connect and contextualise open data in the future through initiatives such as OpenGLAM. His talk is also available on Slideshare at http://t.co/bpqBEB28sb.

A very direct way of promoting open data, transparency and hacker culture was presented by Daniela B. Silva in Hacker culture on the road. In 2011, a Brazilian hacker community came up with the plan of starting a Hacker Bus: they crowdfunded the budget to buy an old school bus and started travelling around Brazil, organising local events focused on promoting government transparency and hacker activism. Volunteer hackers, digital activists, lawyers, artists and journalists joined the bus, listened to the needs of the people they met and helped them develop solutions with the available technology, ranging from local street-mapping applications and apps to new local legislation.

Finally, one of the most fascinating talks was that of Neil Harbisson and Moon Ribas on Life with extra senses – How to become a cyborg. They showed ways of extending and creating new senses and perceptions by applying technology to the human body, as well as the artistic projects they have created based on these sensory extensions. For example, due to a medical condition Neil Harbisson only sees in black and white, so he developed an electronic eye that transforms the colours around him into sound, allowing him to experience colour in a different way. Together they have established the Cyborg Foundation to promote their activities further and help more people become cyborgs, because they believe all humans should have the right to extend their senses and perceptions. Interestingly, they see this as a way for people to get closer to nature rather than alienating themselves from it, since many animals have very different sensory perceptions from ours.

Neil Harbisson and Moon Ribas: when Neil scans Moon’s dress with his Eyeborg, he will hear the song ‘Moon River’ (Photo by Tony Sojka, https://www.flickr.com/photos/re-publica/)

All this is just a fraction of everything that was presented at Re:publica: at http://michaelkreil.github.io/republicavideos/ you can find videos of nearly every talk – and if you speak German, you’re even luckier!

Performance measurement – A state-of-the-art review

Author: Henk Voorbij

The Handboek Informatiewetenschap (Handbook of Information Science) contains about 150 contributions from experts on various subjects, such as budgeting, collection management, open access, website archiving, digitisation of heritage collections, information retrieval software and gaming in the library. The Handbook is published as a loose-leaf paper version and as an online version (http://www.iwabase.nl/iwa-base/). Unfortunately, it is not well known.

Articles are updated every ten years. Recently I was asked to update the article on Performance Indicators, originally published in 2000 and authored by Peter te Boekhorst. My contribution starts with definitions of core concepts (for example: what is the difference between statistics and performance indicators?). It continues with general guidelines that may be helpful to libraries that aim to develop their own instrument for performance measurement. Among these are models such as the classical systems approach (input – throughput – output – outcomes) and the Balanced Scorecard, as well as international standards such as those provided by ISO and IFLA. In the Netherlands there are well-developed benchmark instruments for university libraries and for libraries of universities of applied sciences. I have been involved in the development of these systems and the analysis of their data for many years, and I describe my experiences in depth in order to give an example of the caveats and benefits of performance measurement. The last three chapters address potential additions to traditional performance indicators: user surveys, web statistics and outcomes.
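
To make the difference between statistics and performance indicators concrete, here is a small illustrative sketch with made-up numbers (my own example, not taken from the handbook): statistics are the raw counts a library collects, while a performance indicator relates those counts to each other or to a target.

// Illustrative sketch with hypothetical numbers (not from the handbook):
// statistics are raw counts, performance indicators relate them to each other.
public class IndicatorExample {
    public static void main(String[] args) {
        // Raw statistics for one hypothetical reporting year
        double loans = 120_000;           // statistic: total loans
        double registeredUsers = 40_000;  // statistic: registered users
        double collectionSize = 600_000;  // statistic: items in the collection

        // Performance indicators derived from the statistics above
        double loansPerUser = loans / registeredUsers;      // 3.0 loans per user
        double collectionTurnover = loans / collectionSize; // 0.2 loans per item per year

        System.out.printf("Loans per registered user: %.1f%n", loansPerUser);
        System.out.printf("Collection turnover rate: %.2f%n", collectionTurnover);
    }
}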

Updating an earlier version offers an excellent opportunity to depict the progress in the field. Two things struck me most. One is the rapid rise of the concept ‘Key Performance Indicators’. There is no agreement in the literature on what this concept actually means. Some use it loosely and do not make a genuine distinction between performance indicators and key performance indicators. Others have very pronounced ideas about its meaning: there should be no more than ten KPIs, they should be measured daily or weekly, they should be coupled to critical success factors rather than strategic goals, and they should be understandable to a fourteen-year-old child. The other is the growing interest in outcomes, together with the popularity of LibQUAL+ as an instrument to measure user satisfaction and the upsurge of new technologies such as web analytics. I can’t wait to see the 2023 version of the paper.

Looking back: publications in 2012

2012 was quite a busy year for the people working at the Research department of the KB, National Library of the Netherlands. The large-scale IMPACT project was concluded in the summer of 2012, followed by the start of the IMPACT Centre of Competence in Digitisation. Work continued in ongoing research projects such as SCAPE and APARSEN (both in the area of digital preservation) and in the newly started Europeana Newspapers project. In addition, articles were written on research areas such as copyright and benchmarking for libraries, and a website was started for the Atlas of Digital Damages. Last but not least, our library and two people from our Research department were featured in an item on the Euronews TV channel.

List of publications in 2012 by the KB Research department:

Succeed Project launched

Author: Clemens Neudecker
Originally posted on: http://www.openplanetsfoundation.org/blogs/2013-02-05-succeed-project-launched

The kick-off meeting of the Succeed project (http://www.succeed-project.eu) took place on Friday 1 February in Paris.

Succeed is a project coordinated by the Universidad de Alicante and supported by the European Commission with a contribution of €1.8 million.

The core objective of Succeed is to promote the take-up of the research results generated by technological companies and research centres in Europe in a field that is strategic for Europe: the digitisation and preservation of its cultural heritage.

Succeed will foster the take-up of the most recent tools and techniques by libraries, museums and archives through the organisation of meetings of experts in digitisation, competitions to evaluate techniques, technical conferences to broadcast results and the maintenance of an online platform for the demonstration and evaluation of tools.

Succeed will in this way contribute to the coordination of efforts for the digitisation of cultural heritage and to the standardisation of procedures. It will also propose measures to the European Union to foster the dissemination of European knowledge through centres of competence in digitisation, such as the Open Planets Foundation, PrestoCentre, APARSEN, the 3D-COFORM Virtual Competence Centre and V-MusT.net.

In addition to the Universidad de Alicante, the consortium includes the following European institutions: the National Library of the Netherlands, the Dutch Institute of Lexicology, the Fraunhofer Gesellschaft, the Poznań Supercomputing Centre, the University of Salford, the Foundation Biblioteca Virtual Miguel de Cervantes Saavedra, the French National Library and the British Library.

For additional information, please contact Rafael Carrasco (Universidad de Alicante) or send an email to succeed@ua.es.

OAIS in het Nederlands! (OAIS in Dutch!)

Finally, an article on the OAIS model has been written in Dutch. Barbara Sierman of the Research department of the KB, National Library of the Netherlands, wrote “Het OAIS-model, een leidraad voor duurzame toegankelijkheid” in Handboek Informatiewetenschap, issue 62, December 2012. The article describes the most important concepts of the latest version of the standard for digital preservation (2012) in clear terms.

Within the KB, the OAIS model guides the design of the new digital repository, and it is important to everyone involved in the long-term preservation of digital material – from acquisition to metadata and from IT to online access. The article will also appear on www.iwabase.nl.

————————————————————————————-

Eindelijk is er een artikel in het Nederlands verschenen over het OAIS model. Barbara Sierman van de Koninklijke Bibliotheek, afdeling Onderzoek, schreef “Het OAIS-model, een leidraad voor duurzame toegankelijkheid” in het Handboek Informatiewetenschap, aflevering 62 van december 2012. De beschrijving gaat uit van de laatste versie van de standaard voor digitale duurzaamheid (2012) en beschrijft in heldere taal de belangrijkste concepten.

Het OAIS-model is binnen de KB leidend bij het ontwerp van het nieuwe Digitaal Magazijn, en is van belang voor iedereen die een rol speelt bij het duurzaam toegankelijk houden van digitaal materiaal: van acquisitie tot metadatering en van IT tot online toegang. Het artikel verschijnt ook op www.iwabase.nl.

The Elephant in the Library: KB at Hadoop Summit Europe

Clemens Neudecker (technical coordinator in the Research department at the National Library of the Netherlands) and Sven Schlarb (Austrian National Library) will present the paper ‘The Elephant in the Library’ at the upcoming Hadoop Summit Europe, the leading conference for the Apache Hadoop community.

The paper, which is based on the work being done in the SCAPE project, discusses the role Apache Hadoop plays in the mass digitisation of cultural heritage in the MLA (museums, libraries and archives) sector. Clemens and Sven were recently interviewed about their participation in this large-scale event – the interview is available from the Hadoop website: Meet the Presenters.

Paper abstract:
Libraries collect books, magazines and newspapers. Yes, that’s what they have always done. But today, the amount of digital information resources is growing at dizzying speed. Facing the demand for digital information resources available 24/7, there has been a significant shift in a library’s core responsibilities. Today’s libraries are curating large digital collections, indexing millions of full-text documents, preserving terabytes of data for future generations, and at the same time exploring innovative ways of providing access to their collections.

This is exactly where Hadoop comes into play. Libraries have to process a rapidly increasing amount of data as part of their day-to-day business, and computing tasks like file format migration, text recognition and linguistic processing require significant computing resources. Many data processing scenarios emerge where Hadoop might become an essential part of the digital library’s ecosystem. Hadoop is sometimes referred to as a hammer that forces you to throw away everything that is not a nail. To stay within that metaphor: we will present some actual use cases for Hadoop in libraries, how we determine which library tasks are the nails and which are not, and some initial results.
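
To give a flavour of what such a ‘nail’ can look like, here is a minimal Hadoop MapReduce sketch in Java (my own illustration, not code from the SCAPE project or the paper). It counts file formats across a digital collection, assuming the input is a plain-text listing with one tab-separated pair of file path and detected MIME type per line, as produced by some earlier format identification step.

// Hypothetical example: count detected file formats across a collection.
// Input lines are assumed to look like "<file path>\t<MIME type>".
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FormatCount {

    public static class FormatMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text format = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (MIME type, 1) for every valid "<path>\t<type>" line
            String[] fields = value.toString().split("\t");
            if (fields.length == 2) {
                format.set(fields[1]);
                context.write(format, ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts per MIME type
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "format count");
        job.setJarByClass(FormatCount.class);
        job.setMapperClass(FormatMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A job like this would be packaged as a jar and submitted to the cluster in the usual way (for example: hadoop jar formatcount.jar FormatCount /collection/formats.tsv /collection/format-counts, with hypothetical paths); the same map/shuffle/reduce pattern scales up to heavier per-file tasks such as format migration or text recognition.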
