Museums have been making their collection metadata available digitally through websites & APIs for some time now (since 2009 for our Search the Collections and the V&A Collections API), allowing academics and other researchers to examine object records and aggregate them across multiple museum collections. However, there has been no equivalent way for researchers to access collection object images, i.e. via a standard API, or indeed any API at all. This means researchers wanting, for example, to:
- virtually leaf through all the pages of a manuscript that has been broken up and is held in different collections around the world (e.g. pages from the Glazier-Rylands Bible are held by the V&A, University of Manchester Library, Morgan Library, and others)
- examine side-by-side the pencil sketch, oil sketch, and painting of a Constable work held by three different galleries (the V&A, Tate and Yale)
- bring together all of Leonardo da Vinci’s notebooks from around the world (V&A, British Library, Royal Collection, Royal Library of Turin, National Library of Spain, and many more)
currently have to find the relevant items on multiple websites (if they have been digitised at all), download the images (likely all at different resolutions), and re-unite them somehow on their own computer, with no easy way to annotate them and share those annotations with other researchers making similar efforts.
So, given that IIIF provides the answer to so many problems, it must be a hugely complex thing to implement? Well, not really, but it has certainly helped us to break our implementation down into smaller steps rather than taking a single giant leap, with each step providing new features for our website. This illustrates the benefits of the implementation more concretely than talking rather abstractly about APIs and metadata (enjoyable as that is to some of us).

The first step has been to implement the Image API. For this, we have set up a server running IIPImage (this may switch to Loris in the future); it serves up a (small) selection of object images used on our main website, identified using the asset identifiers generated by the V&A’s Digital Asset Management system. In the future this image API server will be automatically populated with all the assets associated with our collection objects, which will then play a role in how we display and annotate enhanced objects on the website.
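To give a concrete flavour of what the Image API offers: a request is just a URL built from a handful of path parameters (region, size, rotation, quality, format). The sketch below constructs such URLs in Python; the server base URL and asset identifier are hypothetical examples, not our actual endpoints.

```python
# Sketch: constructing IIIF Image API 2.x request URLs of the form
# {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The base URL and asset identifier are hypothetical examples.

def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL from its path parameters."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

base = "https://images.example.org/iiif"   # hypothetical image server endpoint
full = iiif_image_url(base, "2006AM1234")  # full image at full resolution
thumb = iiif_image_url(base, "2006AM1234", size="!200,200")  # fit within 200x200
detail = iiif_image_url(base, "2006AM1234", region="1024,1024,512,512")  # a crop
```

Because every crop, scale, and rotation is addressable as a plain URL, any IIIF-aware viewer can request exactly the tiles it needs without the server pre-generating derivatives.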
We then needed a way to allow our content editors to easily make use of these IIIF-managed images on the website. One of the many benefits of developing our own website content management system (CMS) is that we can add support for these features directly, thereby incorporating them seamlessly into the editing process (although it’s not quite there yet, see Next Steps below). We built a new block type into our CMS editor which can be added to website articles and which, when given an IIIF Image URL, adds the image to the page as a zoomable image (using OpenSeadragon‘s recently added support for IIIF), see example below:
Annotations can also be added, although these are not currently handled via Open/Web Annotations – that’s another step on the ‘to do’ list.
Of course, not all IIIF served images will need to be presented via a zoomable, annotated interface; this CMS feature will develop to allow editors to make use of the image in different ways.
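Under the hood, OpenSeadragon’s IIIF support needs little more than the image’s `info.json` URL as a tile source. A minimal sketch of how a CMS block might derive a viewer configuration from the IIIF Image URL an editor supplies (URLs hypothetical; our actual CMS internals may differ):

```python
# Sketch: deriving an OpenSeadragon-style configuration from a IIIF Image
# URL. OpenSeadragon accepts the image's info.json URL as a tileSources
# entry. URLs are hypothetical examples.

def tile_source(image_base_url):
    """Return the info.json URL a IIIF-aware viewer uses as a tile source."""
    return image_base_url.rstrip("/") + "/info.json"

def viewer_config(image_base_url, element_id="viewer"):
    """Minimal configuration dict for displaying one IIIF image."""
    return {"id": element_id, "tileSources": [tile_source(image_base_url)]}

config = viewer_config("https://images.example.org/iiif/2006AM1234")
```

Because the viewer only needs that one URL, the same CMS block can later render the image as a static thumbnail, a zoomable viewer, or an annotated view without changing what the editor enters.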
As can be seen from the videos, the first pages on our website to make use of this feature are three highly detailed medieval copes featured in the V&A’s recently opened exhibition, Opus Anglicanum: Masterpieces of English Medieval Embroidery. These web features let visitors view the embroidery in close detail, with some explanatory annotations.
The next step is implementing the Presentation API. For this we will need to generate manifest files, which make more structured use of the Image API. So, for example, we can present the pages of a manuscript or the photographs in an album in a page-turning viewer, with each photograph or page an individual image asset (potentially even served from another institution, so we could re-unite much of the Glazier-Rylands manuscript mentioned above). To do this in an automated way, we need to establish how much can be generated from our existing collections data, so we are starting to analyse how structured objects (albums, manuscripts, books, portfolios, etc.) are catalogued in the collection and how that cataloguing can be transformed into manifests.
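To give a flavour of what such generation involves, here is a minimal Presentation API 2.x manifest sketched in Python, with one canvas per page image. All identifiers, labels, and dimensions are hypothetical placeholders rather than output from our collections data.

```python
import json

# Sketch: a minimal IIIF Presentation API 2.x manifest for a multi-page
# object, one canvas per page. Identifiers, labels, and image dimensions
# are hypothetical examples.

def canvas(base, ident, label, width, height):
    """Build one sc:Canvas painted with a single IIIF image."""
    service = f"{base}/{ident}"
    return {
        "@id": f"{service}/canvas",
        "@type": "sc:Canvas",
        "label": label,
        "width": width,
        "height": height,
        "images": [{
            "@type": "oa:Annotation",
            "motivation": "sc:painting",
            "on": f"{service}/canvas",
            "resource": {
                "@id": f"{service}/full/full/0/default.jpg",
                "@type": "dctypes:Image",
                "service": {
                    "@context": "http://iiif.io/api/image/2/context.json",
                    "@id": service,
                    "profile": "http://iiif.io/api/image/2/level1.json",
                },
            },
        }],
    }

base = "https://images.example.org/iiif"  # hypothetical image server
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/example-ms/manifest.json",
    "@type": "sc:Manifest",
    "label": "Example manuscript",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [
            canvas(base, f"page-{n:03d}", f"f. {n}", 3000, 4000)
            for n in range(1, 4)
        ],
    }],
}
print(json.dumps(manifest, indent=2))
```

The key point for re-uniting dispersed objects is that each canvas’s image service is an independent URL, so one sequence can happily mix pages served by several institutions.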
We also plan to improve how the CMS works with IIIF as part of the deeper implementation of enhanced objects on the site. Ideally, the IIIF underpinnings of object images should be hidden from content editors, with IIIF Image URLs generated by the CMS based on the collection object the content editor is working on.
We will also look at other ways we can present the images. One thing we are particularly keen to do is show multi-spectral images of objects (X-ray, UV, visible light, etc.) in a more useful way. The development of IXIF (extending IIIF to handle motion & 3D media) is something we are also following closely.
This ongoing project has been the work of many people in the Digital Media team, with others providing help and encouragement. So many thanks to:
- Holly Hyams (V&A Digital Media) for doing all the work of adding the images to the Opus pages and recording the demo videos
- James Docherty (V&A Digital Media) for implementing IIIF in the CMS
- Tom Crane (Digerati) for talking us through the Wellcome Library implementation and patiently answering lots of IIIF questions
- Michael Appleby (Yale) for providing the Constable example
- Catherine Yvard (V&A National Art Library) for providing the Glazier-Rylands manuscript example
- Arran Rees and the rest of the V&A Collections department for continued assistance on and deep knowledge of collections records
- Douglas Dodds (V&A Word & Image) for initiating and encouraging the V&A along this path