December 20, 2010

Guest post: LoC response to discussion on long-term preservation of JPEG 2000

Carl Fleischhauer, Program Officer at NDIIPP, Library of Congress, responds to recent posts from Johan van der Knijff and the Wellcome Library regarding long-term preservation of JPEG 2000. Both posts mentioned the need to rate the JPEG 2000 format for long-term sustainability using criteria drawn up by the Library of Congress and the National Archives, UK (we have helpfully created an openly available/editable Google doc to make this a collaborative effort).

Thanks for provocative blogs

Thanks to Johan van der Knijff and Dave Thompson for the helpful blog postings here that frame some important questions about the sustainability of the JPEG 2000 format. Caroline Arms and I were flattered to see that our list of format-assessment factors was cited, along with the criteria developed at the UK National Archives. We certainly agree that many of these factors have a theoretical turn and that judgments about sustainability must be leavened by actual experience.

We also call attention to the importance of what we call Quality and Functionality factors (hereafter Q&F factors). It is possible that some formats will "score" high enough on these factors as to outweigh perceived shortcomings on the Sustainability Factor front.

As I drafted this response, I benefited from comments from Caroline and Michael Stelmach, the Library of Congress staffer who chairs the Federal Agencies Still Image Digitization Guidelines Working Group.

Colorspace (as it relates to the LoC's Q&F factor Color Maintenance)

We agree that the JPEG 2000 specification would be improved by the ability to use and declare a wider array of color spaces and/or ICC profile categories. We join you in endorsing Rob Buckley's valuable work on a JP2 extension to accomplish that outcome.

When Michael and I were chatting about this topic, he said that he had been doing some informal evaluations of the spectra represented in printed matter at the Library of Congress. This is an informal investigation (so far) and his comment was off the cuff, but he said he had been surprised to see that the colors he had identified in a wide array of original items could indeed be represented within the sRGB color gamut, one of the enumerated color spaces in part 1 of the JPEG 2000 standard.

Michael added that he knew that some practitioners favor scRGB - not included in the JPEG 2000 enumerated list - because of scRGB's increased gamut and/or because it allows for linear representations of brightness rather than only gamma-corrected ones. The extended gamut - compared to sRGB - will be especially valuable when reproducing items like works of fine art. And we agree with Johan van der Knijff's statement that there will be times when we will wish to go beyond input-class ICC profiles and embrace 'working' color spaces. All the more reason to support Rob Buckley's effort.

Adoption (the LoC Sustainability criteria include adoption as a factor)

This is an area in which we all have mixed feelings: there is adoption of JPEG 2000 in some application areas but we wish there were more. Caroline pointed to one positive indicator: many practitioners who preserve and present high-pixel-count images like scanned maps have embraced JPEG 2000 in part because of its support for efficient panning and zooming. The online presentation of maps at the Library of Congress is one good example (for a given map you see an 'old' JPEG in the browser, generated from JPEG 2000 data under the covers).

Caroline adds that the geospatial community uses JPEG 2000 as a standard (publicly documented, non-proprietary) alternative to the proprietary MrSID. Both formats continue to be used. LizardTech tools now support both equally. Meanwhile, GeoTIFF is used a lot too. Caroline notes that LizardTech re-introduced a free stand-alone viewer for JPEG2000/MrSID images last year in response to customer demand. And a new service for solar physics from NASA, Helioviewer, is based on JPEG2000. NASA includes a justification for using the format on their website.

For my part, I can report encountering some JPEG 2000 uptake in moving image circles, ranging from its use in the digital cinema's 'package' specification (see a slightly out of date summary) to its inclusion in Front Porch Digital's SAMMA device, used to reformat videotapes in a number of archives, including the Library of Congress.

Meanwhile, Michael recalled seeing papers that explored the use of JPEG 2000 compression in medical imaging (where JPEG 2000 is an option in the DICOM standard), with findings that indicated that diagnoses were just as successful in JPEG 2000 compressed images as they were when radiologists consulted uncompressed images. An online search using a set of terms like "JPEG2000, medical imaging, radiology" will turn up a number of relevant articles on this topic, including Juan Paz et al, 2009, "Impact of JPEG 2000 compression on lesion detection in MR imaging," in Medical Physics, which provides evidence to this effect.

On the other hand - negative indicators, I guess - we have the example of non-adoption by professional still photographers. On the creation-and-archiving side, their preference for retaining sensor data motivates them to keep raw files or to wrap that raw data in DNG. I was curious about the delivery side, and looked at the useful dpBestFlow website and book, finding that the author-photographer Richard Anderson reports that he and his professional brethren deliver the following to their customers: RGB or CMYK files (I assume in TIFF or one of the pre-press PDF wrappers), "camera JPEGs" (old style), "camera TIFFs," or DNGs or raw files. There is no question that the lack of uptake of JPEG 2000 by professional photographers hampers the broader adoption of JPEG 2000.

Software tools (their existence is part of the Sustainability Factor of Adoption; their misbehavior is, um, misbehavior)

It was very instructive to see Johan van der Knijff's report on his experiments with LuraTech, Kakadu, PhotoShop, and ImageMagick. If he is correct, these packages do misbehave a bit and we should all encourage the manufacturers to fix what is broken. There is of course a dynamic between the application developers and adoption by their customers. If there is not greater uptake in realms like professional photography, will the software developers like Adobe take the time to fix things or even continue to support the JPEG 2000 side of their products?

Caroline, Michael, and I pondered Johan van der Knijff's suggestion that "the best way to ensure sustainability of JPEG 2000 and the JP2 format would be to invest in a truly open JP2 software library." We found ourselves of two minds about this. On the one hand, such a thing would be very helpful but, on the other, building such a package is definitely a non-trivial exercise. What level of functionality would be desired? The more we want, the more difficult to build. Johan van der Knijff's comments about JasPer remind us that some open source packages never receive enough labor to produce a product that rivals commercial software in terms of reliability, robustness, and functional richness. Would we be happy with a play-only application, to let us read the files we created years earlier with commercial packages that, by that future time, are defunct? In effect such an application would be the front end of a format-migration tool, restoring the raster data so that it can be re-encoded into our new preferred format. As we thought about this, we wondered whether people would come forward to keep such software updated for new programming languages and operating systems, so that it remains usable.

As a sidebar, Johan van der Knijff summarizes David Rosenthal's argument that "preserving the specifications of a file format doesn’t contribute anything to practical digital preservation" and "the availability of working open-source rendering software is much more important." We would like to assert that you gotta have 'em both: it would be no good to have the software and not the spec to back it up.

Error resilience

Preamble to this point: In drafting this, I puzzled over the fit of error resilience to our Sustainability and Quality/Functionality factors. In our description of JPEG 2000 core coding we mention error resilience in the Q&F slot Beyond Normal. But this might not be the best place for it. Caroline points out that error resilience applies beyond images and she notes that it may conflict with transparency (one of our Sustainability Factors). We find ourselves wishing for a bit of discussion of this sub-topic. Should error resilience be added as a Sustainability Factor, or expressed within one of the existing factors? Meanwhile, how important is transparency as a factor?

Here's the point in the case of JPEG 2000: Johan van der Knijff's blog does not comment on the error resilience elements in the JPEG 2000 specification. These are summarized in annex J, section 7, of the specification (pages 167-68 in the 2004 version), where the need for error resilience is associated with the "delivery of image data over different types of communication channels." We have heard varying opinions about the potential impact of these elements on long term preservation but tend to feel, "it can't be bad."

Here are a few of the elements, as outlined in annex J.7:
  • The entropy coding of the quantized coefficients is done within code-blocks. Since the encoding and decoding of the code-blocks are independent, bit errors in the bit stream of a code-block will be contained within that code-block.
  • Termination of the arithmetic coder is allowed after every coding pass. Also, the contexts may be reset after each coding pass. This allows the arithmetic coder to continue to decode coding passes after errors.
  • The optional arithmetic coding bypass style puts raw bits into the bit stream without arithmetic coding. This prevents the types of error propagation to which variable length coding is susceptible.
  • Short packets are achieved by moving the packet headers to the PPM (Packed Packet headers, Main header marker) or PPT (Packed packet header, Tile-part header marker) segments. If there are errors, the packet headers in the PPM or PPT marker segments can still be associated with the correct packet by using the sequence number in the SOP (Start of Packet marker).
  • A segmentation symbol is a special symbol whose correct decoding confirms that the bit-plane was decoded correctly, which allows errors to be detected.
  • A packet with a resynchronization marker SOP allows spatial partitioning and resynchronization. This is placed in front of every packet in a tile with a sequence number starting at zero. It is incremented with each packet.
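
For anyone who wants to check whether a particular file actually contains these elements, one rough approach is to scan the codestream for the relevant marker codes (SOP is the two-byte sequence 0xFF91, EPH is 0xFF92). The short Python sketch below does exactly that; it is a naive illustration rather than a proper validator (it simply searches the whole file instead of parsing the codestream), and the file name is only an example.

import sys

# Naive scan for JPEG 2000 error-resilience markers.
# SOP (start of packet) = 0xFF91, EPH (end of packet header) = 0xFF92.
def count_resilience_markers(path):
    data = open(path, "rb").read()
    return {
        "SOP markers": data.count(b"\xff\x91"),
        "EPH markers": data.count(b"\xff\x92"),
    }

if __name__ == "__main__":
    for name, count in count_resilience_markers(sys.argv[1]).items():
        print(name, count)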

Conclusion

Thanks to the Wellcome Library for helping all of us focus on this important topic. We look forward to a continuing conversation.

December 08, 2010

Suitability of JPEG2000 for preservation, help us do some further work

Following on from Johan van der Knijff's guest post on this blog we were interested in following up issues that Johan raised. If, as Johan suggests, there are some gaps in the tool sets available for working with JPEG2000 in a reliable way and if some of the long term preservation issues are not well understood, perhaps we could begin to explore where the gaps are. Specifically, we were wondering if we could compare the suitability of just one part of JPEG2000 - the JP2 format - for long term preservation against the two sets of criteria that Johan mentioned.

These criteria were

1. The Library of Congress Sustainability of Digital Formats Planning for Library of Congress Collections, and
2. The National Archives Digital Preservation Guidance Note 1: Selecting file formats for long-term preservation.

Our thinking is that we could do a quick, targeted exercise utilising our community expertise to provide an overview that might reveal useful areas for future research. We propose to limit our investigation to just the JP2 format (for now) and the two sets of suitability criteria. We're looking for high level properties of the JP2 format in relation to the TNA and LoC criteria. High level in the sense that we think that it should be possible to set out properties of JP2 as a series of bullet points against each of the TNA and LoC criteria. It's not a perfect approach by any means, but as a starting point it seems to offer interesting possibilities.

It's not meant to be definitive, but to serve as an information sharing exercise to help non-technical archivists/librarians better understand the suitability of JP2 to long term preservation, and to highlight areas where more work may be required. In this way we hope to point the way for developers and the more technically minded to do further work that makes JPEG2000 a more suitable format for long term preservation by providing better information/documentation to support that.

So we're asking you to collaborate with us in this piece of work. We've created a framework document and put it onto GoogleDocs, where it can be viewed and edited. This document summarises the TNA and LoC criteria (the full criteria can be seen online, following the links given above) and provides space to add your responses as bullet points in the right hand column.

Remember that we're thinking about JP2 only and we're looking for a high level overview - so be brief and stick with the bullet points for now. We'll take on the editing and management of the document.

We will publish the results sometime in early 2011, provided we can get a sufficient and meaningful response. If you have any questions, please ask!

December 02, 2010

Guest post: Ensuring the suitability of JPEG 2000 for preservation

Johan van der Knijff, of the KB/National Library of the Netherlands, follows up his presentation at the JPEG 2000 seminar with a guest blog post on long-term preservation of JPEG 2000.

In my presentation during the JPEG 2000 seminar I discussed the suitability of JPEG 2000 (and more specifically its JP2 format) for long-term preservation. I highlighted the erroneous restriction in the JP2 (and JPX) format specification that only allows ICC profiles of the 'input' class to be used. This effectively prohibits the use of all working colour spaces such as Adobe RGB, which are defined using 'display device' profiles. I also showed how different software vendors interpret the format specification in subtly different ways, and how such issues can create problems in the long term, such as the loss of colour space and resolution information after some future migration.

This leads us to the question of the extent to which we can predict a specific file format's suitability for long-term preservation. The answer is not that straightforward. The Library of Congress assesses file formats against 7 'sustainability factors', whereas the National Archives have formulated a list of 12 criteria. It is beyond the scope of this blog post to present a detailed analysis of the extent to which JP2 lives up to either set of criteria. However, it is interesting to have a look at whether these criteria could have been helpful in identifying the issues covered by my presentation.

Format specifications
First, both the LoC's 'sustainability factors' and the TNA criteria acknowledge the importance of having published specifications of a file format. The LoC uses a 'Disclosure' factor, which refers to “the existence of complete documentation, preferably subject to external expert evaluation”. TNA take this one step further by also defining a 'Documentation Quality' criterion, which expresses the degree to which documentation is comprehensive, accurate and comprehensible. This last criterion largely covers the JPEG 2000 ICC issue, although it's questionable how useful it would have been for identifying the issue a priori. A problem with errors and ambiguities in format specifications is that they can be incredibly easy to overlook, and you may only become aware of them after discovering that different software products interpret the specifications in slightly different ways.

Adoption
Formats that are widely used are typically well supported by an array of software tools, and such formats are unlikely to disappear into obsolescence. TNA expresses this through an 'Ubiquity' criterion, which essentially reflects a file format's overall popularity. The definition of the LoC's 'Adoption' factor includes a list of criteria that can be used as “evidence of adoption”. The first set of criteria here includes “bundling of tools with personal computers, native support in Web browsers or market-leading content creation tools, and the existence of many competing products for creation, manipulation, or rendering of digital objects in the format”.


Note that JP2 isn't doing particularly well when measured against any of these criteria. However, the LoC list adds that “a format that has been reviewed by other archival institutions and accepted as a preferred or supported archival format also provides evidence of adoption”. This certainly seems to be the case for JP2. But how relevant is this, really? Going back to the ICC profiles issue: the JP2 file format has been around for about 10 years now, and its acceptance by the archival community has been growing steadily over the last 5 years or so. Yet, this whole issue seems to have gone unnoticed in the archival community for all those years, and I think this is slightly worrying.

Now let's imagine for a moment that JP2 had been picked up by the digital photography and graphic design communities. For such uses the ability to do proper colour management is a basic prerequisite, and limiting the support of ICC profiles to the 'input' class would have made the format virtually useless to these user communities. My guess is that in this -entirely fictional- scenario, the format specification would have either improved quickly (based on feedback from the user community), or the respective user communities would have simply stopped using the format altogether. The problem here seems to be that very few people in the archiving community are even aware of such things as colour spaces and colour management, let alone their importance within the context of preservation. With more established formats such as TIFF this may not be as much of a problem, if only because TIFF has been 'road tested' for decades by the photography and graphic design communities. As an archiving community we cannot fall back on any similar 'road testing' in the case of JP2. And this brings me to my next point.

Importance of hands-on experience
Preservation criteria such as those of the LoC or TNA are invaluable for assessing the suitability of a format for preservation, but I believe it is equally important to have actual hands-on experience with the tools that are used for creating, modifying, and reading the format. For instance, the TNA criteria use the number of software tools that support a given format as an indicator of the extent of current software support for that format. But knowing the number of tools says nothing about how good or useful these tools actually are! In the case of JP2, quite a large number of (mostly free or open-source) tools exist that, under the hood, are using the open JasPer library. JasPer is known to have performance and stability issues that make it unsuitable for most professional applications (for which, I should emphasise, it was never developed in the first place!). These issues affect all software tools that are using JasPer. So simply counting the number of available tools may miss the point unless additional quality criteria are incorporated. But how would you define these?

Part of the answer, I think, is that assessing a format's suitability for long-term preservation is not a purely top-down process. Most of the software-related issues that I showed in my presentation were found by simply experimenting with actual files, encoders and characterisation tools: convert a TIFF to JP2; convert it back to TIFF; use existing metadata-extraction and characterisation tools such as ExifTool and JHOVE to analyse the in- and output files; try to understand the output of these tools; compare the output before and after the conversion, and so on. Such experiments are extremely useful for getting a feel for the strengths and weaknesses of specific software tools, and they can reveal problems that are not readily captured by pre-defined criteria. In some cases, their results may be used to refine existing criteria, or even add new ones.
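
As a rough illustration of this kind of experiment, the sketch below round-trips a TIFF through JP2 and back and then compares the pixel data before and after. It is only a sketch: it assumes a Python environment where Pillow has been built with OpenJPEG support, the file names are hypothetical, and the call to ExifTool (JHOVE could be invoked in the same way) only works if that tool is installed on the system.

import subprocess
from PIL import Image, ImageChops

# Round-trip a TIFF through JP2 and back again (lossless 5-3 wavelet).
src = Image.open("original.tif")
src.save("test.jp2", irreversible=False)
Image.open("test.jp2").save("roundtrip.tif")

# Compare the pixel data before and after the round trip.
diff = ImageChops.difference(src, Image.open("roundtrip.tif"))
print("pixel data identical" if diff.getbbox() is None else "pixel data differs")

# Inspect the embedded metadata of each file with an external tool.
for path in ("original.tif", "test.jp2", "roundtrip.tif"):
    subprocess.run(["exiftool", path])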

Final notes on preservation criteria
Although I wouldn’t downplay the importance of preservation criteria such as those used by the LoC or TNA, I think it’s important to realise that such criteria are largely based on theoretical considerations. In most cases they are not based on any empirical data, and as a result their predictive value is largely unknown. For example, an interesting blog post by David Rosenthal argues that preserving the specifications of a file format doesn’t contribute anything to practical digital preservation. According to Rosenthal, the availability of working open-source rendering software is much more important, and he explains how “formats with open source renderers are, for all practical purposes, immune from format obsolescence”.

This takes us directly to the lack of JPEG 2000-related activity in the open source community, which I also referred to in my presentation. Perhaps the best way to ensure sustainability of JPEG 2000 and the JP2 format would be to invest in a truly open JP2 software library, and release this under a free software license. This could either take the form of the development of a completely new library, or investing in the improvement and further development of an existing one, such as OpenJPEG. This would require an investment from the archival community, but the payoff may be well worth it.

Acknowledgement: this blog entry was largely inspired by an e-mail discussion that was started by Richard Clark, and in particular by a contribution to this discussion by William Kilbride.

November 29, 2010

Wellcome Library releases an ITT for a Workflow Tracking System

If you’ve been reading our blog regularly you’ll know how the Library plans to transform itself into a groundbreaking digital resource, allowing access to much of the Library’s material in digital form.

As part of this programme we’ve just released an ITT for a Workflow Tracking System. We’re looking for a system that will track and manage the processes around creating digital content – whether that content is digitised by us, digitised externally or born digital archival material – and automate that activity as much as possible.

Within the Library, staff who want to add content to our Digital Library will do so using the Workflow Tracking System. This means using the WTS to record that all digital content, e.g. digitised books or archival collections, has been created correctly, has had its descriptive metadata attached, has been converted to JPEG2000 (or some other appropriate format) and has been ingested into our digital object repository. The WTS will also create Metadata Encoding and Transmission Standard (METS) files. These will be used by the front end system to deliver digital content to our users.

Expressed simply, the WTS will play a critical central role in ensuring that all digital content that is destined for our Digital Library is created, quality controlled and ingested accurately and efficiently into the Library’s repository.

November 24, 2010

JPEG 2000 seminar - edited highlights #2

This blog post continues my summary of the JPEG 2000 for the Practitioner Seminar (the edited highlights of the first five presentations can be seen in a previous blog post).

Following Svein Arne Brygfjeld's discussion of the National Library of Norway's use of JPEG 2000, we had Saša Mutić, General Director of Geneza, speaking about the "Practical Usage of JP2 Files with Presentational Web Interface." Saša, based in Slovenia, gave an overview and demonstration of the delivery system MediaINFO that uses a JPEG 2000 image server. This system is soon to be used by the National Library of Norway to deliver their digitised images. Some interesting features include the ability to easily share content, and to create "Personal Library" working spaces. There is also a demonstration of the system on YouTube.

Johan van der Knijff, from the Koninklijke Bibliotheek (National Library of the Netherlands), spoke about "JPEG 2000 for Long-Term Preservation in Practice: problems, challenges and possible solutions." He started off with an overview of the KB's investigations and use of JPEG 2000 (which started in 2007) and their current mass digitisation programme, which will see the digitisation of around 14m images. Johan highlighted a number of issues with JPEG 2000 that, although fairly minor in nature, should be addressed by either fixing deficiencies in the standard (particularly around colour profile support), or by changing the way software developers implement the standard (making sure that compressed files do indeed meet the standard). He stressed the importance of a strong user community and knowledge sharing as key to solving the remaining issues with the JPEG 2000 format.

Gary Hodkinson, Managing Director of LuraTech Ltd., gave a presentation entitled "Delivering High-Resolution JPEG2000 Images and Documents over the Internet." He provided a quick background of the company itself, which is based in Germany but has recently established a UK subsidiary. LuraTech's core business is document conversion and compression, and they supply a JP2 image compression tool called LuraWave. He gave an introductory background to image compression and image formats in general, what the key challenges are around compression, and how JPEG 2000 meets those challenges. He also gave further details on LuraWave, and the LuraTech Image Content Server, which works with JP2 to provide delivery of images to end users.

As the final speaker of the day, Katty van Mele, from IntoPIX, gave an informative talk on "Pros and cons of JPEG 2000 for video archiving". She covered a wide range of moving image applications for JPEG 2000, in the cinema, broadcasting and cultural heritage world. JPEG 2000 is the only format currently in use for digital cinema, while broadcasters are still working toward agreeing a suitable long-term format (JPEG 2000 being a leading contender). Katty stressed that the massive amount of moving image material already in existence, and continually being created, makes long-term storage and preservation a very serious problem. JPEG 2000 is increasingly seen as the solution to the storage problem, and to a number of other problems as well, such as the royalty payments currently required to use MPEG. IntoPIX provides solutions for converting JP2s, including hardware-based compressors that are orders of magnitude quicker than software-based compressors.

During the course of the day, delegates were asked to write questions down and post them on whiteboards to raise during the final session of the day: questions and answers, moderated by Ben Gilbert, Photographer at the Wellcome Library. Ben posed the questions to the audience, alternating between technical issues (such as "What is the difference between a tile and a precinct?") and more philosophical questions (such as "Is it really feasible to store master archive images as lossy compressed files?"). This stimulated a good amount of discussion, which is impossible to adequately capture in a blog post!

Many thanks go to William Kilbride of the DPC for putting all the presentations online.



JPEG 2000 seminar - edited highlights #1

The JPEG 2000 for the Practitioner seminar attracted a full house of 80+ delegates at the Wellcome Trust on 16 November.

The aim of the seminar was to look at specific case studies of JPEG 2000 use, to explain technical issues that have an impact on practical implementation of the format, and to explore the context of how and why organisations might choose to use JPEG 2000. You can follow the day as it unfolded on Twitter at #jp2k10.

Delegates were welcomed by Simon Chaplin, Head of the Wellcome Library, who briefly summarized the context of the Wellcome's digital library ambitions. I (Christy Henshaw) gave a quick introduction to the JP2K-UK group, and the origins of the seminar as one of the main outcomes from the group discussions. What follows is an edited highlights version of the talks given on the day; the full presentations are available on the DPC website.

The first talk, "What did JPEG 2000 ever do for us?" was given by Simon Tanner, Director of King's Digital Consultancy Service. The fact of the matter, according to Simon, is that although JPEG 2000 is "cool and froody", and has a lot to offer in terms of functionality and intelligent format design, those who use it are doing so because it can save them money. The economic benefits can not be underestimated for large scale digitisation - even though storage is relatively cheap these days, the total cost of owning a million images is quite high. Storing master files as JPEG 2000s can save an institution over £100,000 per year in terms of ongoing storage costs.

Richard Clark, Managing Director of Elysium Ltd., gave an overview of the JPEG 2000 standard, "JPEG 2000 Standardisation: A Practical Viewpoint." As the UK head of delegation to the JPEG Committee, Richard has been involved with developing the standard since its inception. Richard ran through the key features and functionality that can be achieved with the JPEG 2000 format (and its many parts), and explained the rationale behind the standard. He quoted the original objective, which was to develop an "architecturally based standard" that would enable flexibility for a wide range of uses, and he demonstrated that this was, in fact, achieved. Although JPEG 2000 has a lot to offer the cultural heritage industry, that industry has not been well represented on the standards committees.

The next hour was taken up with the "Profiles" session. Sean Martin, Head of Architecture and Development at the British Library, kicked off with a description of the JP2 profile (i.e. the specific parameter settings) to be used for the British Library's newspapers project. A key point here is that the British Library has opted for lossy compression for its archival masters, stating that "it is also desirable that the same master file support the needs for both long term archival and also access." I followed with a brief summary of the compression aspects of the Wellcome Library's profile (our JP2 profile is available online), and how we determine the right level of compression. Like the British Library, we use lossy compression for our archival masters, and will use the same file for providing access. Bedrich Vychodil presented the new JP2 profiles for the National Library of the Czech Republic that will soon come into force for a wide range of materials. In contrast to the British Library and the Wellcome, the Czech National Library will use a different, lossless, profile for their archival masters, and a lossy profile for their access files. Delegates were provided with a list of these parameter settings, as well as several others, available online.

Petr Zabicka spoke about "IIPImage and OldMapsOnline", a development project carried out by the Moravian Library in the Czech Republic that uses JPEG 2000 to display large images, in particular maps. The imaging server they have devised is based on IIPImage and uses the tiles encoded into the JPEG 2000 format to provide speedy access to portions of the image when zooming and panning. More uniquely, they have developed a georeferencing application that allows the user to match points on historic maps with those on Google maps, and to overlay - and correct - old maps using the Google maps API.


After a well-deserved lunch, delegates heard Svein Arne Brygfjeld from the National Library of Norway speak about "Implementing JP2K for Preserv..." (his title was abbreviated in order to fit a picture of a glacier on the slide, but I am led to believe the title ended with "...ation and access, experiences from the National Library of Norway"). The glacier provides a key to the talking point of Svein Arne's presentation - extremes. Located in the Arctic Circle, at Mo i Rana, the NLN is carrying out mass digitisation of newspapers and other materials, and has recently decided to store their master files as JPEG 2000 lossless files. Digitisation is such a large part of what the NLN does that around 30% of the workforce is involved in digitisation.

Stay tuned for more edited highlights covering the second half of the seminar...

October 18, 2010

JPEG 2000 seminar - draft programme now available

Places are still available on the JPEG 2000 seminar to be held at the Wellcome Trust on 16 November.

Draft programme with timetable and confirmed speakers:

09:00 Registration, coffee
10:00 Welcome, introduction
Christy Henshaw, Wellcome Library, Chair of JP2K-UK

Morning session Chair: William Kilbride, Executive Director, Digital Preservation Coalition
10:10 What did JPEG 2000 ever do for us?
Simon Tanner, Director, Kings Digital Consultancy Service
10:40 JPEG 2000 standardization - a pragmatic viewpoint
Richard Clark, UK head of delegation to JPEG and MD of Elysium Ltd.
11:10 JPEG 2000 profiles
Five ten-minute presentations moderated by Sean Martin, Head of Architecture and Development, British Library
12:10 IIPImage and OldMapsOnline
Petr Zabicka, Head of R&D, Moravian Library, Czech Republic

12:40 LUNCH

Early afternoon session Chair: Dave Thompson, Digital Curator, Wellcome Library
13:40 JP2K for preservation and access, experiences from the National Library of Norway
Svein Arne Brygfjeld, National Library of Norway
14:10 Web presentation of JPEG 2000 images
Sasa Mutic, Geneza and Ivo Iossiger, 4DigitalBooks, Switzerland
14:40 JPEG 2000 for long-term preservation in practice: problems, challenges and possible solutions
Johan van der Knijff, Koninklijke Bibliotheek (NL)

15:10 Coffee

Late afternoon session Chair: Simon Tanner, Director, Kings Digital Consultancy Service
15:40 Delivering High-Resolution JPEG2000 Images and Documents over the Internet
Gary Hodkinson, MD of Luratech Ltd.
16:10 Pros and Cons of JPEG 2000 for video archiving
Katty van Mele, IntoPIX
16:40 Questions and discussion
Moderated by Ben Gilbert, Photographer, Wellcome Library

17:10 Concluding remarks

October 01, 2010

Guest post: Examining losses, a simple Photoshop technique for evaluating lossy-compressed images

Bill Comstock, Head of Imaging Services at Harvard College Library, writes a second post about using Photoshop to evaluate lossy compressed images.

If you decide to employ JPEG2000’s lossy compression scheme, you will also have to determine the degree to which you are willing to compress your files; you’ll have to work to identify that magic spot where you realize a perfect balance between file size reduction and the preservation of image quality.

Of course, there is no magic spot, no perfect answer -- not for any single image, and certainly not for the large batches of images that you will want to process using a single compression recipe. Regardless of whether you decide to control the application of compression by setting a compression ratio, by using a software-specific “quality” scale, or by specifying a signal-to-noise ratio, you will want to test a variety of settings on a range of images, scrutinize the results, and then decide where to set your software.

Below I describe a Photoshop technique for overlaying an original, uncompressed source image with a compressed version of the image to measure the difference between the two, and to draw your attention to regions where the compressed version of the image differs most significantly from the source image. Credit for the technique belongs to Bruce Fraser.

1. First, open up the two images that you want to compare (the original source image, and the compressed JP2 derivative) in Photoshop.


2. Next, go to the “Image” menu and select “Apply Image”.



3. Set Blending to “Subtract”; Scale to “1”; and Offset to “128”.


4. The differences between the two images are now visible (you may need to magnify the image beyond 100%), and the standard deviation between the two copies can be displayed on the Histogram panel.

(A standard deviation of zero indicates that the two copies are identical and that the compressed version was losslessly compressed.)



Another option: You can also create a two-layer image in Photoshop where one layer is the source image and the second layer is the compressed copy, and set the blending option to “difference”. You may find the technique described in detail above preferable, if only because it makes the variance between the two copies more easily visible by shifting the pixel-to-pixel differences into the middle gray region.
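
For batches of images the same comparison can also be scripted rather than done by hand in Photoshop. The sketch below is one way to do it in Python with Pillow and NumPy; it assumes Pillow has JPEG 2000 support, and the file names are placeholders. It computes the per-pixel differences, reports their standard deviation, and saves a viewable version with the same 128 offset used in the Photoshop recipe.

import numpy as np
from PIL import Image

# Load the original and the decoded compressed copy as signed arrays.
orig = np.asarray(Image.open("original.tif").convert("RGB"), dtype=np.int16)
comp = np.asarray(Image.open("compressed.jp2").convert("RGB"), dtype=np.int16)

diff = orig - comp
print("standard deviation of differences:", diff.std())   # 0 means lossless

# Equivalent of the Subtract blend with offset 128: shift the differences
# into the middle grey range so they are easy to see on screen.
visible = np.clip(diff + 128, 0, 255).astype(np.uint8)
Image.fromarray(visible).save("difference.png")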

Within the group that I manage, we modulate compression using PSNR. We test each candidate setting on a large number of images and then examine some number of the least and most compressed images in the set. We repeat the process until we have zeroed in on what seems to be the best setting.

Good luck!

September 20, 2010

Calling all JPEG 2000 profiles

We plan to provide delegates of our JPEG 2000 Seminar (16 Nov) with a list of JPEG 2000 profiles from a number of organisations who are currently using the format. Some of these will be briefly presented during a "Profiles" session on the day.

Do you have a profile you are currently using, and would like to distribute to your peers? If so, please send the following details to Christy at c.henshaw@wellcome.ac.uk.

The specific information we're looking for includes:

Used for: (e.g. Newspapers)
Conversion software used: (e.g. Kakadu)
File format: (e.g. Part 1 (.jp2))
Lossy or lossless: (choose)
Typical compression: (expressed as a ratio)
Tiling: (e.g. 1024 x 1024)
Progression order: (e.g. RPCL)
No. of decomposition levels: (e.g. 6)
Number of quality layers: (e.g. 12)
Code block size (xcb = ycb): (e.g. 6)
Transformation: (e.g. 9-7 irreversible filter)
Precinct size: (e.g. 128 x 128)
Regions of interest: (yes or no)
Code block size : (e.g. 64 x 64)
TLM markers: (yes or no)

September 14, 2010

New Wellcome Digital Library blog

The Wellcome Library has launched a new blog (wellcomedigitallibrary.blogspot.com/), centered on the development of the Wellcome Digital Library. The blog will be "a real-time progress report, discussion outlet, and notification area."

This JPEG 2000 blog will remain focused specifically on the work being done around JPEG 2000 at the Library, but the Wellcome Digital Library blog will provide much broader information on the programme, including:

  • What will be digitised, and how the content will be of use to researchers.
  • How we will facilitate research activity, learning, and discovery.
  • Logistics of digitisation and workflows.
  • In-house vs. outsource options.
  • Metadata.
  • Long-term data management.
  • Delivery formats, speeds, and functions.

September 10, 2010

Guest post: JPEG2000 recipes for the Aware encoder

As our first guest poster, Bill Comstock, Head of Imaging Services at Harvard College Library, writes about the specific "recipes" used at Harvard for producing JPEG2000 images.

I needed help remembering when it was that we began making JPEG2000 images. I ran a search against the Harvard Library’s preservation digital repository, DRS, and it looks like we first deposited a JP2 image in 2004.

Over the intervening six years, we’ve refined and settled on a single recipe that we use to produce lossy-compressed images, and another that we use to produce losslessly-compressed JP2s. I’ll share these recipes with you below. My reasons for sharing are twofold:

1) Depending upon the software used, there are many encoding combinations to consider - many more than one would have to consider when cooking up the more familiar and less complex TIFF. My group uses the Aware JPEG2000 SDK encoder. Flipping through Aware’s 330 page manual (“AccuRad J2KSuite Developer’s Guide”), I count...152 different command line options. In sharing our recipes (the combination of options and parameters that we invoke), I’d like to speed others along in developing their own encoding formulations and JP2 production workflows.

2) Not only are there many encoding options to consider, but some are complex and a bit intimidating. Honestly, I don’t know what the “--set-input-raw-channel-subsampling” or “--set-output-j2k-rd-slope” options do. I do think that I understand the options that we use, but I may hear from one of you that my understanding is flawed and that our recipe could be improved upon, or at least better understood and explained. Here you go.

Lossless encoding (Windows command line)

j2kdriver.exe --set-input-image <input file> --set-output-j2k-color-xform YES --set-output-j2k-error-resilience ALL --wavelet-transform R53 --set-output-j2k-bitrate 0 --set-output-j2k-progression-order RLCP --tile-size 1024 1024 --output-file-type JP2 --output-file-name <output file>

Notes on individual options:

  • “--set-input-image <input file>” reads the file into the encoder's memory buffer and auto-detects the input file format
  • “--set-output-j2k-color-xform YES” I believe that one does not need to call this option explicitly; YES seems to be the default value. The transform referred to here is a colorspace transformation from RGB to YUV. This conversion is made prior to compressing the data. Applying compression to YUV data is more efficient, yielding smaller files than the same compression applied to the unconverted RGB data.
  • “--set-output-j2k-error-resilience ALL” This function will take the following parameters, each explained below by Aware’s Alexis Tzannes:
  • SOP to enable Start of Packet markers
  • EPH to enable End of Packet Header markers
  • SEG to enable segmentation symbols
  • ALL to enable all of the above
  • NONE to disable all of the options
  • Resynchronization markers: Start of Packet (SOP), End of Packet Headers (EPH). These are used to signal the beginning of each packet and the end of each packet header, and can be used to resynchronize in the case of missing or corrupted data. This allows the decoder to detect and discard entire corrupted packets. So the SOP and EPH are basically tags that signal the beginning and end of a packet (a piece of coded data) in the file. If a packet gets corrupted the error resilient decoder can resync using the next SOP packet marker. The idea here is that if one packet in a file is bad, we don't lose everything that comes after it (as was the case with original JPEG). With JPEG 2000, you could lose a packet and have a small area of the image go bad, but the decoder can recover and keep on decoding.
  • Segmentation symbols: this adds a special four symbol code to specific locations in the compressed data stream, enabling error resilient decoders to detect errors, if this symbol is corrupted. This allows the decoder to detect and discard corrupted bitplanes. So this is similar, but less granular, as it operates at the bitplane level, each bitplane may include multiple packets. Overall, these features would be useful in noise prone environments or over unreliable networks.
  • “--wavelet-transform R53” specifies use of the reversible "integer 5-3 filter" (compression) to produce a losslessly encoded JPEG2000 file.
  • “--set-output-j2k-bitrate 0” Quoting from the “AccuRad J2KSuite Developer’s Guide”: “Sets the output image bitrate, in bits per pixel. A bitrate of 0 indicates that all the quantized data should be included in the image. This creates lossless images if the R53 wavelet is chosen [...].”
  • “--set-output-j2k-progression-order RLCP” “For a given tile, the packets contain data from a specific layer, a specific component, a specific resolution, and a specific precinct. The order in which these packets are interleaved is called the progression order. The interleaving of the packets can progress along four axes: layer, component, resolution and precinct.” [1] A progression order that begins with “R” (resolution) indicates that the data is organized so that low resolution information will be decoded first, followed and augmented by the remaining higher resolution data in the codestream.
  • “--tile-size 1024 1024” This tile size was prescribed, as it was said to be optimally matched to the software our library uses to dynamically generate and deliver JPEG files from stored JP2 masters.

Lossy encoding

j2kdriver.exe --set-input-image-file <input file> --set-output-j2k-color-xform YES --set-output-j2k-error-resilience ALL --wavelet-transform I97 --set-output-j2k-progression-order RLCP --set-output-j2k-psnr 46 --tile-size 1024 1024 --output-file-type JP2 --output-file-name <output file>

  • “--wavelet-transform I97” specifies use of the “irreversible 9-7 filter” to produce a lossy encoded JPEG2000 file.
  • “--set-output-j2k-psnr 46” The pSNR function was selected because it effectively, although imperfectly, modulates the level of compression applied to each image based on the image's particular characteristics: the arrangement and variability of raster values. When we set the pSNR value to 46 dB for the page-images that we create, we've come to expect a very high-quality encoded image. There are cases (certain kinds of photographs, illustrations with fine thin lines) where a 46 dB setting would produce a too heavily compressed file. To guard against over-compression, we have developed an effective (although a bit crude) method for dynamically resetting the dB value if the file appears to have been too heavily compressed. This is a topic for another day. (A sketch of how PSNR itself can be computed appears after this list.)
  • I believe that most users set a fixed compression ratio, e.g., “--set-output-j2k-ratio <ratio>”.
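
Not part of Bill's recipe, but since the 46 dB figure may look opaque: PSNR simply compares the compressed image with the original on a logarithmic scale, and it is easy to compute for yourself. Here is a minimal sketch in Python with Pillow and NumPy; the file names are placeholders, and for 8-bit images the peak value is 255.

import numpy as np
from PIL import Image

def psnr(original_path, compressed_path, peak=255.0):
    # Peak signal-to-noise ratio, in dB, for two same-sized 8-bit images.
    a = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(compressed_path).convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")   # identical pixel data, i.e. lossless
    return 20 * np.log10(peak) - 10 * np.log10(mse)

print(psnr("original.tif", "compressed.jp2"))   # compare against the 46 dB target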

Creating a Windows batch file

If you would like to run your encoding script over a directory of TIFF images (for example), you can create a simple batch file.

Example:

for %%f in (*.tif) do j2kdriver.exe --set-input-image-file "%%f" --set-output-j2k-error-resilience ALL --set-output-j2k-progression-order RLCP --set-output-j2k-ratio 8 --tile-size 1024 1024 --output-file-type JP2 --output-file-name "%%~nf.jp2"

Good luck putting together your own recipes and workflows. Again, please let me know if you have any suggestions for improving our practice.

[1] ISO/IEC JTC1/SC29 WG1, JPEG 2000 Editor Martin Boliek, Co-editors C. Christopoulos and E. Majani. "JPEG 2000 Part I Final Committee Draft Version 1.0.", 2000, http://www.jpeg.org/public/fcd15444-4.pdf (accessed September 3, 2010).

Bill Comstock
Head, Imaging Services
Harvard College Library
Widener, D70C
Harvard Yard
Cambridge, MA 02138
http://imaging.harvard.edu/

August 27, 2010

JPEG 2000 for the practitioner - free one-day seminar

This recently announced call for papers/registration may be of interest to our readers:

JPEG 2000 for the practitioner - a one-day seminar

A free seminar to explore and examine the use of JPEG 2000 in the cultural heritage industry will be held at the Wellcome Trust. The seminar will include specific case studies of JPEG 2000 use. It will explain technical issues that have an impact on practical implementation of the format, and explore the context of how and why organisations may choose to use JPEG 2000. Although the seminar will have an emphasis on digitisation and digital libraries, the papers will be relevant to a range of research and creative industries. Places are limited to 80 attendees. Papers will be made available online after the event.

Tuesday 16 November 2010
9am - 5pm
Wellcome Trust, 215 Euston Road, London, UK

This seminar is hosted by the JPEG 2000 Implementation Working Group and the Wellcome Library.

Contributors: please submit the title and a brief abstract of your proposed paper and a bio of the speaker/s to c.henshaw@wellcome.ac.uk by October 4, 2010.

Delegates: if you would like to attend please email your name and the name of your institution to c.henshaw@wellcome.ac.uk by 1 November, 2010.

August 24, 2010

Determining rates of JPEG 2000 compression on a collection-by-collection basis

As a result of our decision to "go lossy", we need to make sure that the level of lossiness is appropriate to the image content. We can't do this on the individual image level, as there are simply too many images. But we can do this on the collection level. We came up with a rule of thumb:

1. For any given collection of physical formats we will apply a range of different compressions on a representative sample from that collection. We will continue compressing at regular intervals (i.e. 2:1, 4:1, 6:1, and so on) until visual artefacts begin to appear on any individual image.
2. Once we have determined at which compression level the worst-performing image begins to show visual artefacts, we will choose the next lowest compression level (if the worst-performing image showed artefacts at 10:1, we would choose 6:1) and apply that to the entire collection, regardless of how much more compression other material types in that collection might bear.
This rule of thumb allowed us to strike a balance between storage savings and the time and effort in assessing compression levels for a large number of images.

The first "real life" test of this methodology was carried out in relation to our archives digitisation project. We are currently digitising a series of paper archives (letters, notebooks, photos, invitations, memos, etc.) in-house. The scope runs to something like half a million images over a couple of years, and includes the papers of some notable individuals and organisations (Francis Crick being the foremost of these). Archives can be quite miscellaneous in the types of things that you find, but different collections within the archives tend to contain a similar range of materials. This presents a problem if you want to treat images differently depending on their content. The photographer doesn't know, from one file of material to the next, what sort of content they will be handling. So even for a miscellaneous collection, once the image count gets high enough, you have to make the compromise by taking a collection-level decision on compression rates.

For archival collections we needed to test things like faint pencil marks on a notebook page, typescript on translucent letter paper, black and white photos, printed matter, newsprint, colour drawings, and so on. We chose 10 samples for the test. As this was our first test, and we were curious just how far we could go for some of the material types in our sample, we started with 1:1 lossy compression and increased this to 100:1. We used LuraWave for this testing.

For the archives, the compression intervals were: 1:1 lossy, 2:1, 4:1, 6:1, 10:1, 25:1, 50:1, and 100:1. The idea is that at 2:1, the compression will reduce the file size by half in comparison to the source TIFF, and so on.
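
A small script can take some of the drudgery out of running these trials. The sketch below is one possible approach in Python using Pillow built with OpenJPEG (our actual tests used LuraWave, and the sample file name and ratios here are just examples): it compresses one image from the sample at each target ratio and reports the resulting file size and percentage reduction against the source TIFF.

import os
from PIL import Image

src = "sample.tif"   # one image from the representative sample
ratios = [1, 2, 4, 6, 10, 25, 50, 100]

tiff_size = os.path.getsize(src)
image = Image.open(src)

for ratio in ratios:
    out = "sample_%dto1.jp2" % ratio
    # quality_mode="rates" asks the encoder for a target compression ratio;
    # irreversible=True selects the lossy 9-7 wavelet transform.
    image.save(out, quality_mode="rates", quality_layers=[ratio], irreversible=True)
    jp2_size = os.path.getsize(out)
    reduction = 100 * (1 - jp2_size / float(tiff_size))
    print("%d:1  %.2f MB  (%.0f%% smaller than the TIFF)"
          % (ratio, jp2_size / 1e6, reduction))

Note that the requested ratio is applied to the uncompressed image data rather than to the TIFF on disk, which is consistent with what we observed below: the actual reduction against the TIFF is usually much larger than the nominal ratio suggests.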

Not surprisingly, the biggest drop in file size was seen in converting from TIFF to JPEG 2000 in the first place. At a 1:1 compression rate, this reduced the average file size by 86% (ranging from 67% to 95%). A 2:1 compression setting resulted in no noticeable drop in file size from 1:1 - raising the question of what difference there could possibly be between the 1:1 and 2:1 settings in the LuraWave software. At the average file size (5 MB) at this compression (2:1), a 500,000 image repository (our estimate for the archives project) would require 2.4 TB of storage. These averages are somewhat misleading, because while they represent a spread of material, they do not represent the relative proportions of this material in the actual collection as a whole (and we can't estimate that yet).

File size reduction was relatively minimal between 2:1 and 10:1. What is obvious here is that setting the compression rate at, say, 2:1 does not give you a 2:1 ratio. You can in fact achieve a 14:1 ratio or higher. An interesting point about the very high experimental compression rates of 25:1 and above was that output file sizes were essentially homogeneous across all the images, whereas at 10:1 and lower, file sizes ranged from 1.5 MB to 11.5 MB.

TIFF = 35 MB
1:1/2:1 = 4.96 MB (86% reduction)
4:1 = 4.56 MB (87% reduction)
6:1 = 3.89 MB (89% reduction)
10:1 = 2.87 MB (92% reduction)
25:1 = 1.39 MB (96% reduction)
50:1 = 0.72 MB (98% reduction)
100:1 = 0.37 MB (99% reduction)

We found that the most colourful images in the collection (such as a colour photograph of a painting) performed the worst, as expected, and started to show artefacts at 10:1. These were extremely minor artefacts, but they could be seen. Other material types were impossible to differentiate from the originals even at 50:1 or 100:1, surprisingly. These tended to be black and white textual items. Using our rule of thumb, we chose 6:1 lossy compression for the archive collections. Were an archive to consist solely of printed pieces of paper, we would reassess and choose a higher compression rate, but an 89% reduction was highly acceptable in storage savings terms.

You may ask: why not just use 1:1 across the board? Is the extra saving actually worth it? Viewed in comparison to the 1:1 setting, we were getting a better than 20% reduction at 6:1 on average. This continues to represent a significant storage saving when you consider the ultimate goal is to digitise around 3.5 million images from the archive collections. Bearing in mind all the other collections we plan to digitise in future (up to 30m images), the savings become further magnified if we strive to reduce file sizes within the limits of what is visually acceptable.
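
To put a rough figure on that saving: using the averages above, the difference between the 1:1/2:1 files (4.96 MB) and the 6:1 files (3.89 MB) is about 1.07 MB per image, so across 3.5 million archive images that is roughly 3.7 TB of storage avoided, before you even consider the further collections in prospect.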

There are a couple of follow-on questions remaining from all this: first, what size original should you begin with? And secondly, is it possible to automate compression using a quality measure (such as peak signal-to-noise ratio) that allows you to compress different images at different rates depending on an accepted level of accuracy? These will be the subjects of future posts.

August 13, 2010

The JPEG2000 problem for this week

JPEG2000 isn’t the easiest of formats to disseminate. Browsers typically handle the format with difficulty and require plugins or extensions to render it. We don’t want our users to have to download anything just to be able to view our material on-line. So, we plan to convert our JPEG2000 files to a browser friendly JPEG or PDF for dissemination. Both formats are handled admirably by browsers. (OK, PDF needs an Adobe plugin but it's commonly included with browsers.) Other formats may come along later. The thing is, how do we do that conversion? There are plenty of conversion tools out there – we use LuraWave for the image conversion. But then the question becomes: when do we convert from a master to a dissemination format? Especially if we want speedy delivery of content to the end user.

One of the guiding principles behind our decision to use JPEG2000 was that we could reduce our overall storage requirements by creating smaller files than we might have done if we’d used, say, TIFF. So if we automatically convert every JPEG2000 to a low res thumbnail JPEG, a medium res JPEG, a high res JPEG and a PDF then we’re back to having to find storage for these dissemination files. OK, JPEG won’t consume terabytes of storage and nor will PDF, but we’d need structured storage to keep track of each manifestation, and metadata to tell our front end delivery system which JPEG to use in which circumstances. True, this has been done very successfully for many projects before now, but alongside efficiency of storage we want efficiency in managing what we have stored, and speedy delivery.

So we plan to convert JPEG2000 to JPEG or PDF on-the-fly at the time each image is requested. The idea is that we serve JPEG2000 images out of our DAM to an image server, the image is converted and the dissemination file served up. Instead of paying for large volumes of static storage we believe that putting the saving on storage into a fast image server will directly benefit those who want to use our material online.

One outcome of a conversation with DLConsulting is that we've learned that on-the-fly conversion is a potentially system intensive (and at worst inefficient) activity that could create a bottleneck in the delivery of content to the end user. We've said that speed is an issue. We need to efficiently process the tiled and layered JPEG2000 files we plan to create. A faster, more powerful image server may help, but good conversion software will be key. Alongside on-the-fly conversion we plan to use a cache that would hold, in temporary storage, the most requested images/PDFs. The cache would work something like this. It has a limited size/capacity and contains the most popular/most often requested images/PDFs. If an image/PDF in the cache were not requested for n amount of time it would be removed from the cache. In practice, a user requests a digitised image of a painting; the front end system queries the cache to see if the image is there; if it is, it is served directly and swiftly to the user. If not, the front end system calls the file from the back end DAM. The DAM delivers that image to the image server, which converts the JPEG2000 to JPEG and places that image in the cache, from where it can be passed to the front end system and the end user. Smooth, fast and efficient in the use of system resources.
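
To make that flow concrete, here is a minimal sketch of the request path in Python. Everything in it is illustrative: the cache directory, the time limit, the DAM lookup function and the use of Pillow as a stand-in for the image server conversion step are all assumptions, and a real system would also need to enforce a cache size limit.

import os
import time
from PIL import Image

CACHE_DIR = "/var/cache/dissemination"   # hypothetical cache location
CACHE_TTL = 30 * 24 * 3600               # drop items not requested for 30 days

def get_dissemination_copy(image_id, dam_lookup):
    # Return the path to a browser-friendly JPEG for the requested image.
    jpeg_path = os.path.join(CACHE_DIR, image_id + ".jpg")
    if os.path.exists(jpeg_path):
        os.utime(jpeg_path)              # note that it was requested again
        return jpeg_path                 # cache hit: serve directly
    jp2_path = dam_lookup(image_id)      # cache miss: fetch the master from the DAM
    # Convert on the fly (stand-in for the image server / LuraWave step).
    Image.open(jp2_path).save(jpeg_path, "JPEG", quality=85)
    return jpeg_path

def evict_stale_items():
    # Remove cached copies that have not been requested for CACHE_TTL seconds.
    cutoff = time.time() - CACHE_TTL
    for name in os.listdir(CACHE_DIR):
        path = os.path.join(CACHE_DIR, name)
        if os.path.getatime(path) < cutoff:
            os.remove(path)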

But there are still questions. If we pass the JPEG2000 to the image server for conversion to JPEG that’s fine; but what happens next? Is the JPEG2000 discarded after the conversion process, leaving only the JPEGs? Is this the best way to support the zooming in on image sections that we want to offer? The original proposal was to hold only dissemination formats in the cache; now we’re thinking that, for flexibility, we may prefer to hold the JPEG2000 images and convert them as the image is requested by a user. Is this still the most efficient process? It's easy to build bottlenecks into a system that slow processes down, and much more difficult to design a system for speed and efficiency. We’re pretty certain that conversion on-the-fly is a good idea, and we think the cache is too. Unless you know differently….

July 21, 2010

Future migration of JPEG2000

Those of us who work with digital assets know that one day we’ll face format obsolescence. The formats we have in our care will no longer be rendered by the applications that created them or by readily obtainable alternatives. This applies to all formats, not just JPEG2000. As JPEG2000 is a relatively new and untried format, planning for its long-term management will require some work.

The key challenge with migration as a strategy is not deciding how to carry out the migration but how to identify and maintain the significant properties of the format being migrated. The danger is that some property of the format may be lost during the process. The biggest fear with images is that quality will deteriorate over time. This loss of quality, whilst insignificant in the initial migration, may have a detrimental, cumulative and irreversible effect over time.

So, do we have a plan for the future migration of obsolete JPEG2000 files? No, we do not. We are still trying to develop the specifications for the types of JPEG2000 that we want to use. Beyond the pale or not, we have accepted that our images will be lossy. What we are trying to do is create JPEG2000 images that are consistent, have a minimal range of compression ratios and have as few variations in technical specification as we can provide for. As a start this will make long-term management simpler, but we are aware that we still have a way to go.

Our promotion of JPEG2000 as a format will hopefully make it more widely accepted, and therefore attract more research into possible migration options. We’re pleased to see that individuals and organisations have already been thinking about the future migration of JPEG2000. The development of tools such as Planets in recent years has been a great step forward in supporting decision making around the long-term management of formats.

Obsolescence is not something totally beyond our control. We are free to decide when obsolescence actually occurs, when it becomes a problem we need to deal with, and, with proper long-term management strategies, how we plan to migrate from obsolete formats to current ones. The choice of JPEG2000 as a master format supports this broader approach to data management.

The long term management of JPEG2000 as a format is part of our overall strategy for the creation of a digital library. Ease of use, the ability to automate processes and the flexibility of JPEG2000 have all been factors in our decision to use the format.

We’re clear that the choice we have made in the specification of our JPEG2000 images is a pragmatic one. It’s also clear that the decision to use JPEG2000 in a lossy form has consequences. However, we have a format that we can afford to store and one that offers flexibility in the way that we can deliver material to end users. For us this balance is important, probably more important than any single decision about one aspect of a format’s long-term management.

July 13, 2010

Lossy v. lossless compression in JPEG 2000

The arguments for and against using JPEG 2000 lossy files for long-term preservation are largely centred around two issues: 1) that the original capture image is the true representation of the physical item, and therefore all the information captured at digitisation should be preserved; and 2) that lossy compression (as opposed to lossless compression) will permanently discard some of this important information from the digital image. Both of these statements can be challenged, and the Buckley/Tanner report went some way to doing this.

The perceived fidelity of the original captured image is the root of the attachment to lossless image formats. As cameras have improved, so has the volume of information captured in the RAW files. This volume of information has of course improved the visual quality and accuracy of the images, but at the cost of inflated file sizes. A high-end dSLR camera will produce RAW files of around 12 MB. A RAW file produced by a medium-format camera may be 50 MB or higher. As RAW files can only be rendered (read) by the proprietary software of the camera manufacturer (which may include plugins for third-party applications like Photoshop), they cannot be used for access purposes and, being proprietary, are not a good preservation format. They must be converted to a format suited to long-term management, and this has usually been TIFF. When a RAW file is converted to a TIFF, file sizes can increase dramatically depending on the bit depth chosen, because RGB values are interpolated for each pixel captured in the RAW file. This bloats the storage requirements by two to four times.
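A back-of-the-envelope calculation makes the inflation concrete; the pixel count and bit depths below are illustrative, not measurements of our own files.

    # Sketch: rough uncompressed sizes for a 12-megapixel image.
    MEGAPIXELS = 12_000_000
    raw_mb    = MEGAPIXELS * 12 / 8 / 1_000_000       # one 12-bit Bayer sample per pixel: ~18 MB
    tiff8_mb  = MEGAPIXELS * 3 * 8 / 8 / 1_000_000    # interpolated 8-bit RGB TIFF: ~36 MB
    tiff16_mb = MEGAPIXELS * 3 * 16 / 8 / 1_000_000   # interpolated 16-bit RGB TIFF: ~72 MB
    print(raw_mb, tiff8_mb, tiff16_mb)                # roughly a two- to four-fold increase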

However, image capture, and the subsequent storage of large images, is expensive, and we don't want to have to redigitise objects if we can possibly avoid it - particularly for large-scale projects. So, how much of a compromise is lossy compression, and is it really worth it? The question is: what information are we actually capturing in our digital images? Do we need all that information? Is any of it redundant?

First - the visual fidelity issue. Fidelity to what information? The visual appearance of a physical item as defined by one person in a particular light? The visual appearance as perceived through a specific type of lens? All the pixels and colour information contained in the image as captured under particular conditions? No two images taken through the same camera even seconds apart will look the same due to distortions caused by the equipment, and, possibly, noise levels. What makes any particular pixel the original representation, or the most accurate, or indeed at all important?

Lossy compression will permanently discard data. What is necessary is to determine - for any given object, set of objects, or purpose - what information is actually useful and necessary to retain. We already balance these decisions at the capture stage. Choosing to use a small-format camera immediately limits the amount of information that can be detected by the camera sensor. Choosing one lens over another introduces a slightly different distortion. Compression also represents a choice between what you can capture and what you actually need. One may not need all the information that has been captured; some of it may be redundant. A lot of it may be redundant. And the point of JPEG 2000 is that it is very good at removing redundant information.
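To give a feel for what that removal of redundancy means in practice, here is a small sketch (again Python with Pillow built against OpenJPEG; the file name and the 10:1 ratio are illustrative) that writes a lossless and a lossy JPEG 2000 from the same TIFF and compares file sizes.

    # Sketch: lossless (reversible wavelet) versus lossy JPEG 2000 output.
    import os
    from PIL import Image

    src = "master.tif"
    img = Image.open(src)
    img.save("lossless.jp2", irreversible=False)            # nothing discarded
    img.save("lossy.jp2", irreversible=True,
             quality_mode="rates", quality_layers=[10])     # ~10:1, discards redundant detail
    for path in (src, "lossless.jp2", "lossy.jp2"):
        print(path, os.path.getsize(path) // 1024, "KB")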

At the Wellcome Library, the aim of our large-scale digitisation projects is to provide access. We do not want to redigitise in the future, but we do not see the digital manifestations as the "preservation" objects. The physical item is the preservation copy, whether that is a book, a unique oil painting, or a copy of a letter to Francis Crick. For us, the important information captured in a digital manifestation is its human-visible properties. Images should be clear and in focus; details visible on the original should be visible in the image (so it must be large enough to show quite small details); colour should be consistent and as close as possible to the original viewed in daylight conditions; and there should be no visible digital artefacts at 100%. This is the standard for an image as captured.

We are striking a balance. Can we compress this image and retain all these important qualities? Yes. Do we need to retain information that doesn't have any relevance to these qualities? No. Lossy compression works for us. Using these qualities as a basis, we set out a testing strategy to determine how much compression our images could withstand.

To be continued...

July 06, 2010

Finding a JPEG 2000 conversion tool

It should be stated straight away that we don't have any programming capacity at the Wellcome Library (or the Wellcome Trust, our parent company). We don't do any in-house software development, and we don't use open source software much as a result. When it comes to using and creating the JPEG 2000 file format, this immediately limited our options regarding what tools we could use. Imaging devices do not output JPEG 2000, and even if they did, we would prefer to convert from TIFF to allow us full control over the options and settings. To achieve this, we needed a reliable file conversion utility.

Richard Clark, as discussed in a previous blog post, presented a number of major players providing tools for converting images to JPEG 2000. Of this list, only two offered a graphical user interface (GUI): Photoshop and LuraWave. The other tools, such as Kakadu, Aware, Leadtools and OpenJPEG, are available as software development kits (SDKs) or binary files and require development work in order to use them.

We tested Photoshop and LuraWave with a range of images representing material from black and white text to full-colour artworks. We attempted to set options in both products as closely as possible to the Buckley/Tanner recommendations. We tested compression levels as well, but this is the subject of a future posting.

Photoshop began supporting JPEG 2000 with CS2. The plugin - installed separately from the CD - allows the user to view, edit and save JPEG 2000 files as jpx/jpf (extended) files (although these can be made compatible with jp2, so that programs which only work with jp2 can still open a .jpx). This version provided a number of options - tile sizes, embedding metadata, and so on - but was limited. In CS3, the plugin changed: it used Kakadu to encode the image and appeared to create a "proper" jp2 file. This version got us much closer to the Buckley/Tanner recommendations. CS4 removed the plugin from the installation altogether, requiring the user to download it from the Photoshop downloads website as part of a batch of "legacy" plugins. CS5, however, now includes the plugin as part of the default install. CS5 became available this summer, so we have not had a chance to investigate this version of the plugin, but the user guide mentions JPEG 2000 in its final section and, as before, the plugin saves jpx/jpf files as standard.

It is good news that Photoshop is now including the plugin as standard. However, as the previous versions of the plugin were so variable, and the implementation so non-standard, it became clear that for the time being use of Photoshop is too risky for a large-scale programme. We need flexibility in setting options, images that conform to a standard, and long-term consistency in the availability of the tool and the options it provides.

LuraWave, developed by a German company called LuraTech, provided the GUI we needed, so it was the obvious choice for testing. We obtained a demo version and, using the wide range of options available, we seemed able to meet the Buckley/Tanner recommendations in their entirety. We did, however, come across two issues with this software.

Firstly, we found that with our particular settings (including multiple quality levels and resolution layers, etc.), the software created an anomaly in the form of a small grey box in certain images where a background border was entirely of a single colour (in our case, black). It was reproducible. We immediately notified the suppliers, who investigated the bug, fixed it, and sent us a new version in a matter of days. The grey boxes no longer appeared.

Secondly, when we characterised our converted images with JHOVE, we found that the encoding was in fact jpx/jpf wrapped in a jp2 format. We went back to the suppliers, who informed us that our TIFFs contained an output ICC profile that was incompatible with their implementation of jp2: the tool was programmed to encode to jpx/jpf when an output ICC profile was detected. This was a bit of a blow - we use Lightroom to convert our raw images to TIFF, and Lightroom automatically embeds an ICC profile. We would either have to strip the ICC profiles from our images before conversion, or the software would need to accommodate us.
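As an aside, the brand a converter has actually written can be read straight from the File Type ('ftyp') box that follows the 12-byte JPEG 2000 signature box, without running a full characterisation tool. A minimal sketch in Python; the file name is hypothetical.

    # Sketch: report the brand ('jp2 ' or 'jpx ') and compatibility list of a
    # JPEG 2000 file by parsing its File Type box.
    import struct

    def jp2_brand(path):
        with open(path, "rb") as f:
            signature = f.read(12)                 # 12-byte JPEG 2000 signature box
            assert signature[4:8] == b"jP  ", "not a JP2-family file"
            length, box_type = struct.unpack(">I4s", f.read(8))
            assert box_type == b"ftyp", "no File Type box where expected"
            brand = f.read(4)
            f.read(4)                              # minor version, ignored here
            compat = [f.read(4) for _ in range((length - 16) // 4)]
            return brand, compat

    brand, compat = jp2_brand("converted.jp2")
    print("brand:", brand, "compatible with:", compat)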

Happily, LuraTech were able to re-programme the conversion tool to force jp2 encoding (ignoring the ICC profile), with an option to allow it to encode to jpx/jpf if an ICC profile is detected (see the screenshot of the relevant options below). We have now purchased this revised version, and will soon be integrating JPEG 2000 conversion into our digitisation workflow. Of course, all this talk of ignoring ICC profiles and so on leads us to some issues around colour space and colour space metadata in JPEG 2000. We also had an interesting experience using JHOVE, which we will talk about soon. Watch this space!

UPDATE July 2010: In order to ignore the ICC profile, an additional command has to be added to the command line, as shown in the following images:

[Screenshots: LuraWave command-line options for ignoring the ICC profile]

June 25, 2010

JPEG 2000 workshop with Richard Clark

In the wake of taking on board the recommendations from the Buckley/Tanner report (see a previous blog post), we needed to start looking at how we would actually create these JPEG 2000s as part of our digitisation workflow. As the JP2K-UK group meeting showed us, there is not a lot of knowledge in our industry regarding the tools we could use - not only for creating the JPEG 2000s in the first place, but also for managing, displaying and converting them back into other (browser-friendly, for example) formats. We knew of a few tools, but wanted a more thorough understanding of the possibilities.

We turned to software engineer Richard Clark, who was deeply involved in the JPEG Committee and has worked on JPEG 2000 technology. Richard is based in the UK and currently owns Elysium Ltd., offering software and IT support solutions to businesses and organisations. He was asked to deliver a half-day workshop for those Wellcome Library staff who would be involved in implementing our JPEG 2000 solution.

The workshop focused on options for the practical implementation of JPEG 2000 and on the state of software support for the format. Richard also touched on the workflow issues we need to be aware of and address in planning our strategy. The workshop helped us determine which solution would work best for us, as will be described in subsequent posts on this blog. You can view a version of his presentation embedded in this post. Richard also shared with us some of the more technical details from his presentation at the British Library in 2007, available on Scribd.

J2K Workshop for the Wellcome Library