Categories: Digitisation, Open content, Web metrics

Library of Congress report on their participation in the Commons on Flickr

Michelle Springer, Beth Dulabahn, Phil Michel, Barbara Natanson, David Reser, David Woodward, and Helena Zinkham over at the Library of Congress have (publicly) released a very in-depth report on their experiences in the Commons on Flickr over a 10-month period.

Titled “For the Common Good: The Library of Congress Flickr Pilot Project”, it explores the project’s impact on access and awareness, traffic back to the LoC’s own website, and, importantly, what they have learned about how collections might operate in the broader social web. Given that their pilot was born of a need to explore the opportunities and challenges of the social web, their findings are important reading for every institution dipping its toes in the water.

The Flickr project increases awareness of the Library and its collections; sparks creative interaction with collections; provides LC staff with experience with social tagging and Web 2.0 community input; and provides leadership to cultural heritage and government communities.

I am impressed by the depth of the report and the recommendations. Critically, they have identified the resourcing issues around ‘getting the most out of it’ and broken these down as a series of options (see page 34).

Even to maintain their current involvement in the project, they have identified a need to increase resourcing. They also identify that ‘just as is’ is no longer enough.

(2) Continue “as is” – add 50 photos/week and moderate account.

Pro: Modest expense to expand to 1.5 FTE from current 1 FTE (shared by OSI and LS among 20 staff). Additional .5 FTE needed to keep up with the amount of user-generated content on a growing account—both in moderation and in changes to the catalog records (both in Flickr and PPOC).

Con: Loss of opportunity to engage even more people with Library’s visual collections. Risk of losing attention from a Web 2.0 community that expects new and different content and interaction as often as possible.

Download and read the full report (PDF).

2 replies on “Library of Congress report on their participation in the Commons on Flickr”

The full report also compliments the Powerhouse Museum on page 24 for the incorporation of Flickr tag information back to the object record at the persistent URL – something the LOC is unable to easily do for technical reasons. Seb, I’m sure you are aware of that anyway.

The report leaves few useful questions unasked, and fewer still unanswered.

With regard to resourcing, they usefully provide the precise number of person-hours devoted to tasks in the set-up phase and in the ongoing maintenance phase.

With the departure of George Oates (canvassed elsewhere on your blog), one question will be: whither The Commons project?

Will existing institutional participants in The Commons have a champion at Flickr HQ? Will new institutions continue to be drawn to participate?

One interesting corollary is the Life photo archive Google images project which will see 10 million images from the Life archive go to the web, scanned from negatives, glass plates, etchings and prints.

(ref. http://googleblog.blogspot.com/2008/11/life-photo-archive-available-on-google.html )

This could supplant Flickr’s The Commons because it looks like Google is supplying the technology and the personnel to do the scanning and put them up on the web. At present there is no facility for the tagging, comments and notes that Flickr provides. This is a drawback that Google could easily rectify, especially since they invite people with Google accounts to sign in and rate photographs using a 5-star system.

Plenty of cultural institutions have got glass plate negatives sitting around doing nothing. Google may be persuaded to scan them and host the images at no charge to the institution.

If we move away from the question of photographic images and look at newspapers, we see that Google has its own project to digitize newspapers:

http://googleblog.blogspot.com/2008/09/bringing-history-online-one-newspaper.html

And compare that to the National Library of Australia’s Australian Newspapers Digitization Program (http://www.nla.gov.au/ndp/)

The NLA has acknowledged the accuracy problems with Optical Character Recognition (OCR) in this project (http://www.nla.gov.au/ndp/project_details/documents/ANDP_IncreasingOCRaccuracy.pdf) and its “exciting and groundbreaking” idea of having users make the corrections.

Their site, where you can search the newspapers scanned thus far and correct the inaccuracies yourself, is here:

http://ndpbeta.nla.gov.au/ndp/del/home

Now, I’ve seen many of the newspapers scanned by Google, and I’ve participated in correcting some of the OCR errors on the NLA’s Australian Newspapers beta.

It seems to me that Google have some sort of superior technology and are using it, just as they seem to have developed proprietary processes for scanning books for Google Books.

Then if we look at the cost factor, we see the National Library of Australia stating last week that they can’t achieve their required efficiency dividend. Maybe they should consider a partnership with Google.

That should free up some resources.

Note too that Google, like Yahoo and Yahoo’s Flickr, is not immune from the effects of the present world economic downturn. Google have stated they will not cut head count but will delay or modify other expenditure. The effect of this on such projects as Google images, Google newspapers and Google books is at the moment unknown.
