Copyright/OCL open content

Powerhouse collection documentation goes Creative Commons

We’re happy to announce that as of today all our online collection documentation is available under a mix of Creative Commons licenses. We’ve been considering this for a long time but the most recent driver was the Wikipedia Backstage tour.

Collection records are now split into two main blocks of text.

The first section is the relatively museum-specific provenance text, which is now licensed under a Creative Commons Attribution-NonCommercial license.

The second section is primarily factual object data and is licensed under a less restrictive Creative Commons Attribution-ShareAlike license.

Just to be very clear, images, except where we have released them to the Commons on Flickr, remain under license. There’s a lot more work to be done there.

So what does this really mean?

Teachers and educators can now do what they want or need to with our collection records and encourage their students to do the same without fear. Some probably did in any case but we know that a fair number asked permissions, others wrongly assumed the worst (that we’d make them fill out forms or pay up), and it is highly likely that schools were charged blanket license fees by collecting agencies at times.

Secondly, it means that anyone, commercial or non-commercial, can now copy, scrape or harvest our descriptive, temporal and geospatial data, and object dimensions, for a wide range of new uses. This could be building a timeline, a map, or a visualisation of our collection mixed with other data. It could be an online publication, a printed text book, or it could be just to improve Wikipedia articles. It can also now be added to Freebase and other online datastores, incorporated into data services for mobile devices, and so much more.

Obviously, we’ll be working to improve programmatic access to this data along the lines of the Brooklyn Museum API, as well as through OAI and other means, but right now we’re permitting you to use your own nous to get the data, legitimately and with our blessing – as long as you attribute us as the source, and share alike. We figure that a clear license is probably the ground-level work that needs to precede a future API in any case.
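Since we are inviting people to scrape or harvest the data themselves, here is a minimal sketch of what consuming it might look like once OAI-style access exists. To be clear, this is an illustration only: we do not yet offer an OAI-PMH endpoint, and the sample response, field names and values below are invented for the example.

```python
# Minimal sketch: pulling Dublin Core fields out of an OAI-PMH-style
# ListRecords response. The record below is an invented sample, not
# real Powerhouse data.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
OAI_DC = "{http://www.openarchives.org/OAI/2.0/oai_dc/}"

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Locomotive No. 1243</dc:title>
          <dc:date>1890</dc:date>
          <dc:rights>CC BY-SA</dc:rights>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

def extract_records(xml_text):
    """Return a list of {title, date, rights} dicts from a ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for dc in root.iter(OAI_DC + "dc"):
        records.append({
            "title": dc.findtext(DC + "title"),
            "date": dc.findtext(DC + "date"),
            "rights": dc.findtext(DC + "rights"),
        })
    return records

print(extract_records(SAMPLE_RESPONSE))
```

The same `extract_records` function would work unchanged on a real ListRecords response fetched over HTTP, paging through resumption tokens as the protocol requires.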

Thirdly, we’ve applied an attribution, non-commercial license to object provenance largely to allow broad educational and non-commercial repurposing, but not to sanction commercial exploitation of material that is usually quite specific to our Museum (why we collected it, etc).

You might be wondering why we didn’t go with a CC-Plus license?

A CC-Plus license was considered but given the specific nature of the content (text) we felt that this added a layer of unnecessary complexity. We may still, in the future, apply a CC-Plus license to images where it will make more sense given we have a commercial unit actively selling photographic reproductions and handling rights and permissions.

Copyright/OCL open content Picnic08

Picnic08 – Open Museum part two

Picnic is a large ‘creativity’ conference held annually in Amsterdam. I’ve been here as a guest of n8 talking about the notion of ‘open museums’.

Here is the next set of notes (with only a minor cleanup for the sake of timeliness) which were taken during the Open Museum sessions on Day Two. (More notes on the rest of Picnic still to follow)

Following my presentation, Fiona Romeo from the National Maritime Museum in London spoke. Fiona began by reminding us that museums (outside of the art museum world) often hold incredibly banal and mundane objects whose significance is only apparent when placed into a particular context. This poses enormous challenges for museums in a digital environment that offers the user/visitor the opportunity to actively decontextualise objects (especially when browsing collection databases etc).

Fiona then detailed her recent work mapping some of the NMM collections to draw out the stories associated with objects – something that works incredibly well when the objects collected pertain to navigation and voyages of discovery.

For the NMM, the opportunities for collaboration lie in working with ‘data artisans’ outside of the sector to reveal new stories and ways of seeing our data. To this end she discussed the NMM’s work with Stamen in visualising the language of memorials – a quite poetic and revealing presentation of otherwise rather dull data; and also the licensing of some objects to game developers Six To Start, who make alternate reality games (and for whom some of the NMM’s maps and objects were a fantastic and untapped resource).

Fiona emphasised that we underestimate the worth of our own data – we should ‘love our data’. It is rich and interesting even if we see it as incomplete – and by connecting with such ‘data artisans’ in the commercial and creative sector, we may begin to see new opportunities for ourselves.

Paul Keller from Kennisland talked about ‘museums, fans and Copyright’, arguing that one of the things currently paralysing museums in taking advantage of the new collaborative opportunities of digital is the perception that ‘new business models of unimaginable wealth’ are just around the corner. Of course this is totally unrealistic – the bags of money don’t exist – and as a result we get a situation of ‘rights stagnation’ where the museum’s digital assets are locked up.

Of course, fans are already bypassing museums to take advantage of digital. Paul gave the example of Bittorrent communities whose collective collections of Dutch documentary films are more complete, more accessible, and of a higher quality than those preserved (and inaccessible online) by the Dutch National Film archives. The official film archives are paralysed by ‘getting permission’ while those who want access now just bypass them completely.

Stepping back from the obvious IP issues here, Paul gave another example of an amazing searchable video archive made by two Germans. 0xdb uses the data from video torrents along with their subtitles (sometimes fansubbed) to create a wonderful full-text search of around 6000 movies. Whilst downloading is not allowed, the metadata and rich content are astounding.

Jelmer Boomsma from n8 gave an excellent run-down of the collaborative audience development strategies of the Amsterdam Museum Night. The Museumnacht is a good example of making museums more accessible to wider audiences, and Boomsma’s presentation looked at how, in just three years, they have transformed their strategies to make the Museumnacht reach even wider audiences and build strong participatory cultures around it.

In 2005 Museumnacht was seemingly successful: 26,000 visitors across all the museums in the one night, and a 94% ‘very satisfied’ audience. But there was one problem: the average age of attendees was 37. Now, n8 knew that young people were interested in museums and culture; it was just that the event wasn’t appearing on their radars. So rather than take shortcuts and underestimate the intelligence of the audience (a dance party in every museum, or free beer, etc), they focussed on redefining what the Museumnacht event was and who it was for.

Over 2006 and 2007 the print campaign began to be supplemented with an extensive and diverse online campaign where they gave the audience the tools to become ‘an ambassador’ for the event itself. They instituted competitions to design the campaign materials and a Museumnacht t-shirt, as well as ways to build your own programme for the night and then share it with your friends, and even make your own audiotour.

They worked with Hyves, the largest social networking site in the Netherlands (well outranking Facebook and MySpace), and trialled a customised banner advert. This failed – despite 160,000 page impressions it generated only 80 click-throughs! So instead they worked with Hyves to set up a Hyves Group to enable two-way communication, perks and discounts and, importantly, the tools to share with other Hyves users. In 2006, 2% of their traffic came through Hyves; in 2007 this was up to 8%, capturing 20% of visitors aged 16–24.
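The quoted numbers make the scale of the banner’s failure concrete – a quick worked calculation:

```python
# Click-through rate of the failed Hyves banner advert,
# using the figures quoted in the presentation.
impressions = 160_000
clicks = 80

ctr = clicks / impressions * 100  # as a percentage
print(f"{ctr:.2f}% click-through rate")
```

At five clicks per ten thousand impressions, it is easy to see why they moved to a Hyves Group instead.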

Now the new Museumnacht site is incorporating an OpenSocial-style set of logins whereby many of the social networking and sharing functions will be available to any network user, also removing the requirement for a separate Museumnacht website login.

Copyright/OCL Imaging open content

A new collection in the Commons – Clyde Engineering

We’ve just added the start of a new collection of photographs to the Commons on Flickr.

The Clyde Engineering Photograph collection is full of photographs of heavy machinery. We’ve uploaded the first 50 to give you a feeling for what will be coming in future weeks.

The glass plate negatives in the Clyde photograph collection were taken at the Clyde works in Granville, and depict both the workers and the machinery they manufactured. Subjects covered include: railway locomotives and rolling stock; agricultural equipment; large engineering projects funded by Australian State and Federal governments; airplane maintenance and construction and Clyde’s contribution to the first and second World Wars. Some photographs date back to the 1880s but most were taken between 1898 and 1945 . . . The Clyde Engineering Company photograph collection was acquired by the Powerhouse Museum in December 1987.

Go start tagging them!

Copyright/OCL Imaging

50 new images on the Commons on Flickr

As promised we’ve just added another 50 historical images from our Tyrrell Collection to the Commons on Flickr. In the first week we had nearly 20,000 views and an enormous amount of tagging and ‘favouriting’ activity combined with many congratulatory messages and support for the Museum’s release of these images into the Commons.

The new additions include historical shots of the Art Gallery, areas around Bulli, and this great photograph of the Great Hall at Sydney University.

Copyright/OCL Digitisation

Amazon and rare books on demand

A very interesting new development in the digitisation space as reported in The Chronicle (via Siva Vaidhyanathan).

Amazon, which made its name selling books online, is now entering the book-digitizing business.

Like Google and, more recently, Microsoft, Amazon will be making hundreds of thousands of digital copies of books available online through a deal with university libraries and a technology company.

But, unlike Google and Microsoft, Amazon will not limit people to reading the books online. Thanks to print-on-demand technology, readers will be able to buy hard copies of out-of-print books and have them shipped to their homes.

And Amazon will sell only books that are in the public domain or that libraries own the copyrights to, avoiding legal issues that have worried many librarians — and that have prompted publishers to sue Google for copyright infringement.

Whilst I agree with Siva’s argument that this is “a massive privatization of public treasures”, at the same time this activity of effectively republishing, in physical form (via on-demand), can potentially bring older books, especially those that do not already have a large re-print value, to a much larger audience beyond just scholars and researchers.

The privatisation process began long ago with economic rationalist politics and the scaling back of the public sector and public institutions. This has left us in this situation where in some countries only the private sector has the resources and capital to make grand idealistic projects like this a reality – something that used to be the preserve of visionary government (although the reality was often different).

Depending upon the quality of the print on-demand I can also see this opening up a whole new genre of coffee table ‘cultural capital’ enhancing books . . . .


Good Copy, Bad Copy – the developing world and Copyright

Good Copy, Bad Copy is a rather splendid hour-long documentary exploring Copyright law as it applies to remix culture. Unlike a lot of similar projects Good Copy, Bad Copy is truly internationalist and the most fascinating voices come from the developing world – a Nigerian ‘Nollywood’ film company that has been producing ‘straight to DVD’ digital films for many years and building a business model that allows them to compete effectively with ‘pirated’ DVD copies in the local markets; and a Techno Brega producer in northern Brazil whose music is given away freely as marketing for enormous parties.

One of the most striking things about the Nollywood and Brazilian examples is that here are cultural producers who are using the internationalist and globalising mechanisms of the Internet to effectively spread their cultural products far and wide. To hear the Nigerian film producer talk about African Americans in the USA as his next ’emerging target market’ is a lovely flip of traditional ideas about one-way globalisation. In many ways this echoes many of the themes that I and others were talking about in the last few weeks in Havana – seizing the opportunities that are now available rather than being crippled by seeing them as a threat.

There are instructions on downloading the whole documentary (freely) on the promotional website. It is also on Google Video. The trailer (only) is playable above.

Collection databases Copyright/OCL Developer tools Interactive Media Metadata Social networking UKMW07 Web 2.0

UK Museums on the Web 2007 full report (Leicester)

Museums on the Web UK 2007 was held at the slightly rainy and chilly summer venue of the University of Leicester. Organised by the 24 Hour Museum and Dr Ross Parry with the Museums Computer Group, the event was attended by about 100 museum web techies, content creators and policy makers.

As a one-day conference (preceded by a day-long ‘museum mashup’ workshop) it was very affordable, fun and entertaining (yes, in the lobby they had a demo of one of those new Philips 3D televisions . . . disconcerting and very strange).

Here’s an overview of the day’s proceedings (warning: long . . . you may wish to print this or save to your new iPhone)

The conference opened with Michael Twidale and myself presenting the two conference keynote addresses. I presented a rather ‘sugar-rush, no-holds barred view from the colonies’ of why museums should be thinking about their social tagging strategies. (I’ll probably post my slides a little later). I had been quite stressed about the presentation coming off very little sleep and a long flight from Ottawa to London the night before. But I’ve been talking about these and related topics almost non-stop for the past two weeks so it was actually a good feeling to get it done right at the beginning.

After my presentation Michael Twidale from the University of Illinois reprised the joint presentation about museums making tentative steps into Second Life that his colleague and co-author Richard Urban had presented at MW07 in San Francisco. Michael (like Richard before him) certainly piqued the interest of some in the room who, I had the feeling, had barely thought about Second Life before – although I notice that the extremely minimally staffed Design Museum in London has just been running an architecture event and competition in Second Life (see Stephen Doesinger’s ‘Bastard Spaces’).

Mike Ellis from the Science Museum followed the tea break with a presentation that looked at the outcomes of letting a small group of museum web nerds loose for a day without the pressures of a corporate inbox. Using a variety of public feeds the outcomes of such a short period of open-ended collaborative R&D were quite amazing. In many ways Mike’s presentation ended up challenging the audience to think about new ways of injecting innovation and R&D into their museum’s web practices. Amongst the mashups were a quick implementation of the MIT Simile Timeline for an existing project at the Cambridge University Museum tracking dates; a GoogleMaps mashup of all known museum locations and websites in the UK (something that revealed that current RSS feeds of this data are missing the crucial UK postcode information); a date cleaning API to allow cross-organisational date comparison built by Dan Z from Box UK; and an exciting mashup using Spinvox‘s voice to text service to allow museum visitors to call a phone number and be SMSed back information about locations, services or objects.

These were all really exciting prototypes that had come out of a very small amount of collaborative R&D time – something every museum web team should have. Apart from this, a couple of problems facing museum mashups were revealed – stability issues and reliance on other people’s data – but, as Mike pointed out, how does this really compare to the actual stability of your existing services?

Nick Poole from MDA presented Naomi Korn’s slides on rights issues (moral, ethical and Copyright) involving museums implementing Web 2.0 applications. Nick’s presentation was excellent and had two main points to make. The first was that the museum sector has already been moving towards increased audience focus and interaction in real-world policy for at least the past decade, so why should the web be any different? Further, the recent political climate in which museums in the UK exist has focussed on the cultural sector taking a lead in enhancing social cohesion and the sharing of cultural capital. Secondly, Nick emphasised that as museums “we have a social responsibility to the population to exploit any and all methodologies which makes it easier for them to engage with and learn from their (cultural) property”, concluding that despite the potential legal issues, Web 2.0 offers a “set of mechanisms by which we can enhance accountability and effectiveness in a public service industry”. Excellent stuff.

Alex Whitfield from the British Library then presented an interesting look at an admittedly extreme example of the tensions in implementing Web 2.0 technologies with certain exhibition content. Alex demonstrated some of the website for the Sacred exhibition, which shows some of the key religious manuscripts of Christianity, Islam, and Judaism. The online exhibition shows 66 of 152 texts and includes a GoogleMaps interface, expert blogs, podcasts and some nice Flash interactives (yes, I did ask why Flash – apparently because it was a technology choice encouraged by the IT team). Alex then proceeded to look at a few examples of where tagging and digital reproduction can cause community offence, or at the very least controversy, before closing by referencing Susan Sontag’s ‘On Photography’, where Sontag claims that photography effects a reduction of ‘the subject’ (see an interview with Sontag where she explains this concept). Alex’s example was certainly provocative and reminded me, again, that the static web and the participatory web each carry their own particular set of implicit politics (individualistic, pro-globalisation, and pro-democracy, although to differing depths of democracy).

After a light lunch Frances Lloyd-Baynes from the V&A gave an overview of some of the work they have been doing and some of the challenges ahead. She reported that the V&A has 28% of their collection online but that the figure reduces to 3% once bibliographic content is excluded. Of course they have been working on other ‘collections’ – those held by the community – for quite a while as evidenced by their Every Object Tells A Story and the new Families Online project.

She also mentioned the influence of the MDA’s ‘Revisiting Collections‘ methodology which focuses on making a concerted effort to engage audiences and bring user/public experiences to museum collections content. This and other concepts have become a key part of the V&A’s strategic policy.

In terms of user-generated content she highlighted problems that many of us are starting to face. What UGC gets ‘kept’? For how long, and how much? What should be brought into the collection record? Should it be acknowledged, and how? How should museums respond to, mediate and transform content? Or should it remain unmediated? And how do we ensure that there is a clear distinction between the voice of the museum and the voice of the user?

Fellow Australian, now ex-pat who works as a database developer at the Museum of London, Mia Ridge, gave a practical overview of how Web 2.0 can be implemented in museums. She covered topics like participation inequality, RSS and mashups, and the need to be transparent with acceptable use and moderation policies. It was a very practical set of recommendations.

Paul Shabajee from HP Labs then gave a very cerebral presentation on the design of the “digital content exchange prototype” for the Singapore education sector. The DCX allows for the combination of multiple data and metadata spread across multiple locations and sources, as well as faceted browsing and searches for teachers and students, allowing for dynamic filtering by type, curriculum subject area, format, education level, availability, text search, etc. It was a great example of the potential of the Semantic Web. He then went on to explain the CEMS thesaurus model of curriculum and the taxonomies of collection, and how actual users wanted to do things in more complex ways, such as finding a topic for a class, then finding real-world events and mapping them against topics. And because everything had been semantically connected, building new views in line with user needs did not mean massive re-coding. More information on the project can be gleaned from Shabajee’s publications.

Then after some very tasty micro-tarts (chocolate and raspberry, of which I must have partaken in five or six . . ), we moved on to the closing session from Brian Kelly of UKOLN. Brian is a great presenter although his slides always seem so lo-fi because of his typographic choices. Brian managed to make web accessibility for Web 2.0 a compelling topic, and his passion for reforming the way we generally approach ‘accessibility’ is infectious.

Brian is a firm believer that ‘accessibility is not about control, rules, universal solutions, and an IT problem’. Instead he asks what accessibility really means for your users – and, rather cheekily, ‘how can you make surrealist art accessible?’ Accessibility, for Brian, is about empowering people, contextual solutions, widening participation, blended solutions – all the things that Nick Poole and Frances Lloyd-Baynes (and the rest of us) were pushing for earlier in the day.

Brian has come up with a model of approaching accessibility that uses as its metaphor the tangram puzzle (for which there is no single ‘correct’ solution) rather than the jigsaw. He advised that we should focus on content accessibility because a mechanistic approach doesn’t work. How do you make a 3D model in an e-learning resource accessible? It is often just not possible, and instead we should be focussing on making the learning objectives/outcomes accessible. If we see things in this way then there is no reason to rule out museum projects in, say, Second Life, by citing the reason that it isn’t ‘accessible’ to some disabled users; instead we should focus on also providing alternatives that achieve or demonstrate similar outcomes for those users. Michael Twidale also provided the example of the paralysed Second Life user who can, in his virtual world, fly when in the real world he cannot walk.

Brian closed by advising that at a policy level we should be saying things like “museum services will seek to engage their audiences and attract new and diverse audiences. The museum will take reasonable steps to maximise access to its services”. By applying principles of accessible access across the whole portfolio of what the museum offers (real and virtual) we can still implement experimental services rather than using accessibility as a preventative tool. After all, as he points out, the BBC has a portfolio of services for impaired users rather than ensuring access on every single service.

Copyright/OCL Social networking Web 2.0

Potential of social networking / Peer to Patent

How do we re-build our patent system in light of the technology that enables the crowd-sourcing of scientific information?

A very interesting and wordy post from Beth Noveck on Peer to Patent, a pilot project that aims to examine how social networking may offer new possibilities for analysing the enormous backlog of US Patent Office claims and use the community’s aggregated knowledge to quickly strike out patent trolls.

. . . what we are seeing is the deconstruction of the notion of expertise – or at least the sociological organization of expertise – and we need to understand how this changes our institutions and might impact their legitimacy.

Whereas once expertise meant strictly a body of knowledge accumulated by a single person in a professional capacity, increasingly it also means the aggregation of discrete bits of knowledge into collective databases impelled by the new social networking tools, such as friend-of-a-friend (FOAF) social networking sites like Dopplr or LinkedIn, or driven by rating and reputation techniques, such as those used by eBay, Amazon and Slashdot, and visual tools like Second Life that make social practices transparent, as well as other Web 3.0 (I think 2.0 was last year) tools to organize that information.

These suggest that: ordinary people, regardless of institutional affiliation or professional status, possess information that could enhance decision-making and improve governance. Participating in a social network not only aggregates the wisdom of the crowd – summing up individual parts a la Surowiecki’s jelly bean jar – but it can also structure information into manageable knowledge and help build expertise through participation over time.

Copyright/OCL Interactive Media

Real world rights in Second Life

Simon Canning in The Australian writes Uluru row rocks Telstra, in which the issues of real-world rights and their interaction with representations of landmarks in Second Life are discussed.

Legislation has been in place to limit photography, filming and commercial painting at Uluru for 20 years, with tight restrictions on what is and is not allowed.

Capturing images of parts of the northeast face of Uluru is banned and all pictures taken of that part of Uluru must be submitted to the landowners for approval.

While visitors in the game cannot touch Uluru or fly over it, they can virtually fly in the no-fly zone to the northeast and take snapshots.

However, while the rules governing photography, filming and paintings have been in place since 1987, a spokesperson from National Parks said the issue of digital images online had never been raised before.

National Parks, which administers the area on behalf of the traditional landowners, now has lawyers looking at Uluru in Second Life and is considering sending a delegation to meet landowners to discuss the situation.

Copyright/OCL Digitisation MW2007 Web 2.0

M&W07 – Day two: Brewster Kahle

Museums & the Web is very big this year. There must be nearly 1000 people here and there is a good buzz in between sessions.

Today opened with an entertaining and motivational plenary from Brewster Kahle, founder of the Internet Archive. Kahle talked about the Internet Archive, discussing the various types of media it is digitising and making openly accessible, for free, using open standards. The big stumbling block is rights.

Starting with books he gave some interesting figures on digitisation costs. The archive is scanning 12,000 books per month across three locations (USA, Canada and the UK). It costs about $0.10 per page to do scanning, OCR, PDFing, file transfer and permanent storage (forever). Distribution problems are being solved by print on demand, which costs as little as $0.01 per page and is being rolled out through mobile digital book buses in Uganda, India and China. Kahle handed around some samples of the print-on-demand titles and they were of acceptable quality and had proper covers. He also handed around one of the 300 prototype $100 laptops from MIT, which was pretty cool with a great hi-res screen that makes the concept of a low-cost, developing-world-friendly e-book reader viable.
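As a rough sanity check on those figures, here is the implied cost per book and per month. The 300-page average book length is my assumption; the per-page rates are the ones Kahle quoted:

```python
# Back-of-the-envelope costs from Kahle's per-page figures.
SCAN_COST_PER_PAGE = 0.10    # scanning, OCR, PDF, transfer, storage (USD)
PRINT_COST_PER_PAGE = 0.01   # print on demand (USD)
PAGES_PER_BOOK = 300         # assumed average length
BOOKS_PER_MONTH = 12_000     # scanning rate across the three locations

scan_cost_per_book = SCAN_COST_PER_PAGE * PAGES_PER_BOOK
print_cost_per_book = PRINT_COST_PER_PAGE * PAGES_PER_BOOK
monthly_scan_spend = scan_cost_per_book * BOOKS_PER_MONTH

print(f"${scan_cost_per_book:.2f} to scan a book, "
      f"${print_cost_per_book:.2f} to print one, "
      f"${monthly_scan_spend:,.0f} scanning spend per month")
```

On those assumptions a book costs about $30 to digitise and $3 to reprint, which is what makes the book-bus model plausible.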

Audio recordings are costing $10 per CD or roughly $10 per hour of recording. Internet Archive will host forever, and for free. Video recordings are slightly more at $15 per hour. They have also been recording broadcast television, 20 channels worldwide, 24/7. Only one week is available online so far – that of 9/11. They have also started on software archiving but are stymied by the DMCA.

The Wayback Machine (web archive) is snapshotting the web every two months, at 100 terabytes of storage per snapshot. Interestingly, he quoted that the average webpage changes or is deleted every 100 days, making regular archiving critical.
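Taken at face value, those snapshot figures imply the web archive alone is growing at roughly this rate:

```python
# Annual storage growth implied by the Wayback Machine figures:
# a ~100 TB snapshot taken every two months.
TB_PER_SNAPSHOT = 100
SNAPSHOTS_PER_YEAR = 12 // 2  # one full snapshot every two months

tb_per_year = TB_PER_SNAPSHOT * SNAPSHOTS_PER_YEAR
print(f"{tb_per_year} TB of new web-archive data per year")
```

That is 600 TB a year of new data, before any of the book, audio or video collections are counted.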

Kahle emphasised the importance of public institutions doing digitisation in open formats rather than through the exclusivity of GoogleBooks deals. His catchall warning for museums was “public or perish” – a great start to the conference.