Categories: Conferences and event reports, MW2009

MW2009 Clouds, Switches, APIs, Geolocation and Galleries – a shoddy summary

(Disclaimer – this is a rushed post cobbled together from equally rushed notes!)

Like most years, this year’s Museums and the Web (MW2009) was all about the people: catching up, putting faces to names, and having heated discussions in a revolving restaurant atop the conference venue in Indianapolis. Face-to-face time matters even more for those travelling from outside the USA – for most of us it is the only chance to catch up with many of these people.

Indianapolis is a flat city surrounded by endless corn fields, which accounts for the injection of corn syrup into every conceivable food item. No one seems to walk, preferring four wheels to two legs – making for a rather desolate downtown and a highly focussed conference with few outside distractions.

The pre-conference day was full of workshops. I delivered two – one with Dr Angelina Russo on planning social media, and the other an exhausting (and hopefully exhaustive) examination and problematising of traditional web metrics and social media evaluation. With that out of the way I settled back and took in the rest of the conference.

MW2009 opened with a great keynote from Maxwell Anderson, director of the Indianapolis Museum of Art. Max’s address can be watched in full (courtesy of the IMA’s new art video site, Art Babble) and is packed with some great moments – here’s a museum director who gets the promise of the web and digital, and isn’t caught up in the typical physical vs virtual dichotomy. With Rob Stein’s team, the IMA has been able to test and experiment with a far more participatory and open way of working while they (still) work out how to bring the best of those changes into the galleries as well.

After the opening keynote it was into split sessions. Rather than cover everything I saw, I’ll zero in on the key things I took away, cribbed straight from my notes. I’ve left a fair bit out, so make sure you head over to Archimuse and digest the papers.

Using the cloud

In the session on cloud computing Charles Moad, one of the IMA developers, delved deep into the practicalities of using Amazon Web Services for hosting web applications. His paper is well worth a read and everyone in the audience was stunned by the efficiencies, flexibility (suddenly extra load? just start up another instance of your virtual servers!), and incredibly low cost of the AWS proposition. I’m sure MW2010 will have a lot of reports of other institutions using cloud hosting and applications.
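(For the curious, ‘just start up another instance’ really is only a few lines of code these days. Here’s a rough sketch using Amazon’s Python SDK, boto3 – which post-dates this post – with placeholder values rather than anything the IMA actually runs:)

```python
# Minimal sketch: launching an extra EC2 instance when load spikes.
# Uses the boto3 SDK; the AMI ID and instance type are placeholders,
# not the IMA's configuration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: your pre-built web server image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Started extra instance {instance_id} to absorb the load spike")
```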

Following Charles, Dan Zambonini from Box UK – who works with, but isn’t in, the museum sector – showed off the second public iteration of Hoard.it. Last year Hoard.it caused a kerfuffle by screen-scraping collection records from various museum collections without asking. This year Dan provoked the audience by asking what the real value is of efforts like the multi-million Euro Europeana project. Dan reckons that museums should focus on being service providers – echoing some of what Max Anderson had said in the keynote. According to Dan, museums have a lot to offer in terms of “expertise, additional media, physical space, reputation & trust, audience, voice/exposure/influence” – and these are rarely reflected in how most museums approach the ‘problem’ of online collections.

APIs

Last year there was a lot of talk of museum APIs at MW – then in November the New Zealanders trumped everyone by launching Digital NZ. But in the US it was the Brooklyn Museum’s launch of its API a little while ago that seems to have put the issue in front of the broader museum community.

Richard Morgan from the V&A introduced the private beta of the V&A’s upcoming API (JSON/REST) and presented a rather nice mission statement – “we provide a service which allows people to construct narrative and identity using museum content, space and brand”. Interestingly, to create their API they have had to effectively scrape their own existing online collection!
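(If you haven’t played with a JSON/REST API before, consuming one looks roughly like this – the endpoint and field names here are made up, since the V&A’s API is still in private beta, but the pattern of ‘HTTP GET in, JSON out’ is the point:)

```python
# Sketch of consuming a JSON/REST collection API. The endpoint and
# response fields are hypothetical placeholders.
import requests

HYPOTHETICAL_ENDPOINT = "https://api.example-museum.org/v1/objects"

resp = requests.get(HYPOTHETICAL_ENDPOINT, params={"q": "teapot", "limit": 5})
resp.raise_for_status()

for record in resp.json().get("records", []):
    print(record.get("title"), "-", record.get("date"))
```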

Brian Kelly from UKOLN talked about emerging best practice for the development of APIs and the importance of everyone not going it alone. Several in the audience of both Richard’s and Brian’s sessions were uneasy about the focus on APIs as a means for sharing content – “surely we already have OAI etc?”. But as one person anonymously pointed out, yes, many museums have OAI, but by neither publicising it nor providing easy access they have turned OAI into ‘CAI’ – a closed archives initiative.
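(For comparison, here’s what harvesting over OAI-PMH looks like when it is publicised and accessible – the repository URL below is a placeholder, but the verb/metadataPrefix/resumptionToken parameters are standard OAI-PMH:)

```python
# Sketch of harvesting records over OAI-PMH. The base URL is a
# placeholder; the protocol parameters are standard OAI-PMH.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://collection.example-museum.org/oai"  # placeholder repository
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
while True:
    root = ET.fromstring(requests.get(BASE_URL, params=params).content)
    for record in root.iter(f"{OAI_NS}record"):
        identifier = record.find(f".//{OAI_NS}identifier")
        if identifier is not None:
            print(identifier.text)
    # Follow the resumption token until the repository says we're done.
    token = root.find(f".//{OAI_NS}resumptionToken")
    if token is None or not (token.text or "").strip():
        break
    params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}
```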

And APIs still don’t get around the thorny issues of intellectual property. (I’ve been arguing we need to organise our content licensing first in order to reduce the complexity of the T&C of our APIs).

As Piotr from the Met, author of the excellent Museum Pipes, shows time and time again, the real potential of APIs and the like only becomes apparent once people start making interesting prototypes with the data. Frankie Roberto (ex-Science Museum and now at Rattle) showed me Rattle’s upcoming Muddy service – they’ve taken Powerhouse data and done some simple visualisations with it.

APIs from a select few museums will probably put a rocket under the sector and really open up data sharing – but we need some great case studies to emerge before the true potential is realised.

Geolocation

Another theme to reach the broader community this year was geolocation. Amongst a bunch of great projects showing the potential of geo-located content for storytelling and connecting with audiences was the rather excellent PhillyHistory site. The ability to find photos near where you grew up has resulted in some remarkable finds for the project, as well as a healthy bit of revenue generation – $50,000 from purchases of personal images.

Aaron Straup Cope, geo-genius at Flickr, delivered another of his entertaining and witty presentations, covering some of the problems with geo-coding. In so doing he revealed that most of the geo-coded photos on Flickr are in fact hand geo-coded – that is, people opening a map, navigating to where they think they took the photo, and sticking in a pin. The map is not the territory – the borders of my neighbourhood are not the same as yours, and neither of ours are the same as those formalised by government agencies. This is as true for local spaces as it is for obviously contested territories. The issue for geocoders, then, is how to map these “perceptions of boundaries”. Aaron’s slides are up on his blog and are worth a gander – they raise a lot of questions for those of us working with community memory.

Galleries

Nina Simon made her MW debut with a fun workshop challenging all of us in the web space to ‘get out of our (web) ghetto’ and tackle the challenge of in-gallery participatory environments. Her slides (made using Prezi) covered several examples of real-world tagging, polling, collaborative audience decision-making and social interactions. The challenge to the audience to “imagine a museum as being like . . . ” elicited some very funny responses, which Nina has expanded upon on her blog.

I don’t entirely agree with Nina’s call to action – the nature and type of participation and expectation varies greatly between science centres, history museums, and art museums. And there are complex reasons as to why participatory behaviours are sometimes more obviously visible online – and why many in-gallery behaviours are impossible to replicate online.

But the call to work with gallery designers is much needed. All too often there is a schism between the teams responsible for online and in-gallery interactions – technologically-mediated or not.

Kevin von Appen’s paper on the final day complicates matters even more. Looking at the outcomes of a YouTube ‘meet up’ at the Ontario Science Centre, Kevin and the OSC team struggled to work out what the real impact of the meet-up was. Well attended, and with people choosing to fly in from as far away as Australia, 888Toronto888 would have seemed a huge success. However –

Clearly, meetup participants were first and foremost interested in each other. The OSC was the context, not the star. Videos that showcased the meetup-as-party/science center-as-party-place positioned us as a cool place for young adults to hang out, and that’s an audience we’d like to grow.

It wasn’t cheap either – the final figure worked out at $95 per participant. Clearly, if we want more ‘participatory experiences’ in our museums it isn’t going to be cheap. And if we want audiences to have ownership of our spaces then we may need to rethink what our spaces are.

(As an aside, I finally learnt why art museums have more gallery staff in the galleries than other types of museums – one per room – albeit not necessarily engaging with audiences! According to my knowledgeable source, art museums have found that it is cheaper to hire people to staff the galleries than it is to try to insure the irreplaceable works inside.)

“The switch”

One of the side streams of MW this year was a fascination with ‘the switch’. This arose from some late-night shenanigans in the ‘spinny bar’ – a revolving restaurant atop the Hyatt. The ‘switch’ was what turned the bar’s rotation on and off, and on the final day a small group was ushered into the bar and witnessed the ‘turning on’. Charles, the head of engineering at the hotel, gave us a one-hour private tour of the ‘switch’ and the motor that ran the bar – it was fascinating, and a timely reminder of the value of the ‘private tour’ and the ‘behind the scenes’. In return, Charles asked all of us plenty of questions about the role of technology in his children’s education and how to get the most out of it.

We need more museum experiences like this!

Categories: Collection databases, Web 2.0

Another OPAC discovery – the Gambey dip circle (or the value of minimal tombstone data)

New discoveries as a result of putting our incomplete collection database online are pretty commonplace – almost every week we are advised of corrections – but here’s another lovely story of an object whose provenance has been significantly enhanced by a member of the public. It’s a story that made the local newspapers!

Here’s the original collection record as it was in our public database.

Now take a look at the same record a week later.

If your organisation is still having doubts about the value of making available un-edited, un-verified, ageing tombstone data then it is worth showing examples like these.

Categories: Exhibition technology, MW2009

MW2009 – Multi-touch: what does this technology hold for future museum exhibits?


Hi I’m Paula Bray and I usually blog over at Photo of the Day.

Today, whilst Seb was slaving away giving two workshops in a row at Museums and the Web 2009, I spent the day with Jim Spadaccini and Paul Lacey in a great, full-day workshop called ‘Make It Multi-touch’ that showcased their custom-built 50″ touch table. You can view it over at Ideum.

We got inside information on how this technology was developed, from the initial prototype back in September 2008 – which featured a dual-mirror, two-camera solution and the need to process complicated gestures quickly – through two further prototypes to the final product you can see here. The table detects ‘blobs’ (the reflections of fingertips) and feeds them to software that interprets touch, drag and drop, pinch and expand, drawing, rotation and double-tap gestures, all of which become intuitive to the user within a short time-frame. The aim is to provide an interactive social experience that is very different to traditional computer-based exhibits, which tend to isolate the experience to a single visitor.
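(To give a sense of what the gesture software is doing under the hood, here’s a toy sketch – not Ideum’s code – of turning two tracked ‘blobs’ into a pinch/expand scale factor and a rotation angle:)

```python
# Toy sketch (not Ideum's code): derive pinch/zoom scale and rotation
# from two tracked fingertip "blobs" between successive frames.
import math

def pinch_and_rotate(prev, curr):
    """prev and curr are pairs of (x, y) blob positions: ((x1, y1), (x2, y2))."""
    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = distance(*curr) / max(distance(*prev), 1e-6)   # >1 means expand, <1 means pinch
    rotation = math.degrees(angle(*curr) - angle(*prev))   # change in the angle between the blobs
    return scale, rotation

# e.g. two fingers moving apart and twisting slightly:
print(pinch_and_rotate(((100, 100), (200, 100)), ((80, 95), (220, 115))))
```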


What can we learn from the public about using museum collections and content through technology such as multi-touch? This form of technology may be a novelty for some at this stage but the future design of this product holds potentials for change amongst many museum applications.

Scenario: multi-touch tables are available in a museum exhibition for the public to use and interact with exhibition content. Images of collection objects can be moved across the table, and details can be zoomed in on through simple ‘blob’ (finger) movements. Descriptive information about an object can be drawn from XMP metadata stored in the image file, location data can be retrieved, and the user can create their own exhibit and learning experience. This is a very different kind of application, one that can change the visitor’s experience. Do we need to compete with the devices that are already available at home, and make the museum version social and educational? Does fixed navigation work anymore?
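(The XMP part of that scenario is less exotic than it sounds – the XMP packet is just a block of XML embedded in the image file. A quick-and-dirty sketch, not production code:)

```python
# Quick-and-dirty sketch: pull the embedded XMP packet (an XML block)
# out of an image file. A library-based XMP reader would be more
# robust; this simply scans the raw bytes for the packet markers.
import re
import sys

def read_xmp_packet(path):
    with open(path, "rb") as f:
        data = f.read()
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    return match.group(0).decode("utf-8", errors="replace") if match else None

xmp = read_xmp_packet(sys.argv[1])
print(xmp if xmp else "No XMP packet found in this file")
```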


Multi-touch technology has the potential to change the museum experience, and it will be interesting to watch it develop. Will the public start to expect to come to museums to interact with exhibits in this new way?

This is definitely more than a “big-ass table”.

Post & photography by Paula Bray

Categories: Developer tools

Integrating Twitter tweets into blog comments

Backtype has just released the very first 0.1 version of a WordPress plugin that integrates tweets and retweets as well as comments on other blogs into the comment stream of your original WordPress posts.

I’ve been trialling an install and you can see it in action on a post like this one. Notice that the tweets are interleaved with comments on the blog itself – it even deciphers shortened URLs. (And in case you were wondering which URL shortener is the best check out this article from Searchengineland – hat tip Chloe Sasson!)

This sort of cross-site conversation tracking is becoming increasingly important in a world where tweets are easier and more common than on-blog comments. I’ll be watching with interest to see how the plugin evolves.

A word of caution before you go and roll it out on all your blogs – consider the additional moderation that seeing every public tweet and offsite comment is going to create for you!

Categories: Mobile, QR codes, User experience

A quick QR code update

As regular readers know, we’ve been trialling QR codes, and a little while back we rolled them out on a small selection of object labels in a Japanese fashion display.

I’ve been keeping an eye on their usage and on some of the continuing problems around lighting, shadows, and low-resolution mobile phone cameras like the one in the current iPhone 3G. So far usage has been, as expected, low. Firstly, the target audience for the exhibition content has, not surprisingly, not been very tech-savvy. Secondly, the ‘carrot’ isn’t clear enough to cause the audience to respond to the call to action.

More critically, one thing we still haven’t quite gotten right is the image size and error correction.

Shortly after the last post we upped the error correction in the codes to 30% (meaning that up to about 30% of the image can be obscured and it will still scan – although the redundancy isn’t evenly spread). This alone wasn’t enough.

With the long URLs encoded in the codes, plus the error correction, the resulting QR codes were even more ‘dense’ and hard to scan with 2-megapixel cameras. We’ve now done another set of codes with our own version of TinyURLs, generated locally. This has reduced the number of encoded characters from nearly 70 to around 25 – and thus produced a far less dense code.
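(For anyone wanting to try something similar, here’s a rough sketch of the two levers we pulled – a short, locally generated code and a high error-correction level. The toy base-62 shortener and the Python qrcode library below stand in for whatever tools you already use, and the domain is a placeholder:)

```python
# Sketch of the two levers discussed above: shorten the encoded URL
# (a toy base-62 shortener standing in for an in-house one) and use
# a high (~30%) error-correction level. Uses the Python "qrcode" library.
import string
import qrcode

ALPHABET = string.digits + string.ascii_letters  # base 62

def short_code(record_id):
    """Turn a numeric record id into a short base-62 slug."""
    slug = ""
    while record_id:
        record_id, rem = divmod(record_id, 62)
        slug = ALPHABET[rem] + slug
    return slug or "0"

short_url = "http://example.org/x/" + short_code(386848)  # placeholder domain and id

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # ~30% of the symbol can be obscured
    box_size=10,
    border=4,
)
qr.add_data(short_url)
qr.make(fit=True)
qr.make_image().save("label-qr.png")
```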

Even so, 2-megapixel cameras give patchy results when the codes are obscured by lens flare or shadow, so our current thinking is that in future the codes may need to be as much as 50% bigger.

Categories: Imaging, open content, Web 2.0

One year in the Commons on Flickr – statistics and . . . a book!


Today we celebrate one year in the Commons on Flickr.

Since April 8 last year we’ve uploaded 1,171 photos (382 of them geotagged) from four different archival photographic collections. These have been viewed 777,466 times! For photographs that had either been hidden away on our website (the original 270 Tyrrell photographs were viewed around 37,000 times on our site in 2007) or not yet even been catalogued and digitised, this is a fantastic result. And that’s not even scratching the surface of the amazing extra information and identifications, mashups, new work and more that have come from the community’s participation.

To celebrate we’ve published a 78 page book!

The book was published using print-on-demand service Blurb and comes as a softcover or two different hardcovers – it is your choice! Inside there are a range of photographs alongside their individual statistics, user comments and some of the stories of discovery that have come from the first year in the Commons.

Our Photo of the Day blog is giving away 10 copies and you can buy copies for your friends over at Blurb.

I’d personally like to thank everyone at the Powerhouse who has supported our involvement in the Commons and helped make so many photographs available. I’d also like to thank the Flickr community, who have so enthusiastically embraced these historical images; Paul Hagon for his mashup; the staff at Flickr (esp George, Dan and Aaron); and the Indicommons crew.

Without all of you this would never have happened.

Categories: Social media

Impact of the Commons on image sales at the Powerhouse

As many readers know, Paula Bray, our manager of Visual and Digitisation Services, has been working on a paper for Museums and the Web looking at the impact of the Commons on Flickr on our image sales business.

Paula’s paper has been published over at Archimuse and if you are going to be in Indianapolis next week you’ll be able to get the visually enhanced interactive version.

Over on our Photo of the Day blog, Paula has added some updated figures that give a clearer picture of the impact of the Commons. Have a read and feel free to ask questions either here or on Photo of the Day. I’ll make sure Paula gets them.

We are celebrating our 1st birthday in the Commons on Flickr tomorrow and have an exciting announcement waiting . . .

Categories: Other museum blogs (from Museumblogs.org)

Powerhouse Object of the Week – a new behind the scenes blog

Another exciting thing we are launching today is our Object of the Week blog. It nicely complements our Photo of the Day which recently celebrated 500 posts!

We kick off Object of the Week with a profile of the project lead, curator Erika Dicker. Erika has chosen a favourite object from the collection – a prawn riding a bike – and her quirky tastes are also profiled in a quick Q&A.

Each week the blog will feature a new object and, until each curator has posted, a curator profile. We hope the blog will reveal some of the personalities behind the collection as well as many of the oddities and exciting objects that the public rarely gets to see. In coming weeks there will be video interviews and a whole lot more.

Categories: Copyright/OCL, open content

Powerhouse collection documentation goes Creative Commons

We’re happy to announce that as of today all our online collection documentation is available under a mix of Creative Commons licenses. We’ve been considering this for a long time but the most recent driver was the Wikipedia Backstage tour.

Collection records are now split into two main blocks of text.

The first section is the relatively museum-specific provenance which is now licensed under a Creative Commons Attribution, Non-Commercial license.

The second section is primarily factual object data and is licensed under a less restrictive Creative Commons Attribution, Share-Alike license.

Just to be very clear, images, except where we have released them to the Commons on Flickr, remain under license. There’s a lot more work to be done there.

So what does this really mean?

Teachers and educators can now do what they want or need to with our collection records, and encourage their students to do the same, without fear. Some probably did so in any case, but we know that a fair number asked for permission, others wrongly assumed the worst (that we’d make them fill out forms or pay up), and it is highly likely that schools were at times charged blanket license fees by collecting agencies.

Secondly, it means that anyone, commercial or non-commercial, can now copy, scrape or harvest our descriptive, temporal and geospatial data, and object dimensions, for a wide range of new uses. This could be building a timeline, a map, or a visualisation of our collection mixed with other data. It could be an online publication, a printed text book, or it could be just improving Wikipedia articles. The data can also now be added to Freebase and other online datastores, incorporated into data services for mobile devices, and so much more.
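(As a trivial illustration of the sort of re-use this allows, here’s a sketch that buckets a handful of harvested records by decade for a timeline – the records shown are made-up placeholders, not actual catalogue data, and you could source them from a scrape, a CSV dump, or a future API:)

```python
# Trivial re-use example: bucket harvested collection records by decade
# for a simple text timeline. The records below are made-up placeholders.
from collections import Counter

records = [
    {"title": "Object A", "year": 1854},
    {"title": "Object B", "year": 1835},
    {"title": "Object C", "year": 1920},
]

by_decade = Counter((r["year"] // 10) * 10 for r in records if r.get("year"))

for decade, count in sorted(by_decade.items()):
    print(f"{decade}s: {'#' * count} ({count})")
```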

Obviously, we’ll be working to improve programmatic access to this data along the lines of the Brooklyn Museum API, as well as through OAI and other means, but right now we’re permitting you to use your own nous to get the data, legitimately and with our blessing – as long as you attribute us as the source and share alike. We figure that a clear license is the ground-level work that needs to precede a future API in any case.

Thirdly, we’ve applied an Attribution, Non-Commercial license to object provenance largely to allow broad educational and non-commercial repurposing, but not to sanction commercial exploitation of material that is usually quite specific to our Museum (why we collected it, etc).

You might be wondering why we didn’t go with a CC-Plus license?

A CC-Plus license was considered, but given the specific nature of the content (text) we felt that it added a layer of unnecessary complexity. We may still, in the future, apply a CC-Plus license to images, where it will make more sense given we have a commercial unit actively selling photographic reproductions and handling rights and permissions.

Categories: open content, Wikis

Working with Wikipedia – Backstage Pass at the Powerhouse Museum

I like the notion that Noam Cohen raises in his recent New York Times article where Wikipedia is compared to a city.

It is this sidewalk-like transparency and collective responsibility that makes Wikipedia as accurate as it is. The greater the foot traffic, the safer the neighbourhood. Thus, oddly enough, the more popular, even controversial, an article is, the more likely it is to be accurate and free of vandalism. It is the obscure articles — the dead-end streets and industrial districts, if you will — where more mayhem can be committed. It takes longer for errors or even malice to be noticed and rooted out. (Fewer readers will be exposed to those errors, too.)

Like the modern megalopolis, Wikipedia has decentralised growth. Wikipedia adds articles the way Beijing adds neighbourhoods — whenever the mood strikes. It is open to all: the sixth-grader typing in material from her homework assignment, the graduate student with a limited grasp of English. No judgements, no entry pass.

When most of us take a look at Wikipedia we conveniently forget that behind the names that create and edit the articles are real people. Likewise when we are critical of how Wikipedia works (or doesn’t) we forget that Wikipedia is as flawed (or as great) as people are.

And if you were setting up in a new city you would meet with the city and community leaders, then head out and meet those who make the city function – the recommenders, the community activists, the outspoken voices (and, depending on the neighbourhood, the kingpins and warlords!). Before all that, of course, you’d be out in the streets working out who and where all these key figures were, and getting a feel for it all. Alternatively, you might approach a city completely from the bottom up. In doing so you might get lucky – or you might be led into a dark alley and mugged.

So, when Liam Wyatt, Vice President of Wikimedia Australia, approached the Powerhouse to be the inaugural venue for a ‘Backstage Pass’ idea, we jumped at the chance to put some real-world faces to the avatars, and to learn how the nuts and bolts of Wikipedia work from the perspective of those who edit and improve it. We knew Liam from his work with the Dictionary of Sydney, and thus knew he was aware of the complexities of the heritage sector.

(image by Paula Bray, Powerhouse Museum, CC-BY-SA)

From our perspective, Wikipedia is hugely important. Wikipedia is the highest referrer of traffic to our main website after search. Regardless of whether all our research staff are personally enamoured with Wikipedia, it is clear that our research output is made more visible by being cited in Wikipedia. In fact, if citations are a measure of the success of academic research then perhaps Wikipedia citations are a measure of ‘assumed authority’ and accessibility. (More on that in my metrics workshops though!).

At the same time word has it that laptops destined for high school students across the State may come pre-loaded with a snapshot of Wikipedia, so it makes sense for museums to have their knowledge linked and connected to as many relevant articles as possible.

Around the same time as Liam approached the Powerhouse, Shelley Bernstein at the Brooklyn Museum asked us to participate in Wikipedia Loves Art. We really liked the idea but had two organisational issues – firstly, we don’t (currently) “do” art; and secondly, most importantly, our onsite photography policy needed to be clarified, and within the short time frame that wasn’t going to be possible (we are still working on it!). Shelley’s been blogging about the experience of Wikipedia Loves Art over on the Brooklyn blog – and that more open approach to the ‘city’ that is Wikipedia has yielded interesting and complicated results.

So on the 13th of March, Liam rolled up with a motley group of Wikipedians – the youngest was only 13 years old (we hope he had a sick note for his teachers!) – and the curatorial staff, along with a photographer, set about giving them a guided tour of the Museum and then our basement collection stores, before retiring to a networked meeting room to exchange ideas. All up we were dealing with a very manageable group of ten Wikipedians. And these weren’t just any Wikipedians – they were paid-up members of Wikimedia Australia, the kind of community leaders you might want to get onside in your neighbourhood.

This made a huge difference.

Even so, Wikipedians are a diverse bunch and like normal people they don’t necessarily understand all the intricacies of how museums work – the timescales, the processes, the conception of significance, the complexities of Copyright in museums. They don’t all agree about the solutions to licensing – and collectively we have widely varying opinions about the viability and usefulness of Wikipedia’s ‘neutral point of view’.

(image by Paula Bray, Powerhouse Museum, CC-BY-SA)

(image by Paula Bray, Powerhouse Museum, CC-BY-SA)

But as we learnt how Wikipedia editors think about documenting and improving articles, our Museum staff spoke of how we document, classify and research. Unsurprisingly, between the Wikipedians and the Museum staff we found a lot of common ground.

One of the Wikipedians who came, Nick Jenkins, generously wrote on the Wikimedia-AU listserv,

It was very interesting, and the amount of material and knowledge (at the museum, in the heads of the curators, and in the internal databases at the museum) is truly vast; but the issues that are being grappled with seemed (from my perspective) to be how to fulfil the museum’s mission in an increasing online environment; how that relates to the Wikipedia and finding areas where there’s a good synergy and commonality of purpose, and also questions and complexity of licensing (for images of items and details about items), and all the cultural issues of interfacing the two different cultures and ways of operating.

I thought it was a very positive day, and I left very much with the impression that these were good people who genuinely wanted to help.

From our perspective, the Museum now has a whole lot of changes actively being made to Wikipedia articles that incorporate its areas of expertise; but most importantly, we’re putting faces to names and beginning to understand the safe and unsafe areas of the city that is Wikipedia.

(image by Paula Bray, Powerhouse Museum, CC-BY-SA)