
MW2009 Clouds, Switches, APIs, Geolocation and Galleries – a shoddy summary

(Disclaimer – this is a rushed post cobbled together from equally rushed notes!)

Like most years, this year's Museums and the Web (MW2009) was all about the people. Catching up with people, putting faces to names, and having heated discussions in a revolving restaurant atop the conference venue in Indianapolis. The value of face-to-face is even greater for people travelling from outside the USA – for most of us it is the only chance to catch up with many people.

Indianapolis is a flat city surrounded by endless corn fields, which perhaps accounts for the injection of corn syrup into every conceivable food item. No one seems to walk, preferring four wheels to two legs – making for a rather desolate downtown and a highly focussed conference with few outside distractions.

The pre-conference day was full of workshops. I delivered two – one with Dr Angelina Russo on planning social media, and the other an exhausting (and hopefully exhaustive) examination and problematising of traditional web metrics and social media evaluation. With that out of the way I settled back and took in the rest of the conference.

MW2009 opened with a great keynote from Maxwell Anderson, director of the Indianapolis Museum of Art. Max’s address can be watched in full (courtesy of the IMA’s new art video site – Art Babble) and is packed with some great moments – here’s a museum director who gets the promise of the web and digital and isn’t caught up in the typical physical vs virtual dichotomy. With Rob Stein’s team at the IMA the museum has been able to test and experiment with a far more participatory and open way of working while they (still) work out how to bring the best changes into the galleries as well.

After the opening keynote it was into split sessions. Rather than cover everything I saw, I'll zero in on the key things I took away, cribbed straight from my notes. I've left a fair bit out, so make sure you head over to Archimuse and digest the papers.

Using the cloud

In the session on cloud computing, Charles Moad, one of the IMA developers, delved deep into the practicalities of using Amazon Web Services for hosting web applications. His paper is well worth a read, and everyone in the audience was stunned by the efficiencies, flexibility (suddenly extra load? just start up another instance of your virtual servers!), and incredibly low cost of the AWS proposition. I'm sure MW2010 will have a lot of reports of other institutions using cloud hosting and applications.
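That "just start up another instance" point is worth making concrete. Here's a minimal sketch of spinning up an extra virtual server on EC2 – using boto3, today's Python SDK rather than anything the IMA team would have used in 2009, and with placeholder image and instance IDs:

```python
import boto3

# Connect to the EC2 service in a chosen region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one more copy of a pre-built web server image to absorb extra load.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: an image baked with your web app
    InstanceType="t3.micro",          # placeholder size
    MinCount=1,
    MaxCount=1,
)
print("Started instance:", response["Instances"][0]["InstanceId"])
```

When the traffic spike passes you terminate the instance and stop paying for it – that elasticity is the whole economic argument Charles was making.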

Following Charles, Dan Zambonini from Box UK – who works with, but isn't in, the museum sector – showed off the second public iteration of Hoard.it. Last year Hoard.it caused a kerfuffle by screen-scraping collection records from various museum collections without asking. This year Dan provoked by asking what the real value of efforts like the multimillion-Euro Europeana project actually is. Dan reckons that museums should focus on being service providers – echoing some of what Max Anderson had said in the keynote. According to Dan, museums have a lot to offer in terms of "expertise, additional media, physical space, reputation & trust, audience, voice/exposure/influence" – and these are rarely reflected in how most museums approach the 'problem' of online collections.

APIs

Last year there was a lot of talk of museum APIs at MW – then in November the New Zealanders trumped everyone by launching Digital NZ. But in the US it has been the Brooklyn Museum's launch of its API a little while ago that seems to have put the issue in front of the broader museum community.

Richard Morgan from the V&A introduced the private beta of the V&A's upcoming API (JSON over REST) and presented a rather nice mission statement – "we provide a service which allows people to construct narrative and identity using museum content, space and brand". Interestingly, to create their API they have effectively had to scrape their own existing online collection!
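For anyone who hasn't consumed one before, a JSON/REST collection API boils down to parameterised HTTP requests returning structured records. The endpoint and response shape below are entirely hypothetical – the V&A's beta was still private – but the pattern looks like this:

```python
import requests

# Hypothetical collection-search endpoint and parameters (not the V&A's actual API).
resp = requests.get(
    "https://api.example-museum.org/v1/objects",
    params={"q": "teapot", "page_size": 10},
)
resp.raise_for_status()

# Hypothetical response shape: a JSON document containing a list of object records.
for record in resp.json()["records"]:
    print(record["title"], "-", record["object_number"])
```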

Brian Kelly from UKOLN talked about an emerging best practice for the development of APIs and the importance of everyone not going it alone. Several in the audience at both Richard's and Brian's sessions were uneasy about the focus on APIs as a means for sharing content – "surely we already have OAI etc?". But as one attendee anonymously pointed out, yes, many museums have OAI – but by not publicising it or providing easy access, their Open Archives Initiative endpoint is really 'CAI': a closed one.
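The 'CAI' quip stings because harvesting from a published OAI-PMH endpoint really is trivial – it is just HTTP and XML. A minimal sketch (the base URL is a placeholder; a real repository advertises its own):

```python
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# Ask the repository for its records in simple Dublin Core.
resp = requests.get(
    "https://collections.example.org/oai",  # placeholder OAI-PMH base URL
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
)
root = ET.fromstring(resp.content)

# Print the Dublin Core title of each harvested record.
for record in root.iter(OAI + "record"):
    title = record.find(".//" + DC + "title")
    if title is not None:
        print(title.text)
```

The hard part was never the protocol – it is knowing the endpoint exists at all, which was exactly the complaint.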

And APIs still don't get around the thorny issues of intellectual property. (I've been arguing that we need to sort out our content licensing first in order to reduce the complexity of the T&Cs of our APIs.)

As Piotr from the Met, author of the excellent Museum Pipes, shows time and time again, the real potential of APIs and the like only becomes apparent once people start making interesting prototypes with the data. Frankie Roberto (ex-Science Museum and now at Rattle) showed me Rattle's upcoming Muddy service – they've taken Powerhouse data and done some simple visualisations.

APIs from a select few museums will probably put a rocket under the sector and really open up data sharing – however, we need some great case studies to emerge before the true potential is realised.

Geolocation

Another theme to reach the broader community this year was geolocation. Amongst a bunch of great projects showing the potential of geo-located content for storytelling and connecting with audiences was the rather excellent PhillyHistory site. The ability to find photos near where you grew up has resulted in some remarkable finds for the project, as well as a healthy bit of revenue generation – $50,000 from purchases of personal images.

Aaron Straup Cope, geo-genius at Flickr, delivered another of his entertaining and witty presentations, covering some of the problems with geo-coding. In so doing he revealed that most of the geo-coded photos on Flickr are in fact hand geo-coded – that is, people opening a map, navigating to where they think they took the photo, and sticking in a pin. The map is not the territory – the borders of my neighbourhood are not the same as yours, and neither of ours are the same as those formalised by government agencies. This is as true for local spaces as it is for obviously contested territories. The issue for geocoders, then, is how to map these "perceptions of boundaries". Aaron's slides are up on his blog and are worth a gander – they raise a lot of questions for those of us working with community memory.
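One way to see what 'perceived boundaries' means in practice: take the hand-placed pins a user associates with a neighbourhood and derive a shape from them. The toy sketch below uses a convex hull via shapely – a much cruder construction than the alpha shapes Flickr used to derive neighbourhood shapefiles from geotagged photos – and the coordinates are made up:

```python
from shapely.geometry import MultiPoint

# Made-up (longitude, latitude) pins hand-placed by one user for "their" neighbourhood.
pins = [
    (-86.158, 39.768), (-86.160, 39.771), (-86.155, 39.770),
    (-86.157, 39.766), (-86.162, 39.769),
]

# One person's perceived neighbourhood; someone else's pins yield a different polygon,
# and neither need match the official administrative boundary.
perceived_boundary = MultiPoint(pins).convex_hull
print(perceived_boundary.wkt)
```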

Galleries

Nina Simon made her MW debut with a fun workshop challenging all of us in the web space to 'get out of our (web) ghetto' and tackle the challenge of in-gallery participatory environments. Her slides (made using Prezi) covered several examples of real-world tagging, polling, collaborative audience decision-making and social interactions. The challenge to the audience to "imagine a museum as being like . . . " elicited some very funny responses, and Nina has expanded on them on her blog.

I don't entirely agree with Nina's call to action – the nature and type of participation and expectation vary greatly between science centres, history museums, and art museums. And there are complex reasons why participatory behaviours are sometimes more obviously visible online – and why many in-gallery behaviours are impossible to replicate online.

But the call to work with gallery designers is much needed. All too often there is a schism between the teams responsible for online and in-gallery interactions – technologically-mediated or not.

Kevin von Appen's paper on the final day complicates matters even more. Looking at the outcomes of a YouTube 'meet up' at the Ontario Science Centre, Kevin and the OSC team struggled to work out what the real impact of the meet up was. Well attended, with people choosing to fly in from as far away as Australia, 888Toronto888 would have seemed a huge success – however:

Clearly, meetup participants were first and foremost interested in each other. The OSC was the context, not the star. Videos that showcased the meetup-as-party/science center-as-party-place positioned us as a cool place for young adults to hang out, and that’s an audience we’d like to grow.

It wasn't cheap either – the final figure worked out at $95 per participant. Clearly, if we want more 'participatory experiences' in our museums it isn't going to be cheap. And if we want audiences to have ownership of our spaces then we may need to rethink what our spaces are.

(As an aside, I finally learnt why art museums have more staff in their galleries than other types of museums – one per room – albeit not necessarily engaging with audiences! According to my knowledgeable source, art museums have found that it is cheaper to hire people to staff the galleries than it is to try to insure the irreplaceable works inside.)

“The switch”

One of the side streams of MW this year was a fascination with 'the switch'. This arose from some late-night shenanigans in the 'spinny bar' – a revolving restaurant atop the Hyatt. The 'switch' was what turned the bar's rotation on and off, and on the final day a small group were ushered into the bar and witnessed the 'turning on'. Charles, the head of engineering at the hotel, gave us a one-hour private tour of the 'switch' and the motor that ran the bar – it was fascinating, and a timely reminder of the value of the 'private tour' and the 'behind the scenes'. In return, Charles asked all of us plenty of questions about the role of technology in his children's education and how to get the most out of it.

We need more museum experiences like this!

One reply on “MW2009 Clouds, Switches, APIs, Geolocation and Galleries – a shoddy summary”

Seb, thanks for this great summary of MW. I only managed to be there in person for a single day, so the context you provide to my limited view is really valuable.

About APIs: I sat with the group of doubters you reference. In terms of sharing/aggregating data, I don’t see that they get us all that far. Metasearching a plethora of APIs to get to an approximation of an x-collection search is not realistic. Most APIs don’t allow you to take possession of the data, and without taking possession of the data, you can’t reconcile the inevitable and probably significant inconsistencies in the data. And we don’t have agreement on the data structure buried within the API describing the content. APIs may be good for something, but aggregating content probably isn’t it.
