Categories: Conferences and event reports, MW2009

MW2009 Clouds, Switches, APIs, Geolocation and Galleries – a shoddy summary

(Disclaimer – this is a rushed post cobbled together from equally rushed notes!)

Like most years, this year's Museums and the Web (MW2009) was all about the people. Catching up with people, putting faces to names, and having heated discussions in a revolving restaurant atop the conference venue in Indianapolis. The value of face-to-face is even greater for those travelling from outside the USA – for most of us it is the only chance to catch up with many people.

Indianapolis is a flat city surrounded by endless corn fields, which accounts for the injection of corn syrup into every conceivable food item. No one seems to walk, preferring four wheels to two legs – making for a rather desolate downtown and a highly focussed conference event with few outside distractions.

The pre-conference day was full of workshops. I delivered two – one with Dr Angelina Russo on planning social media, and the other an exhausting and hopefully exhaustive examination and problematising of traditional web metrics and social media evaluation. With that out of the way I settled back and took in the rest of the conference.

MW2009 opened with a great keynote from Maxwell Anderson, director of the Indianapolis Museum of Art. Max's address can be watched in full (courtesy of the IMA's new art video site, ArtBabble) and is packed with some great moments – here's a museum director who gets the promise of the web and digital and isn't caught up in the typical physical vs virtual dichotomy. With Rob Stein's team at the IMA, the museum has been able to test and experiment with a far more participatory and open way of working while they (still) work out how to bring the best of those changes into the galleries as well.

After the opening keynote it was into split sessions. Rather than cover everything I saw, I'll zero in on the key things I took away, cribbed straight from my notes. I've left a fair bit out, so make sure you head over to Archimuse and digest the papers.

Using the cloud

In the session on cloud computing, Charles Moad, one of the IMA's developers, delved deep into the practicalities of using Amazon Web Services (AWS) for hosting web applications. His paper is well worth a read, and everyone in the audience was stunned by the efficiencies, the flexibility (suddenly extra load? just start up another instance of your virtual servers!), and the incredibly low cost of the AWS proposition. I'm sure MW2010 will have a lot of reports of other institutions using cloud hosting and applications.
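
Charles's talk wasn't about code samples, but to give a flavour of that 'just start another instance' flexibility, here is a minimal sketch using Amazon's current boto3 Python library. The AMI ID, instance type and region below are placeholders, not anything the IMA actually uses:

```python
# Minimal sketch: spin up one extra virtual server on AWS EC2 to absorb a
# traffic spike, then shut it down again afterwards. Placeholder values throughout.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Launch a single additional instance from a prepared machine image (AMI)
# that already has your web stack baked in.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",           # placeholder size; pick to match the load
    MinCount=1,
    MaxCount=1,
)
new_id = response["Instances"][0]["InstanceId"]
print(f"Started extra web server: {new_id}")

# When the spike passes, terminate it and stop paying for it.
ec2.terminate_instances(InstanceIds=[new_id])
```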

Following Charles, Dan Zambonini from Box UK – who works with, but isn't in, the museum sector – showed off the second public iteration of Hoard.it. Last year Hoard.it caused a kerfuffle by screen-scraping collection records from various museum collections without asking. This year Dan provoked the audience by asking what the real value is of efforts like the multimillion-euro Europeana project. Dan reckons that museums should focus on being service providers – echoing some of what Max Anderson had said in the keynote. According to Dan, museums have a lot to offer in terms of "expertise, additional media, physical space, reputation & trust, audience, voice/exposure/influence" – and these are rarely reflected in how most museums approach the 'problem' of online collections.

APIs

Last year there was a lot of talk of museum APIs at MW – then in November the New Zealanders trumped everyone by launching Digital NZ. But in the US it has been the Brooklyn Museum's launch of its API a little while ago that seems to have put the issue in front of the broader museum community.

Richard Morgan from the V&A introduced the private beta of the V&A's upcoming API (JSON/REST) and presented a rather nice mission statement – "we provide a service which allows people to construct narrative and identity using museum content, space and brand". Interestingly, to create their API they have effectively had to scrape their own existing online collection!
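
For readers who haven't consumed one of these before, the client side of a JSON/REST collection API tends to look something like the sketch below. The V&A's beta wasn't publicly documented at the time, so the endpoint, parameters and field names here are invented purely for illustration:

```python
# Illustrative only: fetch and print a few collection records from a
# hypothetical JSON/REST museum API. The URL, parameters and response
# field names are all made up.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

BASE_URL = "https://api.example-museum.org/v1/objects"  # hypothetical endpoint

params = urlencode({"q": "teapot", "limit": 5})
with urlopen(f"{BASE_URL}?{params}") as response:
    data = json.load(response)

for record in data.get("records", []):          # field names are assumptions
    print(record.get("title"), "-", record.get("object_number"))
```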

Brian Kelly from UKOLN talked about emerging best practice for the development of APIs and the importance of everyone not going it alone. Several in the audience at both Richard's and Brian's sessions were uneasy about the focus on APIs as a means for sharing content – "surely we already have OAI etc?". But as one attendee anonymously pointed out, while many museums do have OAI, by not publicising it or providing easy access their OAI is really 'CAI' – a closed archive rather than an open one.
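
To be fair to the "we already have OAI" camp, the client end of an OAI-PMH harvest really is just an HTTP GET with a couple of standard parameters. The repository URL below is a placeholder; the verb and metadataPrefix parameters come from the OAI-PMH 2.0 specification itself:

```python
# Minimal OAI-PMH harvest request: list Dublin Core records from a repository
# and print their titles. Base URL is a placeholder.
import xml.etree.ElementTree as ET
from urllib.request import urlopen
from urllib.parse import urlencode

BASE_URL = "https://collections.example-museum.org/oai"  # placeholder repository

params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{BASE_URL}?{params}") as response:
    tree = ET.parse(response)

# Print the Dublin Core titles of whatever came back.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(f"{DC}title"):
    print(title.text)
```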

And APIs still don't get around the thorny issues of intellectual property. (I've been arguing that we need to sort out our content licensing first in order to reduce the complexity of the terms and conditions attached to our APIs.)

As Piotr from the Met, author of the excellent Museum Pipes, shows time and time again, the real potential of APIs and the like only becomes apparent once people start making interesting prototypes with the data. Frankie Roberto (ex-Science Museum and now at Rattle) showed me Rattle's upcoming Muddy service – they've taken Powerhouse Museum data and done some simple visualisations.

APIs from a select few museums will probably put the rocket under the sector that is needed to really open up data sharing – however, we need some great case studies to emerge before the true potential is realised.

Geolocation

Another theme to reach the broader community this year was geolocation. Amongst a bunch of great projects showing the potential of geo-located content for storytelling and connecting with audiences was the rather excellent PhillyHistory site. The ability to find photos near where you grew up has resulted in some remarkable finds for the project, as well as a healthy bit of revenue generation – $50,000 from purchases of personal images.

Aaron Straup Cope, geo-genius at Flickr, delivered another of his entertaining and witty presentations, covering some of the problems with geo-coding. In so doing he revealed that most of the geo-coded photos on Flickr are in fact hand geo-coded. That is, people opening a map, navigating to where they think they took the photo, and sticking in a pin. The map is not the territory – the borders of my neighbourhood are not the same as yours, and neither of ours are the same as those formalised by government agencies. This is the case as much for obviously contested territories as it is for local spaces. The issue for geocoders, then, is how to map these "perceptions of boundaries". Aaron's slides are up on his blog and are worth a gander – they raise a lot of questions for those of us working with community memory.

Galleries

Nina Simon made her MW debut with a fun workshop challenging all of us in the web space to 'get out of our (web) ghetto' and tackle the challenge of in-gallery participatory environments. Her slides (made using Prezi) covered several examples of real-world tagging, polling, collaborative audience decision-making and social interactions. The challenge to the audience to "imagine a museum as being like . . . " elicited some very funny responses, and Nina has expanded on them on her blog.

I don't entirely agree with Nina's call to action – the nature and type of participation, and the expectations around it, vary greatly between science centres, history museums, and art museums. And there are complex reasons why participatory behaviours are sometimes more obviously visible online – and why many in-gallery behaviours are impossible to replicate online.

But the call to work with gallery designers is much needed. All too often there is a schism between the teams responsible for online and in-gallery interactions – technologically-mediated or not.

Kevin von Appen's paper on the final day complicates matters even more. Looking at the outcomes of a YouTube 'meet up' at the Ontario Science Centre, Kevin and the OSC team struggled to work out what the real impact of the meet up was. Well attended, and with people choosing to fly in from as far away as Australia, it would have seemed as if 888Toronto888 was a huge success. However –

Clearly, meetup participants were first and foremost interested in each other. The OSC was the context, not the star. Videos that showcased the meetup-as-party/science center-as-party-place positioned us as a cool place for young adults to hang out, and that’s an audience we’d like to grow.

It wasn't cheap either – the final figure worked out at $95 per participant. Clearly, if we want more 'participatory experiences' in our museums it isn't going to be cheap. And if we want audiences to have ownership of our spaces then we may need to rethink what our spaces are.

(As an aside, I finally learnt why art museums have more gallery staff in the galleries than other types of museums – one per room – albeit not necessarily engaging with audiences! According to my knowledgeable source, art museums have found that it is cheaper to hire people to staff the galleries than it is to try to insure the irreplaceable works inside.)

“The switch”

One of the side streams of MW this year was a fascination with 'the switch'. This arose from some late-night shenanigans in the 'spinny bar' – a revolving restaurant atop the Hyatt. The 'switch' was what turned the bar's rotation on and off, and on the final day a small group of us were ushered into the bar to witness the 'turning on'. Charles, the head of engineering at the hotel, gave us a one-hour private tour of the 'switch' and the motor that ran the bar – it was fascinating, and a timely reminder of the value of the 'private tour' and the 'behind the scenes'. In return, Charles asked all of us plenty of questions about the role of technology in his children's education and how to get the most out of it.

We need more museum experiences like this!

Categories: Exhibition technology, MW2009

MW2009 – Multi-touch: what does this technology hold for future museum exhibits?

Hi I’m Paula Bray and I usually blog over at Photo of the Day.

Today, whilst Seb was slaving away giving two workshops in a row at Museums and the Web 2009, I spent the day with Jim Spadaccini and Paul Lacey in a great, full-day workshop called 'Make It Multi-touch' that showcased their custom-built 50" touch-table. You can view it over at Ideum.

We got inside information on how this technology was developed, from the initial prototype back in September 2008 – which featured a dual-mirror, two-camera solution and the need to process complicated gestures quickly – through two further prototypes to the final product you can see here. The table detects 'blobs' (the reflections of fingers on the surface), which are fed to software that interprets them as touch, drag and drop, pinch and expand, drawing, rotate and double tap – all intuitive to the user within a short time-frame. The aim is to provide an interactive social experience that is very different to traditional computer-based interactive exhibits, which tend to isolate the experience to one visitor.
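
This isn't Ideum's code, but as a toy sketch of the underlying idea, turning two tracked 'blobs' into a pinch or expand gesture comes down to watching the distance between them change from frame to frame:

```python
# Toy sketch of blob-to-gesture interpretation: two finger "blobs" are tracked
# as (x, y) points each frame; a growing gap means expand (zoom in), a
# shrinking gap means pinch (zoom out). Real multi-touch frameworks do far more.
from math import dist

def classify_two_finger_gesture(prev_blobs, curr_blobs, threshold=5.0):
    """prev_blobs / curr_blobs: [(x1, y1), (x2, y2)] in screen pixels."""
    prev_gap = dist(*prev_blobs)
    curr_gap = dist(*curr_blobs)
    change = curr_gap - prev_gap
    if change > threshold:
        return "expand"      # fingers moving apart -> zoom in
    if change < -threshold:
        return "pinch"       # fingers moving together -> zoom out
    return "hold/drag"       # roughly constant gap -> treat as a drag

print(classify_two_finger_gesture([(100, 100), (200, 100)],
                                  [(80, 100), (230, 100)]))  # -> "expand"
```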

What can we learn from the public about using museum collections and content through technology such as multi-touch? This form of technology may be a novelty for some at this stage, but the future design of this product holds potential for change across many museum applications.

Scenario: multi-touch tables are available in a museum exhibition for the public to use and interact with exhibition content. Images of collection objects can be moved across the table, and details can be zoomed in on through simple "blob" (finger) movements. Descriptive information about an object can be drawn from XMP metadata stored in the image file, location data can be retrieved, and the user can create their own exhibit and learning experience. This is a very different kind of application, one that can change the visitor's experience. Do we need to compete with devices that are currently available at home, and make the museum version social and educational? Does fixed navigation work anymore?
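
As a very rough sketch of the XMP part of that scenario (not tied to any particular vendor's software, and the filename is hypothetical), the descriptive metadata lives as an XML packet embedded in the image file and can be pulled out with nothing more than the Python standard library:

```python
# Rough sketch: pull the embedded XMP packet out of an image file and print
# the Dublin Core description stored in it. Assumes the file actually contains
# an XMP packet; production code would use a proper XMP library instead.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def read_xmp_description(path):
    raw = open(path, "rb").read()
    start = raw.find(b"<x:xmpmeta")
    end = raw.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None                      # no XMP packet in this file
    packet = raw[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "ignore")
    root = ET.fromstring(packet)
    desc = root.find(f".//{DC}description")
    return "".join(desc.itertext()).strip() if desc is not None else None

print(read_xmp_description("collection_object.jpg"))  # hypothetical filename
```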

Multi-touch technology has the potential to change the museum experience, and it will be interesting to watch it develop. Will the public start to expect to come to museums to interact with exhibits in this new way?

This is definitely more than a “big-ass table”.

Post & photography by Paula Bray

Categories: MW2009, Web metrics

Better web metrics for museums – an MW09 workshop, April 2009

The Museums and the Web 2009 programme is now out and registration has started. This year the action takes place in Indianapolis and many of us faraway people are looking forward to checking out the IMA.

If you attended MW last year or the recent National Digital Forum in NZ, or maybe your organisation has had one of my private workshop sessions, you might have heard my rant about the dire problems with how museums ‘measure’ the success or otherwise of their websites and online projects.

My paper on the subject from last year's MW still stands, but now I've fleshed the content out into a half-day workshop.

This year's workshop in Indianapolis is now taking bookings and is limited in capacity (unlike last year). We're going to be doing a lot more digging into participants' own sites, and I'm hoping everyone who attends will share a month's worth of data for comparison and analysis.

I’m going to be building this into a solid foundational workshop for basic web analytics as well as a specialised look at the sort of metrics museums, libraries, archives and government web projects need to be engaging with.

If this sounds like it is of interest to you and you happen to be coming to MW09, then register and book a place.