Categories: MW2012

Call for submissions: Epic Fail at Museums & the Web 2012, San Diego

Jane Finnis (Culture24) and I are hosting the closing plenary at Museums & the Web in San Diego this year. We’ve called it Epic Fail and we’re going to be shining a light on the failures that we, individually and collectively, have had as project teams, institutions, and maybe even the sector as a whole.

Inspired by the valuable lessons we’ve learned personally from over-sharing our own failures on our blogs, and the growing trend in the non-profit and social enterprise sectors to share, analyse, and learn from failures – we think the time has come for Museums and the Web to recognise the important role that documenting failures plays in making our community stronger.

Failure?

Well, taking a cue from FailFaire, there are many common reasons for failure in the non-profit sector –

1. The project wasn’t right for the organisation (or the organisation wasn’t right for the project)
2. Tech in search of a problem
3. Must-be-invented-here syndrome
4. Know thy end-users
5. Trying to please donors rather than beneficiaries (and chasing small pots of money)
6. Forgetting people
7. Feature creep
8. Lack of a backup plan
9. Not connecting with local needs
10. Not knowing when to say goodbye

Sound familiar? I thought so.

So . . .

We’re doing a call out for ‘failures’ to be featured in our closed-door session (that means no tweeting, no live blogging).

Each Fail will be presented in a short 7-10 minute slot followed by 10 minutes of panel and open-mic discussion. Each Fail needs to be presented by someone who worked on the project – this isn’t a crit-room – and we want you to feel comfortable enough to be honest and open. We want you to explore the reasons why you thought the project was a failure, diagnose where it went wrong, consider what you would do differently, and then collectively discuss the key lessons for future projects of a similar nature or targeting similar people.

Maybe, like me, you did an early project with QR codes that didn’t take into account the lighting situation in your exhibition, not to mention the lack of wifi? Or maybe a mobile App for which you forgot to negotiate signage in the exhibition space? Or an amazing content management system that failed to address the internal culture and workflow for content production and ended up not being used?

In fact, in my career, I can’t think of any project that hasn’t had its own share of failure. But in most cases I’ve been able to address the problem and iterate, or, if necessary, as they say in the startup game, ‘pivot’.

The more significant the failure, the greater its potential to be an agent of change.

So, if you are coming to Museums and the Web in San Diego in April this year, get in touch to nominate your project for a spot! We promise to create a safe environment for sharing these important lessons and end this year’s conference on a high.

Get in touch with the Fail Team – epicfail [at] freshandnew [dot] org

Categories: Museum blogging

Six weeks in and Cooper-Hewitt Labs launches

The last six weeks have been a bit of a blur – settling into a new city, a new job, trying to find proper coffee nearby (still unsuccessful!). As you do in a new job, my first weeks have been spent looking at the lie of the land and analysing the data available about the land itself (and configuring better data collection tools if the data you have isn’t suitably illuminating).

The Cooper-Hewitt has just closed its last exhibition for a little while and the focus is firmly on the museum’s re-building and getting all the back of house digital infrastructure up to date and in order.

The question that underpins most of what comes after that is clearly – “how can a museum make the most of online and digital operations when its buildings are closed?”.

So . . .

Today we launched a new blog over at the Cooper-Hewitt – Cooper-Hewitt Labs. This one focusses on the work my team is doing – and the challenges that lie ahead. Being the Labs, we’re going to be undertaking a range of experiments that we’re going to need your help with, as well as offering some opportunities to intern with us (hint! hint!).

Go check out the Cooper-Hewitt Labs. (And don’t forget to leave a little offering for the tanuki while you are there.)

(awesome animated gif by Fealoki!)

Categories: Conceptual Interviews

The museum website as a newspaper – an interview with Walker Art Center

There’s been a lot of talk following Koven Smith’s (Denver Art Museum) provocation in April – “what’s the use of the museum website?”. Part driven by the rapid uptake of mobile and part driven by the existential crisis brought on by Koven, many in the community have been thinking about how to transform the digital presence of our institutions and clients.

At the same time Tim Sherratt has been on a roll with a series of presentations and experiments that are challenging our collections and datasets to be more than just ‘information’ on the web. He calls for collecting institutions “to put the collections themselves squarely at the centre of our thoughts and actions. Instead of concentrating on the relationship between the institution and the public, we can focus on the relationship we both have with the collections”.

Travelling back in time: in 2006 at the Powerhouse we made a site called Design Hub. Later the name was reduced to D’Hub, but the concept remained the same. D’Hub was intended to be a design magazine website, curated and edited by the museum, drawing upon the collection to engage with and document design events, people and news from that unique perspective. For the first two years it was one of the Powerhouse’s most successful sites – traffic was regularly 100K+ visits per month – and the content was as continuous as it could be given the resourcing. After that, however, with editorial changes the site began to slip. It has just relaunched with a redesign and new backend (now WordPress). Nicolaas Earnshaw at the Powerhouse gives a great ‘behind the scenes’ teardown of the recent rebuild process on their new Open House blog.

It is clear that the biggest challenge with these sorts of endeavours is the editorial resourcing – anything that isn’t directly museum-related is very easily rationalised away and into the vortex, especially when overall resources are scarce.

So with all that comes the new Walker Art Center website. Launched yesterday, it represents a potential paradigm shift for institutional websites.

I spoke to Nate Solas, Paul Schmelzer and Eric Price at the Walker Art Center about the process and thinking behind it.

F&N: This is a really impressive redesign and the shift to a newspaper format makes it so much more. Given that this is now an ‘art/s newspaper’, what is the editorial and staffing model behind it? Who selects and curates the content for it? Does this now mean ‘the whole of Walker Art Center’ is responsible for the website content?

Paul Schmelzer (PS): The Walker has long had a robust editorial team: two copy editors, plus a managing editor for the magazine, but with the content-rich new site, an additional dedicated staffer was necessary, so they hired me. I was the editor of the magazine and the blogs at the Walker from 1998 until 2007, when I left to become managing editor of an online-only national political news network. Coming back to the Walker, it’s kind of the perfect gig for me, as the new focus is to be both in the realm of journalism — we’ll run interviews, thinkpieces and reportage on Walker events and the universe we exist in — and contemporary art. While content can come from “the whole of the Walker Art Center,” I’ll be doing a lot of the content generation and all of the wrangling of content that’ll be repurposed from elsewhere (catalogue essays, the blogs, etc) or written by others. I strongly feel like this project wouldn’t fly without a dedicated staffer to work full-time on shaping the presentation of content on the home page.

F&N: The visual design is full of subtle little newspaper-y touches – the weather etc. What were the newspaper sites the design team was drawing upon as inspiration for the look and feel?

Nate Solas (NS): One idea for the homepage was to split it into “local, onsite” and “the world”. A lot of the inspiration started there, playing with the idea that we’re a physical museum in the frozen north, but online we’re “floating content”. We wanted to ground people who care (local love) but not require that you know where/who we are. “On the internet, nobody knows you’re a dog”.

The “excerpts” of articles were another hurdle we had to solve to make it feel more “news-y”. I built a system to generate nice excerpts automatically (aware of formatting, word endings, etc), but it wasn’t working to “sell the story” in most cases. So almost everything that goes on the homepage is touched by Paul, but we use the excerpt system for old content we haven’t manually edited.
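Nate’s excerpt system isn’t public, but the core idea – strip the markup, then cut on a word boundary rather than mid-word – might look something like this minimal Python sketch (a toy version only; the real system is also aware of formatting and endings):

    import re

    def excerpt(html, limit=200):
        """Toy auto-excerpt: strip tags, collapse whitespace, then cut
        at the last word boundary before the limit."""
        text = re.sub(r'<[^>]+>', ' ', html)      # drop markup
        text = re.sub(r'\s+', ' ', text).strip()  # collapse whitespace
        if len(text) <= limit:
            return text
        cut = text.rfind(' ', 0, limit)           # never split a word
        return text[:cut if cut > 0 else limit] + '...'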

PS: Yeah, the subtle touches like the weather, the date that changes each day, and the changing hours/events based on what day it is all serve as subtle reminders that we’re a contemporary art center, that is, in the now. The churn of top stories (3-5 new ones a week) and Art News from Elsewhere items (5-10 a day, ideally) reinforces this aspect of our identity. The design team looked at a wide range of news sites and online magazines, from the New York Times to Tablet Magazine to GOOD.

Eric Price (EP): Yeah, NYTimes, Tablet, and Good are all good. I’d add Monocle maybe. Even Gawker/Huffington Post for some of the more irreverent details. We were also taking cues from print – we’re probably closest in design to an actual printed newspaper.

F&N: I love the little JS tweaks – the way the article recommendations slide out at the base of an article when you scroll that far – the little ‘delighters’. What are you aiming for in terms of reader comments and ‘stickiness’? What are your metrics of success? Are you looking at any newspaper metrics to combine with museum-y ones?

NS: It’s a tricky question, because one of the driving factors in this content-centric approach is that it’s ok (good even) to send people away from our site if that’s where the story is. We don’t have a fully loaded backlog of external articles yet (Art News from Elsewhere), but as that populates it should start to show up more heavily in the Recommendation sections. So the measure of success isn’t just time on site or pageviews, but things like – did they make it to the bottom of the article? Did they stay on the page for more than 30 seconds (actually read it)? Did they find something else interesting to read?

My dream is for the site to be both the start of, and links in, a chain of Wikipedia-like surfing that leads from discovery to discovery, and suddenly an hour’s gone by. (We need more in-article links to get there, but that’s the idea.)

So, metrics. I think repeat visitors will matter more. We want people to be coming back often for fresh & new content. We’ll also be looking for a bump in our non-local users, since our page is no longer devoted to what you can do at the physical space. I’m also more interested in deep entrance pages and exit pages now, to see if we can start to infer the Wikipedia chain of reading and discovery. Ongoing.

F&N: How did you migrate all the legacy content? How long did this take? What were the killer content types that were hardest to force into their new holes?

NS: Content migration was huge, and is ongoing. We have various microsites and wikis that are currently pretty invisible on the new site. We worked hard to build reliable “harvesting” systems that basically pulled content from the old system every day, but were aware of and respected local changes. That worked primarily for events and articles.

A huge piece of the puzzle is solved by what we’re calling “Proxy” records – a native object that represents pretty much anything on the web. We are using the Goose Article Extractor to scrape pages (our own legacy stuff, mostly) and extract indexable text and images, but the actual content still lives in its original home. We obviously customized the scraper a bit for our blogs and collections, but by having this “wrapper” around any content (and the ability to tag and categorize it locally) we can really expand the apparent reach of the site.
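For the curious, here’s a hedged sketch of what a ‘Proxy’ record could look like. Goose itself was originally a JVM library; the Python port (python-goose) used below is purely illustrative, and the record shape is my assumption, not the Walker’s actual model:

    from goose import Goose  # python-goose; illustrative stand-in for the extractor

    def proxy_record(url):
        """Build a local 'Proxy' record for any page on the web.
        Only extracted text and an image are kept (for the search index
        and 'more like this' seeding) -- the content stays at its home."""
        article = Goose().extract(url=url)
        return {
            'url': url,                          # canonical home of the content
            'title': article.title,
            'index_text': article.cleaned_text,  # used only for search
            'image': article.top_image.src if article.top_image else None,
        }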

F&N: How do you deal with the ‘elsewhere’ content? Do you have content sharing agreements?

NS: [I am not a lawyer and this is just my personal opinion, but] I feel pretty strongly that this is fair use and actually sort of a perfect “use case” for the internet. Someone wrote a good thing. We liked it, we talked about it, and we linked right to it. That’s really the key – we’re going beyond attribution and actually sending readers to the source. We do scrape the content but only for our search index and to seed “more like this” searches, we never display the whole article.

That said, if a particular issue comes up we’ll address it responsibly. We want to be a good netizen, but part of that is convincing people this is a good solution for everyone.

F&N: What backend does the new site run on? Tech specs?

Ubuntu 11.04 VMs
LibVirt running KVM/QEMU hypervisor
Django 1.3 with a few patches, Python 2.7.
Nginx serving static content and proxying dynamic stuff to Gunicorn (Python WSGI).
Postgres 8.4.9
Solr 3.4.0 (Sunburnt Python-Solr interface)
Memcache
Fabric (deployment tool)
ImageMagick (scaling, cropping, gamma)
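Since Fabric is on that list, here’s a minimal sketch of what a deploy task might look like – the host, paths and service names are placeholders, not the Walker’s actual fabfile:

    # fabfile.py -- hypothetical deploy task (Fabric 1.x style)
    from fabric.api import cd, env, run, sudo

    env.hosts = ['web1.example.org']  # placeholder host

    def deploy():
        """Pull the latest code, refresh static files, bounce Gunicorn."""
        with cd('/srv/website'):  # placeholder project root
            run('git pull')
            run('python manage.py syncdb --noinput')         # Django 1.3: no built-in schema migrations
            run('python manage.py collectstatic --noinput')  # statics served directly by Nginx
        sudo('restart gunicorn')  # Nginx keeps proxying while Gunicorn restarts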

F&N: What are you using to enable search across so many content types from events to collections? How did you categorise everything? Which vocabularies?

NS: Under the hood it’s Apache Solr with a fairly broad schema. See above for the trick to index multiple content-types: basically reduce to a common core and index centrally, no need to actually move everything. A really solid cross-site search was important to me, and I think we’re pretty close.

We went back and forth forever on the top-level taxonomy, and finally ended with two public-facing categories: Genre and Type. Genre applies to content site-wide (anything can be in the “Visual Arts” Genre), but Type is specific to the kind of content (Events can be of type “Screenings”, but Articles can’t). The intent was to have a few ways to drill down into content in a cross-site manner, but also keep some finer resolution in the various sections.

We also internally divide things by “Program” – programming department – and this is used to feed their sections of the site and inform the “VA”, “PA”, etc tags that float on content. So I guess this is also public-facing, but it’s more of a visual cue than a browsable taxonomy.

Vocabularies are pretty ad-hoc at this point: we kept what seemed to work from the old site and adjusted to fit the new presentation of content.

The two hardest fights: keeping the list short and public-facing. This is why we opted to do away with “programming department” as a category: we think of things that way, no one else does.
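To make the ‘common core’ indexing idea above concrete: a hedged sketch using sunburnt (the Python-Solr interface listed in the stack above), with the public-facing Genre and Type fields Nate describes. The schema and field names are my assumptions, not the Walker’s actual ones:

    import sunburnt  # Python 2.7, matching the stack above

    si = sunburnt.SolrInterface('http://localhost:8983/solr/')

    # Heterogeneous content types reduced to one core document shape...
    si.add({'id': 'event:123', 'title': 'Film screening', 'genre': 'Visual Arts',
            'type': 'Screenings', 'text': 'indexable text extracted from the event'})
    si.add({'id': 'article:456', 'title': 'Artist interview', 'genre': 'Visual Arts',
            'type': 'Feature', 'text': 'indexable text extracted from the article'})
    si.commit()

    # ...so a single query spans every content type on the site.
    for result in si.query('interview').execute():
        print result['id'], result['title']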

F&N: Obviously this is phase one and there’s a fair bit of legacy material to bring over into the new format – collections especially. How do you see the site catering for objects and their metadata in the future?

NS: Hot on the heels of this launch is our work on the Online Scholarly Catalogue Initiative from the Getty. We’re in the process of implementing CollectionSpace for our collections and sorting out a new DAMS, and will very soon turn our attention to building a new collections site.

An exciting part of the OSCI project for me is really opening up our data and connecting it to other online collections and resources. This goes back to the Wikipedia surfing wormhole: we don’t want to be the dead-end! Offer our chapter of the story and give them more things to explore. (The Stedelijk Museum is doing some awesome work here, but I don’t think it’s live yet.)

F&N: When’s the mobile version due?

NS: It just barely didn’t make the cut for launch. We’re trying to keep the core the same and do a responsive design (inspired by but not as good as Boston Globe). We don’t have plans at the moment for a different version of the site, just a different way to present it. So: soon.

Go and check out the new Walker Art Center site.

Categories: Mobile, User behaviour

Chickens, eggs & QR codes

Adam Greenfield at Urbanscale just posted some interesting research his team has been doing in NYC on citizens’ familiarity with QR codes.

This is especially timely as QR codes are getting a lot of interest (finally) from the cultural sector. The Powerhouse Museum in Sydney has been doing QR codes for a few years – first failing – but now perhaps getting good traction with them since the code scanner is built into the exhibition catalogue App. Shelley Bernstein’s team at the Brooklyn Museum have also been rolling them out. And Wikipedia’s been promoting the nifty language ‘auto-detect’ QR codes that Derby Museum & Art Gallery have developed (QRpedia).

But there are still very valid concerns about the appropriateness of them – especially now that visual recognition is coming along rapidly (see Google Goggles at the Getty) and maybe even NFC might gain traction (see Museum of London’s Nokia trial). QR codes feel very much like a short term intermediate solution that isn’t quite right.

Here’s Greenfield:

While general awareness of the codes was frankly rather higher than we’d expected, and a majority of our respondents knew more or less what they were for, very few … were successfully able to use QR codes to resolve a URL, even when coached by a knowledgeable researcher.

A strong theme that emerged — which we certainly found entirely unsurprising, but which ought to give genuine pause to the cleverer sort of marketers — is that, even where respondents displayed sufficient awareness and understanding of QR codes to make use of them, virtually no one expressed any interest in actually doing so. As one of our respondents put it, “I’ve already seen the ad, and now I’m going to spend my data plan on watching your commercial? No thanks.”

These findings mirror the anecdotal experience most of us have had with QRs ourselves. The value proposition just isn’t obvious – and the amount of scaffolding required to encourage scanning can, in museums, sometimes take up as much visual space as the content that ends up being displayed (especially for object labels).

Is this just a chicken and egg situation? I’m not sure.

Greenfield’s initial findings do show that even when there is awareness there isn’t interest. And, I’d add, even when there is interest, museums need to be especially careful to consider what visitors actually want/expect to see when they scan vs what museums are able to show/tell. This is a crucial distinction that is often missed in discussions of in-gallery content delivery.

Categories: General

Farewell Powerhouse, Hello Cooper-Hewitt National Design Museum

It is official now.

Today I’m leaving the Powerhouse after a long stint to take up a new role as Director of Digital & Emerging Media at the Cooper-Hewitt National Design Museum in New York. I’ll be starting at the Cooper-Hewitt on November 28 (2011).

I’m looking forward to the new challenges and also the opportunities that I hope will flow from being part of the larger Smithsonian Institution whilst being in the cultural epicentre that is New York. I’m especially excited to be working for the Cooper-Hewitt with its high calibre exhibitions and well established national education projects.

I’m continuing to write Fresh & New so don’t fret about any loss of signal. It will just be from a different timezone – and possibly, over time, a slightly different set of spelling conventions.

I’d like to thank the Powerhouse for its support over many years – the teams I’ve managed and my colleagues are all kinds of awesome. My digital colleagues have made the workplace one where ideas have flourished and everyone has been committed to trying out new things fueled by coffee, sugary treats, and a sense of mirth. I’ve been extraordinarily lucky to have worked with such people.

Of course none of the work that’s been done would have been possible without the rest of the Powerhouse, especially the curatorial, registration, and education staff who’ve been at the frontline of how the ‘new museum’ has adapted to rapid technological change. The IT team at the Powerhouse, where I first began as an employee, has also been instrumental in providing a flexible technology environment in which to test and trial new ideas, and they embody the notion that a real IT department should be ‘enablers’, not just ‘fixers’.

I also need to thank my series of supervisors over the years each of whom has supported experimentation and encouraged the prototyping of many wild ideas. I hope my own management style has learned from them.

Most of all I’ve made some (hopefully) lifelong friendships working at the Powerhouse and I’m going to miss hanging out and making stuff with such great people.

It also needs to be said that the Powerhouse, as a workplace, provided a rare luxury – a job that offered great creative stimulation and opportunity, flexible working hours and work/life balance, even within the constraints of a shrinking public service. The opportunity to do ‘purposeful work’ – not just a job – is a luxury not afforded to many and one that needs to be seized.

And of course, “done is the engine of more”.

Now let’s see how it turns out in “the city that is a goal”.

Fresh and New readers should also keep an eye on a new technology and museology blog from the Powerhouse being coordinated by Paula Bray called Open House. It is going to be broader in focus and draw in contributions from across the Powerhouse so make sure you add it to your RSS reader.

Categories: API, Collection databases, Search

Museum collection meets library catalogue: Powerhouse collection now integrated into Trove

The National Library of Australia’s Trove is one of those projects where it is only after it is built and ‘live in the world’ that you come to understand just how important it is. At its most basic, Trove provides a meta-search of disparate library collections across Australia as well as the cultural collections of the National Library itself. Being an aggregator, it brings together a number of different National Library products that used to exist independently, such as the very popular Picture Australia, under the one Trove banner.

Not only that, Trove has a lovely (and sizeable) user community of historians, genealogists and enthusiasts that diligently goes about helping transcribe scanned newspapers, connect up catalogue records, and add descriptive tags and extra research to them.

Last week Trove ingested the entirety of the Powerhouse’s digitised object collection. Trove has had the collection of the Museum’s Research Library for a while, but now it has the Museum’s objects too.

So this now means that if, in Trove, you are researching Annette Kellerman, you come across all the Powerhouse objects in your search results too – not just books about Kellerman but also her mermaid costume and other objects.

The Powerhouse is the first big museum object collection to have been ingested by Trove. This is important because over the past 12 months Trove has quickly become the first choice of the academic and research communities, not to mention family historians and genealogists. As one of the most popular Australian Government-run websites, Trove has become the default start point for these types of researchers, so it makes sense that museum collections be well represented in it.

The Powerhouse had been talking about integrating with Trove and its predecessor sub-projects for at least the last five years. Back in the early days the talk was mainly about exposing our object records using OAI, but Trove has used the Powerhouse Collection API to ingest. The benefits of this have been significant – and surprising. Much richer records have been able to be ingested, and Trove has been able to merge and adapt fields using the API as well as infer structure to extract additional metadata from the Powerhouse records. Whilst this approach doesn’t scale to other institutions (unless others model their API query structure on that of the Powerhouse), it does give end-users access to much richer records on Trove.
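To sketch the difference: an aggregator harvesting from a JSON collection API gets structured fields it can merge and adapt, where an OAI-PMH feed typically flattens everything to Dublin Core. The endpoint and field names below are hypothetical, not the actual Powerhouse API (which requires a registered key):

    import json
    import urllib2

    API = 'http://api.example.org/collection/objects/%s?format=json'  # hypothetical

    def harvest(object_id):
        """Fetch one richly structured object record for ingestion."""
        record = json.load(urllib2.urlopen(API % object_id))
        return {
            'title': record.get('title'),
            'description': record.get('description'),
            'subjects': record.get('subjects', []),  # structure an aggregator can exploit
        }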

After the Trove integration quietly went live last week there was an immediately noticeable flow of new visitors to collection records from Trove. And because Trove has used the API, these visits can be accurately attributed to Trove. The Powerhouse will be keeping an eye on how these numbers grow and what sorts of collection areas Trove is bringing new interest to – and whether these interests differ from those of visitors arriving at collection records on the Powerhouse site through organic search, onsite search, or other places that have integrated the Powerhouse collection, such as Digital NZ.

Stage two of Trove integration – soon – is planned to allow the Powerhouse to ingest any user generated metadata back into the Powerhouse’s own site – much in the way it had ingested Flickr tags for photographs that are also in the Commons on Flickr.

This integration also signals the irreversible blending of museum and library practice in the digital space.

Only time will tell if this delivers more value to end users than expecting researchers to come to institutional websites. But I expect that this sort of merging – much like the expanding operations of Europeana – does suggest that in the near future museum collections will need to start offering far more than a ‘rich catalogue record’ online to pull visitors in from aggregator products (and ‘communities of practice’) like Trove to individual institutional websites.

Categories: Mobile, User behaviour

Early MoveMe wi-fi heat maps from Love Lace exhibition

Several months ago I announced that the Powerhouse Museum was a partner in the MoveMe pilot project funded under the NSW Government’s Collaborative Solutions Program.

We’ve been working with Ramp, MOB Labs, ShopperTrak and Smarttrack RFID to deploy the pilot in our recent Love Lace exhibition.

This exhibition is ideal for trialling location-aware content delivery because it is already kitted out with public wi-fi and we have a free cross-platform (iOS and Android) exhibition App. Even better, the exhibition uses QR codes and the QR code reader in the exhibition App, which gives the pilot project a great baseline to compare usage against.

While we don’t yet have the location aware content delivery working – that will come in a future version of the exhibition App – we have started to get access to wi-fi tracking data using the ShopperTrak system. As explained by Christopher Ainsley & Julian Bickersteth in their paper for Museums & the Web earlier this year, the ShopperTrak system is already used to create heatmaps and visitor journeys through shopping centres (or ‘malls’ as some readers might describe them).

The first data has started to emerge from the system and it is already very interesting.

Here’s a dwell time heat map that shows the areas of the exhibition where the wi-fi enabled devices (presumably carried by visitors) spend the longest time. This shows data from Sunday Oct 30 and 226 tracked devices.



A couple of important caveats.

Whilst the sample sizes are unexpectedly high (largely because the wi-fi tracking doesn’t require an actual connection to our wi-fi network, just that wi-fi is switched on on the device/phone), the rate at which devices are ‘pinged’ is quite low. iOS devices, for example, are only pinged every 2 minutes, so the resolution is very low – unless they are actively connected to our wi-fi network for the exhibition. This means that if an iOS device has wi-fi switched on, but its owner isn’t using our Love Lace App or connected to the exhibition wi-fi, then over a 10 minute walk around the gallery the device will be counted in a maximum of 5 locations. Of course this can be offset by the volume of tracked devices (which almost certainly exceeds that of the manual people-counting methods employed by traditional audience research).

What is interesting about the data is that it pretty much mirrors the distribution of the QR code usage I blogged about earlier. Unsurprisingly the longer dwell times are where the sit-down video experience is.

Categories: Mobile, User experience

Experiencing The O at MONA – a review

A lot has been written about the Museum of Old and New Art and I’m not going to rehash any of that. Instead I’m going to look at their mobile guide – The O – which is provided to every visitor and included in the admission price.

Here some of the fleet of 1300 Os sit charging in enormous custom charging bays where they can also be updated.

The O is an iOS App running on an iPod Touch, which comes ready to use with a quality pair of Audio Technica headphones. Developed by Art Processors, The O is described thus:

Wall labels are at once didactic and limited. They inhibit imagination. Squinted at through a dozen huddled heads, they are barely useful tools for learning, much less free thinking, or a private appreciation of the objects they describe.

The O solves these problems. It delivers information in a way that enhances the visitor’s experience of the gallery, and enables curators and exhibition designers to display the works the way they want. Museum researchers can present the best, most relevant textual, visual and audio content at their discretion. It provides information on visitor viewing habits, trending and satisfaction via integrated statistical reports. Above all, The O is an intimate, intuitive interface of the learning and autonomous response.

None of this would matter if it was a pain to use.

I was very impressed by the ‘technology concierge’ skills of the ticketing staff – they run you through the basics of the App and the hardware as they sell you your ticket and set you off on your way. Sitting beside the cash register is a graphic clearly explaining each of the main interface screens of the O as well. I’ve never seen this level of ‘scaffolding’ happen in other museums and the deftness with which visitors are set off on their way quickly is a testament to their staff training (and acceptance amongst these staff of the value of the O itself).

Descending into the museum itself, you launch the O and you are off. Pop-up instructions help you through the basic App operations and after a while you are prompted to enter your email address (and optional country) to ‘save’ your journey to the MONA website. Once this is done there are no further prompts, and even when I returned after lunch and was given a different O device, the final ‘saved tour’ seemed to accurately aggregate my whole visit (over the two different devices).

At its most basic level The O replaces wall labels. Entering a space you simply click ‘Update’ and, using a proprietary real-time positioning system (rather than wi-fi triangulation, as I first wrote – see comments), the device provides a list, with thumbnails, of objects ‘near’ you. This works surprisingly well despite the split levels and bulk showcases of coins and other small objects in some areas. The scrollable list relieves the technology of the difficult task of ‘exactly positioning the visitor’ whilst at the same time emphasising the visitor’s own agency in choosing what they are ‘seeing’. (I think this is going to be an increasingly important balance as location and compass headings give mobile devices better granularity at guessing what you are looking at.)

However the most impressive part of The O is the content – not the technology.

The O provides simple label text and an image for every object. I was disappointed that the images weren’t zoomable; however, on most objects there was also a curator’s piece amusingly titled Art Wank. These were short, very accessible, and gave useful context and background without overdoing it. A slightly smaller subset of objects is augmented with options called ‘Ideas’, ‘Gonzo’, and ‘Media’. It is in these three areas that The O really differentiates itself from every other museum mobile App or guide I’ve experienced.

‘Ideas’ is simply a set of provocations – or talking points. Some are quotes, others are just statements. One of the many ‘delighters’ I discovered on The O, visiting with my companion (with her own O), was that often there were multiple ‘Ideas’ and that very rarely would we both get the same one at the same time. This gave us prompts to talk to each other about the objects we were looking at – ensuring that sociality was not eroded by every visitor being glued to their own screen.

‘Gonzo’ is mostly responses or stories from MONA’s owner David Walsh. Sometimes these are stories about the acquisition of various objects; other times they are hilarious, for want of a better word, ‘rants’ about the artist, a style, or a moment. Like the ‘Ideas’ they make great talking points.

‘Media’ are short audio files – interviews with the artist and others. Some objects also have songs by Damien Cowell, who was commissioned to record them ‘about’ certain works.

The interviews blew me away.

Unlike every other ‘museum tour’ the audio interviews are completely raw and lo-fi. This shocked me – and I loved it. Almost all the interviews that I listened to sounded like they were recorded in a noisy cafe – and in more than a few the interviewee’s mobile phone rang in the middle of the recording (usually followed by an apology ‘sorry I’m in a meeting’). This made it so approachable and friendly – and, importantly, felt candid – like I was there with the artist. This also reminded me that the quality of the content always trumps the fidelity of the recording.

‘Loving’ or ‘hating’ objects is possible too, and doing so gives you a simple quantitative statistic on the object’s popularity amongst other visitors. I did wish that this recommended me other things to go and see. I also missed any kind of search functionality – I understand that this is probably because ‘searching’ is the exact kind of intentionality that MONA is trying to disrupt, instead forcing you to be in the moment – but it was frustrating when there were certain works I knew about that I wanted to locate.

Leaving MONA, the headphones and The O were handed back to the friendly staff at the door. Arriving back home, there in my inbox was an email from MONA linking me to their website where, after supplying the email address I used to register, I could browse through the objects that I’d seen and find out which I’d missed.

The post-visit web experience is interesting in that it requires a MONA visit (and user registration through The O) to get access. On one hand this might seem exclusionary – and it is definitely an option really only open to private museums with no public mandate – but on the other hand it did re-emphasise the importance of connecting the physical experience of MONA and its works with the online experience. And the fact that I couldn’t access the objects of the museum before my visit (beyond a few selected pieces) meant that I was more open to exploring than targeting only things I was interested in when I was in the galleries.

On the web you have access to all the same content you could get on The O – the audio, the text, but rather disappointingly only the same small size of image. Your path through MONA is visualised and able to be played back on a timeline. I’m not sure that this adds any navigation ‘value’ but it does re-emphasise the physicality of visiting MONA, its unique spatial construct, and its primacy in understanding and experiencing ‘the works’ inside.

This is one of the few examples of a museum website actually enhancing the post-visit experience by connecting it concretely back to the physical experience (and doing so by explicitly preventing pre-visit planning and expectations).

There are a couple of minor quirks (primitive audio player controls especially) with The O but overall it sets a new benchmark in terms of integrated interpretative devices.

I do wonder, though, how much it relies upon a few uniquely MONA attributes – its entirely private vision (versus public duty/mission), the design of the museum itself which prevents any other form of internet access (it is underground), and the tabula rasa upon which it has been able to construct its content all at once (no legacy material or practices to deal with)?

And, how the aggregate usage data – the loves/hates, the pieces that are most/least viewed, the contours of content – is used will be fascinating to see.

Categories: General

We have a new address – Freshandnew.org!

You might have noticed the web address of Fresh & New(er) has changed. We are now running at http://www.freshandnew.org and hopefully everything, including RSS feeds, is working as expected.

Categories: Conferences and event reports

Culture + heritage + digital at Web Directions South 2011

Sebastian Chan and Luke Dearnley

Luke Dearnley and I were last minute additions to the Web Directions South lineup last week. Coaxed by Maxine Sherrin to do a ‘fireside chat’ we sat comfortably by a digital fire and talked broadly around some of the exciting projects that are happening in the digital heritage space right now.

We tried to cover a lot of ground and tease out some of the issues in the sector as libraries and museums around the world finally begin to build significant momentum around digital content. Taking these discussions to the web developer community is important because all this is happening at a time when the government is calling for discussion of the National Cultural Policy, where there is talk about ‘emerging technologies’ and the NBN in the ‘arts’. (See the Ideascale on the digital culture response to the NCP.)

Here’s a brief rundown of what we covered in our free-wheeling talk done without notes (and, sadly, much sleep).

I started out looking at where we were at the Powerhouse in 2001. Back then we were talking about the ‘virtual museum’ and exploring 3D tours and building monolithic encyclopaedic resources using our ‘authority’. Whilst some amazing stuff was built back then – stuff that won awards (and that we still get enquiries about) – the web has changed.

And now where we are in our thinking in 2011.

Now it is all about being a data provider, getting our knowledge and collections out into the community where they can be debated, gather feedback and attract interest. The social web and now the mobile web have made this possible at the kind of scale that wasn’t possible in 2001. At the same time we now have ‘contextual authority’ rather than what we previously imagined was ‘overall authority’. Remember that in 2001 Wikipedia was only just starting and had only 6,000 articles.

At the same time the user is firmly in control not only of how they navigate ever-growing competing information sources; they are also using interfaces that fundamentally change how they perceive their computing devices. Touch and now voice interfaces radically personalise, even anthropomorphise, our devices. They are carried closer to us than ever before, creating a sense of intimacy and helping us form (unhealthy?) relationships with our mobile technologies. (“Excuse me while I just check my iPhone one more time – I haven’t touched it in the last five minutes.”)

In the background of this slide you can see an early heat map produced by tracking the dwell time of visitors carrying wifi devices in one of our exhibitions (they don’t even need to be connected to our wifi to be picked up). I’ll be blogging about that shortly in a new post but for now it should serve as a reminder that this sense of personal connectivity comes at the high price of personal trackability. It isn’t simply bundled up under ‘privacy’ and there’s a long way to go in the public discussions and debate about the trade-off between utility and privacy.

The other big change is that of scale.

A collection like that of the Powerhouse used to feel ‘large’ but in actual fact it is tiny. Its value in the digital space is no longer as an island but only in what it can contribute to national and international collections – a collection of collections. That’s a tough challenge for a State-funded museum when the majority of its ‘visitors’ walking in the door live in Sydney.

But at scale new possibilities emerge.

At this point we started to look at some of the initiatives that are exciting us around the world at the moment. Initiatives where the ‘value’ wasn’t necessarily obvious at the beginning but emerged only after time.

We showed and talked about:

– Tim Sherratt’s work with the digitised newspaper collections in Trove and the emergent stories he is starting to knit together by analysing the changes in language in newspaper articles over time, or by facial recognition in archival collections. These stories are only possible at scale – and even now they are terribly incomplete with uneven digitisation of each State’s newspapers in Trove – but they are getting better over time. Everyone (even you, dear reader) needs to go and read the transcript of Tim’s recent keynote at ANZSI. We are at the very very beginning of this but Tim’s work hints at some of the possibilities.

– New York Public Library’s historical menus project and how marking these menus up in the way they have lets us observe the changes in diet and ingredients, as well as food prices over time. And how, of course, dining at the Possum Club in 1900 would have been quite an experience.

– The other thing about the NYPL menus project is the way that, prior to releasing an API, they’ve done what we did at the Powerhouse: they’ve released the whole data set as a ZIP. As we found with our own collection, a downloadable full dataset allows people to do mass-scale analysis more quickly and easily (and with less drain on your server) than using an API. (There’s a small sketch of this kind of bulk analysis after this list.)

– Looking at scale, we briefly showed the free ImagePlot toolkit from the Software Studies Institute at UC San Diego, and how, by allowing you to do image analysis of enormous corpora of image files, it lets new patterns and relationships be discovered.

– Luke talked about linked data and how connecting everything up is slowly becoming possible as more things and thesauri go online. We showed a couple of nice front-end examples of some of the possibilities when collections get connected up. Our very own infant site – the Australian Dress Register – which is slowly growing and bringing on new contributors; and the newly re-designed and re-configured Design and Art Australia Online (formerly Dictionary of Australian Artists Online). Here’s a biographical entry for one of the designers with lots of objects in the Powerhouse collection. Here it becomes possible to traverse her ‘associates’ as well as all the exhibitions etc she has been involved in all over the world.

– We looked at some other exciting community transcription projects that are overcoming difficult issues of both relevance and specialised content. We showed the fantastic Old Weather project with the Citizen Science Alliance, using old ship logs from the National Maritime Museum to gather geolocated climate data from the past. It is one of our personal favourites and Fiona Romeo at the NMM published a great paper on it at Museums and the Web earlier in 2011 which you should read. What we find really lovely about this project is that it finds deep value in the kind of collection that museums find very difficult to ‘exhibit’. Actual ships are easy and attractive to put in an exhibition; the ship logs, much harder.

– We also showed the interface for another Citizen Science Alliance project called Ancient Lives. This project is getting citizens to help transcribe papyrus scrolls from the Oxyrhynchus collection whose story of acquisition and discovery is enough to encourage you to give it a go.
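On the full-dataset point above: a toy sketch of why a downloadable dump makes mass-scale analysis so easy – one download, no API round-trips. The filename and columns are made up for illustration:

    import csv
    import zipfile
    from collections import Counter

    decades = Counter()
    with zipfile.ZipFile('collection_dump.zip') as z:      # hypothetical dump
        for row in csv.DictReader(z.open('objects.csv')):  # hypothetical columns
            date = row.get('production_date') or ''
            if len(date) >= 4 and date[:4].isdigit():
                decades[date[:3] + '0s'] += 1              # '1923' -> '1920s'

    for decade, count in decades.most_common(10):
        print decade, count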

In wrapping up we started to ask a number of questions that remain unanswered/unanswerable:

– what are the barriers to a Europeana-like project in Australia, let alone a Digital NZ? Are the reasons more cultural than anything else? What is of ‘national significance’ that we can all agree upon? Is such agreement even possible in a fragmented nation?

– does the ‘open’ in linked open data matter more than just linked data in the short term?

– are libraries able to knuckle down and focus on digitisation better than museums because they aren’t expected to ‘also do exhibitions’? This looped back to an early slide where we talked about the ‘post-web accord’ that emerged in the mid 00s. Is this accord coming under pressure as a result of changing economic circumstances? Or is this just one of the many museum challenges under discussion in the sector?