Fresh & New(er)

discussion of issues around digital media and museums by Seb Chan


Interview with Mia Ridge on museum metadata games

January 3rd, 2011 by Seb Chan

museum games logo

Mia Ridge is the lead developer at the Science Museum in London. She approached us in 2010 to use our collection database in her Masters research project which looks at the impact of different interfaces in museum collection-related ‘games’. Her research project is up and running at Museumgam.es where you can partake in a variety of different collection description activities.

We’ve had tagging on our collection database since 2006 and the results have, after an initial phase of interest, been quite mixed. During 2011 we’re rebuilding the entire collection database from the ground up and we’ve been rethinking the whole idea of tagging and its value in both metadata enhancement and community building.

I am particularly excited by Mia’s research because it looks explicitly at ways of enhancing the metadata of the ‘least interesting’ objects in online museum collections – the ones that have minimal documentation, never get put out on public display, and have unknown provenance. These objects make up the vast bulk of the collections of museums like the Science Museum and the Powerhouse, and whilst they sometimes connect online with family historians or specialist communities, they require a certain amount of basic documentation in order to do so. Similarly, being at the far end of the long tail, they don’t generate enough views and engagement to effectively ‘validate’ crowdsourced contributions.

I’m hoping we can use Mia’s findings to help us design better minigames in our new collection database, and I’m also hoping others, especially those outside of the museum community, will use her findings to build better games with our collection API as well as those of other museums.

Mia answered some questions about her project whilst snowed in in London.

Q – What was the inspiration/s behind Museum Metadata Games (MMG)?

The inspiration for the museum metadata games I’ve made was my curiosity about whether it was possible to design games to help improve the quality of museum catalogue records by getting people to create or improve content while having fun with collections.

I’m also exploring ways to encourage public engagement with the less glamorous bulk of museum collections – I wondered if games could tap into everyone’s inner nerd to create casual yet compelling experiences that would have a positive impact on a practical level, helping improve the mass of poorly catalogued or scantily digitised records that make up the majority of most museum collections.

People ask for access to the full records held by museums, but they rarely realise how little information there is to release beyond the records for objects that have at some point been on display or fully documented. Museum metadata games are a way of improving that information as well as providing an insight into the challenges museum documentation and curatorial teams face.

The motivation to actually build them was my dissertation project for my MSc in Human-Centred Systems. I’ll keep working on the games on MMG after my project is finished, partly because I want to release the software as a WordPress plugin, and partly because now that the infrastructure is there it’s quite easy to tweak and build new games from the existing code.

Q – What do you think are the main challenges for crowdsourcing metadata in the cultural sector?

Quite a few projects have now demonstrated that the public is willing to tag content if given the chance, but the next step is properly integrating user-created content into existing documentation and dissemination work so that public work is actually used, and seen to be used. The people I’ve interviewed for this project are so much more motivated when they know the museum will actually use their content. Museums need to start showing how that content is enriching our websites and catalogue systems. In some interviews I’ve shown people the tags from Flickr on objects on the Powerhouse collection site, and that’s immediately reduced their scepticism.

My research suggests that results are improved when there’s some prep work put into selecting the objects; and while museums can build games to validate data created by the public, I think a small time investment in manually reviewing the content and highlighting good examples or significant levels of achievement helps motivate players as well as encouraging by example. However, it’s often difficult for museums to commit time to ongoing projects, especially when there’s no real way of knowing in advance how much time will be required.

Museums also need an integrated approach to marketing crowdsourcing projects to general and specialist audiences.

And it might seem like a small thing, but most museum crowdsourcing sites require registration before you can play, or even check out how the crowdsourced task works, and that’s an immediate barrier to play, especially casual play.

Identifying gaps in existing collections that can realistically be filled by members of the public or targeted specialist groups and then tailoring gameplay and interactions around that takes time, and the ideal levels of prototyping and play testing might require a flexible agency or in-house developers. This became apparent when I found that the types of game play that were possible changed as more data was added – for example, I could use previously added content to validate new content, but if I wasn’t writing the code myself I might not have been able to work with those emergent possibilities.
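Using previously added content to validate new contributions is, in essence, agreement-based validation: a tag is only trusted once enough independent players have entered it for the same object. As a rough sketch of that mechanic (the function names, threshold and data shapes here are illustrative assumptions, not Mia’s actual code):

```python
from collections import defaultdict

MIN_AGREEMENT = 2  # independent players needed before a tag is trusted (illustrative)

def validate_tags(submissions, min_agreement=MIN_AGREEMENT):
    """submissions: iterable of (object_id, player_id, tag) tuples.
    Returns {object_id: set of validated tags}."""
    # Track which distinct players have entered each (object, tag) pair
    seen = defaultdict(set)
    for object_id, player_id, tag in submissions:
        seen[(object_id, tag.strip().lower())].add(player_id)
    # A tag is validated once enough independent players agree on it
    validated = defaultdict(set)
    for (object_id, tag), players in seen.items():
        if len(players) >= min_agreement:
            validated[object_id].add(tag)
    return dict(validated)

submissions = [
    ("obj1", "alice", "steam engine"),
    ("obj1", "bob", "Steam Engine"),   # agreement after normalisation
    ("obj1", "bob", "brass"),          # only one player so far, not yet validated
]
print(validate_tags(submissions))  # {'obj1': {'steam engine'}}
```

The same pool of earlier contributions can then double as the ‘answer key’ for new game rounds, which is the emergent possibility described above.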

Q – Can you give some examples of what you see as ‘best practice’ in metadata crowdsourcing both from the cultural sector and also from elsewhere?

The work of Luis von Ahn and others for the ‘games with a purpose’ project at Carnegie Mellon University has inspired many of the projects in the cultural heritage sector.

Also I think Brooklyn Museum have done a great job with their tagging game – it’s full of neat touches and it feels like they’ve really paid attention to the detail of the playing experience.

I also like the experience the National Library of Australia have designed around digitising newspapers. The Dutch project Waisda? was designed to encourage people to tag multimedia, and seemed to produce some really useful analysis.

Q – What is MMG specifically trying to determine/ascertain with Dora, Donald and the Tag challenges?

My original research question was “which elements of game mechanics are effective when applied to interfaces to crowdsource museum collections enhancement?”.

Over the life of the project, my question changed to ‘can you design data crowdsourcing games that work on ‘difficult’ types of museum content, e.g. technical, randomly chosen or poor-quality records?’ and ‘can you design to encourage enhancements beyond tags (but without requiring more advanced data cleaning, selection or manual game content validation)?’.

The designs were based around user personas I’d created after research into casual games, and the tagging game, Dora, seems to work particularly well for people close to the design persona, which is encouraging.

I think I’d revisit the personas and create a new one for the fact-finding game (Donald) if I was continuing the research project, and I’d re-examine the underlying game mechanics to deal with the different motivations that would emerge during that process. I’d also like to tweak the ‘success’ state for Donald – how does a player know when they’ve done really well? How does the game know which content is great and which is just ok, if it can’t rely on manual review by the game producers?

The ‘tagging activity’ was created as a control, to test the difference game mechanics made over the simple satisfaction of tagging objects.

Q – What happens to the data after your dissertation?

I’ll pass it on to the museums involved (PHM and SciM) and hopefully they’ll use it. I’ve noticed that people have tagged objects in games that aren’t tagged on the PHM site, so I think the content already supplements existing tags.

Q – What do you think of the debates around ‘gamification’, motivation and rewards?

I think Margaret Robertson’s post, ‘Can’t play, won’t play’ summed it up really well and Use Game Mechanics to Power Your Business also covers some of the dangers of cheap badgeification.

Gamification isn’t a magic elixir. There’s a risk that it all sounds really easy, and that museums will be tempted to skip the hard work of thinking about what a successful experience looks and feels like for their project, audiences and content, of choosing their core goals, and of designing a game around them. If you don’t understand what engagement, fun and learning mean for your content, you can’t build a game around it.

Q – What mistakes do you see museums making with gamification?

I think I covered most of the burning issues in ‘challenges’ above… Requiring the visitor to sign up before playing is a huge barrier to participation, and in most cases it’s trying to prevent something that wouldn’t happen anyway – like spam. I haven’t been running my games for long, but they’ve been posted widely on Facebook and Twitter and I’ve not had any malicious content added yet; there have only been two spam attempts in over 500 turns across the two games.

In the evaluation I’ve done, people have said they’re more motivated when they think a museum will actually use their data. If you can show how it’s used, people are much more likely to believe you than if you just tell them.

Q – How much granularity are you tracking with MMG? (By this I mean are you segmenting behaviour by gender, age, location etc?)

I’m using two evaluation methods – in-depth interviews alongside play tests, and releasing the games to the public and seeing what kinds of data are generated.

For the second, I haven’t tried to collect demographic data as I was more concerned with analysing the types of content generated and looking for factors such as:

Image quality e.g. black and white vs colour images
Technical vs social history objects
Photos vs objects
Extent of existing content – title, dates, places, description
‘Nice’ vs reference images

I’m also looking at factors like number of tags or facts per session, bounce rate, number of repeat sessions, sign-up rates vs play rates, time on site; and analysing the data to see if the types of content created can be usefully categorised.
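The session-level metrics described above boil down to simple aggregation over a contribution log. As an illustration (the log fields and session IDs are hypothetical, not MMG’s actual schema):

```python
from collections import Counter

# Hypothetical contribution log: (session_id, player_id, contribution_type)
log = [
    ("s1", "anon1", "tag"),
    ("s1", "anon1", "tag"),
    ("s2", "anon1", "fact"),
    ("s3", "anon2", "tag"),
]

# Tags or facts contributed per play session
per_session = Counter(session for session, _, _ in log)

# Repeat sessions: players who come back for more than one session
sessions_per_player = {}
for session, player, _ in log:
    sessions_per_player.setdefault(player, set()).add(session)
repeat_players = [p for p, s in sessions_per_player.items() if len(s) > 1]

print(per_session)     # Counter({'s1': 2, 's2': 1, 's3': 1})
print(repeat_players)  # ['anon1']
```

Rates like sign-ups vs plays, bounce rate and time on site follow the same pattern: group the raw events by session or player, then count.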

Now go and have a play with Mia’s games!
