Love Lace exhibition App v1.21 released with videos, social sharing and favourites

Late last week, in time for the launch of the Janet Echelman work suspended in the city as part of Art & About, the new version of the free Love Lace exhibition App went live in both the iTunes App Store and the Android Market.

The new version now allows for favouriting of works, social sharing (including sharing of lists of favourites), and quick access to the behind the scenes videos.

We’re expecting one more point release this year to include the MoveME wifi tracking, but beyond that the App will only receive bug fixes and minor tweaks.

Underscoring the importance of seeing museum Apps as ‘live products’ with an ongoing commitment to development and support, we’ve had to make several point releases since v1.0 on both Android and iOS. These changes have been a result of issues with user devices (almost entirely Android variations), and some user interface issues revealed through looking at the Flurry Analytics and watching what people try to do with the App.

Download v1.21 for iOS or Android.

Watch the behind the scenes slideshow of the Janet Echelman installation.

API Collection databases Metadata open content Semantic Web

Things clever people do with your data #65535: Introducing ‘Free Your Metadata’

Last year Seth van Hooland at the Université Libre de Bruxelles (ULB) approached us to look at how people used and navigated our online collection.

A few days ago Seth and his colleague Ruben Verborgh from Ghent University launched Free Your Metadata – a demonstrator site showing how even irregular metadata can have value to others and how, if it is released rather than held onto tightly (until that mythical day when it is ‘perfect’), it can be cleaned up and improved using new software tools.

What’s awesome is that Seth & Ruben used the Powerhouse’s downloadable collection datafile as the test data for the project.

Here’s Seth and his team talking about the project.

F&N: What made the Powerhouse collection attractive for use as a data source?

Number one, it’s available to everyone and therefore our experiment can be repeated by others. Beyond that, the records are very representative of the sector.

F&N: Was the data dump more useful than the Collection API we have available?

This was purely down to the way Google Refine works: on large amounts of data at once. It also enables other views on the data, e.g. working in a column-based way (to make clusters). We’re currently working on a second paper which will explain the disadvantages of APIs.
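The column-based clustering Google Refine offers is, at its heart, key-collision fingerprinting: normalise each value to a canonical key and group the values whose keys collide. A minimal sketch of the idea in Python (the category values are invented):

```python
import re
from collections import defaultdict

def fingerprint(value: str) -> str:
    """Canonical key: lowercase, strip punctuation, sort unique tokens."""
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster(values):
    """Group values whose fingerprints collide (candidate duplicates)."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [g for g in groups.values() if len(g) > 1]

# Hypothetical category values with small textual variations
print(cluster(["Lace, machine-made", "machine-made lace", "bobbin lace"]))
# → [['Lace, machine-made', 'machine-made lace']]
```

Refine’s actual fingerprint method also trims whitespace and normalises accented characters, but the collision idea is the same.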

F&N: What sort of problems did you find with our collection?

Sometimes really broad categories. Other inconveniences could be solved in the cleaning step (small textual variations, different units of measurement). All issues are explained in detail in the paper (which will be published shortly). But on the whole, the quality is really good.
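The “different units of measurement” problem mentioned here is a mechanical normalisation pass rather than a judgement call. A toy sketch, assuming dimensions arrive as simple “number unit” strings (the field format is invented):

```python
import re

# Conversion factors to millimetres for a few common units
TO_MM = {"mm": 1.0, "cm": 10.0, "m": 1000.0, "in": 25.4}

def to_mm(raw: str) -> float:
    """Parse a dimension like '30.5 cm' or '12in' into millimetres."""
    match = re.fullmatch(r"\s*([\d.]+)\s*([a-z]+)\s*", raw.lower())
    if not match or match.group(2) not in TO_MM:
        raise ValueError(f"unrecognised dimension: {raw!r}")
    number, unit = match.groups()
    return float(number) * TO_MM[unit]

print(to_mm("30.5 cm"))  # 305.0
print(to_mm("12 in"))    # ≈ 304.8
```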

F&N: Why do you think museums (and other organisations) have such difficulties doing simple things like making their metadata available? Is there a confusion between metadata and ‘images’ maybe?

There is a lot of confusion about the best way to make metadata available. One of the goals of the Free Your Metadata initiative is to put forward best practices for doing this. Institutions such as libraries and museums have a tradition of only publishing information which is 100% complete and correct, which is more or less impossible in the case of metadata.

F&N: What sorts of things can now be done with this cleaned up metadata?

We plan to clean up, reconcile, and link several other collections to the Linked Data Cloud. That way, collections are no longer islands, but become part of the interlinked Web. This enables applications that cross the boundaries of a single collection. For example: browse the collection of one museum and find related objects in others.
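Once records in different collections share reconciled concept URIs, that cross-collection browsing falls out of a simple join. A toy illustration (the records and URIs below are invented):

```python
# Hypothetical reconciled records: each carries a Linked Data concept URI
powerhouse = [{"id": "86/1341", "concept": "http://example.org/concept/lace"}]
other_museum = [
    {"id": "OBJ-9", "concept": "http://example.org/concept/lace"},
    {"id": "OBJ-10", "concept": "http://example.org/concept/ceramics"},
]

def related(record, other_collection):
    """Find objects in another collection sharing this record's concept URI."""
    return [r["id"] for r in other_collection
            if r["concept"] == record["concept"]]

print(related(powerhouse[0], other_museum))  # → ['OBJ-9']
```

The shared URI does the work that free-text category strings cannot: two collections never need to agree on wording, only on the link.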

F&N: How do we get the cleaned up metadata back into our collection management system?

We can export the result back as TSV (like the original export) and e-mail it. Then you can match the records with your collection management system using record IDs.
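That round trip is mechanical once both files share a record-ID column: overlay the cleaned rows onto the original export, leaving untouched records as they were. A sketch (the column names are hypothetical):

```python
import csv
from io import StringIO

def merge_cleaned(original_tsv: str, cleaned_tsv: str,
                  key: str = "record_id") -> str:
    """Overlay cleaned TSV rows onto the original export, matched on record ID."""
    cleaned = {row[key]: row
               for row in csv.DictReader(StringIO(cleaned_tsv), delimiter="\t")}
    reader = csv.DictReader(StringIO(original_tsv), delimiter="\t")
    out = StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames, delimiter="\t")
    writer.writeheader()
    for row in reader:
        writer.writerow(cleaned.get(row[key], row))  # cleaned row wins if present
    return out.getvalue()

original = "record_id\tcategory\n1\tLace, machine made\n2\tbobbin lace\n"
cleaned = "record_id\tcategory\n1\tmachine-made lace\n"
print(merge_cleaned(original, cleaned))
```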

Go and explore Free Your Metadata and play with Google Refine on your own ‘messy data’.

If you’re more nerdy you probably want to watch their ‘cleanup’ screencast where they process the Powerhouse dataset with Google Refine.

User behaviour Web metrics

Let’s Get Real report from Culture24 now available

Over in the UK right now, Culture24 are launching a report I worked on with them and many of the major cultural institutions there. Born of a need amongst web/digital people to find better ways of measuring the effectiveness of their work in the sector, the report – Let’s Get Real – pulls together analytics data from three years of activity online and in social media, and makes a number of recommendations aimed at kickstarting, in the words of Culture24 Director Jane Finnis, “a dramatic shift in the way we plan, invest and collaborate on the development of both the current and next generation digital cultural activities”.

The inability to effectively communicate the connection between digital projects and delivering the institutional mission is an ongoing concern for everyone working in museums. At a time when there are increasing calls for museums to take on roles more akin to broadcasters and publishers in the digital space, yet the majority of internal and external stakeholder value is still perceived as coming from visits to exhibitions and buildings, there is a pressing need to keep thinking about the ways digital projects report success (or otherwise!).

From my perspective, working with this diverse group of institutions was a lot of fun and very illuminating. It helped consolidate much of my thinking about the state of digital projects in the cultural sector and the long road ahead to really transform the way museums in particular (less so the performing arts) use and adequately resource digital. At the same time there were many surprises – the very different geographies of online visitors between institutions, and the comparatively low impact of social media on website traffic, even for particularly well-promoted campaigns, were revealing. The social media work by Rachel Clements also demonstrated that the easy option – reporting the numbers – greatly undersells the value of social media. The alternative, qualitative analysis, is much harder and requires more time, and an understanding of why you are active in social media in the first place.

Have a read of the report (PDF) and see what you think.

For those involved in the project there was a lot more than number crunching – there were some amazingly productive working sessions and meetups – and the launch conference that is taking place right now in Bristol (check the #C24LGR hashtag conversations!). In many ways the report captures only a fragment of the ‘value’ of the project as a whole.

Mobile User behaviour User experience

More on mobile tech impacts in museums (extended Mashable remix)

There’s a nice introductory piece in Mashable today that features some of the recent Powerhouse Museum work. It’s a broad overview of how the Smithsonian, the NY Museum of Jewish Heritage and the Powerhouse have been using mobile technologies in galleries and exhibitions.

Reading some of the comments and picking up on some of the chatter on Twitter, I thought it might be valuable to include two of the Q&As with the journalist that didn’t make the cut in the final story. They add a little more context and introduce more complexity into the issue – probably less interesting for non-museum people, but useful to those deeply engaged in the field.

Q – How are you measuring the effectiveness of the technology you’ve deployed? Downloads? Data capture? Usage stats? I noticed you are going to put in moveME wifi triangulation system. What will the data from this tell you – you had mentioned in a post dwell time and loves but how will you put those findings to use? (Why are you doing this?)

We’re really interested in changing the physical design of our galleries so that they are able to deliver better experiences and tell more effective stories to and with our visitors. Once a visitor carries a fully searchable encyclopedia in their pocket (not to mention access to all of our collection, including the objects not on display), the whole idea of a ‘museum’, and how it could and should be designed, changes.

The ‘effectiveness’ of technologies has a number of different facets –

1. We look at raw usage data – downloads, views, interactions – in order to redesign and iterate new versions of the technology itself.

2. Then we look at how visitors are using it, both individually and as groups, through observation as well as data collection. This helps us think about the social impact of our technologies in the galleries. For example, do our mobile apps mean that families visiting together are talking to each other less than before? (a possibly negative outcome!)

3. We also look at the aggregate usage data to help us think about what content is being accessed (and what is being ignored) and then follow up with qualitative research to understand why. This, over time, helps us better understand which objects, for example, visitors are interested in finding out more about, and which, perhaps need a little more prompting.

4. Finally, and holistically, we aim to bring all this data together to better inform the spatial layout of galleries, and also the ancillary services such as education kits for teachers or curator-guided tours, that might further enhance a visit.

As we move from 1 to 4, the time taken obviously gets longer and longer – and the impact broadens across the museum and its various operations.
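As a concrete (and entirely invented) example of step 3, the aggregation itself is the easy part – e.g. counting in-app ‘view’ events per object to rank what is being looked at and what is being ignored:

```python
from collections import Counter

# Hypothetical in-app event log: (object_id, event_type) pairs
events = [
    ("86/1341", "view"), ("86/1341", "view"), ("2008/33", "view"),
    ("86/1341", "favourite"), ("H4448", "view"),
]

# Count only 'view' events, per object
views = Counter(obj for obj, kind in events if kind == "view")
most_viewed = views.most_common()  # ranked list; the long tail flags ignored content
print(most_viewed)  # → [('86/1341', 2), ('2008/33', 1), ('H4448', 1)]
```

The qualitative “why” follow-up is precisely what a ranking like this cannot answer on its own.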

Q – Where do you think things are going in terms of digital tech in your museum and in museums in general?

At the Powerhouse we are certainly getting far more strategic in our deployments rather than being seduced by novelty. This has been largely possible because of the way digital has evolved at the museum, with significant internal capacity: on-staff developers, digital producers, and strategy.

Broadly in the museum world we are seeing a much higher volume of technology deployed – Google Goggles at the Getty, NFC at the Museum of London, AR at the Stedelijk, touch-tables everywhere – and I expect that over the next decade the very idea of a ‘digital team’ or ‘digital unit’ or even ‘CTO’ at a museum will come to seem quaint, simply because the very definition of a museum will be, itself, ‘digital’ and cross-platform.

Arduino Visualisation

Carlos & Nico discuss the making of Facetweetbox with Lego & Arduino

One of the pleasures of working with the teams that I do at the Powerhouse is that even in the busiest part of the year they manage to find time to experiment and turn up something quirky. Sometimes these quirky experiments even go on to become the foundation stones of future projects.

Most recently, during a hectic July when festival microsites, major exhibition microsites, and mobile apps were all running as parallel live projects, Carlos and Nico started building something with Lego.

It became the Facetweetbox.

I asked Nico and Carlos about this and its genesis.

F&N: What is the main idea behind building Facetweetbox?

Nico: I sit right next to this ‘ideas and implementation factory’ called Carlos. One day he rode his digital pony into the office and showed me a project by Matt Reed from Red Pepper.

Immediately we started talking about doing something in a similar vein ourselves.

It seemed to be a fun little side project that would help us encourage other staff and visitors to engage on the social media channels we are active in.

Let me explain.

As you know, the Powerhouse Museum is a very active cultural institution, both in physical and virtual form. We do more than store and sometimes display great collections. We are always pushing to build stories, and to help others build stories, around objects and shared ideas. Exchanges of ideas and information happen in many places, from dry repositories like the Museum Metadata Exchange to lively posts on a festival’s Facebook page. I believe we have a responsibility to encourage even informal conversation towards educative and creative exchange.

The museum is very lucky in that we tend to naturally pick up a lot of social media chatter. I think this is in large part due to operating as a fairly well-respected cultural institution that actually does lots of festivals, events, talks and exhibitions. The challenge for our team is to make visitors aware of these exchanges and then encourage them to explore these spaces. The other challenge is to push our staff to engage with these channels. Without both, social media becomes either unguided or just ‘PR shouty’.

So Facetweetbox was seen as a fun way to encourage both visitors and staff alike.

When you’re at a conference or event and there’s a stream of tweets displayed on a screen, I often think it is great to be reminded of that channel, but it feels out of place to have Twitter or Facebook broadcast like that. Sure, Twitter is a public message exchange, but placing it outside of my hand and on a TV screen misses the implicit joy and satisfaction of information exchanges and the playfulness of backchannel communication. Facetweetbox shows you that something is going on, but not what that something is. The content of a tweet is not displayed, so it is a great little call to action playing on the human instinct to want to know what is being said, by whom and why.

Finally, from an actual construction point of view it was really all about fun. I mean, who does not want to play/work with Lego?

I had really hoped to debut this bit of kit at Sydney Design 2011 but, with delays in the delivery of some parts, it did not happen.

CARLOS: It was really simple.

Facetweetbox is an extension of the type of projects I do for enjoyment outside of work at the Museum. I’ve always sought to move the digital space from the virtual to the ‘real world’ – to generate physical reactions. To surprise people. For example, to ‘blow out a candle via the Internet’. That was a project I did a year ago.

And, given the high intensity of that period in the office, the main idea was to have fun and challenge Nico a bit.

Q: How did you custom design the Lego? Are there any Lego bits you wished you could have added?

NICO: I started by sketching it out and using the Lego Designer to play around with some designs, from complex to simple shapes. In the end it really came down to function and cost winning over aesthetics. Don’t get me wrong, I’m really happy with the end product, but it was certainly the more conservative of the designs. In the end I wanted something that was:
– robust, hence the sometimes over-engineered interlacing of the bricks.
– reusable, hence the dual-box configuration with a transparent brick front. If a new player or a logo change happens we should be able to create it using generic white bricks.
– compartmentalised, hence separate spacing for the power units, controllers and the LED light areas.
– accessible, hence an easy-to-access area for the power unit (battery or power supply) and control units.

In the end the physical build you see today is a stripped-down version to some degree, mainly for the cost reasons listed above. Lego control computers and step ladders are there so the Minifigs can access the unit via the back door to keep everything going.

Q: What Arduino components are used? How do they work? Are there any Arduino components that you wished existed that don’t?

CARLOS: The following components were used:
– Arduino Diecimila (temporarily)
– Arduino jumper cables
– WiShield 2.0
– addressable 32-LED RGB strip
– 9 V, 1000 mA adaptor

There is also a server component to this setup. We have a (virtualised) Linux box running a Django site that queries Facebook and Twitter with the given values. For Twitter it can be set to any string, and for Facebook it can be any link that is associated with a Like button.

In the future I would like to see this extended to comments.

The services are queried every 30 seconds by the Lego people inside the box, and they then receive a simple string-format response that instructs them to flick the switches inside the box to flash in a particular way – duration, repetition, location and colour.
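The actual wire format isn’t documented in the post, but as an illustration, assuming a hypothetical comma-separated ‘duration,repetitions,LED-range,hex-colour’ message, the decoding might look like:

```python
def parse_instruction(msg: str) -> dict:
    """Decode a hypothetical '500,3,0-7,ff6600' flash instruction."""
    duration, reps, location, colour = msg.strip().split(",")
    start, end = location.split("-")
    return {
        "duration_ms": int(duration),
        "repetitions": int(reps),
        "leds": range(int(start), int(end) + 1),  # segment of the 32-LED strip
        "rgb": tuple(int(colour[i:i + 2], 16) for i in (0, 2, 4)),
    }

cmd = parse_instruction("500,3,0-7,ff6600")
print(cmd["rgb"])  # → (255, 102, 0)
```

In the real build the decoding happens on the Arduino side, in C, but the shape of the protocol – a terse string the microcontroller can parse cheaply – is the point.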

I would like to enable the device to run off mains power as well as LiPo batteries to ensure lasting performance and portability. Oh, and sound effects (*pew*pew*pew*)!

I’d also like to start to look at sentiment analysis and translate that into a colour scheme and/or pattern. In the case of Twitter, I want to map particular strings to colours and/or patterns.

You could also have installations of LEDs within gallery spaces react to particular strings. For example, if you are in a lace exhibition and someone takes a photo and tweets it with the hashtag #lovelace, a particular visual or sonic reference could be triggered in the exhibition space.