
The museum as a text adventure – Inform7 and TourML/TAP

Today I was sitting at WebWise 2012 listening to Rob Stein talk about TAP/TourML and he started talking about games and stories, referencing Mark Riedl’s work.

It reminded me a lot of the world of interactive fiction and it got me thinking about whether it would be possible to use TourML to generate text adventures.

And then, whether long established interactive fiction authoring tools like Inform7 (used as the system behind PlayFic) could be used to author gallery tours.

Being of a generation that has fond memories of playing Infocom adventures (I vividly remember my dad buying Zork II for our Commodore 64) – there’s definitely a lot to learn about how this narrative genre works that could equally be applied to the creation and support of visitor narratives.

So I took 20 minutes to whip up a very, very basic ‘playable’ text adventure rendering of the conference experience.

Go play it on PlayFic! (It obviously isn’t finished)

Here’s the source code. (contains spoilers!)

The story headline is "Adventures at WebWise". 

The story description is "A quick journey into interactive fiction inspired by Rob Stein's introduction to TAP presentation and his referencing of Marc Reidl. It raised, in my mind, that there are already robust frameworks for quickly generating interactive fiction of the sort that makes the foundation of a mobile tour - so, could TAP use the Inform7 language for advanced authoring?"

The Main Conference Room is a room. "Rows of tables, each with their own powerstrip stretch endlessly toward the speaker podium. Two projection screens show the wifi login details whilst unfashionably out of date pop music plays softly over the speaker system.

On the table nearest you is a conference pack and an abandoned Samsung Galaxy.

The foyer is to the South."

Projection screens are scenery in the Main Conference Room. Speaker system is scenery in the Main Conference Room.

Samsung Galaxy is a thing. The Samsung Galaxy is in the Main Conference Room. The description is "The Samsung Galaxy is turned off. You cannot figure out how to turn it on, and, turning it over, you realise that the battery has been removed. Helpful isn't it?"

Conference Pack is a thing. Conference pack is in the Main Conference Room. The description is "The conference pack, like all conference packs, is looking for the recycling bin. You notice that the conference schedule has already been removed, leaving only the wad of promotional materials."

South of the Main Conference Room is the Foyer. 

The Foyer is a room. "The foyer is empty.

Lukewarm coffee drips from a boiler but there are no cups nearby. The crumbs of food that used to be here litter the floor. Obviously these places don't pay their venue staff very well. A faint waft of perfume comes from the East."

East of the Foyer is the Lifts.

Lifts is a room. "As you enter the lift lobby you notice the furthest-most door has just closed.

The whirring of motors comes from behind closed lift doors. 

Strangely, there are no lift buttons and the concierge must have gone on a break."

That doesn’t look like source code does it?

Doesn’t it look exactly like the sort of language that museum educators and curators could quickly learn and write?
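And because Inform7 source is so regular, the reverse direction seems feasible too: tour data could be transformed into it mechanically. Here’s a very rough sketch in Python of what generating rooms from a handful of tour stops might look like. The stop fields below are deliberately simplified placeholders of my own, nothing like the full TourML schema.

# A rough sketch only: turn simple tour-stop data into Inform7 room source.
# The stop dictionaries are hypothetical and far simpler than real TourML,
# which is an XML schema of tours, stops, assets and connections.

stops = [
    {"name": "Main Conference Room",
     "description": "Rows of tables stretch toward the speaker podium.",
     "exits": {"south": "Foyer"}},
    {"name": "Foyer",
     "description": "Lukewarm coffee drips from a boiler.",
     "exits": {"east": "Lifts"}},
    {"name": "Lifts",
     "description": "The whirring of motors comes from behind closed lift doors.",
     "exits": {}},
]

def to_inform7(stops):
    """Emit one Inform7 room per stop, plus the map connections between them."""
    lines = []
    for stop in stops:
        lines.append('{0} is a room. "{1}"'.format(stop["name"], stop["description"]))
        for direction, target in stop["exits"].items():
            lines.append("{0} of {1} is {2}.".format(direction.capitalize(), stop["name"], target))
        lines.append("")
    return "\n".join(lines)

print(to_inform7(stops))

The output of a little script like this could be pasted straight into PlayFic, which is really the point: the authoring effort stays in the content, not the code.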


Metadata as ‘cultural source code’

A quick thought.

Last week I wrote about collection data being ‘cultural source code’ in the context of the upload of the Cooper-Hewitt collection to GitHub.

As I wrote over there,

Philosophically, too, the public release of collection metadata asserts, clearly, that such metadata is the raw material on which interpretation through exhibitions, catalogues, public programmes, and experiences are built. On its own, unrefined, it is of minimal ‘value’ except as a tool for discovery. It also helps remind us that collection metadata is not the collection itself.

If you look at the software development world, you’ll see plenty of examples of tools for ‘collaborative coding’ and some very robust platforms for supporting communities of practice like Stack Overflow.

Yet where are their equivalents in collection management? Or in our exhibition and publishing management systems?

(I’ll be cross-posting a few ideas over the next little while as I try to figure out ‘what goes where’. But if you haven’t already signed up to the Cooper-Hewitt Labs blog, here’s another reminder to do so).


Back to reality. Returning from the Horizon Retreat.

Last week I was at the Horizon New Media Consortium 10 Year Retreat – The Future of Education. It was a fascinating glimpse into the world of bright-eyed educators and a few museum people who want the future of education to be something far better than it is now. If that sounds a little utopian, it should.

The Horizon Reports have always made for good reading. I contributed to some of the Horizon.Au reports and have had a fair number of my projects included over the years as ‘examples’. These reports have more-or-less predicted most of the technology trends over the last decade, even if their timeframes are too optimistic. Their methodology – a wiki-made document compiled by hand-selected specialists – works especially well and avoids a lot of the traps of most futurist predictions. What is especially useful is that these wikis remain available after the reports are published – so it is possible to read the internal discussions that informed the creation of the report.

Summing up the predictions of the Horizon reports over the past decade was this great chart from Ruben Puentedura. You’ll notice recurring themes and the emergence of the social web, then mobile, then open content in the reports over the last decade.

The retreat, set outside of a stormy Austin, Texas, locked 100 people from several continents in a room with huge sheets of butcher’s paper and some great facilitation. Over two days meta-trends were identified and ideas shared. Thousands of tweets were tweeted on the #NMCHz hashtag, and many productive discussions were had.

Ed Rodley sums up the event nicely – day one and day two – over on his blog. Ed and I spent a fair bit of time throwing around ideas about the role of science museums in the modern world (from his experience at Boston and mine at the Powerhouse) which should become the topic of a future blogpost.

But gnawing away at me during the Horizon Retreat was this article from the New York Times on Apple and its supply chains, and a broader follow up opinion piece in The Economist.

For all the talk of digital literacy, educating for megatrends, and the role that museums can play in fostering creativity – all this talk of open content and collaborative learning – these words continue to concern me.

The most valuable aspects of an iPhone, for instance, are its initial design and engineering, which are done in America. Now, one problem with this dynamic is that as one scales up production of Apple products, there are vastly different employment needs across the supply chain. So, it doesn’t take lots more designers and programmers to sell 50m iPhones than it does to sell 10m. You have roughly the same number of brains involved, and much more profit per brain. On the manufacturing side, by contrast, employment soars as scale grows. So as the iPhone becomes more popular, you get huge returns to the ideas produced in Cupertino, and small returns but hundreds of thousands of jobs in China.

Maybe it is just pessimism brought about by having two consecutive winters creeping in.

You can grab the summary ‘communique’ from the Retreat from the Horizon site.


The museum website as a newspaper – an interview with Walker Art Center

There’s been a lot of talk following Koven Smith’s (Denver Art Museum) provocation in April – “what’s the use of the museum website?”. Part driven by the rapid uptake of mobile and part driven by the existential crisis brought on by Koven, many in the community have been thinking about how to transform the digital presence of our institutions and clients.

At the same time Tim Sherratt has been on a roll with a series of presentations and experiments that are challenging our collections and datasets to be more than just ‘information’ on the web. He calls for collecting institutions “to put the collections themselves squarely at the centre of our thoughts and actions. Instead of concentrating on the relationship between the institution and the public, we can focus on the relationship we both have with the collections”.

Travelling back in time to 2006 at the Powerhouse we made a site called Design Hub. Later the name was reduced to D’Hub, but the concept remained the same. D’Hub was intended to be a design magazine website, curated and edited by the museum and, drawing upon the collection, engaging and documenting design events, people and news from that unique perspective. For the first two years it was one of the Powerhouse’s most successful sites – traffic was regularly 100K+ visits per month – and the content was as continuous as it could be given the resourcing. After that, however, with editorial changes the site began to slip. It has just relaunched with a redesign and new backend (now WordPress). Nicolaas Earnshaw at the Powerhouse gives a great ‘behind the scenes’ teardown of the recent rebuild process on their new Open House blog.

It is clear that the biggest challenge with these sorts of endeavours is the editorial resourcing – anything that isn’t directly museum-related is very easily rationalised away and into the vortex, especially when overall resources are scarce.

So with all that comes the new Walker Art Center website. Launched yesterday it represents a potential paradigm shift for institutional websites.

I spoke to Nate Solas, Paul Schmelzer and Eric Price at the Walker Art Center about the process and thinking behind it.

F&N: This is a really impressive redesign and the shift to a newspaper format makes it so much more. Given that this is now an ‘art/s newspaper’, what is the editorial and staffing model behind it? Who selects and curates the content for it? Does this now mean ‘the whole of Walker Art Center’ is responsible for the website content?

Paul Schmelzer (PS): The Walker has long had a robust editorial team: two copy editors, plus a managing editor for the magazine, but with the content-rich new site, an additional dedicated staffer was necessary, so they hired me. I was the editor of the magazine and the blogs at the Walker from 1998 until 2007, when I left to become managing editor of an online-only national political news network. Coming back to the Walker, it’s kind of the perfect gig for me, as the new focus is to be both in the realm of journalism — we’ll run interviews, thinkpieces and reportage on Walker events and the universe we exist in — and contemporary art. While content can come from “the whole of the Walker Art Center,” I’ll be doing a lot of the content generation and all of the wrangling of content that’ll be repurposed from elsewhere (catalogue essays, the blogs, etc) or written by others. I strongly feel like this project wouldn’t fly without a dedicated staffer to work full-time on shaping the presentation of content on the home page.

F&N: The visual design is full of subtle little newspaper-y touches – the weather etc. What were the newspaper sites the design team was drawing upon as inspiration for the look and feel?

Nate Solas (NS): One idea for the homepage was to split it into “local, onsite” and “the world”. A lot of the inspiration started there, playing with the idea that we’re a physical museum in the frozen north, but online we’re “floating content”. We wanted to ground people who care (local love) but not require that you know where/who we are. “On the internet, nobody knows you’re a dog”.

The “excerpts” of articles was another hurdle we had to solve to make it feel more “news-y”. I built a system to generate nice excerpts automatically (aware of formatting, word endings, etc), but it wasn’t working to “sell the story” in most cases. So almost everything that goes on the homepage is touched by Paul, but we use the excerpt system for old content we haven’t manually edited.
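(As an aside: a minimal sketch of that kind of format-aware excerpting – strip the markup, collapse whitespace, cut on a word boundary – might look something like the Python below. It is illustrative only, not the Walker’s actual system.)

import re

def excerpt(html, max_chars=200):
    """Build a plain-text teaser: strip tags, collapse whitespace,
    cut on a word boundary and add an ellipsis if we truncated."""
    text = re.sub(r"<[^>]+>", " ", html)        # drop markup
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    if len(text) <= max_chars:
        return text
    cut = text.rfind(" ", 0, max_chars)         # last word boundary before the limit
    return text[:cut if cut > 0 else max_chars].rstrip(" ,;:") + "..."

print(excerpt("<p>The homepage is treated as a front page of top stories.</p>", 40))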

PS: Yeah, the subtle touches like the weather, the date that changes each day, and the changing hours/events based on what day it is all serve as subtle reminders that we’re a contemporary art center, that is, in the now. The churn of top stories (3-5 new ones a week) and Art News from Elsewhere items (5-10 a day, ideally) reinforces this aspect of our identity. The design team looked at a wide range of news sites and online magazines, from the New York Times to Tablet Magazine to GOOD.

Eric Price (EP): Yeah, NYTimes, Tablet, and Good are all good. I’d add Monocle maybe. Even Gawker/Huffington Post for some of the more irreverent details. We were also taking cues from print – we’re probably closest in design to an actual printed newspaper.

F&N: I love the little JS tweaks – the way the article recommendations slide out at the base of an article when you scroll that far – the little ‘delighters’. What are you aiming for in terms of reader comments and ‘stickiness’? What are your metrics of success? Are you looking at any newspaper metrics to combine with museum-y ones?

NS: It’s a tricky question, because one of the driving factors in this content-centric approach is that it’s ok (good even) to send people away from our site if that’s where the story is. We don’t have a fully loaded backlog of external articles yet (Art News from Elsewhere), but as that populates it should start to show up more heavily in the Recommendation sections. So the measure of success isn’t just time on site or pageviews, but things like – did they make it to the bottom of the article? Did they stay on the page for more than 30 seconds (actually read it)? Did they find something else interesting to read?

My dream is for the site to be both the start of, and links in, a chain of Wikipedia-like surfing that leads from discovery to discovery, and suddenly an hour’s gone by. (We need more in-article links to get there, but that’s the idea.)

So, metrics. I think repeat visitors will matter more. We want people to be coming back often for fresh & new content. We’ll also be looking for a bump in our non-local users, since our page is no longer devoted to what you can do at the physical space. I’m also more interested in deep entrance pages and exit pages now, to see if we can start to infer the Wikipedia chain of reading and discovery. Ongoing.

F&N: How did you migrate all the legacy content? How long did this take? What were the killer content types that were hardest to force into their new holes?

NS: Content migration was huge, and is ongoing. We have various microsites and wikis that are currently pretty invisible on the new site. We worked hard to build reliable “harvesting” systems that basically pulled content from the old system every day, but were aware of and respected local changes. That worked primarily for events and articles.

A huge piece of the puzzle is solved by what we’re calling “Proxy” records – a native object that represents pretty much anything on the web. We are using the Goose Article Extractor to scrape pages (our own legacy stuff, mostly) and extract indexable text and images, but the actual content still lives in its original home. We obviously customized the scraper a bit for our blogs and collections, but by having this “wrapper” around any content (and the ability to tag and categorize it locally) we can really expand the apparent reach of the site.
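(To make the “Proxy” idea a little more concrete, here is a rough sketch of harvesting a page into a proxy-style record using the python-goose port of the Goose Article Extractor mentioned above. The record fields are hypothetical, not the Walker’s actual model.)

from goose import Goose   # python-goose port of the Goose Article Extractor

def make_proxy(url):
    """Harvest a page into a proxy-style record; the content stays at its source."""
    article = Goose().extract(url=url)
    return {
        "url": url,                                   # canonical home of the content
        "title": article.title,
        "index_text": article.cleaned_text,           # used only for the search index
        "image": article.top_image.src if article.top_image else None,
        "tags": [],                                   # categorised and tagged locally after harvest
    }

record = make_proxy("http://example.com/some-article/")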

F&N: How do you deal with the ‘elsewhere’ content? Do you have content sharing agreements?

NS: [I am not a lawyer and this is just my personal opinion, but] I feel pretty strongly that this is fair use and actually sort of a perfect “use case” for the internet. Someone wrote a good thing. We liked it, we talked about it, and we linked right to it. That’s really the key – we’re going beyond attribution and actually sending readers to the source. We do scrape the content but only for our search index and to seed “more like this” searches, we never display the whole article.

That said, if a particular issue comes up we’ll address it responsibly. We want to be a good netizen, but part of that is convincing people this is a good solution for everyone.

F&N: What backend does the new site run on? Tech specs?

Ubuntu 11.04 VMs
LibVirt running KVM/QEMU hypervisor
Django 1.3 with a few patches, Python 2.7.
Nginx serving static content and proxying dynamic stuff to Gunicorn (Python WSGI).
Postgres 8.4.9
Solr 3.4.0 (Sunburnt Python-Solr interface)
Memcache
Fabric (deployment tool)
ImageMagick (scaling, cropping, gamma)

F&N: What are you using to enable search across so many content types from events to collections? How did you categorise everything? Which vocabularies?

NS: Under the hood it’s Apache Solr with a fairly broad schema. See above for the trick to index multiple content-types: basically reduce to a common core and index centrally, no need to actually move everything. A really solid cross-site search was important to me, and I think we’re pretty close.

We went back and forth forever on the top-level taxonomy, and finally ended with two public-facing categories: Genre and Type. Genre applies to content site-wide (anything can be in the “Visual Arts” Genre), but Type is specific to kind of content (Events can be of type “Screenings”, but Articles can’t). The intent was to have a few ways to drill down into content in cross-site manner, but also keep some finer resolution in the various sections.

We also internally divide things by “Program” (programming department), and this is used to feed their sections of the site and inform the “VA”, “PA”, etc. tags that float on content. So I guess this is also public-facing, but it’s more of a visual cue than a browsable taxonomy.

Vocabularies are pretty ad-hoc at this point: we kept what seemed to work from the old site and adjusted to fit the new presentation of content.

The two hardest fights: keeping the list short and public-facing. This is why we opted to do away with “programming department” as a category: we think of things that way, no one else does.
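(For illustration, querying that kind of “common core” Solr index through the Sunburnt interface listed in the tech specs might look roughly like the sketch below. The field names genre and content_type are invented for the example, not the Walker’s schema.)

import sunburnt

# One Solr core holds every content type; shared fields mean events, articles
# and proxy records all come back from a single query.
si = sunburnt.SolrInterface("http://localhost:8983/solr/")

results = (si.query("performance")            # free-text search across the common core
             .filter(genre="Visual Arts")     # the site-wide Genre facet
             .paginate(start=0, rows=10)
             .execute())

for doc in results:
    print(doc["title"], doc["content_type"])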

F&N: Obviously this is phase one and there’s a fair bit of legacy material to bring over into the new format – collections especially. How do you see the site catering for objects and their metadata in the future?

NS: Hot on the heels of this launch is our work on the Online Scholarly Catalogue Initiative from the Getty. We’re in the process of implementing CollectionSpace for our collections and sorting out a new DAMS, and will very soon turn our attention to building a new collections site.

An exciting part of the OSCI project for me is really opening up our data and connecting it to other online collections and resources. This goes back to the Wikipedia surfing wormhole: we don’t want to be the dead-end! Offer our chapter of the story and give them more things to explore. (The Stedelijk Museum is doing some awesome work here, but I don’t think it’s live yet.)

F&N: When’s the mobile version due?

NS: It just barely didn’t make the cut for launch. We’re trying to keep the core the same and do a responsive design (inspired by but not as good as Boston Globe). We don’t have plans at the moment for a different version of the site, just a different way to present it. So: soon.

Go and check out the new Walker Art Center site.


A new Powerhouse Walking Tours App and a Q&A with Glen Barnes

About a month ago our second walking tour App went live in the AppStore and was promptly featured by Apple leading to a rapid spike in downloads.

The Powerhouse Museum Walking Tours App is a free download, unlike our Sydney Observatory App, and it comes pre-packaged with two tours of the suburbs surrounding the Museum – Pyrmont and Ultimo. Both these tours are narrated by curator Erika Dicker and were put together by Erika and Irma Havlicek (who did the Sydney Observatory one) based on an old printed tour by curator Anni Turnbull.

Neither Pyrmont nor Ultimo is a suburb likely to attract the average tourist, so we felt that they should be free inclusions with the App (as opposed to the Sydney Observatory one).

Additionally, as an in-App purchase you can buy a really great tour of historic Sydney pubs around the CBD written and narrated by Charles Pickett. We’re experimenting with this ‘freemium’ approach to see what happens – especially in comparison to the Observatory tour which requires an upfront payment. So, for a total of AU$1.99 the buyer can get the two included tours and the pubs tour.

So how’s it going?

As of last week we’d had 1,437 downloads of the free App with the two included tours since launch on June 13. 13 of the 1,437 have made the decision to go with the in-App purchase (that’s an upgrade conversion rate of less than 1%). We started getting featured on the AppStore on June 25 and the downloads spiked but there was no effect on in-App purchases. In comparison, the priced Sydney Observatory tour has sold 53 copies since launch a few weeks earlier on May 23.

We’re pretty happy with the results so far despite the low in-App conversions and we’re yet to do any serious promotion beyond that which has come our way via the AppStore. We’re also going to be trying a few other freemium upgrades as we do know that the market for a tour of Sydney pubs is both smaller and different to that of more general historical tours. You’re unlikely to see families taking their kids around Sydney’s pubs, for example.

We even had an unsolicited review from local blogger Penultimo –

We learned a few things very quickly – mostly about our own expectations. The first was this: it’s not going to be like a museum audio tour. The Powerhouse Museum did not pay a professional audio-speaker to make these tours. This means they have a kind of nice, very slightly amateur feel to them. At first this felt a little strange, but we got used to it.

Glen Barnes gets inspired about outdoor mobile tours during a visit to Pompeii in 2003

Glen Barnes runs MyTours, the company behind the software platform we’ve been using to make these tour Apps. Since KiwiFoo, Glen and I had been conversing on and offline about a lot of tour-related issues and I got him to recount some of these conversations in a Q&A.

F&N: My Tours has been very easy for non-technical staff to build, prototype and test tours with. How diverse is the current user base? What are some of the smallest organisations using it?

We’ve got about 26 apps out right now covering 3 main areas:

– Tourism boards and destination marketing organisations (Positively Wellington Tourism in New Zealand and the St Andrews Partnership in Scotland)
– Museums and cultural institutions (Powerhouse Museum, Invisible City Audio Tours, Audio Tours Australia)

One that stands out is the Invisible City Audio Tours App, mainly because the content is great and they’ve spent a lot of time on the stories, photos and audio. (Did you know that people used to sink ships off San Francisco so they could claim the land over the top of them when it got reclaimed? How awesome is that!)

Invisible City App

I think a good tour has to have something to hold it all together – putting pins on a map just simply doesn’t cut it and neither does copying and pasting from Wikipedia.

I’m also a big fan of real people talking about their experiences or their expertise and this was really brought home to me when I met Krissy Clark from Stories Everywhere at Foo Camp a couple of months ago. We went exploring out into the orchard and ‘stumbled’ across a song that was written about the place by a passing musician. The combination of the story and the song really took me back to what it must have been like in the middle of the hippy era.

Of course a great story is no good if people can’t find it. Promotion is key to any app.

I think this is one area where organisations really have to start working with local tourism boards and businesses. If you are from a smaller area then band together and release one app covering the local heritage trail, museum and gardens. The tourism organisations tend to have more of a budget to promote the area and by working together you can help stand out amongst the sea of apps that are out there. Also make sure that you tell people about it and don’t rely on the app stores. Get links on blogs, in the local newspaper and in real life (Welly Walks had a full page article in a major newspaper, two more articles and a spot in KiaOra magazine). Talk to people and make sure the local hotels and others who recommend places-to-go know about what you are doing.

F&N: Do you see My Tours as creating new audiences for walking tours or helping transition existing printed tours to digital? I’m especially interested to know your thoughts on whether this is a transition or whether there might actually be a broader market for tours?

We fit the bill perfectly for transitioning existing printed tours to the mobile space but that is definitely only the start. It is easy to do and creates a first step in creating more engaging content. A criticism some people make is that some of the tour apps don’t have audio – but in reality audio can be expensive to produce. I don’t mean we shouldn’t strive for the best but I would rather see some tours out there and made accessible than not published at all. Also if a few new people who wouldn’t dream of going to the library to pick up a walking tour brochure or booking a tour with the local historical society get interested enough to spend their Sunday exploring the town then that is good enough for me.

F&N: Here at PHM we’re trying both a Freemium and an upfront payment model for the two apps we have running. How have you seen these models work across other My Tours products?

We’ve tried to experiment a bit with different pricing models both for our own pricing and the app pricing. In-app purchasing hasn’t really taken off just yet and I’m not sure how this is going to work long term for this type of content. I’m hopeful that as more people become used to paying for things like magazine subscriptions through apps, simple in-app purchases should become the norm for content just as they are for in-game upgrades. My main advice would be that if you can give the app away for free then do it as your content will spread a lot further that way. One way of doing this would be to get sponsorship for the app or some other form of payment not directly from the users.

F&N: What are the essential ingredients to having a chance of making a Freemium model work?

For any app you have to provide value off the bat to have any chance at all. For example you can’t give away an app and then charge for all of the content within – You will get 1 star reviews on the store straight away. Apart from that are you offering something that someone just has to have? That is a big call in the GLAM sector but if anyone has ideas of what content that is I would love to hear about it!

F&N: I was struck by My Tours affordability compared to many other mobile tour-builders. Do you think you’ve come at the ‘mobile tours’ world from leftfield? What assumptions have you overturned by being from outside the ‘tour scene’?

When we started we didn’t really look at any other solutions (as far as I know we were working on My Tours before anyone else had a completely web based tour builder like ours). I think we also did a few things with our tour builder that are a bit different because we hadn’t come from within the tour ‘scene’. The whole idea of having to upload ‘assets’ to your ‘library’ before even getting started just seemed a bit weird and convoluted to me so we just let people add images and audio directly to the stops as they needed them. Also opening up the tour builder to anyone without them having to sit through a sales pitch from me was a first – I see no reason why you have to qualify people before they even kick the tyres.

We also challenged the assumptions that apps were only available to those with lots of money. The internet has this amazing ability to put everyone on an equal footing and let everybody’s voice be heard. This doesn’t mean that all voices are perfect but what it does mean is that money isn’t the measure of quality. Put another way there is no reason why the Kauri Museum shouldn’t have their own app just like the MoMA. It might not have all of the bells and whistles of an app from a major museum but at the same time it won’t take a hundred thousand dollars to develop.

It is interesting to look in more detail at pricing. We approached pricing by looking at a couple of other generic app builders and also looking at what value we provide. We’ve based the value proposition on the number of downloads that most of our apps will receive. Welly Walks is doing around 30-50 downloads a week which means they are paying around 30-50 cents for each app that gets downloaded. That is great value for them. Other apps are not getting quite so many downloads. If you are a smaller organisation you may only get 10 a week and the price per app is $1.50-$2 which still seems OK.

Looking at the charging models for some other tour builders and at those same download rates over a 2 year period you’d be looking at $11 and $16 an app for 10 downloads a week or $2.50 and $3.50 for 50 downloads a week. Of course, there are other factors apart from cost per download that come into it (For example renting the devices on site) but the bottom line is “Are we getting value for money?”. We may add in different pricing tiers as we add more features but I expect this will be around how deep you want to go with customising the look and feel of the app – custom theming for example.

F&N: I was really impressed to see that you had been implementing TourML import/export.

TourML just seems like a no-brainer. To me it serves 2 purposes: 1) to enable organisations to export/backup their data from a vendor’s system in a known format and 2) to allow content to be easily shared between different platforms.

Now some vendors want to lock you into their system and their way of doing things and they try and make it hard to leave. Instead we started from scratch building our company based on the modern practice of monthly charging and no long term contracts. As they say, “you’re only as good as your last release” and this keeps pushing us to build a better product. And while we don’t have the TourML export in the interface yet (the standard isn’t at that stage where we feel comfortable putting all of the finishing touches on our proof of concept) we see no reason why people who want to move on should not have access to the data – after all it is theirs.

We also want to see content available on more devices and pushed out to more people. Isn’t the whole point of the GLAM sector to enable access to our cultural heritage? By having an open format it means that a tour may end up on devices that are too niche for the museums to support internally (Blackberry anyone?).
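(As a purely illustrative sketch of the export idea, a tour-to-XML dump can be a few lines of Python. The element names below are simplified placeholders of mine, not the real TourML schema, which defines its own namespaces and richer types.)

import xml.etree.ElementTree as ET

def export_tour(title, stops):
    """Dump a tour to XML. Element names are simplified placeholders,
    not the actual TourML schema."""
    tour = ET.Element("Tour")
    ET.SubElement(tour, "Title").text = title
    for stop in stops:
        node = ET.SubElement(tour, "Stop", id=stop["id"])
        ET.SubElement(node, "Title").text = stop["title"]
        ET.SubElement(node, "AudioAsset").text = stop.get("audio_url", "")
    return ET.tostring(tour)

xml = export_tour("Sydney Pubs", [{"id": "stop-1", "title": "Hero of Waterloo", "audio_url": ""}])

The point of an agreed format is exactly that both ends of an exchange like this can be written independently – a vendor exports, anyone else imports.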

F&N: What do you think about ‘augmented reality’ in tours? Do you see MyTours exploring that down the track?

I’ve got a love/hate relationship with AR. On the one hand I really want it to work but on the other I have never actually seen it work.

I think two examples show this clearly.

On a trip to London last year I was looking forward to trying the Museum of London’s award-winning Streetmuseum app, which places various historical photographs around the city. But having done so I came away with a couple of nagging issues. I never once got a lock on an image actually hovering over the correct location (even at one spot which has a wide open sky due to the construction of the new crosslink tunnel). Here’s a screengrab from my phone where you will see the photo is way off the mark.

The second unfavourable experience with Streetmuseum was less technical and more a psychological issue – I actually felt really vulnerable standing in the middle of touristy London holding up my iPhone with my pockets exposed. I was always conscious of a snatch and grab or a pickpocket.

The second example was during Museums and the Web 2011 where Azavea held a Walking tour of Historic Philadelphia.

A group of about 15-20 of us set off with the PhillyHistory.org mobile app and walked around the city looking at various sights. It only took about 10 minutes before our devices were tucked firmly back in the pocket as we couldn’t really get it to work reliably – and this is from 20 dedicated museum and mobile practitioners! Let me point out that I don’t think it was a bad implementation of the current technology (they really have a bunch of talented people working there), I just think that the technology isn’t ready. You can download a whitepaper from Azavea on the project from their website which goes into some of the issues they faced and their approach.

I think there are some opportunities where it does make sense but the outdoor ‘tour’ space I don’t think is one of them (yet). So will we be adding AR to My Tours? Not any time soon in the traditional sense but if someone can show me something that adds value down the road? Sure.

F&N: You are also really committed to open access to civic data. How do you see commercial models adapting to the changes being brought through open access?

I’m a big Open Data fan (I helped found Open New Zealand). I’m not sure where that came from but I got interested in open source in 1999 when Linux was starting to take off and I just loved the way that many people working together could build tools that in a lot of instances were better than their commercial equivalents. I’ve also worked for companies where there were a lot of manual tasks and a lot of wasted human effort. Open Data means that we can all work together to build something greater than the sum of its parts with the understanding that we can both get a shared value out of the results. It also means that people can build tools and services on top of this data without spending days trying to get permission before they even start and can instead focus on providing real value to others. I’m really proud of the work myself and the other Open Data folk are doing in NZ. We’ve got a great relationship with those within government and we are starting to see some real changes taking place.

How will companies adapt to this? If you are charging money through limiting access to content then you will no longer have a business. When you think about it how did we ever get in a situation where businesses produced content and then licensed this under restrictive licenses back to the organisations that paid for it in the first place? If you commission an audio track then you should own it and be free to do what you like with it. Mobile? Web? CC licensed? That should all be fine. Therefore the value that the producer adds is where the business model is. For My Tours, that is in providing an easy to use platform where we take all of the hassle out of the technical side of the app development process – you don’t need a ‘computer guy’ and a server to set up a TAP instance. That is what we are experts in and that is what we will continue to focus on.


Making use of the Powerhouse Museum API – interview with Jeremy Ottevanger

As part of a series of ‘things people do with APIs’ here is an interview I conducted with Jeremy Ottevanger from the Imperial War Museum in London. Jeremy was one of the first people to sign up for an API key for the Powerhouse Museum API – even though he was on the other side of the world.

He plugged the Powerhouse collection into a project he’s been doing in his spare time called Mashificator, which combines several other cultural heritage APIs.

Over to Jeremy.

Q – What is Mashificator?

It’s an experiment that got out of hand. More specifically, it’s a script that takes a bit of content and pulls back “cultural” goodies from museums and the like. It does this by using a content analysis service to categorise the original text or pull out some key words, and then using some of these as search terms to query one of a number of cultural heritage APIs. The idea is to offer something interesting and in some way contextually relevant – although whether it’s really relevant or very tangential varies a lot! I rather like the serendipitous nature of some of the stuff you get back but it depends very much on the content that’s analysed and the quirks of each cultural heritage API.

There are various outputs but my first ideas were around a bookmarklet, which I thought would be fun, and I still really like that way of using it. You could also embed it in a blog, where it will show you some content that is somehow related to the post. There’s a WordPress plugin from OpenCalais that seems to do something like this: it tags and categorises your post and pulls in images from Flickr, apparently. I should give it a go! Zemanta and Adaptive Blue also do widgets, browser extensions and so on that offer contextually relevant suggestions (which tend to be e-commerce related) but I’d never seen anything doing it with museum collections. It seemed an obvious mashup, and it evolved as I realised that it’s a good way to test-bed lots of different APIs.

What I like about the bookmarklet is that you can take it wherever you go, so whatever site you’re looking at that has content that intrigues you, you can select a bit of a page, click the bookmarklet and see what the Mashificator churns out.

Mashificator uses a couple of analysis/enrichment APIs at the moment (Zemanta and Yahoo! Terms Extractor) and several CH APIs (including the Powerhouse Museum of course!) One could go on and on but I’m not sure it’s worthwhile: at some point, if this is helpful to anyone, it will be done a whole lot better. It’s tempting to try to put a contextually relevant Wolfram Alpha into an overlay, but that’s not really my job, so although it would be quite trivial to do geographical entity extraction and show a map of the results, for example, it’s going too far beyond what I meant to do in the first place so I might draw the line there. On the other hand, if the telly sucks on Saturday night, as it usually does, I may just do it anyway.

Besides the bookmarklet, my favourite aspect is that I can rapidly see the characteristics of the enrichment and content web services.
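(The overall pipeline Jeremy describes can be sketched in a few lines of Python. The endpoints and parameters below are placeholders of mine, not the actual Zemanta, Yahoo! or Powerhouse API calls.)

import requests

def mashificate(selected_text):
    """Sketch of the flow: extract terms from the selection, then pull
    'cultural goodies' back from a collection API. URLs are placeholders."""
    # 1. Send the highlighted text to a term-extraction / enrichment service.
    terms = requests.post("https://example.org/extract-terms",
                          data={"text": selected_text}).json()["terms"]
    # 2. Use one of the terms to search a cultural heritage API.
    hits = requests.get("https://example.org/collection/search",
                        params={"q": terms[0], "limit": 5}).json()
    # 3. The bookmarklet overlay then renders whatever came back.
    return hits["results"]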

Q – Why did you build it?

I built it because I’m involved with the Europeana project, and for the past few years I’ve been banging the drum for an API there. When they had an alpha API ready for testing this summer they asked people like me to come up with some pilots to show off at the Open Culture conference in October. I was a bit late with mine, but since I’d built up some momentum with it I thought I may as well see if people liked the idea. So here you go…

There’s another reason, actually, which is that since May (when I started at the Imperial War Museum) it’s been all planning and no programming so I was up for keeping my hand in a bit. Plus I’ve done very little PHP and jQuery in the past, so this project has given me a focussed intro to both. We’ll shortly be starting serious build work on our new Drupal-based websites so I need all the practice I can get! I’m still no PHP guru but at least I know how to make an array now…

Q – Most big institutions have had data feeds – OAI etc – for a long time now, so why do you think APIs are needed?

Aggregation (OAI-PMH‘s raison d’etre) is great, and in many ways I prefer to see things in one place – Europeana is an example. For me as a user it means one search rather than many, similarly for me as a developer. Individual institutions offering separate OPACs and APIs doesn’t solve that problem, it just makes life complicated for human or machine users (ungrateful, aren’t I?).

But aggregation has its disadvantages too: data is resolved to the lowest common denominator (though this is not inevitable in theory); there’s the political challenge of getting institutions to give up some control over “their” IP; the loss of context as links to other content and data assets are reduced. I guess OAI doesn’t just mean aggregation: it’s a way for developers to get hold of datasets directly too. But for hobbyists and for quick development, having the entirety of a dataset (or having to set up an OAI harvester) is not nearly as useful or viable as having a simple REST service to programme against, which handles all the logic and the heavy lifting. And conversely for those cases where the data is aggregated, that doesn’t necessarily mean there’ll be an API to the aggregation itself.

For institutions, having your own API enables you to offer more to the developer community than if you just hand over your collections data to an aggregator. You can include the sort of data an aggregator couldn’t handle. You can offer the methods that you want as well as the regular “search” and “record” interfaces, maybe “show related exhibitions” or “relate two items” (I really, really want to see someone do this!) You can enrich it with the context you see fit – take Dan Pett’s web service for the Portable Antiquities Scheme in the UK, where all the enrichment he’s done with various third party services feeds back into the API. Whether it’s worthwhile doing these things just for the sake of third party developers is an open question, but really an API is just good architecture anyway, and if you build what serves your needs it shouldn’t cost that much to offer it to other developers too – financially, at least. Politically, it may be a different story.

Q – You have spent the past while working in various museums. Seeing things from the inside, do you think we are nearing a tipping point for museum content sharing and syndication?

I am an inveterate optimist, for better or worse – that’s why I got involved with Europeana despite a degree of scepticism from more seasoned heads whose judgement I respect. As that optimist I would say yes, a tipping point is near, though I’m not yet clear whether it will be at the level of individual organisations or through massive aggregations. More and more stuff is ending up in the latter, and that includes content from small museums. For these guys, the technical barriers are sometimes high but even they are overshadowed by the “what’s the point?” barriers. And frankly, what is the point for a little museum? Even the national museum behemoths struggle to encourage many developers to build with their stuff, though there are honourable exceptions and it’s early days still – the point is that the difficulty a small museum might have in setting up an API is unlikely to be rewarded with lots of developers making them free iPhone apps. But through an aggregator they can get it in with the price.

One of my big hopes for Europeana was that it would give little organisations a path to get their collections online for the first time.
Unfortunately it’s not going to do that – they will still have to have their stuff online somewhere else first – but nevertheless it does give them easy access both to audiences and (through the API) to third party developers that otherwise would pay them no attention. The other thing that CHIN, Collections Australia, Digital NZ, Europeana and the like do, is offer someone big enough for Google and the like to talk to. Perhaps this in itself will end up with us settling on some de facto standards for machine-readable data so we can play in that pool and see our stuff more widely distributed.

As for individual museums, we are certainly seeing more and more APIs appearing, which is fantastic. Barriers are lowering, there’s arguably some convergence or some patterns emerging for how to “do” APIs, we’re seeing bold moves in licensing (the boldest of which will always be in advance of what aggregators can manage) and the more it happens the more it seems like normal behaviour that will hopefully give others the confidence to follow suit. I think as ever it’s a matter of doing things in a way that makes each little step have a payoff. There are gaps in the data and services out there that make it tricky to stitch together lots of the things people would like to do with CH content at the moment – for example, a paucity of easy and free to use web services for authority records, few CH thesauri, no historical gazetteers. As those gaps get filled in, the use of museum APIs will gather pace.

Ever the optimist…

Q – What is needed to take ‘hobby prototypes’ like Mashificator to the next level? How can the cultural sector help this process?

Well in the case of the Mashificator, I don’t plan a next level. If anyone finds it useful I suggest they ask me for the code or do it themselves – in a couple of days most geeks would have something way better than this. It’s on my free hosting and API rate limits wouldn’t support it if it ever became popular, so it’s probably only ever going to live in my own browser toolbar and maybe my own super-low-traffic blog! But in that answer you have a couple things that we as a sector could do: firstly, make sure our rate limits are high enough to support popular applications, which may need to make several API calls per page request; secondly, it would be great to have a sandbox that a community of CH data devotees could gather around/play in. And thirdly, in our community we can spread the word and learn lessons from any mashups that are made. I think actually that we do a pretty good job of this with mailing lists, blogs, conferences and so on.

As I said before, one thing I really found interesting with this experiment was how it let me quickly compare the APIs I used. From the development point of view some were simpler than others, but some had lovely subtleties that weren’t really used by the Mashificator. At the content end, it’s plain that the V&A has lovely images and I think their crowd-sourcing has played its part there, but on the other hand if your search term is treated as a set of keywords rather than a phrase you may get unexpected results… YTE and Zemanta each have their own characters, too, which quickly become apparent through this. So that test-bed thing is really quite a nice side benefit.

Q – Are you tracking use of Mashificator? If so, how and why? Is this important?

Yes I am, with Google Analytics, just to see if anyone’s using it, and if when they come to the site they do more than just look at the pages of guff I wrote – do they actually use the bookmarklet? The answer is generally no, though there have been a few people giving it a bit of a work-out. Not much sign of people making custom bookmarklets though, so that perhaps wasn’t worthwhile! Hey, lessons learnt.

Q – I know you, like me, like interesting music. What is your favourite new music to code-by?

Damn right, nothing works without music! (at least, not me.) For working, I like to tune into WFMU, often catching up on archive shows by Irene Trudel, Brian Turner & various others. That gives me a steady stream of quality music familiar and new. As for recent discoveries I’ve been playing a lot (not necessarily new music, mind), Sharon van Etten (new), Blind Blake (very not new), Chris Connor (I was knocked out by her version of Ornette Coleman’s “Lonely Woman”, look out for her gig with Maynard Ferguson too). I discovered Sabicas (flamenco legend) a while back, and that’s a pretty good soundtrack for coding, though it can be a bit of a rollercoaster. Too much to mention really but lots of the time I’m listening to things to learn on guitar. Lots of Nic Jones… it goes on.

Go give Mashificator a try!


Three short links – quantified self

Here are a few short and interesting things I’ve been playing with for a little while. Each of these revolves around the idea of documenting one’s own behaviour by laying down data – Kevin Kelly’s notion of the ‘quantified self’.

I’m very interested in how this personal behavioural data can be used to improve our own and collective relations and awareness. These also raise issues around the changing nature of privacy and the ways that we ‘produce’ our own identities (or as Michelle Kasprzak described it at Picnic 10 – our ‘forked identities’). I’m interested in this both as someone who spent a fair amount of time in a past-life researching subcultural infrastructure, and also from the perspective of how we use these to do interesting things in museums.

Mappiness is a mobile app from the London School of Economics who are doing a UK research project trying to map ‘happiness’ across the country. Now whilst the research data is only concerned with the UK, the app works internationally and I’ve been using it to ‘track my own happiness’. Not only am I submitting data – I can see my own data which is what I care most about. (If this experiment was being run by someone other than LSE I may not have participated.)

Here’s my happiness, sleepiness and awakeness graphed over the past little while.

For nearly five years now, I’ve been tracking all of my music listening through a commercial service called Last.FM. Once I figured out that the aggregate data was actually, for me, incredibly interesting, I started making the extra effort to ensure that at least 95% of the music I had agency in choosing was tracked. Being someone who also has a musical alter ego, this data represents the reality of my musical identity, versus the projected music identity. (Next year I’m publishing an entire representation of five years of my listening).

Now Last.FM has had a very active community around its dataset and last week their Playground section launched LastFM’s Gender Plot. This takes your tracked listening and compares it to the aggregate of everyone else’s listening and self-presented demographic data and plots where you fit.

Apparently I have very gender-balanced listening and am a fair bit younger in my tastes than my actual age.

The third is Readermeter (via @lorcand). Readermeter is different in that it presents different ‘measures’ of impact for authors. It visually presents the H and G index of publications (citations-based) along with ‘readership’ data from the Mendeley API. I like this one, because much like Last.FM, this is all about shifting the impact data from being about ‘sales data’ to readership and use data.
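For reference, both measures can be computed directly from a list of per-publication citation counts: the h-index is the largest h such that h publications have at least h citations each, and the g-index is the largest g such that the top g publications together have at least g² citations. A quick sketch:

def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def g_index(citations):
    """Largest g such that the top g publications together have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    running, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running += cites
        if running >= rank * rank:
            g = rank
    return g

papers = [25, 8, 5, 3, 3, 1]
print(h_index(papers))  # 3
print(g_index(papers))  # 6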

Here’s a link to Creative Commons founder Lawrence Lessig’s profile on Readermeter. You can see impact of his different books from both citations and readership. Bear in mind that this data is heavily skewed towards academic-focussed publishing.


Museum implications of the Columbia report on metrics for digital journalism

Web analytics is a tricky game, and often the different ways of measuring things confuse the very people they are meant to help make better decisions.

For the museum sector, analytics seems even more foreign, largely because we’ve never had a very good way of generating such huge amounts of quantitative data about our visitors before.

We’re not alone in this.

As you’ve probably read in recent weeks there has been a fair bit of discussion, debate, and doomsday predictions coming out of the journalism world as it was revealed that, lo and behold, newspapers were using web analytics in their newsrooms.

This month, though, Lucas Graves, John Kelly and Marissa Gluck at the Tow Center for Digital Journalism at Columbia University, have published an excellent report on the different types of traffic metrics that news websites are confronted with.

Provocatively titled Confusion Online: Faulty Metrics & the Future of Digital Journalism, the report explains the history and reasons for the widely divergent figures resulting from different types of reader measurement – panel-based and census-based approaches.

A lot of these reasons have to do with who the figures are being generated for, and the historical role that readership figures have played in the pricing and sale of advertising. So we need to take this into account when we in the museum sector work with the same types of measurement tools.

Indeed, the resistance to shifting from the historical panel-based measurement to site-based (or as the authors call it, census-based) measurement is largely to do with the enormous commercial implications for how advertising is priced and sold that would result. (Fortunately museums cannot afford the panel-based solutions so we’re already mostly committed to census-based site analytics.)

There are two telling sections –

This is the case at both the New York Times and the Wall Street Journal, which sell most online inventory on a CPM [cost per thousand impressions] or sponsorship basis and do not participate in ad networks (other than Google’s AdSense, which the Times uses). “We sell brand, not click‐through,” declares the Journal’s Kate Downey flatly. “We’re selling our audience, not page counts.”

Marc Frons echoes the sentiment, pointing out that the Times can afford to take the high road. “For us as the New York Times, brand is important,” he says. “You really want to make the Internet a brand medium. To the extent CPC [cost per click] wins, thatʹs a bad thing.”

and

. . . the rise of behavioral targeting represents a distinct threat to publishers: By discriminating among users individually, behavioral targeting diminishes the importance of a site’s overall brand and audience profile. Suddenly the decisive information resides not with the publisher but in the databases of intermediaries such as advertising networks or profile brokers. A similar threat may be emerging in the domain of demographic targeting. As it becomes more possible to attach permanent demographic profiles to individual users as they travel the Web, the selection of outlets will matter less in running a campaign.

This is why online media outlets tend not to participate in third‐party ad networks if they can avoid it. “We donʹt want to be in a situation where someone can say, ‘I can get you somebody who reads the Wall Street Journal while theyʹre on another site that costs half as much,’” explains Kate Downey.

Museums and others in the cultural sector operate on the web as (mostly) ad-free publishers. We’ve traditionally thought of our websites as building the brand – in the broadest possible terms. In fact we don’t usually use the term ‘brand’ but replace it with terms like ‘trustworthiness’. Now we’re not ‘selling ad space’ but we are trying to build a loyal visitor base around our content – and that relies on building that ‘trustworthiness’ and that only happens over time and through successful engagement with our activities.

We invest in making and developing content other than the opening hours and what’s on information – the brochure parts of our web presences – because it builds this sense of trust and connection with visitors. This sense of trust and connection is what makes it possible to achieve the downstream end goals of supporting educational outcomes and the like.

But just as the base unit of news becomes the ‘article’, not the publication, we are also seeing the base unit of the ‘online museum experience’ reduce from the website (or web exhibit) to the objects, and in some cases to just being hyperlinked ‘supporting reference material’. This is where we need to figure out the right strategies and rationales for content aggregation; unless we do, this is going to continue to cause consternation.

We also need to pay a lot more attention to the measurement approaches that best support the different needs we have compared to advertising-supported publishers.


Why a touch interface matters

A shorter, more folksy interlude post – the kind I used to do more of when this blog first started nearly 5 years ago (only a few more days until the blog turns 5!).

Over dinner a few nights ago at Museums & the Web I was sitting with Kevin von Appen from the Ontario Science Centre. We were talking about the iPad and the lack of a stylus, and a possible future of voice control. We had a great chat about changing interfaces.

About a year ago I was thinking about why everyone becomes so ‘attached’ to their iPhones – and it dawned on me that the constant physical touching of the device, the stroke to unlock, the pressing, the sensual interaction, might be a strong reason why people become so connected to them.

Sure, a stylus might be more ‘accurate’ and, in the future, voice control might offer a hands-free solution, but with a touch interface these kinds of devices become intimate and personal – not just slaves to your commands, but personal assistants and ‘friends’.

‘Intimate and personal’ matters a lot more than most of us as technologists like to think.


Subject or photographer location? Changing contexts of geotagged images in AR applications

If you’ve tried the Powerhouse Museum layer in Layar in the past few days on the streets of Sydney you may have noticed some odd quirks.

Let’s say you are in Haymarket standing right here.

You open Layar and it tells you that where you are standing is the location of the following image.

Now when we were geo-tagging these images in Flickr we made a decision to locate them on the point closest to where the photographer would have stood. That seemed like a sensible enough option as it would mean that you could pan around from that point in Google Street View or similar and find a pretty close vista. This is well demonstrated in Paul Hagon’s mashup.

In the example above, if we had geotagged the subject of the image (the lighthouse) on its exact location then the Street View mashup would not function. This would be the same for many other images – the Queen Victoria Building, the Post Office, and the building in Haymarket.

However, AR applications work in the physical world and so we have another problem. If you are walking around you don’t necessarily want directions to the place where a photograph was taken, but directions to the subject of the image – especially if the camera-based heads-up-display is overlaying the image over the view of the world. This is particularly the case with historic images as the buildings have often either changed or been demolished making the point-of-view of the photographer hard to recreate. (Fortunately the Haymarket building is still there so reconstructing the view is not too difficult).

The larger the subject, the more problematic this becomes – as the photographer would stand further and further away to take the shot. Think about where a photographer might stand to photograph the Sydney Tower (or the Eiffel Tower) for example – it would be nowhere near the actual location of the subject of the photograph. Showing this on a mobile device makes far more sense if it is the subject of the photograph that is the ‘location’.

Question is, should we re-geo-locate our images? Or geo-locate both the photographer’s position and the subject’s position separately?

Either way we need to look into how people actually use these applications more – it might be that it doesn’t really matter as long as there are some obvious visual markers.
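One way to keep both options from the closing question open is simply to store the photographer’s point and the subject’s point side by side and let each application choose which to use. A minimal sketch – the field names and coordinates here are illustrative only, not our actual Flickr data:

from math import radians, sin, cos, asin, sqrt

def distance_m(a, b):
    """Great-circle (haversine) distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

# Keep both points on the record; let each application choose which one to use.
image = {
    "title": "Corner building, Haymarket",
    "photographer_point": (-33.8795, 151.2030),   # where the camera stood
    "subject_point": (-33.8800, 151.2041),        # what the photograph depicts
}

print(round(distance_m(image["photographer_point"], image["subject_point"])), "m apart")

A Street View-style mashup would keep using photographer_point, while an AR browser like Layar would navigate to subject_point – and the distance between the two is a rough proxy for how hard the photographer’s original viewpoint will be to recreate on foot.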