Categories
Mobile QR codes

Shortened URLs as an alternative to QR codes

The first time we did something with QR codes at the Powerhouse was in 2008 during Sydney Design festival. Last year we experimented with them on object labels with mixed results.

Now for our latest fashion exhibition, Frock Stars, we’ve replaced QR codes on labels with our new shortened URLs.

We’ll be keeping an eye on how these go.

My gut feeling is that these get around the application requirements and the scanning and light issues of QR codes – and whilst they may not attract ‘curious’ visitors, they should be obvious enough for those visitors who really do want to ‘know more’.

Categories
Mobile MW2010 User experience

First impression of the iPad (and museum possibilities)

Here’s something I wrote about the iPad on the flight back from Museums and the Web 2010. I promise a full conference rundown later.

I’ve just spent about 24 hours sitting in a confined airline seat playing with an iPad. I picked one up in New York the day before flying out, and here are some thoughts on the experience.

The iPad is quite a lovely device – it is tactile and, whilst heavier than expected, it is far lighter than the only other device I’d have tried typing this out on – my laptop, which is now “safely stowed in the overhead locker”. Not to mention that if the guy in front of me suddenly decides to lean his seat back, it won’t get crushed.

I managed to load the iPad up in the hotel with a small selection of iPad apps – Pages which I am using to type this, Scrabble for playing with my seat-mate, Instapaper for offline reading of webpages I’ve bookmarked to read later, and GoodReader for the PDFs of academic and business papers I end up with. It also transferred all my existing iPhone games happily.

As expected there were a few slight difficulties. It took me a little while to figure out how to load documents onto the device – the ‘Apps’ tab in iTunes isn’t the most logical place for loading them into Pages and GoodReader. And, to make sure I could catch up with some videos I had on my laptop, I had to do some file conversion to MP4 format using the open source tool Miro.
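
Miro handled that conversion through a GUI, but the same re-encode can be sketched from the command line. The snippet below only assembles an ffmpeg invocation (ffmpeg is assumed to be installed; the filenames are placeholders):

```python
import subprocess  # only needed if you actually run the command

def mp4_convert_command(source, dest):
    """Build an ffmpeg command that re-encodes a video into an
    iPad-playable MP4 (H.264 video, AAC audio)."""
    return [
        "ffmpeg",
        "-i", source,           # input file (placeholder name)
        "-vcodec", "libx264",   # H.264 video
        "-acodec", "aac",       # AAC audio
        dest,                   # output file (placeholder name)
    ]

cmd = mp4_convert_command("talk.avi", "talk.mp4")
# subprocess.run(cmd, check=True)  # uncomment to perform the conversion
```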

On a single charge I’ve managed a full flight from New York to Sydney with moderate use, and there’s 20% charge left. It wasn’t running all the time, but I’ve done a bunch of typing, watched a couple of hours of video, listened to music, and played some graphically intensive games, as well as about 10 rounds of Scrabble. I even managed to spend an hour on the painfully slow wifi at the LAX lounge.

I’m not a current consumer of ebooks but I do read a lot of long-ish form online content – 3,000-word-plus articles. Magazine articles, extensive blogposts, opinion pieces – and for this use Instapaper and the iPad are a killer combo. If I find something I want to read during my day I can just mark it as ‘read later’ with a bookmarklet in my laptop browser, and then when my iPad connects to wifi it downloads these for me so I can read offline whilst in transit. The iPad version of Instapaper works very well and allows font and flow changes, making for a good reading experience on the device.

In many ways the iPad fills an immediate need of mine to have something more portable than my laptop and bigger than my phone for reading this kind of content – I expect there are a fair few people who share a similar need. Does it replace these other devices? No, it simply offers a more convenient context and experience for reading. Is it a ‘lean back’ device? Definitely. And there are plenty of times when I need to be able to ‘lean back’ and absorb/consume content before heading off to ‘make and do’ content elsewhere on another device.

There’s a stack of potential for these devices in the museum space. I’m not a fan of the individualising nature of traditional museum guides and tour devices. I find the small screen and inherently singular experience of a museum guide, delivered either on a ‘hired’ device or my own phone, severely compromised.

But here with the iPad (and whatever follows as a result of it changing the tablet marketplace), we finally have a light, portable, and easy to use device that allows museum tours to be enjoyed collectively – even as a family group. In fact, the development work needed to convert existing iPhone-optimised web content into something that suits the iPad is relatively minimal.

Consider the options for visitors stopping by a showcase or a set of objects wanting to know more about them. They pull out the iPad that they have ‘hired/borrowed’ at the front desk, and flick through to the collection information about those objects, pull up the videos in which the makers are interviewed, and pass the device between family members to show each other. Better yet, if they so wish, all this content is still available online for reference when they get home or back to school.

Categories
Conceptual Mobile

Why a touch interface matters

A shorter, more folksy interlude post – the kind I used to do more of when this blog first started nearly 5 years ago (only a few more days until the blog turns 5!).

Over dinner a few nights ago at Museums & the Web I was sitting with Kevin von Appen from the Ontario Science Centre. We were talking about the iPad and the lack of a stylus, and a possible future of voice control. We had a great chat about changing interfaces.

About a year ago I was thinking about why everyone becomes so ‘attached’ to their iPhones – and it dawned on me that the constant physical touching of the device, the stroke to unlock, the pressing, the sensual interaction, might be a strong reason why people become so connected to them.

Sure, a stylus might be more ‘accurate’ and, in the future, voice control might offer a hands-free solution, but with a touch interface these kinds of devices become intimate and personal – not just slaves to your commands, but personal assistants and ‘friends’.

‘Intimate and personal’ matters a lot more than most of us as technologists like to think.

Categories
Conceptual Geotagging & mapping Mobile

Subject or photographer location? Changing contexts of geotagged images in AR applications

If you’ve tried the Powerhouse Museum layer in Layar in the past few days on the streets of Sydney you may have noticed some odd quirks.

Let’s say you are in Haymarket standing right here.

You open Layar and it tells you that where you are standing is the location of the following image.

Now when we were geo-tagging these images in Flickr we made a decision to locate them on the point closest to where the photographer would have stood. That seemed like a sensible enough option as it would mean that you could pan around from that point in Google Street View or similar and find a pretty close vista. This is well demonstrated in Paul Hagon’s mashup.

In the example above, if we had geotagged the subject of the image (the lighthouse) on its exact location then the Street View mashup would not function. This would be the same for many other images – the Queen Victoria Building, the Post Office, and the building in Haymarket.

However, AR applications work in the physical world and so we have another problem. If you are walking around you don’t necessarily want directions to the place where a photograph was taken, but directions to the subject of the image – especially if the camera-based heads-up display is overlaying the image over the view of the world. This is particularly the case with historic images, as the buildings have often either changed or been demolished, making the point-of-view of the photographer hard to recreate. (Fortunately the Haymarket building is still there so reconstructing the view is not too difficult.)

The larger the subject, the more problematic this becomes – as the photographer would stand further and further away to take the shot. Think about where a photographer might stand to photograph the Sydney Tower (or the Eiffel Tower) for example – it would be nowhere near the actual location of the subject of the photograph. Showing this on a mobile device makes far more sense if it is the subject of the photograph that is the ‘location’.

Question is, should we re-geo-locate our images? Or geo-locate both the photographer’s position and the subject’s position separately?
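
One way out of the dilemma would be to store both positions against each image and let the application decide which to use. A minimal sketch (the coordinates below are illustrative, not our actual catalogue values):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical record keeping both positions for one image
photo = {
    "title": "Lighthouse",
    "photographer": (-33.8570, 151.2850),  # where the camera stood (made up)
    "subject": (-33.8509, 151.2851),       # the lighthouse itself (made up)
}

# An AR layer could anchor the image on the subject point but still
# report how far away the photographer stood - here a few hundred metres.
offset_m = haversine_m(*photo["photographer"], *photo["subject"])
```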

Either way we need to look into how people actually use these applications more – it might be that it doesn’t really matter as long as there are some obvious visual markers.

Categories
Geotagging & mapping Mobile

New version of Powerhouse Museum in Layar : augmented reality browsing of museum photos around Sydney

Last year we trialled Layar for the display of historical photos of Sydney from the collection. At the time Layar was not all that stable and our content was mixed in with those of others.

Now the application is more stable and our layer in Layar is discoverable simply by searching ‘Powerhouse Museum’ in the Layar browser. You can also now view the original images in Flickr without leaving Layar, making for a far better user experience.

This is still very early days and we’re thinking through the possibilities. Thanks to Rob and Alex at Mob-Labs for the development work.

What do I need?

You’ll need either an iPhone 3GS or an Android phone. It is not compatible with the iPod Touch or earlier versions of the iPhone because they lack a compass.

Then you need to install the free Layar application.

Using Layar

1. Go to the central business district of Sydney.

2. Open Layar on your mobile device then select Search.
Type ‘powerhouse museum’.

3. Select the layer to open the browser with Powerhouse content loaded.

You may find a lot of points appear on your screen. If this happens you need to reduce the view distance in Layar.

You can switch between the ‘reality’ view and map and list views.

Interacting with Layar

Selecting a point of interest in any view will bring up a thumbnail, along with options to view the image on Flickr or to get navigation directions to the point if it is not where you are standing.

Can I try it if I am not in Sydney?

If you’d like to try it from outside Sydney you can do so. You’ll have to go to the Layar Preferences – under your phone settings on the iPhone – and set the ‘Use Fixed Location’ to On. The latitude you should enter is -33.878611 and longitude 151.19944.

The next version?

Soon we’ll be uploading a bunch of other points – contemporary photography from the same locations – and then adding some game elements. Stay tuned.

Thanks again to Rob and Alex at Mob-Labs for the development work.

Categories
Geotagging & mapping Mobile

Augmented reality update – using Powerhouse geocoded photographs on your iPhone 3GS with BuildAR and Layar

So you read about MOB’s implementation of the Powerhouse historical images in Layar for the Android phones . . . well, Layar is now available for the iPhone!

You’ll need a 3GS as it uses the compass for orientation but the Layar application is free from the App Store.

[Screenshot: layar-itunes]

Once you’ve installed Layar on your iPhone you need to configure it to use BuildAR as a ‘layer’.

To do this just perform a search within Layar for ‘buildar’ then select it.

Search and add the BuildAR layer

You can see here that I’ve added it to my favourite layers for easy reference along with Wikipedia and Flickr layers!

[Screenshot: layar-faves]

Then head out onto the streets of Sydney and see what you can find.

You can view objects overlaid on ‘reality’ or get a map or list view. Clicking an object presents you with a number of options, including viewing the historic photograph on Flickr or on the Powerhouse site, or getting map directions to the point at which the photograph has been geocoded.

Layar in action

[Screenshot: layar-listview]

[Screenshot: layar-clickthru]

Categories
Geotagging & mapping Imaging Mobile

Augmented reality and the Powerhouse images in the Commons (or interesting things clever people do with your data #7215)

On Saturday night at our (very rainy) Common Ground meetup in Sydney, Rob Manson and Alex Young from BuildAR demonstrated the first version of their augmented reality mobile toolkit using images from the Powerhouse’s geocoded photographs in the Commons on Flickr.

This work riffs on the early mashup from Paul Hagon, where he combined the historic photos with Google’s Street View; and the ABC’s Sydney Sidetracks project.

But then makes it mobile – replacing the Street View with the actual view through the camera of your mobile phone.

I asked Rob a few questions –

F&N – What is this Augmented Reality thing you’ve built? What does it do?

The first service is BuildAR and it is a service built upon the Mobile Reality Browser called Layar.

Layar uses the GPS on your mobile to work out where in the world you are, then it uses the digital compass to work out which direction you’re facing (i.e. your orientation). From this it can build a model of the objects and places around you. Then as you hold up your mobile and pan around, it can overlay information on the live video from your camera to highlight where these objects and places are.

BuildAR lets you quickly and easily add, search and manage your own collection of Points of Interest (POIs) to create your own Augmented Reality layer. You can do this via a standard PC web browser, or you can do it via your mobile phone. You can create a free personal account and get started straight away creating your own private POIs, or you can make public POIs that other people can view too. All it takes is a few clicks and they are shared or published in real-time.

You can also use the service to create fully branded and customised layers.

We’re constantly releasing new features including groups so you can share private POIs with others, rich graphs so you can view when and how people are using your POIs and custom mobile websites that each of the POIs can link to. We can even customise layers to make them really interactive so the POIs you see are based on where you’ve been, other POIs you’ve interacted with, the time of day or any range of options. Treasure hunts are a great example of this.

How did you use the Powerhouse data?

We’re in the process of creating layers for a lot of people at the moment and another great example is with the Powerhouse images that were released into the Flickr Commons. We loaded over 400 of these images as public POIs so now you can wander around Sydney with your phone and see beautiful historic images of the local area around you. You can then just tap on the POI/photo and you get the option to go directly to the Flickr page for that image, or even better straight to the Powerhouse page with all the historic information and the original image.

I spent the afternoon with my son the other day wandering around looking at images of our local area. Neither of us knew that Bondi/Tamarama used to have an Aquarium and it has opened up a whole new world for us to explore.

How easy was it to use Layar? What are the benefits?

It was reasonably straightforward, but it was a very technical process.

That’s largely why we created BuildAR – so other people can create and manage their own POIs by just pointing and clicking, or wandering around and using their mobile.

The benefits are that it is a great system with quite an open API. They’re gaining a lot of traction and I think the “browser with layers” approach is much better than creating dedicated applications.

This is much more along the lines of how the web works.

If you want to create something then you just create a website that uses standards-based HTML/CSS. It just wouldn’t make sense for you to also have to create your own browser too. That’s the old model from before the ’90s and we’ve all learned a lot and come a long way since then.

Layar are releasing some great new features soon too, like supporting 3D models and animations and support for more mobile device types. They can focus on that and we can just focus on creating great layers and tools that make it easy to create and manage layers.

What data sets were you looking to use? How easy was it to use etc?

We’re looking for either content that’s compelling or data that’s useful. The Powerhouse images are a great example of compelling information and the team at the Powerhouse made it really easy to integrate into our application (thanks Luke and Paula!).

Very soon we’ll be releasing an option that lets you upload a batch file of POIs or just point it to a GeoRSS feed and you’ll be done. Couldn’t get much easier than that!
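
For reference, a GeoRSS feed of POIs is little more than an RSS feed with a georss:point (latitude then longitude) on each item. A sketch using only the standard library (the titles, links and coordinates here are placeholders):

```python
import xml.etree.ElementTree as ET

GEORSS_NS = "http://www.georss.org/georss"

def georss_feed(title, pois):
    """Serialise (name, link, lat, lon) tuples as a minimal
    GeoRSS-Simple feed."""
    ET.register_namespace("georss", GEORSS_NS)
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for name, link, lat, lon in pois:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = name
        ET.SubElement(item, "link").text = link
        # GeoRSS-Simple: latitude then longitude, space-separated
        point = ET.SubElement(item, "{%s}point" % GEORSS_NS)
        point.text = "%s %s" % (lat, lon)
    return ET.tostring(rss, encoding="unicode")

feed = georss_feed("Historic Sydney photos", [
    ("Haymarket building", "http://example.org/photo/1", -33.8797, 151.2045),
])
```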

Another great example of compelling content we’re currently working on is with Sculpture by the Sea. This is a beautiful outdoor experience and is a perfect fit for mobile Augmented Reality.

We’re also doing quite a bit of work in the Government 2.0 and Open Data movement and we’re currently working on a range of layers that utilise the really useful public data that’s being released. Our goal is to help this data become more “situated” and therefore hopefully more relevant . . . then on top of that we’re opening up layers of social interaction to add even more value.

This is a really interesting time with a lot of social change on the horizon. The combination of Augmented Reality and Open Data is something that is literally changing the way we see our world.

What platforms does it run on? Will it be easy to port to the iPhone?

At the backend BuildAR is simply a relatively open API and we implemented that all on our Linux based servers. On the Layar browser side it currently runs on Android based devices and will be released on the iPhone 3GS and some other platforms soon too. The Layar team are working hard to port and enhance this whole application and the goal is to support any phone that has GPS and a digital compass built-in.
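
At the time, each Layar layer was backed by a developer-supplied getPOIs web service returning JSON. The exact schema is Layar’s, so treat the field names below as an approximation from memory; one quirk worth noting is that coordinates were sent as integers in millionths of a degree:

```python
import json

def layar_response(layer_name, pois):
    """Shape a getPOIs-style JSON body from (id, title, lat, lon)
    tuples. Field names approximate the early Layar developer API -
    this is a sketch, not a reference implementation."""
    return json.dumps({
        "layer": layer_name,
        "errorCode": 0,
        "errorString": "ok",
        "hotspots": [
            {
                "id": poi_id,
                "title": title,
                # Layar expected lat/lon as integer millionths of a degree
                "lat": int(round(lat * 1e6)),
                "lon": int(round(lon * 1e6)),
            }
            for poi_id, title, lat, lon in pois
        ],
    })

body = layar_response("powerhousemuseum", [
    (1, "Haymarket, around 1910", -33.8797, 151.2045),
])
```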

I think in the near future you’ll see GPS and digital compasses start to spread back onto netbooks and laptops, and then onto the tablet computers that will be released soon.

You were demo-ing another AR application at the Web Week launch party. Tell me about it?

This was a “marker” based AR project, an ARt exhibition collaboration with Yiying Lu, who created the “Fail Whale” for Twitter. Basically you just hold up an illustration created by Yiying, on a postcard or a t-shirt, in front of a camera connected to an internet-connected computer. The application we created then recognises the image and projects a simple Fail Whale animation over the top of the marker.

This also loads the last 30 tweets with the #wds09 hashtag and randomly displays one of them every 45 seconds. It’s all kinda self-referential and tongue-in-cheek and is a great way to play and interact with Yiying’s beautiful illustrations.

You can try this on your own computer too. All you need is an internet-connected computer, Flash installed in your browser and a working webcam. Just visit the project website and have a play, or just watch the video to see how it works.

It is still quite early days with this technology and the light levels can really impact how well it works, but AR is definitely something that has an impact when you experience it.

“You are what you tweet” Augmented Reality exhibition from Rob Manson on Vimeo.

We are obviously in the early days of mobile phone AR. How do you see it developing?

Well, I’m working on a broader research project on Pervasive Computing and I think this is a core part of that evolution. The interfaces are still quite clunky and having to hold up and wave around your phone is still quite a clumsy experience.

I think quite soon we’ll see more immersive display devices start to spread. I’m running a session on this at Web Directions South and we use this underlying theory to inform most of our business/product strategy development.

Basically the distance between the network and the user is collapsing. The distance between the display and the user is collapsing. And the distance between the physical interface (e.g. think of gestures) and the user is also shrinking. This means our overall experience of space and even who we are is changing.

This all seems a bit futuristic, but glasses with displays built-into them should start to spread quite soon, all powered by mobile devices. And there’ll be even more interesting options too. Just think how quickly iPhones and Bluetooth headsets have become common everyday objects.

The opposite side of this is the spread of wireless digital cameras.

Combine the two and you open the door to rich and immersive Augmented Reality where you can shift your perspective constantly and freely.

I think this is the start of something really fascinating!

Categories
Geotagging & mapping Mobile

Maps are all around us

I was reading Michael Chabon’s piece on childhood last week and one section popped out of the screen –

It captured perfectly the mental maps of their worlds that children endlessly revise and refine. Childhood is a branch of cartography.

Walking my daughter to school we tiptoe “past the wizard’s house” at the top of my street – a rather rundown old building full of props and what, to small people, appears very much like magic equipment. A little further up the road is where the “scary man” sleeps rough. This got me thinking about the possibilities for children’s maps of their neighbourhoods overlaid on ‘official maps’.

So how might this work? Could this work as a game? Well, it also provides an excuse for a stream of consciousness post about a few of my favourite map-related projects.

Since I saw it at MW2009 I’ve been a huge fan of the New York Public Library’s Map Rectifier project. Here historical maps are being ‘rectified’ so that they can be searched and navigated using contemporary online mapping tools. (The current rectifier uses OpenStreetMap.) This is an incredibly thoughtful way of ‘digitising a collection’ – where the digital copy opens the object up to new uses. I’m looking forward to future projects that emerge from this work, and to peeling back the layers of historical sediment as maps are laid on top of each other by year.

New maps of the city are being created all the time, and here’s a new Nintendo DS game called Treasure World (article) that utilises the environment around the player – the multitude of WiFi points around a city to be precise – as a key element in the game. Players collect in-game content as they explore the WiFi points around them. This is almost invisible ‘augmented reality’ gaming – and I’d wager that many players won’t comprehend that the city around them is the game itself (indeed, the point is that they don’t need to).

Similarly revealing of the digital sediment around us, Flickr’s mobile ‘near me‘ (open on your iPhone) brings to the mass-market mobile web what the iPhone application Darkslide (formerly Exposure) had as an ‘extra feature’. With Near Me the mobile Flickr website can now make a call to your location and then return other people’s photos ‘near you’. This creates an uncanny experience of being able to – in place – view the world through the eyes of those who have been there previously. Or, ‘near my home’, it shows me the rather debauched parties that happen in some of my neighbours’ houses (perhaps that’s just my neighbourhood!).
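
Under the hood, a ‘near me’ lookup maps straight onto the Flickr API’s flickr.photos.search method, which accepts a latitude, longitude and radius. A sketch that just builds the request URL (the API key is a placeholder; you’d still need to fetch and parse the response):

```python
from urllib.parse import urlencode

FLICKR_REST = "https://api.flickr.com/services/rest/"

def near_me_url(api_key, lat, lon, radius_km=1):
    """Build a flickr.photos.search request for geotagged photos
    near a point. Radius is in kilometres."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,        # placeholder - use your own key
        "lat": lat,
        "lon": lon,
        "radius": radius_km,
        "has_geo": 1,
        "format": "json",
        "nojsoncallback": 1,
    }
    return FLICKR_REST + "?" + urlencode(params)

url = near_me_url("YOUR_API_KEY", -33.8785, 151.1996)
```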

Conversely I’ve been fascinated by a number of art projects that reveal the parts of the world still unmapped by photosharing websites – “the no-go zones of the technorati”.

There are sonic maps emerging too – the BBC’s Save Our Sounds and the University of Salford’s Sound Around You have both been in the news recently, and Audio Boo has been around since the beginning of the year too.

A friend of mine, sound artist Richard Fox, has just launched a new augmented reality game in Sydney based on the razor gangs of early 20th century Darlinghurst. Called Razorhurst, and adapted from a historical book, Razor, it uses GPS-enabled PDAs (running Windows Mobile and built with Mscape) to recreate the period. It runs to the end of July (sponsored by dLux Media Arts) and you can collect your PDA for the game from the East Village Hotel – the significance of which is crucial to the story.

Mscape has been around for a little while for authoring these sorts of games, and the historical assets are starting to become a little more widely available. It is a pretty easy authoring environment, even if it is the equivalent of the CD-ROM age – only playable on some devices, closed system, etc.

Someone – yes, you dear reader – should probably go and build a Mscape game out of the geotagged content in the Flickr Commons – I would if I had 100 hours.

Of course, there’s another reason why I’ve been really interested in maps but I’ll tell you about that in a week or two . . .

Categories
Mobile QR codes User experience

A quick QR code update

As regular readers know, we’ve been trialling QR codes, and a little while back rolled them out on a small selection of object labels in a Japanese fashion display.

I’ve been keeping an eye on their usage and some of the continuing problems around lighting, shadows, and low-resolution mobile phone cameras like the current iPhone 3G’s. So far usage has been, as expected, low. Firstly, the target audience for the exhibition content has, not surprisingly, not been very tech-savvy. Secondly, the ‘carrot’ isn’t clear enough to cause the audience to respond to the call to action.

More critically, one thing we still haven’t quite gotten right is the image size and error correction.

Shortly after the last post we upped the error correction in the codes to 30% (meaning that up to about 30% of the image can be obscured and it will still scan – although the tolerance isn’t evenly spread across the code). This alone wasn’t enough.

With the long URLs encoded in the codes plus the error correction, the resulting QR codes were even more ‘dense’ and hard to scan with 2 megapixel cameras. We’ve now done another set of codes with our own version of TinyURLs that we generate locally. This has reduced the encoded content from nearly 70 characters to around 25 – thus a far less dense code.
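
For the curious, a home-grown shortener of this kind usually boils down to encoding a database row id in base 62 and hanging it off a short domain. A sketch (the domain here is a stand-in, not our actual shortener):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n):
    """Encode a non-negative integer (e.g. a database row id) in base 62."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def short_url(object_id, domain="example.org"):
    """The domain is a placeholder for a real short local domain."""
    return "http://%s/%s" % (domain, to_base62(object_id))

# A six-digit object id collapses to a four-character token, keeping
# the whole URL near the 25-character mark.
url = short_url(386098)
```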

Even so, 2 megapixel cameras have patchy results when obscured by lens flare or shadow so our current thinking is that in the future the codes may need to be as much as 50% bigger.

Categories
Mobile QR codes

QR codes in the museum – problems and opportunities with extended object labels

I think QR codes have a lot of potential – potential that hitherto has not been realised. The underwhelming uptake of the codes outside of Japan has a lot to do with the poor-quality marketing campaigns run with them so far. If I am going to have to install – or, worse still, find on my phone – a QR code reading application, then the reason I am going to all this trouble has to be really, really worthwhile.

I am yet to see a commercial campaign that delivers that compelling reason to install the reader.

On the other hand, quite a few local artists are experimenting with them in interesting ways. If you are a Melbourne reader then maybe you spotted a guerilla art installation at Federation Square by Radical Cross Stitch!

Now QR codes are probably best seen just as mobile-readable URLs. If these URLs are just going to send me to a website that isn’t tailored for my context and device then they are going to be just a gimmick. But if, on the other hand, they can deliver timely, mobile-formatted content that addresses my specific ‘need’ at the time then they might just work. I know there’s no way I am going to bother typing a URL into my phone whilst I stand in front of an advertisement. Even on the iPhone, typing URLs is more painful than it should be (in fact I’d wager that most iPhone users follow links from other applications – Twitter, email etc – or use their bookmarks – anything to avoid typing URLs). On a standard numeric-keypad mobile, forget typing URLs.

Now regular readers will remember our experiment with QR codes in August last year. We learnt a lot from that and now we’ve rolled out an experiment in a new display on the floor of the Museum.

As part of the Gene Sherman Contemporary Japanese fashion display each object label is now augmented with both a QR code and a longform object URL (just in case you can’t use the QR code).

Here’s a quick breakdown of the process.

Generating the codes

Once again we did this in-house – the main reason being that every mistake made internally helps us learn and grow. Sure, we could outsource the mistakes but in so doing we outsource the learning. And that’s not a good long term idea.

Problem #1 – All QR codes are not the same

Perhaps you thought that there was just one standard type of QR code? Well, that’s not exactly true. QR codes can be generated at a number of ‘sizes’ (actually more like densities than fixed dimensions), with different percentages of error correction (in case a scan is blurred or partial), and the content can be stored in a number of ways. The first pass we made at generating codes for each object ended up working on most, but not all, of the QR code readers we tried. Finally we generated a series of codes that worked on all the readers we could find.

Problem #2 – Inconsistent size

One issue with QR codes is that they change size as the content they are encoding increases. This is irrespective of the density that you choose. A medium-density encoding of “The Powerhouse Museum” is going to be smaller in size than one that says “The Powerhouse Museum is making QR codes”. Add higher error correction (tolerance) and they get bigger still. Now this isn’t usually going to be a problem when a single code is printed, but when codes need to go on object labels then, rightly, the exhibition designers want a standard size for the whole exhibit. This meant finding the longest possible code and designing for it.
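
The size jumps follow directly from the QR specification: version v is a square of 17 + 4v modules, and the number of characters each version can hold at a given error-correction level is fixed. A sketch using an abridged capacity table (byte-mode values at 30% / level H for versions 1–4 only, taken from the spec):

```python
# Byte-mode character capacities at error-correction level H (~30%),
# versions 1-4 only - an abridged slice of the full QR spec table.
CAPACITY_H = {1: 7, 2: 14, 3: 24, 4: 34}

def modules_per_side(version):
    """Every QR version adds 4 modules per side."""
    return 17 + 4 * version

def min_version(payload, capacities=CAPACITY_H):
    """Smallest version whose capacity fits the payload, or None if
    it falls off the end of this abridged table."""
    for version in sorted(capacities):
        if len(payload) <= capacities[version]:
            return version
    return None

# A ~25-character URL at level H fits in version 4 (33x33 modules);
# a ~70-character URL needs a version well beyond this table - a
# visibly denser code at the same printed size.
v = min_version("http://example.org/x/1234")
```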

Getting the content ready

Problem #3 – Making the mobile site

As we found in our initial QR trial last year, one of the key failures was that the page at the encoded URL was never mobile-friendly. This time we’ve changed large parts of our website, and especially the collection database to which the QR codes point, to be mobile-ready.
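
‘Mobile-ready’ at this point mostly means sniffing the user agent and serving a lighter template. A crude sketch (the token list is illustrative, not an exhaustive handset list):

```python
# Illustrative handset tokens - a real deployment would use a
# maintained device-detection list.
MOBILE_TOKENS = ("iPhone", "iPod", "Android", "BlackBerry", "Symbian")

def wants_mobile(user_agent):
    """Serve the lightweight mobile template when a known handset
    token appears in the User-Agent header."""
    return any(token in user_agent for token in MOBILE_TOKENS)

ua = "Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0 like Mac OS X)"
template = "mobile.html" if wants_mobile(ua) else "desktop.html"
```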

Installing the codes

Problem #4 – Perspex

So we now have QR codes that can be read by a variety of readers on a variety of phones with 2 megapixel to 5 megapixel cameras, and we have a website that is going to work on a phone. The next hurdle to be crossed was physical. At the Powerhouse we put our object labels behind 5mm-thick perspex. This stops visitors from writing things on our labels (oh, the trust!) and means they last a lot longer in the galleries.

Another round of testing was required to work out the minimum size at which the QR codes could still be scanned with a 2 megapixel phone camera through the 5mm perspex.

Problem #5 – Shadows

And so off the labels went to be printed.

Installation day rolls around and I was in the gallery with my phone looking at the QR codes being installed below the written labels and thinking to myself “finally we have codes in the galleries!”.

Then I noticed the lights. Not just one light but multiple lights shining on the objects from behind where the visitor would stand. With this set-up, dark shadows were cast exactly where the QR codes were being placed, meaning that although the codes could be photographed, the shadows interfered with the ability to decipher the data in them.

Lights have been moved around a bit and now we have a better situation.

We are keeping an eye on usage and will report back once the display ends.

If you are in Sydney, come in and give it a go.

I am recommending a free QR code reader application called BeeTagg, mainly because it has versions available for a range of phones – Symbian, Palm, Blackberry and iPhone.