dConstruct ‘Designing the social web’ 5 Sept 2008, Brighton UK

dConstruct in Brighton this year was held in the full glory of an English beachside summer – sleeting rain and gusty winds. Inside the Brighton Dome, 700 web geeks gathered to hear what ended up being quite a mixed bag of presentations on various aspects of ‘designing the social web’. Light on technical detail, all of the speakers but Tantek Çelik focussed predominantly on the psycho-social aspects of the social web.

The conference opened with Steven Johnson (Everything Bad Is Good For You) comparing the way in which data visualisation, combined with hyperlocal amateur knowledge, helped prove the source of the 1854 cholera outbreak in Soho, London, with the new opportunities now emerging around social mapping and visualisation technologies. Johnson’s 2006 book The Ghost Map is a detailed look at the cholera outbreak, and the second half of his talk focussed on his own social mapping project Outside.In. Currently US-only, it is very similar in style to Everyblock, bringing a personal hyperlocal focus to news, people and events in local communities. As Johnson says, you usually care most about things that happen within a small radius of where you are right now – 1,000 feet, or roughly 300 metres – and so Outside.In and its services like Radar use geolocation to deliver this information to you as it happens. He contrasted his service, which parses social media like Twitter and place-centric blogs, with the more obvious (and, I would add, more easily ‘monetised’) geolocation services which already exist around restaurant reviews and local businesses. Johnson was an engaging speaker, but his US-centrism/universalism did emerge at times – and his postulating that Brooklyn contains four of the top ten US local blogging neighbourhoods also seemed to colour his perspective (Johnson lives in Brooklyn).

Next up was Guardian columnist and gaming academic Aleks Krotoski. Aleks was very engaging and energetic, pacing up and down the stage and waving her arms animatedly as she explored some of the things that web designers could learn from game designers and vice versa. Her social psychology research around gaming has looked at the different models that game designers use to keep players engaged over long periods of time – the ‘stickiness’ that web designers long for. Of course, as she pointed out, game designers can play to the niches, and their proven publishing-style business model gives them a distinct advantage over the web, where most web applications need to appeal to a far broader audience and have very few proven sustainable business models (advertising and . . . ). In the controlled systems of games, players can be given ongoing ‘carrots’ to keep them engaged and willing to move on to the next level/challenge, and even in immersive sandbox environments like Grand Theft Auto the player is made to ‘invest’ significantly in their game experience – enough to keep them playing for long and repeated periods. Games can also operate as ‘enabling systems’ whereby social value emerges through community building and storybuilding – from the obvious player communities around World of Warcraft to, more simply, the creation of game FAQs, walkthroughs and the like. Closely related are what Krotoski termed ‘psychological systems’. These operate around relationship building in-game, frequently leverage the ‘collecting urge’, and increasingly involve the generation of in-game assets (see Second Life). Krotoski closed by postulating that a lot of what we see in the current generation of games – the social gaming of the Wii and NDS, the online/offline mix that Little Big Planet and Spore are going to offer – might have had its genesis in the long-lost Sega Dreamcast, and that for reasons of platform competitiveness and industry secrecy, many of the opportunities the web has explored have been slow to emerge in the gaming world. On the flipside, the web could learn a lot from the engagement models of the gaming world.

Aleks was followed by Joshua Porter. Josh’s recent book Designing for the Social Web is a quick-reference set of design patterns that explain some proven methods for building engagement and community when designing social sites and applications. Josh’s presentation was a little disappointing in that although he presented a series of design patterns and techniques to exploit users’ cognitive biases, he was light on evidence, and like Steven Johnson he made some terrible US-centric/universalist statements about behaviour. The biases he focussed on were:

– representation bias – making highly visible the behaviour you wish other users to adopt on your site, even if it is not typical of most users. e.g. Yelp’s ‘featured reviews’ and Freshbooks’ ‘what our users are saying’.
– loss aversion – couching desired behaviour in terms that avert loss or risk.
– ownership bias – reminding users that they should care because it is ‘their’ stuff. e.g. Flickr’s use of ‘your’ throughout its UI.

His presentation drew extensively on the June 2006 article Eager Sellers, Stony Buyers by John Gourville, which explores techniques for convincing customers to change their behaviours.

As the first question from the audience put it, only half jokingly: “Isn’t this evil?”

In a similar vein, Daniel Burka from Digg and Pownce presented a series of slides exploring the methods that Digg and Pownce use to encourage users firstly to sign up to their services and secondly to participate in positive ways. Whilst visitors can always use Digg without creating an account (much like the 95%+ of visitors to Amazon who use the site for product research and as an image library for their iPod), Digg’s aim is to sign up as many people as possible. In order to do this it needs to ‘go beyond altruism’ and offer real benefits to those who do sign up, as well as significantly reduce barriers to entry – and in the case of Pownce, allow logins with accounts from other services (cf. OpenSocial). Burka cited Geni.com as a best-practice example of encouraging sign-ups – it not only shows users what they can get from the site, it also starts them off in the process of creating their family tree, and makes it very easy for them to complete their signup with a minimum of information.

Encouraging positive behaviour and deterring trolling and gaming of the system is the next challenge. Burka outlined the benefits of using personal profiles with photos to build trust amongst users, as well as tweaking text copy to break through ‘tension points’. He pointed to Get Satisfaction’s use of emoticons as a good example of conveying the mood accompanying a message, reducing the chance of user comments being taken in the ‘wrong spirit’.

Tantek Çelik followed with a detailed presentation on using microformats, specifically hCard, to explore social network portability. The presentation, along with information about implementing social network portability with hCard, is available through the microformats wiki.
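To give a flavour of what this looks like in practice, here is a minimal hand-rolled sketch (my own hypothetical markup, not an example from Tantek’s talk – the names and URLs are invented): an hCard is just ordinary HTML with agreed class names, and XFN rel values on links are what let crawlers stitch the same person’s profiles together across sites.

<!-- Hypothetical profile snippet: names and URLs are invented -->
<div class="vcard">
  <!-- fn = formatted name; url = this person's canonical page -->
  <a class="fn url" href="http://example.com/jane">Jane Citizen</a>
  <span class="org">Example Museum</span>

  <!-- XFN: rel="me" asserts that another URL belongs to the same person,
       which is the basis of social network portability -->
  <a href="http://twitter.com/janecitizen" rel="me">Jane on Twitter</a>

  <!-- rel="contact" (or "friend") marks links to people Jane knows -->
  <a href="http://example.org/bob" rel="contact">Bob</a>
</div>

A parser that understands hCard and XFN can crawl outwards from a single profile page like this and reassemble a contact list without any site-specific API.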

The final two sessions were more conceptual and more fun. Matt Biddulph and Matt Jones of Dopplr gave an initially grating but ultimately witty and funny presentation on, well, Dopplr. It was about much more than Dopplr, but they used it as a case study and set of examples of how it is not only possible but highly desirable to build web applications that slot into and contribute to the ‘coral reef’ of the web, rather than trying to operate as a walled garden or honeypot. They paid special attention to the notion of ‘delighters’ or, in their world, ‘data toys’ – surprises that make their service pleasurable and fun to use. The last session was from Jeremy Keith. In a lovely and somewhat laconic presentation, Keith exploded the notion of predictability in scale-free networks, drawing on sci-fi and pop-sci. It was a fitting way to end the day.

Then it was back out into the rain to the afterparty – which our party decided was a veritable ‘bbq’.


More powerful browsers – Mozilla Labs Ubiquity

Mozilla Labs has released Aza Raskin’s Ubiquity in an early alpha form. It is a glimpse of a future in which notions of the semantic web are built directly into the browser, connecting the dots between websites – not from a provider’s perspective, but from the user’s.


[Video: Ubiquity for Firefox, from Aza Raskin on Vimeo]


Next generation of Photosynth-style image interaction – Bundler

Last year there was a lot of buzz around the first demos of Microsoft’s Seadragon and Photosynth; now, out of SIGGRAPH 2008, comes this rather splendid update to the underlying technologies and concepts.

Users now have much more ability to navigate and tweak their experience of interacting with and browsing a 3D scene built from miscellaneous 2D images. I was particularly impressed by the notion of using other people’s photos (from Flickr) as intermediaries in scene reconstructions from your own photos, and by the final, simple demo of creating a 3D model of an object by processing a series of handheld 2D images – something that could greatly reduce the costs of 3D digitisation for museums.

The toolset used in this new version of the underlying technologies, Bundler, has also been released, so if you have some computer science graduates working in your team you could feasibly give it a burl.

Bundler takes a set of images, image features, and image matches as input, and produces a 3D reconstruction of camera and (sparse) scene geometry as output.


Usability and IA testing tools – OptimalSort, ClickDensity, Silverback

As the team has been working on a large array of new projects and sites of late, we’ve been exploring some of the newer tools that have emerged for usability testing and for ensuring good information architectures. Here’s some of what we’ve been exploring and using –

We’ve started using OptimalSort for site architecture – especially the naming and content of menus. OptimalSort is a lovely Australian-made web product that offers an online ‘card sorting’ exercise. In our case we’ve been using it as a way of ensuring we get a good diversity of opinions on how different types of content (‘cards’) should be stacked together (in groups) under titles (menus). OptimalSort lets you invite people to come and order your content in ways that make sense to them, and then presents you with an overall table of results, from which you can deduce the best possible solution.

We’re also back using ClickDensity, which is great for tracking down user interface problems on live sites. We used this when it was first released by Box UK, and it revealed some holes that we quickly fixed on a number of our sites. Whilst it still has issues working properly in Safari and, surprisingly, sometimes in Firefox, ClickDensity lets you generate heatmaps of your visitors’ clicks and mouse hovers. Armed with these you can quickly discover whether your site visitors are trying to click on images thinking that they are buttons or links, or choosing certain navigation items over others.

Silverback is another UK product, this time from Clearleft. We’re gearing up to use it with some focus groups to record their interactions (and facial expressions!) as they use some of our new projects and products. Silverback is Mac-only (which suits us fine) and records a user’s interactions with your application whilst using the Mac’s built-in camera and microphone to record the participant (hopefully not swearing, cursing and looking frustrated). It should be perfectly geared to small focus groups with targeted testing.


SEO (search engine optimisation) basics and museums

One of the most common questions asked over the past few years has been “how do I get the best out of SEO for my museum?”. It comes up in casual conversations and, without fail, at conferences. We are all becoming increasingly aware that a higher and higher proportion of our traffic comes via search, and that as content on the web grows exponentially, the chance of our content lying buried deep in search engine results increases.

Often the problem museums have with search relates to the diversity of their web presence. Beyond our brand names, our content – especially that held in collections – is often very diverse, and our exhibitions equally so. I’ve previously written about the need to tackle exhibition naming so that, at least on the web, exhibition titles are more ‘search-friendly’, but this is very tricky to apply to collection and education content.

The news media have taken to rewriting headlines for search – knowing that timeliness and findability are crucial to the success of their content. Scott Gledhill’s fantastic SEO presentation from Web Directions South 2007 is an eye-opening look at how News Limited journalists in Australia are maximising the reach of their articles (link is to a full Slidecast).

Is this possible with museum content?

Should (and can) curators, education staff and marketing staff get a quick dashboard that reports the web performance of the content they are creating? Should (and can) they iterate their content, improving it, guided by real-world performance? If museums are ‘slow media’, then is performance-guided content creation even a desirable outcome? (Update: do we really want to end up in a situation like the one parodied in Slate?)

Maybe you need to tackle the basics first – getting your key content more visible. So where do you start?

Fortunately there are plenty of great SEO resources on the web, and plenty of ways of testing SEO performance for free or at very low cost. Last month Web Designers Wall posted a simple introduction to SEO which is worthwhile reading for the very basics. That, along with Scott’s presentation, should provide a good starting point.
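To make those ‘basics’ concrete, here is a minimal sketch of the kind of on-page fundamentals such guides cover, applied to an imaginary exhibition page – the exhibition name, museum and dates below are all invented for illustration:

<!-- Hypothetical exhibition page: titles, names and dates are invented -->
<html lang="en">
<head>
  <!-- Lead the title with the words people actually search for,
       not just an opaque exhibition brand name -->
  <title>Steam Engines of the Industrial Revolution | Example Museum</title>

  <!-- Search engines often use the meta description as the results snippet -->
  <meta name="description"
        content="Working steam engines from the Industrial Revolution,
                 on show at Example Museum until March 2009.">
</head>
<body>
  <!-- One meaningful, keyword-bearing h1 that matches searchers' vocabulary -->
  <h1>Steam Engines of the Industrial Revolution</h1>
  <p>...</p>
</body>
</html>

None of this is exotic – descriptive titles, meta descriptions and meaningful headings are exactly the sort of low-cost changes that collection and exhibition templates can apply site-wide.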


User experience is all that matters – a reminder about content, search and users

Scott Karp over at Publishing 2.0 has been griping about his experience of using his local newspaper website – which just so happens to be the Washington Post. Driven by a desire to find out about power cuts caused by a storm, Karp was unable to quickly find what he wanted, and so turned to other websites, finding them through Google.