Categories
Tools, User experience

That popup survey tool for Fresh & New feedback

One of the nice finds of the past few months has been Kiss Insights. You’ve probably noticed a little pop-up survey on this blog, and maybe you’ve even answered it. Well, that’s Kiss Insights doing its magic.

Easily deployed to a website, Kiss Insights is a bit of JavaScript code that calls a remote survey form which has a maximum of two questions.

There are quite a few of these mini-survey tools around at the moment – all based on the solid user experience notion that surveys work best on the web when they are very, very short and minimally intrusive. Deploying multiple, regular short surveys, the logic goes, will always give you better data and a higher number of respondents than single, lengthy ones – the sort that are traditionally popular in museums and ported from the paper-based world.

There are variables to make the short survey form pop up for only new visitors, or those who spend a certain time on site or look at more than a certain number of pages. You can even tailor it to pop up only when visitors come in from a particular keyword search, and of course you can tailor the parts of your site on which it appears.

Results can be exported as CSV or browsed through online in various ways.

The only missing feature that I’d find really valuable would be the ability to display the survey only to visitors from a particular geographic location. (As it is, you need to do a reverse geo-IP lookup on the results to gather city/country data).
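
For what it’s worth, that lookup is easy enough to script. Here’s a minimal Python sketch that annotates an exported CSV of responses with country and city using MaxMind’s GeoIP2 database; the ip_address column name and the filenames are assumptions on my part, not necessarily what the export contains.

```python
# Minimal sketch: annotate exported survey responses with country/city.
# Assumes a CSV with an "ip_address" column and a local MaxMind GeoLite2
# City database -- neither is guaranteed by the survey tool's export.
import csv

import geoip2.database  # pip install geoip2
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-City.mmdb")

with open("responses.csv", newline="") as src, \
     open("responses_geo.csv", "w", newline="") as dst:
    rows = csv.DictReader(src)
    out = csv.DictWriter(dst, fieldnames=rows.fieldnames + ["country", "city"])
    out.writeheader()
    for row in rows:
        try:
            geo = reader.city(row["ip_address"])
            row["country"] = geo.country.name or ""
            row["city"] = geo.city.name or ""
        except geoip2.errors.AddressNotFoundError:
            row["country"] = row["city"] = ""
        out.writerow(row)

reader.close()
```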

We managed to quickly gather nearly 1000 responses from visitors to our children’s website, and on this blog repeat visitors have been answering at a pretty good 10% response rate.

In case you were curious, I’ve discovered that of my repeat visitors –

22.1% have been reading the blog for over 3 years,
27.9% between 1 and 3 years,
and the rest (around 50%) under 1 year.

I’ve also got some lovely feedback and good suggestions for future posts – including one that asked about the tool I was using that prompted this very post!

Categories
API, Collection databases, Developer tools, Museum blogging, Tools

Powerhouse Museum collection WordPress plugin goes live!

Today the first public beta of our WordPress collection plugin was released into the wild.

With it and a free API key anyone can now embed customised collection objects in grids in their WordPress blog. Object grids can be placed in posts and pages, or even as a sidebar widget – and each grid can have different display parameters and contents. It even has a nice friendly backend for customising, and because we’re hosting it through WordPress, it can be auto-upgraded through your blog’s control panel when new features are added!

Here it is in action.

So, if you have a WordPress blog and feel like embedding some objects, download it, read the online documentation, and go for it.

(Update 22/1/11: I’ve added a new post explaining the backstory and rationale for those who are interested)

Categories
Tools

Moving out in to the cloud – reverse proxying our website

For a fair while we’ve been thinking about how we can improve our web hosting. At the Powerhouse we host everything in-house and our IT team does a great job of keeping things up and running. However, as traffic to our websites has grown exponentially, along with an explosion in the volume of data we make available, scalability has become a huge issue.

So when I came back from Museums and the Web in April I dropped Rob Stein, Charles Moad and Edward Bachta’s paper on how the Indianapolis Museum of Art was using Amazon Web Services (AWS) to run Art Babble onto the desk of Dan, our IT manager.

A few months ago a new staff member started in IT – Chris Bell. Chris had a background in running commercial web hosting services and his knowledge and skills in the area have been invaluable. In a few short months our hosting set up has been overhauled. With a move to virtualisation inside the Museum as a whole, Chris started working with one of our developers, Luke, thinking about how we might try AWS ourselves.

Today we started our trial of AWS, beginning with the content in the Hedda Morrison microsite. Now when you visit that site all the image content, including the zoomable images, is served from AWS.

We’re keeping an eye on how that goes and then will switch over the entirety of our OPAC.

I asked Chris to explain how it works and what is going on – the solution he has implemented is elegantly simple.

Q: How have you changed our web-hosting arrangements so that we make use of Amazon Web Services?

We haven’t changed anything actually. The priorities in this case were to reduce load on our existing infrastructure and improve performance without re-inventing our current model. That’s why we decided on a system that would achieve our goals of outsourcing the hosting of a massive number of files (several million) without ever actually having to upload them to a third-party service. We went with Amazon Web Services (AWS) because it offers an exciting opportunity to deliver content from a growing number of geographical points that will suit our users. [Our web traffic over the last three months has been split 47% Oceania, 24% North America, 21% Europe]

Our current web servers deliver a massive volume and diversity of content. By identifying areas where we could out-source this content delivery to external servers we both reduce demand on our equipment – increasing performance – and reduce load on our connection.

The Museum does not currently have a connection intended for high-end hosting applications (despite the demand we receive), so moving content out of the network promises to deliver better performance not only for our website users but also for other applications within our corporate network.

Q: Reverse-proxy? Can you explain that for the non-technical? What problem does it solve?

We went with Squid, which is a cache server. Squid is basically a proxy server, usually used to cache inbound Internet traffic and spy on your employees or customers – but also optimise traffic-flow. For instance, if one user within your network accesses a web page from a popular web site, it’s retained for the next user so that it needn’t be downloaded again. That’s called caching – it saves traffic and improves performance.

Squid is a proven, open-source and robust platform, which in this case allows us to do this in reverse – a reverse-proxy. When users access specified content on our web site and a copy already exists in the cache, it is served from Amazon Web Services instead of from our own network, which has limited bandwidth that is more appropriately allocated to internal applications such as security monitoring, WAN applications and – naturally – in-house YouTube users (you know who you are!).
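
For the curious, here’s a deliberately simplified Python sketch of the pull-through caching idea Chris describes above. It isn’t our Squid configuration (Squid does far more, and does it properly); the origin URL and port are placeholders.

```python
# Toy pull-through (reverse-proxy) cache: the first request for a path is
# fetched from the origin server and stored; later requests are served
# straight from the cache. Illustrative only -- not our Squid setup.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ORIGIN = "https://www.example.org"   # placeholder for the origin web server
CACHE = {}                           # path -> (content type, body)

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in CACHE:
            # Cache miss: fetch once from the origin and keep a copy.
            with urlopen(ORIGIN + self.path) as upstream:
                CACHE[self.path] = (
                    upstream.headers.get("Content-Type", "application/octet-stream"),
                    upstream.read(),
                )
        # Cache hit (or freshly filled): serve without touching the origin.
        content_type, body = CACHE[self.path]
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CachingProxy).serve_forever()
```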

Q: What parts of AWS are you using?

At this stage we’re using a combination. S3 (Simple Storage Service) is where we store our virtual machine images – that’s the back-end stuff, where we build virtual machines and create AMIs (Amazon Machine Images) to fire up the virtual server that does the hard work. We’re using EC2 (Elastic Compute Cloud) to load these virtual machines into running processes that implement the solution.

Within EC2 we also use Elastic IPs to forward services to our virtual machines (in the first instance, web servers and our proxy server), which also allows us to enforce security protocols and implement management tools for assessing the performance of our cache server, such as SNMP monitoring. We also use EBS (Elastic Block Store) to create virtual hard drives which maintain the cache, can be backed up to S3, and can be re-attached to a running instance should we ever need to re-configure the virtual machine. All critical data, including logs, are maintained on EBS.
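
If you’d like to experiment with the same building blocks yourself, here’s a rough Python sketch using Amazon’s boto3 SDK: launch an instance from an AMI, give it an Elastic IP, and attach an EBS volume. The AMI ID, region, instance type and volume size are placeholders, and this is not the setup Chris scripted for us.

```python
# Rough sketch of the EC2 / Elastic IP / EBS building blocks using boto3.
# All IDs, sizes and the region are placeholders -- adjust before running.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Launch a virtual machine from a machine image (AMI).
instance = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)["Instances"][0]
instance_id = instance["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# Reserve a static public address (Elastic IP) and point it at the instance.
address = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId=instance_id, AllocationId=address["AllocationId"])

# Create a persistent EBS volume (e.g. for a cache) and attach it.
volume = ec2.create_volume(
    AvailabilityZone=instance["Placement"]["AvailabilityZone"],
    Size=100,                    # GiB, placeholder
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=instance_id,
    Device="/dev/sdf",
)
```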

We’re also about to implement a solution for another project called About NSW where we will be outsourcing high bandwidth static content (roughly 17GB of digitised archives in PDFs) to Amazon CloudFront.

Q: If an image is updated on the Powerhouse site how does AWS know to also update?

It happens transparently, and that’s the beauty of the design of the solution.

We have several million files that we’re trying to distribute, and they are virtually unmanageable in a normal Windows environment; trying to push all of this content to the cloud would be a nightmare. By using the reverse-proxy method we effectively pick and choose: the most popular content is pulled on demand and automatically copied to the cloud for re-use.

Amazon have recently announced an import/export service, which would effectively allow us to send them a physical hard-drive of content to upload to a storage unit that they call a “bucket”. However, this is still not a viable solution for us because it’s not available in Australia and our content keeps getting added to – every day. By using a reverse proxy we effectively ensure that the first time that content is accessed it becomes rapidly available to any future users. And we can still do our work locally.

Q: How scalable is this solution? Can we apply it to the whole site?

I think it would be undesirable to apply it to dynamic content in particular, so no: things such as blogs, which change frequently, or search results, which will always be slightly different depending on the changes made to the underlying databases at the back end. In any case, once the entire site is fed via a virtual machine in another country you’ll actually experience a reduction in performance.

The solution we’ve implemented is aimed at re-distributing traffic in order to improve performance. It is an experiment, and the measurement techniques that we’ve implemented will gauge its effectiveness over the next few months. We’re trying to improve performance and save money, and we can only measure that through statistics, lies and invoices.

We’ll report back shortly once we know how it goes, but go on – take a look at the site we’ve got running in the cloud. Can you notice the difference?

Categories
Tools, Web metrics

ROI Revolution’s Google Analytics Report Enhancer

Anyone who attended my double web analytics workshops at the Transforming Cultural and Scientific Communication conference in Melbourne today saw this lovely little Greasemonkey script in action.

And I thought I’d better link it for everyone who isn’t already using it.

What GARE does, amongst other things, is go some way towards addressing the ‘time on site’ problem that is inherent in most, if not all, web analytics packages. In short, the problem is that single-page visits to a website are counted as having zero time spent on them, and this zero figure is included when calculating the ‘average time on site’ figure. Similarly, the time spent on the final page of a visit is left at zero. Blogs are especially susceptible to low time-on-site figures as most readers visit only one, albeit long, page before leaving.

With GARE installed you are presented with the standard ‘average time on site’ as well as a ‘true time on site’ which removes these single-page visits from the average calculation. GARE also adds a number of other nifty user interface fixes to make your use of Google Analytics even better.
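
To make the adjustment concrete, here’s a little Python sketch of the difference between the two averages. The visit data is invented and this isn’t GARE’s actual code (GARE works inside the Google Analytics interface); it just shows why excluding single-page visits pushes the average up.

```python
# Toy illustration of the 'time on site' adjustment described above.
# Each visit is (pages_viewed, seconds_recorded). Single-page visits are
# recorded as zero seconds by the analytics package. Data is invented.
visits = [
    (1, 0),     # single-page visit: time unknown, logged as zero
    (4, 230),
    (1, 0),
    (3, 145),
    (6, 480),
]

total_seconds = sum(seconds for _, seconds in visits)

# Standard 'average time on site': zeros from single-page visits drag it down.
standard_avg = total_seconds / len(visits)

# 'True time on site': exclude the single-page (bounce) visits from the average.
multi_page = [seconds for pages, seconds in visits if pages > 1]
true_avg = sum(multi_page) / len(multi_page)

print(f"standard average: {standard_avg:.0f}s, adjusted average: {true_avg:.0f}s")
# -> standard average: 171s, adjusted average: 285s
```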

(My longer paper on web metrics from last year is available at Archimuse and the next web metrics for cultural institutions workshop happens in Indianapolis at MW09 – or on request of course!)

Categories
Tools, User experience

Readability – reducing clutter with a bookmarklet

I’ve become a fan of a bookmarklet tool called Readability.

What it does is remove the clutter from a content-rich webpage and optimise it for ‘readability’ (which, of course, can itself be customised). Now museums tend to be serial offenders on text-heaviness – we love long text and I’m not one to argue that we should shorten it.

So whilst everyone emulates the ‘Print Version’ stylesheets that newspaper websites have, these rarely make content more readable on-screen – that’s not their point. What Readability does is leave the ‘Print Version’ to the end-user’s discretion and re-render the content in a form that is immediately more readable on-screen.

To check it out, install the bookmarklet in your browser bar, then visit a content-rich page, click the bookmarklet and voila, a more readable version!

It works on most browsers and seems to do a good job on most websites.

Here’s what happens to our very own collection records.

Before

After

Categories
Developer tools, Tools, User experience

Usability and IA testing tools – OptimalSort, ClickDensity, Silverback

As the team has been working on a large array of new projects and sites of late we’ve been exploring some of the newer tools that have emerged for usability testing and ensuring good information architectures. Here’s some of what we’ve been exploring and using –

We’ve started using Optimalsort for site architecture – especially the naming and content of menus. Optimalsort is a lovely Australian-made web product that offers an online ‘card sorting’ exercise. In our case we’ve been using it as a way of ensuring we get a good diversity of opinions on how different types of content (‘cards’) should be stacked together (in groups) under titles (menus). Optimalsort lets you invite people to come and order your content in ways that make sense to them and then presents you with an overall table of results, from which you can deduce the best possible solution.
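
If you’re wondering what deducing a solution from a table of card sorts can look like, here’s a tiny Python sketch of one common approach: counting how often participants put the same pair of cards in the same group. The cards and groupings are invented, and this isn’t how Optimalsort does its analysis.

```python
# Sketch of how card-sort results can be aggregated: count how often each
# pair of 'cards' (content items) ends up in the same group. Data invented.
from collections import Counter
from itertools import combinations

# Each participant's sort: group name -> cards placed in that group.
sorts = [
    {"Visit": ["Opening hours", "Tickets"], "Collection": ["Search", "Highlights"]},
    {"Plan": ["Opening hours", "Tickets", "Search"], "Objects": ["Highlights"]},
    {"Visit us": ["Opening hours", "Tickets"], "Explore": ["Search", "Highlights"]},
]

together = Counter()
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            together[(a, b)] += 1

# Pairs most often grouped together are candidates for the same menu.
for (a, b), count in together.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```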

We’re also back using Clickdensity which is great for tracking down user interface problems on live sites. We used this when it first was released by Box UK and it revealed some holes we quickly fixed on a number of our sites. Whilst it still has issues working properly in Safari and, surprisingly, sometimes on Firefox, Clickdensity lets you generate heatmaps of your visitors’ clicks and mouse hovers. Armed with this you can quickly discover whether your site visitors are trying to click on images thinking that they are buttons or links; or choosing certain navigation items over others.

Silverback is another UK product, this time from Clearleft. We’re gearing up to use this with some focus groups to record their interactions (and facial expressions!) as they use some of our new projects and products. Silverback is Mac only (which suits us fine) and records a user’s interactions with your application whilst using the Mac’s built-in camera and microphone to record the participant (hopefully not swearing, cursing and looking frustrated). This should be perfectly geared for small focus groups with targeted testing.

Categories
Metadata, Tools

A web citation tool – dealing with impermanent references

We’re all working hard to ensure that our own content is identified with persistent URLs – references that will stand the test of time – but often when we are writing a paper we need to refer to someone else’s URLs, most of which are not designed to be permanent.

Traditionally when we reference something on a website we put ‘accessed on X date’ but that is of little use to a reader who follows up a reference only to find the original has moved or gone.

That’s where WebCite comes in. WebCite is a bit like TinyUrl or any number of URL-shortening services, crossed with a social bookmarking tool like Del.icio.us and a snapshotting tool. It provides a ‘shorter’ URL and it also keeps a copy of the entire page you have cited in its archive. This means that readers can read the exact same page, as it was when you were referencing it, at any time into the future – even if that page changes regularly (like the front page of a newspaper website).

You can also add custom Dublin Core (DC) metadata.

Here’s a WebCite capture of the Sydney Morning Herald’s front page as it was at the time of this post. http://www.webcitation.org/5ZAbxFdgI

As you can see there are some problems in that it has been unable to capture the CSS to lay out the page properly, but for references to the text contained in a page it does a pretty good job.

Here’s a capture of an article from an online journal, D-Lib, which being predominantly text, works better. http://www.webcitation.org/5ZAcGpnPz

There’s even a bookmarklet to add to your browser toolbar to make capturing even easier. Otherwise use the service manually via their archiving submission page. A submission takes about 20 seconds to capture.