A few minor additions to our collection database went live today. These have been on the ‘to-do’ list for a long time!
Ever since OPAC2.0 launched we have been hiding multiple images of objects. Now they are all publicly accessible by clicking the numbers on the bottom right below the zoomable image. If no numbers appear then there is only the main image available.
Here are a few examples where you can now get different views of the same object record.
There are plenty more.
We have also implemented captions for these images where they exist.
The impetus, other than the availability of some spare time in which to do it, was a new internal kiosk for the Transport Gallery that uses the same backend database as the OPAC and required multiple images. The OPAC kiosk launched at the Museum on December 20 as part of a sound and light show called Further, Faster, Higher.
Also, as part of creating a simple image grid layout for the kiosk, we were able to quickly implement a visual object browser based on date of acquisition.
Users can now view our latest acquisitions as they are catalogued by year. This gives a quick entry point into the collection.
By the end of the month we will have served up 6 million object records since launch in June.
Of these –
~915,000 have been discovered via text searches (23,000 unique search terms),
~947,000 via tag cloud/user keywords (3,500 keywords added),
~330,000 via subject keywords,
~200 via OpenSearch.
This leaves 3.8 million records (63%) found by direct discovery – either via hyperlinking from other parts of our website (or other websites), or (probably primarily) via Google and other search engines.
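The arithmetic behind that 63% figure can be reproduced in a quick sketch, using the approximate counts quoted above:

```python
# Approximate discovery figures quoted above (~6 million records served).
total_served = 6_000_000
discovered = {
    "text search": 915_000,
    "tag cloud / user keywords": 947_000,
    "subject keywords": 330_000,
    "OpenSearch": 200,
}

attributed = sum(discovered.values())
direct = total_served - attributed  # hyperlinks, Google and other search engines
direct_pct = round(100 * direct / total_served)

print(f"Direct discovery: {direct:,} records ({direct_pct}%)")
```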
Our specialist design portal, Design Hub, which uses the same backend object database, has also served up 185,000 design-related objects via searches since its launch in August.
The Tagometer is a simple widget that enables visitors to your site to ‘del.icio.us it’, as well as showing how many people have tagged your site and its most popular tags. As many commentators have said, this is long overdue and perhaps shows a renewed effort on Yahoo’s part to put some more work into del.icio.us.
Typical – the day I go on internet-free holidays is the day Clay Shirky posts on Second Life.
Shirky’s examination of Second Life bores through the hype generated by its ever-increasing media coverage (yes, even in Australia). He asks, pertinently, what is the churn rate of users – that is, how many people try it and then never log back on? Churn rate is the secret metric that is never discussed enough by those on the outside of social sites like Second Life (or MySpace or Last.fm or whatever). Those on the inside – that is, the investors and business owners – work hard to talk about users, sign-ups and those sorts of ever-increasing figures, whilst churn lies buried and undiscussed.
Someone who tries a social service once and bails isn’t really a user any more than someone who gets a sample spoon of ice cream and walks out is a customer.
So here’s my question — how many return users are there? We know from the startup screen that the advertised churn of Second Life is over 60% (as I write this, it’s 690,800 recent users to 1,901,173 signups, or 63%.) That’s not stellar but it’s not terrible either. However, their definition of “recently logged in” includes everyone in the last 60 days, even though the industry standard for reporting unique users is 30 days, so we don’t actually know what the apples to apples churn rate is.
At a guess, Second Life churn measured in the ordinary way is in excess of 85%, with a surge of new users being driven in by the amount of press the service is getting. The wider the Recently Logged In reporting window is, the bigger the bulge of recently-arrived-but-never-to-return users that gets counted in the overall numbers.
I suspect Second Life is largely a “Try Me” virus, where reports of a strange and wonderful new thing draw the masses to log in and try it, but whose ability to retain anything but a fraction of those users is limited. The pattern of a Try Me virus is a rapid spread of first time users, most of whom drop out quickly, with most of the dropouts becoming immune to later use. Pointcast was a Try Me virus, as was LambdaMOO, the experiment that Second Life most closely resembles.
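Shirky’s back-of-the-envelope figure is easy to reproduce. A minimal sketch using the numbers he quotes (his 63% appears to be rounded down):

```python
# Numbers quoted by Shirky at the time of writing.
signups = 1_901_173
recently_logged_in = 690_800  # Linden's 60-day "recently logged in" window

# Churn here = fraction of signups who have NOT logged in recently.
churn_pct = 100 * (1 - recently_logged_in / signups)
print(f"Advertised churn: {churn_pct:.1f}%")  # just under 64%
```

As he notes, a 30-day window would almost certainly push this figure much higher.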
He also problematises the whole idea of 3D environments, which danah boyd picks up in inimitable fashion (meatspace! so 90s!).
I have to admit that i get really annoyed when techno-futurists fetishize Stephenson-esque visions of virtuality. Why is it that every 5 years or so we re-instate this fantasy as the utopian end-all be-all of technology? (Remember VRML? That was fun.)
There is no doubt that immersive games are on the rise and i don’t think that trend is going to stop. I think that WoW is a strong indicator of one kind of play that will become part of the cultural landscape. But there’s a huge difference between enjoying WoW and wanting to live virtually. There ARE people who want to go virtual and i wouldn’t be surprised if there are many opportunities for sustainable virtual environments. People who feel socially ostracized in meatspace are good candidates for wanting to go virtual. But again, that’s not everyone.
If you look at the rise of social tech amongst young people, it’s not about divorcing the physical to live digitally. MySpace has more to do with offline structures of sociality than it has to do with virtuality. People are modeling their offline social network; the digital is complementing (and complicating) the physical. In an environment where anyone _could_ socialize with anyone, they don’t. They socialize with the people who validate them in meatspace. The mobile is another example of this. People don’t call up anyone in the world (like is fantasized by some wrt Skype); they call up the people that they are closest with. The mobile supports pre-existing social networks, not purely virtual ones.
Quite a few very experienced people have made a strong case for museums in Second Life and with a flythrough demo it is easy to get seduced. But I do wonder about the churn factor that Shirky focuses on, and I agree with boyd about the actual use of social technologies.
My team here at the Powerhouse Museum has been toying with the idea of a Second Life trial too – we’ve had quite a bit of experience with 3D environments and reconstructions in the past. But a museum is unlikely to have the resources of a Dell or IBM to mount a media-friendly, product-launch-style event in SL quickly enough to make a significant splash – these things take months (if not years) for the museum sector to develop properly, and by the time they are done (maybe) the hype will have moved on.
Swivel is the web 2.0 version of Nationmaster. It lets anyone upload a dataset, visualise it, mash it up with other people’s datasets, and plenty more.
[old post content – The only troubling thing is their license agreement which may limit what some people might want to do with it. I should clarify that most web 2.0 companies have similar license agreements with users as everyone tries to figure out how to make money out of user-created data.]
Brian at Swivel and I had a bit of an email conversation following my initial blog post in which I suggested they might consider a Creative Commons approach especially when dealing with data supplied by the non-profit and government sectors. They have now updated their legal conditions following a meeting with CC. I’m very pleased that Swivel have done this! (Not to mention slightly excited that this blog has had such an impact – its readership grows and grows beyond the museum sphere)
We are conducting the first comprehensive survey looking at museum blogs and blogging practices. If you write for, or operate a museum or museum-related blog, please fill out the survey on the Museum Blogs website.
Seb Chan (Powerhouse Museum) and myself (Ideum) are conducting the survey. The results will be presented in a session, Radical Trust: The state of the museum blogosphere, at the Museums and the Web conference in San Francisco in April 2007. We will also link to our paper from both the Ideum blog and the Powerhouse’s fresh + new blog.
The purpose of the survey is to capture a snapshot of the technologies, aims, policies, uses, and impact of blogging in the museum sector. 2006 has been an amazing year for the field: what was 20 blogs back in January is now a community of nearly 100 museum-related blogs. The results from the survey will help organizations plan and justify future projects utilizing blogs and other social technologies. Please feel free to repost or otherwise pass this on.
danah boyd’s latest article, Friends, friendsters, and top 8: Writing community into being on social network sites in First Monday is a good examination of the nature of ‘friend-ing’. Like many people who actually use social networking sites themselves, boyd is frustrated that a lot of people talking about these sites seriously misunderstand how they are used, particularly by young people. These misunderstandings lead, at one extreme, to a paranoia about stranger danger, and at the other, to an overestimation of the real-world ‘value’ of ‘lots of friends’.
While some participants believe that people should only indicate meaningful relationships, it is primarily non-participants who perpetuate the expectation that Friending is the same as listing one’s closest buddies. Failing to understand the culture of Friending that has emerged in social network sites contributes to the fear of the media and concerned parents over how they envision participants to be socializing.
By examining what different participants groups do on social network sites, this paper investigates what Friendship means and how Friendship affects the culture of the sites. I will argue that Friendship helps people write community into being in social network sites. Through these imagined egocentric communities, participants are able to express who they are and locate themselves culturally. In turn, this provides individuals with a contextual frame through which they can properly socialize with other participants. Friending is deeply affected by both social processes and technological affordances. I will argue that the established Friending norms evolved out of a need to resolve the social tensions that emerged due to technological limitations. At the same time, I will argue that Friending supports pre-existing social norms yet because the architecture of social network sites is fundamentally different than the architecture of unmediated social spaces, these sites introduce an environment that is quite unlike that with which we are accustomed. Persistence, searchability, replicability, and invisible audiences are all properties that participants must negotiate when on social network sites.
Museums need to be careful to understand the nature of use before they head too deeply into either colonising or building their own social networking sites. The LA-MOCA MySpace page I mentioned a few days ago from Jim Spadaccini’s talk at the NDF indicates they have (at last count) 6,375 ‘friends’ – but what does this actually mean?
Similarly, a few days ago a new user ‘friended’ me on Last.fm as a result of my NDF paper. What was interesting about this friend-ing was that they explicitly wrote –
Hi,I am not really sure how this works haha but I love ur
music taste and all that…not sure how ading friend thing would do…but just don’t want to forget ur page.Thank you:)
“Bulkeley explains how the photographic film industry, encyclopedia publishers, the music industry, and the advertising industry feasted on buyers by forcing them to purchase things they didn’t want – prints of all 24 shots from their camera or a whole album to secure one favorite song, for example. “The business models required customers to pay for detritus to get the good stuff,” Bulkeley writes. But digital cameras, the Web, iTunes, and search-related advertising have stripped those industries of their power to charge for detritus.”
I’ve been thinking a lot about museum collections online, and those who’ve heard me talk know I keep coming back to the idea of content atomisation, which is pretty much the same thing as disaggregation. Whilst in the physical museum space our audiences are shepherded through exhibition spaces and our collections along either closed or semi-closed paths created by curators and exhibition designers, the online museum space offers an opportunity for users to disaggregate our objects, collections, knowledge and information to suit themselves.
Even if you haven’t put your collection online in the same way that we have you will still know by looking at your web statistics that only a small proportion of your web visitors enter via your home page (I’d guesstimate under 20% across the board) and that a large proportion get to your site via a search engine (again, I’d guesstimate greater than 40% if not much higher).
Still, when presenting web content museums like to bring themselves back to the notion of an expert narrative. Some go further and lock their content in bundles by using Flash or Director to effectively prevent unbundling. I remember speaking to Dana Mitroff and Peter Samis a few years ago at SFMOMA about their work in creating workarounds for users to be able to get into their Making Sense Of Modern Art site without going in the front door. Their driver was Google – which until they unbundled couldn’t spider the rich content held in the MSOMA project.
One of the themes I am working on at the moment is the notion of user narratives – or the individual narratives that users create as they self-navigate the infosphere. You do this every day yourself when you create paths through Google searches and results, RSS feeds and more.
How does this work in the context of a museum collection?
Does disaggregation/atomisation really mean that users will just dip in and out of your site quickly and not stick around at all? (Don’t they do this already when they can’t find what they want?) Or, can you, with tools similar to our collection database’s serendipity features, actually reach out to more and new users and at the same time increase the stickiness of your site despite unbundling?
In other words, like old photo processing, do we still need to force users to get 23 bits of information they have no interest in to get the 1 piece they really wanted, and knew they wanted? Or could they get directly to that one piece and be so encouraged by their experience to actually want to look at some of the other 23?
How does this impact upon physical visitation? Can a visitor know too much from the online experience to no longer want to visit?
WikiMatrix is a very neat way of comparing the different features of the ever-growing multitude of wiki solutions on offer. You can compare technical features, server requirements, license terms and much more.
One of the biggest problems for those implementing wikis in their organisations is the unfamiliarity of the language used to write wiki posts – especially for non-technical users. Thus, WikiMatrix’s ability to compare the syntax used for common functions such as hyperlinking and text formatting is a really nice feature. After all, with social software there is not much point implementing a sophisticated, flexible, open source solution if your users have no idea how to do the simplest data entry on the resulting site.
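To illustrate how much the syntaxes diverge, here is the same simple edit (a bold word and an internal page link) in two common wiki markups; the page name is an invented example:

```
MediaWiki:  '''Important''' – see [[Staff Handbook|the staff handbook]]
Textile:    *Important* – see "the staff handbook":StaffHandbook
```

A user who has learned one of these will make a mess in the other, which is exactly the sort of thing WikiMatrix lets you check before committing to a platform.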
At the Powerhouse Museum we have been experimenting a bit with wiki software and are possibly going to be trialling a solution for our intranet – but the sticking point has been the data entry language more than anything else.
We will be watching the Horowhenua Library Trust’s Kete project with interest to see how their community takes up their wiki-styled solution.
Have you implemented a successful wiki solution? Tell us in the comments as we’d love to hear from you.
(updated 29/12/06 with streaming media of all presentations)
Jim Spadaccini from Ideum has blogged extensively about the NDF and the presentations so I’m just going to add a few comments of my own rather than recap the whole event.
Jim started proceedings with the opening keynote address, which gave a broad overview of how museums are adapting and implementing the core ideas of web 2.0. It was a dynamic presentation that offered a lot of food for thought, and there was an audible gasp when he posed the problem of “if you don’t then they will” – referring to the plethora of ‘intelligent design’/Creationism web 2.0-enabled sites. Jim focussed on the easy-to-do things – museum blogging and its continuing rise – as well as what smaller organisations without web or IT teams can do, namely colonise existing sites and services. Although this can be problematic, Jim made a strong case for doing so when appropriate, especially for organisations targeting youth audiences. He used the example of the LA Museum of Contemporary Art’s MySpace page, which taps into the existing audience for their night-time musical activities – activities barely visible on their main website. It is not surprising that amongst their featured ‘friends’ are Z-Trip and Crystal Method. The lure of these existing social networks and online communities is that they represent, on the whole, a demographic that is otherwise absent from many museum websites. Further, the more that walled-garden communities such as MySpace monopolise the time and attention of this demographic, the less opportunity there is for other sites.
Sometimes, as with a photographic collection from the Maxwell Museum, Ideum has found technical solutions in existing services. The Maxwell project uses Flickr and its open API to store and present images in a way that would have been well beyond the budget of such a small museum; Ideum has built a Flickr mashup to complete the project. Interestingly, even before the project launches, other Flickr users have already been discovering the seeded collection images and interacting with them, commenting and recommending. In so doing, Ideum has not only solved the issue of a limited project budget, but also reached a large community of users – via Flickr – who would otherwise have been unlikely to come across the finished site at the Maxwell.
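The mashup approach is straightforward in outline: Flickr exposes a REST API, and a site can pull a museum’s photostream from it at page-load time. A minimal sketch of that idea (the API key and user id are placeholders, and this is my guess at the general shape of such a mashup, not Ideum’s actual code):

```python
import json
import urllib.parse

FLICKR_REST = "https://api.flickr.com/services/rest/"

def flickr_search_url(api_key: str, user_id: str, per_page: int = 20) -> str:
    """Build a flickr.photos.search call for one account's public photos."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "user_id": user_id,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,  # plain JSON rather than a JSONP wrapper
    }
    return FLICKR_REST + "?" + urllib.parse.urlencode(params)

def photo_page_urls(response_text: str) -> list:
    """Extract flickr.com photo page URLs from a search response."""
    data = json.loads(response_text)
    return [
        f"https://www.flickr.com/photos/{p['owner']}/{p['id']}"
        for p in data["photos"]["photo"]
    ]

url = flickr_search_url("YOUR_API_KEY", "12345678@N00")  # placeholder values

# Parsing a sample (invented) response, as a real fetch would return:
sample = '{"photos": {"photo": [{"id": "123", "owner": "99@N00"}]}}'
pages = photo_page_urls(sample)
```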
Jim cautioned that out of every hundred 2.0 startups perhaps as few as 2 will survive – and if you make the wrong choice then your data will disappear when they do.
I’ve had the good fortune to spend some considerable time with Jim whilst in NZ, over the fine cakes, hot chocolates and coffee of Wellington, discussing the world of museums and technology. We are conducting a joint survey on museum blogging (more on that shortly), but most interestingly he has been pushing the idea of museum widgets, an area in which Ideum has been a pioneer in the museum world. Widgets are micro-applications that run on desktops and on web pages, providing a browser-less interface to data. These can be very simple – Ideum’s solar image of the day widget draws in an image from NASA, and others do little more than display an RSS feed – or they can be far more complicated, including micro-games. Ideum’s current work with widgets has seen over 100,000 downloads and substantial referring traffic from important catalogue sites such as Apple’s widget gallery. Whilst some widgets are little more than gimmicks, others provide extremely useful or interesting services. The Rijksmuseum in Amsterdam has had an ‘image of the day’ widget for a long time now and it is one of the most popular widgets on the scene. Any museum or collection could, and probably should, be emulating them – if only for the exposure.
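The data side of an ‘image of the day’ widget is usually trivial; most of the effort goes into presentation. A minimal sketch of the feed-reading half, assuming a standard RSS 2.0 feed (the feed contents here are invented for illustration):

```python
import xml.etree.ElementTree as ET

def latest_item(rss_text: str):
    """Return (title, link) of the newest item in an RSS 2.0 feed."""
    channel = ET.fromstring(rss_text).find("channel")
    item = channel.find("item")  # RSS feeds conventionally list newest first
    return item.findtext("title"), item.findtext("link")

# A tiny sample feed standing in for whatever a museum publishes:
sample_feed = """<rss version="2.0"><channel><title>Collection</title>
<item><title>Object of the day</title><link>http://example.org/obj/1</link></item>
</channel></rss>"""

title, link = latest_item(sample_feed)
```

A real widget would fetch the museum’s feed on a timer and redraw itself with the newest item.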
Day two opened with Toby Travis from the V&A. It has been great to meet Toby – one of two developers at the V&A. Their online work often seems buried on their site and tends to surface only around promotional activities. There have been some fascinating projects around user-generated content at the V&A, and many have developed communities beyond the expectations of the museum. So much so that later in the year they will be launching a MyGallery-style site which will allow users to aggregate their own and others’ user-generated content from the V&A site into a Flickr-style interface. I’ll be very interested to see how much this is used. Jim covers Toby’s talk in detail in his blog posts.
Also on day two, Joanna Ransom from the Horowhenua Library Trust presented on their Kete project. This is really amazing stuff – an open source community cultural wiki built from the community upwards. It was very inspiring and I think there is a lot to be said for this approach. That it has been done on the smell of an oily rag, and done so well, is a testament to the trust they have from their community. The site launches publicly in March 2007 and, when completed, will seriously challenge similar projects set up by infinitely larger organisations and companies. This is perhaps one of the first broadsides in Community 2.0.
My presentation on emerging technologies and the Powerhouse Museum’s collection database, along with all the others, is streamable from the NDF site.
Apologies for the lack of photos – I had intended to finish my presentation with a pic of the audience taken from the stage, but foolishly left my camera on the table!
Thank you to Te Papa and the NDF team for making this event possible. It really was a marvellous gathering, full of interesting people (about 400!) from the NZ museums, libraries, archives and galleries sector, and with high calibre presentations from all involved. New Zealand is very forward-thinking and proactive in holding this event annually.