Categories: Collection databases, Copyright/OCL, Developer tools, Interactive Media, Metadata, Social networking, UKMW07, Web 2.0

UK Museums on the Web 2007 full report (Leicester)

UK Museums on the Web 2007 was held at the slightly rainy and chilly summer venue of the University of Leicester. Organised by the 24 Hour Museum and Dr Ross Parry with the Museums Computer Group, the event was attended by about 100 museum web techies, content creators and policy makers.

As a one-day conference (preceded by a day-long ‘museum mashup’ workshop) it was very affordable, fun and entertaining (yes, in the lobby they had a demo of one of those new Philips 3D televisions . . . disconcerting and very strange).

Here’s an overview of the day’s proceedings (warning: long . . . you may wish to print this or save it to your new iPhone).

The conference opened with Michael Twidale and myself presenting the two keynote addresses. I presented a rather ‘sugar-rush, no-holds-barred view from the colonies’ of why museums should be thinking about their social tagging strategies. (I’ll probably post my slides a little later.) I had been quite stressed about the presentation, coming off very little sleep and a long flight from Ottawa to London the night before. But I’ve been talking about these and related topics almost non-stop for the past two weeks, so it was actually a good feeling to get it done right at the beginning.

After my presentation, Michael Twidale from the University of Illinois reprised the joint presentation about museums making tentative steps into Second Life that his colleague and co-author Richard Urban had presented at MW07 in San Francisco. Michael (like Richard before him) certainly piqued the interest of some in the room who, I had the feeling, had barely thought about Second Life before – although I notice that the extremely minimally staffed Design Museum in London has just been running an architecture event and competition in Second Life (see Stephen Doesinger’s ‘Bastard Spaces’).

Mike Ellis from the Science Museum followed the tea break with a presentation that looked at the outcomes of letting a small group of museum web nerds loose for a day without the pressures of a corporate inbox. Using a variety of public feeds, the outcomes of such a short period of open-ended collaborative R&D were quite amazing. In many ways Mike’s presentation ended up challenging the audience to think about new ways of injecting innovation and R&D into their museum’s web practices. Amongst the mashups were a quick implementation of the MIT Simile Timeline for an existing date-tracking project at the Cambridge University Museum; a Google Maps mashup of all known museum locations and websites in the UK (something that revealed that current RSS feeds of this data are missing the crucial UK postcode information); a date-cleaning API, built by Dan Z from Box UK, to allow cross-organisational date comparison; and an exciting mashup using Spinvox’s voice-to-text service to allow museum visitors to call a phone number and be SMSed back information about locations, services or objects.
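The date-cleaning idea is easy to picture. Here is a minimal Python sketch of the concept – my own illustration, not Box UK’s actual API – normalising the kind of free-text dates museum catalogues hold (‘[1870-1915]’, ‘late 1930s’) into comparable year ranges:

    import re

    def clean_date(text):
        """Normalise a free-text museum date into a (start_year, end_year) tuple."""
        # A range like "1870-1915", possibly wrapped in brackets
        m = re.search(r"(\d{4})\s*-\s*(\d{4})", text)
        if m:
            return int(m.group(1)), int(m.group(2))
        # A qualified decade like "late 1930s"
        m = re.search(r"(early|mid|late)?\s*(\d{3})0s", text)
        if m:
            decade = int(m.group(2)) * 10
            start, end = {"early": (0, 3), "mid": (3, 7),
                          "late": (7, 9), None: (0, 9)}[m.group(1)]
            return decade + start, decade + end
        # A bare year like "1920"
        m = re.search(r"\d{4}", text)
        if m:
            return int(m.group()), int(m.group())
        return None

    print(clean_date("Solar heater, wood/metal, [1870-1915]"))  # (1870, 1915)
    print(clean_date("Immersion water heater, late 1930s"))     # (1937, 1939)

Once every organisation’s dates are reduced to comparable ranges like this, cross-organisational comparison becomes a simple overlap test.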

These were all really exciting prototypes that had come out of a very small amount of collaborative R&D time – something every museum web team should have. Apart from this, a couple of problems facing museum mashups were revealed – stability issues and reliance on other people’s data – but, as Mike pointed out, how does that really compare to the actual stability of your existing services?

Nick Poole from MDA presented Naomi Korn’s slides on the rights issues (moral, ethical and copyright) involved in museums implementing Web 2.0 applications. Nick’s presentation was excellent and had two main points to make. The first was that the museum sector has already been moving towards increased audience focus and interaction in its real-world policy for at least the past decade, so why should the web be any different? Further, the recent political climate in which museums in the UK exist has focussed on the cultural sector taking a lead in enhancing social cohesion and the sharing of cultural capital. Secondly, Nick emphasised that as museums “we have a social responsibility to the population to exploit any and all methodologies which makes it easier for them to engage with and learn from their (cultural) property”, concluding that despite the potential legal issues, Web 2.0 offers a “set of mechanisms by which we can enhance accountability and effectiveness in a public service industry”. Excellent stuff.

Alex Whitfield from the British Library then presented an interesting look at an admittedly extreme example of the tensions involved in implementing Web 2.0 technologies with certain exhibition content. Alex demonstrated some of the website for the Sacred exhibition, which shows some of the key religious manuscripts of three faiths – Christianity, Islam and Judaism. The online exhibition shows 66 of 152 texts and includes a Google Maps interface, expert blogs, podcasts and some nice Flash interactives (yes, I did ask why Flash – apparently because it was a technology choice encouraged by the IT team). Alex then proceeded to look at a few examples of where tagging and digital reproduction can cause community offence, or at the very least controversy, before closing with a reference to Susan Sontag’s ‘On Photography’, in which Sontag claims that photography effects a reduction of ‘the subject’ (see an interview with Sontag where she explains this concept). Alex’s examples were certainly provocative and reminded me, again, that the static web and the participatory web each carry their own particular set of implicit politics (individualistic, pro-globalisation and pro-democracy, although to differing depths of democracy).

After a light lunch, Frances Lloyd-Baynes from the V&A gave an overview of some of the work they have been doing and some of the challenges ahead. She reported that the V&A has 28% of its collection online, but that the figure reduces to 3% once bibliographic content is excluded. Of course, they have been working on other ‘collections’ – those held by the community – for quite a while, as evidenced by their Every Object Tells A Story and the new Families Online project.

She also mentioned the influence of the MDA’s ‘Revisiting Collections’ methodology, which focuses on making a concerted effort to engage audiences and bring user/public experiences to museum collections content. This and other concepts have become a key part of the V&A’s strategic policy.

In terms of user-generated content she highlighted problems that many of us are starting to face. What UGC gets ‘kept’? For how long, and how much? What should be brought into the collection record? Should it be acknowledged, and how? How should museums respond to, mediate and transform content, or should it remain unmediated? And how do we ensure that there is clarity and distinction between the voice of the museum and the voice of the user?

Mia Ridge, a fellow Australian and now an expat working as a database developer at the Museum of London, gave a practical overview of how Web 2.0 can be implemented in museums. She covered topics like participation inequality, RSS and mashups, and the need to be transparent with acceptable use and moderation policies. It was a very practical set of recommendations.

Paul Shabajee from HP Labs then gave a very cerebral presentation on the design of the “digital content exchange” (DCX) prototype for the Singapore education sector. The DCX allows for the combination of multiple data and metadata sources spread across multiple locations, as well as faceted browsing and searching for teachers and students, allowing for dynamic filtering by type, curriculum subject area, format, education level, availability, text search, etc. It was a great example of the potential of the Semantic Web. He then went on to explain the CEMS thesaurus model of curriculum and the taxonomies of collection, and how actual users wanted to do things in more complex ways, such as finding a topic for a class, then finding real-world events and mapping them against topics. And because everything had been semantically connected, building new views in line with user needs did not mean massive re-coding. More information on the project can be gleaned from Shabajee’s publications.
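Faceted filtering of the kind the DCX offers is conceptually simple. The Python below is purely illustrative – the records and facet names are invented for the sketch, not taken from the DCX’s actual schema:

    # Minimal sketch of faceted filtering over a resource catalogue.
    # Records and facet names are invented for illustration.
    resources = [
        {"title": "Volcano photos", "type": "image", "subject": "geography", "level": "primary"},
        {"title": "Merlion history", "type": "text", "subject": "history", "level": "secondary"},
        {"title": "Tide simulation", "type": "interactive", "subject": "geography", "level": "secondary"},
    ]

    def facet_counts(records, facet):
        """Count how many records carry each value of a facet."""
        counts = {}
        for record in records:
            value = record.get(facet)
            counts[value] = counts.get(value, 0) + 1
        return counts

    def filter_by(records, **criteria):
        """Keep only records matching every supplied facet=value pair."""
        return [r for r in records if all(r.get(f) == v for f, v in criteria.items())]

    print(facet_counts(resources, "subject"))  # {'geography': 2, 'history': 1}
    print(filter_by(resources, subject="geography", level="secondary"))

The interesting part of the DCX is that the facets come from shared semantic models rather than hand-coded fields, which is what lets new views be built without massive re-coding.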

Then, after some very tasty micro-tarts (chocolate and raspberry, of which I must have partaken in five or six . . ), we moved on to the closing session from Brian Kelly of UKOLN. Brian is a great presenter, although his slides always seem so lo-fi because of his typographic choices. Brian managed to make web accessibility for Web 2.0 a compelling topic, and his passion for reforming the way we generally approach ‘accessibility’ is infectious.

Brian is a firm believer that accessibility is not about control, rules, universal solutions, or an IT problem. Instead he asks: what does accessibility really mean for your users? And, rather cheekily, ‘how can you make surrealist art accessible’? Accessibility, for Brian, is about empowering people, contextual solutions, widening participation and blended solutions – all the things that Nick Poole and Frances Lloyd-Baynes (and the rest of us) were pushing for earlier in the day.

Brian has come up with a model of approaching accessibility that uses as its metaphor the tangram puzzle (for which there is no single ‘correct’ solution) rather than a jigsaw. He advised that we should focus on content accessibility because a mechanistic approach doesn’t work. How do you make a 3D model in an e-learning resource accessible? It is just not possible, and instead we should be focussing on making the learning objectives/outcomes accessible. If we see things in this way then there is no reason to bar museums from projects in, say, Second Life on the grounds that it isn’t ‘accessible’ to some disabled users; instead we should focus on also providing alternatives that achieve or demonstrate similar outcomes for those users. Michael Twidale also provided the example of the paralysed Second Life user who can, in his virtual world, fly, when in the real world he cannot walk.

Brian closed by advising that at a policy level we should be saying things like “museum services will seek to engage its audiences, attract new and diverse audiences. The museum will take reasonable steps to maximise access to its services”. By applying principles of accessible access across the whole portfolio of what the museum offers (real and virtual) we can still implement experimental services, rather than using accessibility as a preventative tool. After all, as he points out, the BBC has a portfolio of services for impaired users rather than ensuring access on every single service.

Categories: Collection databases, Digitisation, Imaging, Interactive Media, Metadata, Web 2.0

Hyperlinking collectively shared images – Seadragon/Photosynth

There’s been a lot of discussion on the web about Microsoft’s Photosynth, but this demonstration from TED reveals the real possibilities. The image navigation offered by Seadragon is quite amazing but, as Blaise Aguera y Arcas points out in the short demonstration, what a collective Photosynth experience offers is the ability for one user/contributor’s content to benefit from the metadata associated with everyone else’s visually related content (around the 6:10-6:30 mark).

If the cultural sector contributed images to, or made use of, this sort of application, our very rich contextual metadata could be added to the common pool, allowing holiday snaps to be explored with deep connections to cultural collections and other people’s snapshots. And, again, as Blaise Aguera y Arcas makes clear, the other side effect is the ability to generate rich virtual reconstructions as well.
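To make the pooling idea concrete, here is a toy Python sketch – the visual matches would come from a Photosynth-style feature matcher, which is far beyond a few lines, so everything here is invented purely for illustration:

    # Hypothetical sketch: pooling metadata across visually related images.
    # The "matches" would come from a Photosynth-style matcher; here they
    # are hand-written pairs for illustration.
    tags = {
        "my_holiday_snap.jpg": {"notre dame"},
        "museum_photo_041.jpg": {"notre dame", "west facade", "gothic", "c. 1250"},
    }
    matches = [("my_holiday_snap.jpg", "museum_photo_041.jpg")]

    def pooled_tags(image, tags, matches):
        """Return an image's own tags plus the tags of visually related images."""
        pool = set(tags.get(image, set()))
        for a, b in matches:
            if a == image:
                pool |= tags.get(b, set())
            elif b == image:
                pool |= tags.get(a, set())
        return pool

    # The holiday snap inherits the museum's richer contextual metadata.
    print(pooled_tags("my_holiday_snap.jpg", tags, matches))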

The BBC has already been exploring these possibilities.

Categories: Collection databases, Folksonomies, Metadata, Web 2.0

MW07 – Day two: Rijksmuseum & CHIP

The Rijksmuseum in Amsterdam and several Dutch universities have been working on an exciting collection project, CHIP, which uses ratings and user profiles to recommend art to users. Whilst I was a little sceptical of their ‘ratings’ (1 to 5 stars) as a means of describing art, the recommendation tools and prototype interface were fascinating. Also exciting was the means by which they exposed the ‘recommendations’ – ‘You are recommended these because . . .’ is very reminiscent of Amazon’s additions of the last few years.
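For the curious, here is a toy Python sketch of how property-based recommendations with ‘because . . .’ explanations might work. To be clear, this is my own illustration, not CHIP’s actual algorithm, and the artworks and properties are invented:

    # Toy sketch of recommendations with explanations, in the spirit of
    # CHIP's "You are recommended these because..." (not the project's
    # actual algorithm; artworks and properties are invented).
    artworks = {
        "The Milkmaid":    {"artist": "Vermeer",   "theme": "daily life"},
        "The Love Letter": {"artist": "Vermeer",   "theme": "daily life"},
        "The Night Watch": {"artist": "Rembrandt", "theme": "militia"},
    }

    def recommend(liked, artworks):
        """Recommend unrated artworks sharing properties with highly rated ones."""
        recommendations = []
        for title, props in artworks.items():
            if title in liked:
                continue
            reasons = [
                f"you rated '{other}' highly and both share {field} '{value}'"
                for other in liked
                for field, value in props.items()
                if artworks[other].get(field) == value
            ]
            if reasons:
                recommendations.append((title, "; ".join(reasons)))
        return recommendations

    for title, reason in recommend({"The Milkmaid"}, artworks):
        print(f"{title} – recommended because {reason}")

The appeal of exposing the reasons is exactly what Amazon discovered: an explained recommendation is far more persuasive than an unexplained one.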

Most striking of all, though, was CHIP’s ability to let the user generate a printable/downloadable map customised to show them their favourite and recommended artworks. This high level of integration between onsite recommendations and the gallery floor is something we are thinking a lot about at the Powerhouse Museum in our OPAC project – especially for use at our Castle Hill open storage facility.

Categories: Collection databases, Metadata

Linden on ‘end of federated search?’ and Google

Greg Linden speculates that Google is pulling back from the notion of federated search. (via O’Reilly)

Google instead prefers a “surfacing” approach which, put simply, is making a local copy of the deep web on Google’s cluster.

Not only does this provide Google the performance and scalability necessary to use the data in their web search, but also it allows them to easily compare the data with other data sources and transform the data (e.g. to eliminate inconsistencies and duplicates, determine the reliability of a data source, simplify the schema or remap the data to an alternative schema, reindex the data to support faster queries for their application, etc.).

Google’s move away from federated search is particularly intriguing given that Udi Manber, former CEO of A9, is now at Google and leading Google’s search team. A9, started and built by Udi with substantial funding from Amazon.com, was a federated web search engine. It supported queries out to multiple search engines using the OpenSearch API format they invented and promoted. A9 had not yet solved the hard problems with federated search — they made no effort to route queries to the most relevant data sources or do any sophisticated merging of results — but A9 was a real attempt to do large scale federated web search.

If Google is abandoning federated search, it may also have implications for APIs and mashups in general. After all, many of the reasons given by the Google authors for preferring copying the data over accessing it in real-time apply to all APIs, not just OpenSearch APIs and search forms. The lack of uptime and performance guarantees, in particular, are serious problems for any large scale effort to build a real application on top of APIs.

Google has put its energies into Google Co-op, which allows users to create their own sub-Google search engines using the Google database as the data source. This has the effect of encouraging traditionally deep-web databases, like museum collection databases, to become spiderable, indexed and cached by Google. For individual end users this makes sense – they probably already go to Google first – but does it make sense for content providers?

Try this example.

Here is a search for ‘heater’ using the Powerhouse’s own collection search.

Top five –

B1431 Solar heater, plus base, wood/metal, Lawrence Hargrave, Australia, [1870-1915]
K693 Immersion water heater, electric, made in Australia, late 1930s (OF).
93/176/15 Light globe, heater lamp, glass/metal, British Thompson Houston, England, 1920
93/176/16 Light globe, heater lamp, glass/metal, Osram, England, 1950
85/69 Brochure, Instruction and Operating Chart for Emmco Fryside heater

Here is the same search for ‘heater’ using a Google Co-op search I created over the same data within the same collection.

Top five –

86/676 Gas heater – Malley’s No. 1, copper, Metters, Australia …
97/331/1 Convection heater, domestic, portable gas, metal/paint …
H7061 Water heater, “The Schwer”, constructed of copper & can be …
B1538 Water heater model, steam, “Friar”, [Australia or UK]; A A …
95/117/1 Kerosene water heater and instruction sheet, Challenger …

So which is more accurate?

Google Co-op bases its results on a number of different factors, all of which are unknown to the searcher, and most of which are unknown to the content provider. At least with our internal search we can tweak the ordering and relevance of results using our own known variables.
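An in-house ranking can boil down to something as simple as this Python sketch – hypothetical weights and signals, not the Powerhouse’s actual ranking code – where every variable is known and adjustable by the museum:

    # Hypothetical sketch of tweakable in-house relevance scoring
    # (weights and signals are invented, not the Powerhouse's engine).
    WEIGHTS = {"title_match": 5.0, "description_match": 1.0,
               "has_image": 2.0, "on_display": 1.5}

    def score(record, query):
        """Score a collection record against a query using in-house signals."""
        q = query.lower()
        s = 0.0
        if q in record.get("title", "").lower():
            s += WEIGHTS["title_match"]
        if q in record.get("description", "").lower():
            s += WEIGHTS["description_match"]
        if record.get("has_image"):
            s += WEIGHTS["has_image"]
        if record.get("on_display"):
            s += WEIGHTS["on_display"]
        return s

    records = [
        {"title": "Solar heater", "description": "wood/metal", "has_image": True, "on_display": False},
        {"title": "Light globe", "description": "heater lamp, glass/metal", "has_image": False, "on_display": True},
    ]
    for r in sorted(records, key=lambda r: score(r, "heater"), reverse=True):
        print(score(r, "heater"), r["title"])

With Google’s index, by contrast, none of those weights are visible, let alone adjustable.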

Categories: Metadata, Web 2.0

More on OpenSearch (and competitors)

Lorcan Dempsey points to a succinct PDF article comparing OpenSearch 1.0, 1.1, SRU and MXG.

Categories: Metadata, Web 2.0

Another plug for OpenSearch

As I’ve been speaking to other institutions, both here in Australia and overseas, I’ve started to realise that more of us should be using OpenSearch to allow others (or ourselves) to aggregate our deep content – whilst still retaining full control of said content.

I blogged about this ages ago but I think everyone was caught up in getting their collections online and searchable to begin with.

The library sector has been debating its implementation for a while, and their arguments for and against OpenSearch are covered here.

OpenSearch is . . . a discovery mechanism. It allows a site to quickly expose vast amounts of data to end users in a detailed enough format that it elicits click-throughs. It is a way for end users to search a variety of sources, and source types, and to quickly grab the useful bits from each source, and to dig deeper for more detail when they find something of interest.

More to the point, though, since everyone must implement their OpenSearch results in exactly the same way, every OpenSearch source is guaranteed to work with every OpenSearch client. Instant interoperability.

Now, with both Firefox 2.0 and IE7 supporting OpenSearch, there really is no reason not to.

Imagine if your collection or your deep/dark web databases that you have already connected up to your website could be easily searched by a centralised search portal? And any interested searchers who clicked on a result would be redirected immediately to your site? And you didn’t need to implement anything complicated to make this possible?

Here is a very simple tutorial for a standard website.

Here is the Powerhouse Museum’s collection search for ‘chair’ delivered via the A9 portal.

And here is the raw XML result which anyone can aggregate to their site (allowing others to deliver traffic back to us).
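Under the hood, all it really takes is one XML file. Here is a minimal OpenSearch 1.1 description document, written out by a short Python snippet – the site name and URL templates are placeholders to replace with your own search endpoint:

    # A minimal OpenSearch 1.1 description document. The ShortName and
    # example.org URL templates are placeholders for your own site.
    DESCRIPTION = """<?xml version="1.0" encoding="UTF-8"?>
    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
      <ShortName>Example Collection</ShortName>
      <Description>Search the example museum collection</Description>
      <Url type="application/rss+xml"
           template="http://www.example.org/search?q={searchTerms}&amp;format=rss"/>
      <Url type="text/html"
           template="http://www.example.org/search?q={searchTerms}"/>
    </OpenSearchDescription>
    """

    with open("opensearch.xml", "w", encoding="utf-8") as f:
        f.write(DESCRIPTION)

    # Then link it from your pages so browsers and portals can discover it:
    # <link rel="search" type="application/opensearchdescription+xml"
    #       href="http://www.example.org/opensearch.xml" title="Example Collection" />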

If you have multiple databases on your site that each have their own esoteric search engines, then you could create your own cross-database search simply by creating an OpenSearch feed for each, and then a search page that aggregates each feed (see the sketch below).
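That aggregation page could be as small as this Python sketch, using the third-party feedparser library – the feed URLs are placeholders for your own databases’ OpenSearch endpoints:

    import feedparser  # third-party: pip install feedparser

    # Sketch of a cross-database search: query each database's OpenSearch
    # RSS feed and merge the results. URLs are placeholders.
    FEED_TEMPLATES = [
        "http://www.example.org/collection/opensearch?q={q}",
        "http://www.example.org/library/opensearch?q={q}",
    ]

    def cross_search(query):
        """Merge results from several OpenSearch RSS feeds into one list."""
        results = []
        for template in FEED_TEMPLATES:
            feed = feedparser.parse(template.format(q=query))
            for entry in feed.entries:
                results.append((entry.title, entry.link))
        return results

    for title, link in cross_search("chair"):
        print(title, "->", link)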

If you DO add OpenSearch to your site then please tell us!

Categories: Metadata, Web 2.0

Pay-for-answers: AQA and paid research

AQA (Any Questions Answered) offers a pretty unique service where users can text (SMS) a question to a group of researchers. How it works is detailed in an interview with its founder Colly Myers in The Register by web realist/skeptic Andrew Orlowski. Myers offers his views on the future of general web searching (falling away as it succumbs to data entropy), Wikipedia, and virtual sweatshops.

AQA served its 3 millionth answer recently, notching up the last million in four months. The previous million took seven months, and the first million took 19 months, which gives some indication of its growth ramp.

AQA’s owner IssueBits has been profitable since last October, says Myers, and he thinks the market is young and there’s plenty of opportunity to grow. AQA doesn’t have the field to itself – 82ask also caters to the curious texter – but it is in pole position.
Myers seems particularly proud of the infrastructure: AQA uses around 500 researchers to answer double the volume of queries it did before (the actual composition of the research staff varies, as they drop in and out of work).

If AQA is correct and the value of Google and other general search tools drops markedly as users move to silo-searches (as the article describes teenagers doing within MySpace) and entropy sets in, then there is a returning role for specialist research done by professional researchers in libraries and museums. And, if AQA indicates anything, it is a role that people will willingly pay for if the price is low enough and the requests are broken down into simple, separate questions.

Categories: Interactive Media, Metadata

Search the Powerhouse Museum collection via A9/OpenSearch

We’ve hooked our collection search up to A9’s OpenSearch.

So now you can ‘subscribe’ to a search result via RSS.

Here is an example search for ‘3830’.

http://a9.com/3830?a=sB000813VX4
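If you want to do something with such a subscription programmatically, a minimal polling loop in Python might look like this – the feed URL below is a placeholder rather than the real A9 query URL:

    import feedparser  # third-party: pip install feedparser

    # Sketch of "subscribing" to a search: poll the search's RSS feed
    # and report results not seen before. URL is a placeholder.
    FEED_URL = "http://www.example.org/collection/opensearch?q=3830"

    def new_items(feed_url, seen):
        """Return entries whose links are not in `seen`, updating `seen`."""
        fresh = []
        for entry in feedparser.parse(feed_url).entries:
            if entry.link not in seen:
                seen.add(entry.link)
                fresh.append(entry)
        return fresh

    seen = set()
    for entry in new_items(FEED_URL, seen):
        print("new result:", entry.title)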

Categories: Digitisation, General, Metadata

Meta-Media

There’s a really interesting article here from ctheory.net, written by our old mate Lev Manovich, that looks at ‘understanding meta-media’ and examines “what new media does to old media”, focusing particularly on the idea of simulation. The article references some great new media works that explore the concept of ‘mapping’ as a key framework for understanding the intersection.

“This is not accidental. The logic of meta-media fits well with other key aesthetic paradigms of today — the remixing of previous cultural forms of a given media (most visible in music, architecture, design, and fashion), and a second type of remixing — that of national cultural traditions now submerged into the medium of globalization. (The terms “postmodernism” and “globalization” can be used as aliases for these two remix paradigms.) Meta-media then can be thought alongside these two types of remixing as a third type: the remixing of interfaces of various cultural forms and of new software techniques — in short, the remix of culture and computers.”

Categories: General, Metadata

Protected Wiki

Interesting development in wiki security/authentication here.