Categories: Folksonomies, Interactive Media, Web 2.0

Powerhouse Museum launches Web 2.0-styled collection search

Today we made live our ‘OPAC 2.0’ project.

OPAC 2.0 has been developed by the Powerhouse Museum’s Web Services team in conjunction with Registration and Curatorial Departments. OPAC 2.0, as the name might suggest, represents the next generation of online collection browsing.

Using technology developed in-house, OPAC 2.0 allows users to browse nearly 62,000 current object records from Emu. Whilst some of these were previously viewable on AMOL/CAN, OPAC 2.0 now makes these available through the Powerhouse’s own website – and keeps them current and updated.

Improving on previous collection search tools, OPAC 2.0 tracks and responds to user behaviour, recommending ‘similar’ objects to increase serendipitous discovery and encourage browsing of our collection. It also keeps track of searches and dynamically ranks search results based on actual user interactions. Over time this artificial intelligence will improve as it learns from users, allowing for dynamic recommendations.
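For the technically curious, the basic idea is no more complicated than weighting results by recorded clicks. Here is a minimal Python sketch of that kind of click-weighted ranking (hypothetical code, not the actual OPAC 2.0 implementation; the function and variable names are invented for illustration):

    from collections import defaultdict

    # Hypothetical stand-in for an interaction log: each time a user clicks
    # an object in the results for a query, that pairing earns a point.
    click_counts = defaultdict(int)

    def record_click(query, object_id):
        """Log that a user chose this object from the results for this query."""
        click_counts[(query.lower(), object_id)] += 1

    def rank_results(query, candidate_ids):
        """Order the text-match candidates by how often real users picked them."""
        return sorted(candidate_ids,
                      key=lambda oid: click_counts[(query.lower(), oid)],
                      reverse=True)

    # Two users pick object 4711 for 'locomotive', so it outranks 1234 next time.
    record_click('locomotive', 4711)
    record_click('locomotive', 4711)
    record_click('locomotive', 1234)
    print(rank_results('locomotive', [1234, 4711, 9999]))  # [4711, 1234, 9999]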

OPAC 2.0 also incorporates a folksonomy engine allowing users to tag objects for later recall by themselves or others.
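Under the hood, a folksonomy engine is really just a many-to-many mapping between tags and objects. A toy sketch of the idea (again hypothetical, nothing like the production schema):

    from collections import defaultdict

    tags_on_object = defaultdict(set)    # object_id -> tags users have applied
    objects_with_tag = defaultdict(set)  # tag -> object_ids carrying that tag

    def tag_object(object_id, tag):
        """Record a user-supplied tag so the object can be recalled by it later."""
        tag = tag.strip().lower()
        tags_on_object[object_id].add(tag)
        objects_with_tag[tag].add(object_id)

    def find_by_tag(tag):
        """Return every object anyone has tagged with this word."""
        return sorted(objects_with_tag[tag.strip().lower()])

    tag_object(4711, 'steam')
    tag_object(8120, 'Steam')    # normalised to the same tag
    print(find_by_tag('steam'))  # [4711, 8120]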

In keeping with the nature of a collection database, OPAC 2.0 is designed to be in a state of perpetual development, with new features and tweaks being added periodically. As object records are added, edited or changed in our collection management system, Emu, OPAC 2.0 will periodically pull those changes into its own database. In early August the available images of each object will improve greatly as Emu is merged with our current image database, FirstFoto.

Likewise, new features already planned include exhibition locations and the ability for teachers and educators to ‘bundle up’ personalised selections of objects for use in teaching situations.

OPAC 2.0 would not have been possible without the hard work of the Web Services team, the registrars and curators at the Museum, and all the international beta testers who gave feedback on early versions.

Categories: Folksonomies, Interactive Media, Social networking, Web 2.0

Wikimapia (Wiki + Google Maps)

WikiMapia is a project to describe the whole planet Earth. The developers have combined a wiki with Google Maps to create this amazing resource that allows you to highlight any spot on earth and describe it in your own words. You can even tag these locations so people can find them using keywords. Try doing a search for “Powerhouse”.
http://www.wikimapia.org/

Categories: Folksonomies, Social networking, Web 2.0

Collective knowledge, South Korea, Google

Very interesting piece from the Baltimore Examiner.

Google is not the dominant search tool in South Korea. Apparently the dominant player is a local company called Naver, which runs a collective-knowledge, community-based question and answer service. This is an interesting parallel to something like Wikipedia – and it very clearly demonstrates the impact of local culture on net usage patterns.

The Korean slice of the Web is relatively small compared to the English-language chunks of cyberspace. Koreans often come up short when trying to find information in their native tongue.

To remedy the situation, Naver – which is more like a Yahoo-esque portal than a mere search engine – came up with what it calls Knowledge iN, where users post questions that are answered by other users – creating a database that now totals more than 41.1 million entries. A search on the site brings up typical Web results along with the Knowledge iN database and news and blog sites.

“I don’t know whether they expected it before or not, but it was actually a very good match for Korean culture,” Wayne Lee, an analyst at Woori Securities, said of Naver’s service. “Korean netizens like to interact with other people, they want to answer questions, they want to reply.”

The most popular questions clicked on Naver’s site focus on love, dieting or eradicating computer viruses. The queries that have garnered the most answers range from how dinosaurs are named to getting rid of pimples, and even musings on why telephone poles are spaced 165 feet apart.

Google relies on its computers to troll the Web and see which sites are linked most often by other sites, creating a ranking system based on how often a page is referenced. Compared to Naver’s people-created database, Google doesn’t “have a system to combat that,” said Danny Sullivan, editor of industry newsletter Search Engine Watch.
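To make the contrast concrete, the ‘count the incoming links’ idea the article is gesturing at can be sketched in a few lines of Python (a toy illustration only, not Google’s actual PageRank):

    from collections import Counter

    # A toy web: each page lists the pages it links to.
    links = {
        'a.example': ['b.example', 'c.example'],
        'b.example': ['c.example'],
        'c.example': [],
    }

    # Rank pages by how often other pages reference them.
    inbound = Counter(target for targets in links.values() for target in targets)
    ranking = sorted(links, key=lambda page: inbound[page], reverse=True)
    print(ranking)  # ['c.example', 'b.example', 'a.example']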

(via Bubblegeneration)

Categories: Folksonomies, Web 2.0

Steve.museum update from M&W2006

Read the latest about the collective museum folksonomy project, Steve.museum, from Museums & The Web 06. Our Electronic Swatchbook project gets a mention.

Social tagging applications such as flickr and del.icio.us have become extremely popular. Their socially-focussed data collection strategies seem to have potential for museums struggling to make their collections more accessible and to build communities of interest around their holdings. But little is known about the terminology that visitors to museum sites might contribute or how best to obtain both useful terms and on-going social involvement in tagging museum collections. In the steve.museum project, a number of art museums are collaboratively researching this opportunity. These research questions and an architecture for a prototype research application are presented here. Prototypes created to date are discussed and plans for future development and term-collection prototype deployment are presented. We discuss the potential use of folksonomy within museums and the requirements for post-processing of terms that have been gathered, both to test their utility and to deploy them in useful ways.

Categories: Folksonomies

Electronic Swatchbook Tagging

If you have an hour of spare time that you’d like to donate to us then drop me an email – sebc [at] phm [dot] gov [dot] au.

I’m looking for a few people to help us complete the tagging of the Electronic Swatchbook swatches.

The main thing we’re trying to get done is the COLOUR search. Obviously the swatches are made up of a lot of different colours, and real-time image analysis tools just don’t cut it. So we implemented a tagging feature where users can ‘describe’ the swatches. And a lot of people have done so.

But it’s not enough.

We need your help to finish them off. And we’ve built a quick and dirty bulk-tagging interface so volunteers can zip through the swatches in order, tagging the ones that are currently untagged.
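For anyone wondering what the tags actually drive, here is a rough sketch of the idea (invented data and names, not the Swatchbook code itself):

    # Hypothetical swatch records: id -> the colour tags volunteers have added.
    swatches = {
        'swatch-001': {'red', 'gold'},
        'swatch-002': {'blue'},
        'swatch-003': set(),   # untagged - what the bulk interface serves up
    }

    def untagged(swatch_db):
        """Swatches the bulk-tagging interface should show volunteers next."""
        return [sid for sid, tags in swatch_db.items() if not tags]

    def search_by_colour(swatch_db, colour):
        """Colour search driven entirely by human-supplied tags."""
        return [sid for sid, tags in swatch_db.items() if colour.lower() in tags]

    print(untagged(swatches))                 # ['swatch-003']
    print(search_by_colour(swatches, 'Red'))  # ['swatch-001']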