Categories
Social networking Web 2.0

Who participates? The many and the few

Who is really participating on so-called ‘social media’ websites? Research at both the academic and the market research level continues to probe this question.

Firstly, there is a fascinating paper titled ‘Power of the Few vs. Wisdom of the Crowd: Wikipedia and the Rise of the Bourgeoisie’ by Kittur, Chi, Pendleton, Suh and Mytkowicz, presented at alt.CHI 2007, that looks at changes in contribution trends on Wikipedia, charting the declining influence of an elite cabal of admins and then comparing this behaviour to that on del.icio.us.

In this paper, we show that the story is more complex than explanations offered before. In the beginning, elite users contributed the majority of the work in Wikipedia. However, beginning in 2004 there was a dramatic shift in the distribution of work to the common users, with a corresponding decline in the influence of the elite. These results did not depend on whether work was measured by edits or by actual change in content, though the content analysis showed that elite users add more words per edit than novice users (who on average remove more words than they added).

This paper’s research complements Wilkinson and Huberman’s ‘Assessing the value of cooperation in Wikipedia’ at First Monday (from April), which asserts that there is a significant correlation between article quality and number of edits.
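The shift the Kittur et al. paper describes can be quantified very simply: track the share of all edits made by the top few percent of editors across successive time windows. Here is a toy sketch of that measure (all the editor names and edit counts are invented, not taken from the paper):

```python
def elite_share(edits_by_user, elite_fraction=0.01):
    """Fraction of total edits made by the top `elite_fraction` of editors.

    `edits_by_user` maps editor -> edit count. A falling elite share over
    successive time windows is the kind of shift the paper describes.
    """
    counts = sorted(edits_by_user.values(), reverse=True)
    n_elite = max(1, int(len(counts) * elite_fraction))
    total = sum(counts)
    return sum(counts[:n_elite]) / total if total else 0.0

# Hypothetical snapshots: the elite's share of the work declines over time.
early = {"admin1": 900, "admin2": 800, **{f"user{i}": 3 for i in range(100)}}
late  = {"admin1": 300, "admin2": 250, **{f"user{i}": 40 for i in range(100)}}
print(elite_share(early, 0.02), elite_share(late, 0.02))
```

Run over monthly snapshots of real edit histories, a declining curve of `elite_share` is exactly the "rise of the bourgeoisie" the authors chart.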

Then over at the McKinsey Quarterly this month (requires a free registration), McKinsey presents comparatively small-scale research (573 users) from Germany examining why Germans upload content to video sharing sites, and what lessons this might hold.

Few companies, however, have a clear understanding of what inspires users to contribute to such sites. Executives might start by looking to the world of online video sharing, another fast-growing test bed for participation. McKinsey research conducted in Germany finds that motives such as a desire for fame and a feeling of identification with a community encourage collaboration and participation. Such findings, we believe, offer insights into the way companies might tailor their Web 2.0 offerings.

Categories
Conceptual Social networking Web 2.0 Web metrics Young people & museums

Social production, cut and paste – what are kids doing with ‘your’ images?

It has been one of the worst kept secrets of web statistics – deep linked image traffic. While this has been going on for years – since the beginning of the WWW, actually – it has increased enormously in the past few years. On some cultural sector sites such traffic can be very substantial – a quick test is to look at exactly how much of your traffic is ‘referred’ from MySpace. It is also one of the main reasons why Photobucket has traditionally reported so much higher traffic than Flickr – deep linking and cut-and-paste engagement with MySpace. With the move away from log file analysis to page tagging in web analytics, some, but not all, of this deep linking traffic is fortunately being expunged from analytics reporting.
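Before page tagging strips it out, deep-linked traffic is easy to spot in raw server logs. A quick-and-dirty referer audit might look like this sketch (the log format, host names and file extensions are all hypothetical): count image requests whose HTTP referer is an external site.

```python
import re
from collections import Counter

# Matches the request path and referer in a common-log-style line.
LOG_LINE = re.compile(r'"GET (?P<path>\S+) HTTP/1\.[01]" \d+ \d+ "(?P<referer>[^"]*)"')

def deep_link_referers(log_lines, own_host="powerhousemuseum.com"):
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if not m:
            continue
        path, referer = m.group("path"), m.group("referer")
        is_image = path.lower().endswith((".jpg", ".gif", ".png"))
        is_external = referer.startswith("http") and own_host not in referer
        if is_image and is_external:
            hits[referer.split("/")[2]] += 1  # tally by referring host
    return hits

sample = [
    '1.2.3.4 - - [12/Aug/2005] "GET /images/mao.jpg HTTP/1.1" 200 51234 "http://profile.myspace.com/someuser"',
    '1.2.3.5 - - [12/Aug/2005] "GET /exhibit/index.html HTTP/1.1" 200 812 "http://www.powerhousemuseum.com/"',
]
print(deep_link_referers(sample))
```

The resulting counter makes the MySpace ‘quick test’ above concrete: a large count against `profile.myspace.com` is deep-linked image traffic, not genuine page visits.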

Two Powerhouse examples include a Chinese news/comment portal that deep linked a Mao suit image (from an educational resource on our site), sending us 51,000 visits in under 24 hours in August 2005, and an A-grade Singaporean blogger who deep linked an image of Gollum (from our archived Lord of the Rings exhibition pages) to describe an ugly celebrity, which generated over 180,000 visits over eight days in January 2007. (In both of these examples the visits were removed from the figures reported to management and funders.)

What is going on here sociologically?

At the recent ICA2007 event in San Francisco danah boyd and Dan Perkel presented an interesting look at the subcultural behaviours that are, in part, producing this effect. Although they look specifically at MySpace there are threads that can be drawn across many social sites from forums to blogs. Drawing on the work of many cultural theorists, they argue that on MySpace what is going on is a form of ‘code remix’. That is, young people’s MySpace pages are essentially ‘remixes’ of other content – but unlike a more traditional remix in audio and video cultures, these code remixes occur through the simple cut and paste of HTML snippets. By ‘producing’ both their MySpace pages as well as their online cultural identity in this way, they are reshaping concepts of ‘writing’ and digital literacy. They are also, importantly, not in control of the content they are remixing – a deep linked image can easily be changed, replaced or removed by the originating site.

There are plenty of examples – boyd and Perkel give a few – where the content owner changes the linked image to disrupt the deep linker. In the case of our Singaporean blogger we renamed the linked image to prevent it from appearing on her site (and in our statistics).
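Renaming the file is one fix; a more general one (sketched here with hypothetical host names, and assuming images are served through an application handler rather than straight off disk) is to check the Referer header and refuse image requests coming from external pages, so the deep-linked copy breaks while your own pages keep working:

```python
# Hosts allowed to embed our images -- hypothetical values.
ALLOWED_HOSTS = {"www.powerhousemuseum.com", "powerhousemuseum.com"}

def allow_image_request(referer):
    """Return True if a request for an image should be served."""
    if not referer:
        # Direct requests and privacy-stripped referers pass through;
        # blocking them would break bookmarks and some proxied users.
        return True
    parts = referer.split("/")
    if len(parts) < 3 or not referer.lower().startswith("http"):
        return False
    return parts[2].lower() in ALLOWED_HOSTS

print(allow_image_request("http://profile.myspace.com/someuser"))  # False
```

The same check is more commonly done in web server configuration, but the logic is identical: compare the referring host against a whitelist.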

Revealingly, Perkel’s research is showing that many MySpace users have little, if any, knowledge of or interest in website production – that is, CSS and HTML. Instead, what has formed is a technically simple but sociologically complex ‘cut and paste’ culture. This is what drives the ‘easy embedding’ features found on almost any content provider site like YouTube – it is in the content providers’ interest to allow as much re-use of their content (or the content they host) as possible, because it allows for the insertion of advertising and branding, including persistent watermarking. Of course, the museum sector is not geared up for this – instead our content is being cut and pasted, often without anyone outside the web team having a deep understanding of what is actually going on. There are usually two reactions – one negative (“those kids are ‘stealing’ our content”) and the other overly positive (“those kids are using our content therefore they must be engaging with it”). Certainly Perkel’s and others’ research deeply problematises any notion that these activities are in large part about technical upskilling – they aren’t – instead those involved are learning and mastering new communication skills and emerging ways of networked life.

One approach that some in the sector have advocated is the widget approach – create museum content widgets for embedding – to make repurposing of content (and code snippets) easier. There have been recent calls for museum Facebook apps for example. But I’m not sure that this is going to be successful because a great deal of embeds are of the LOLcats variety – perhaps trivial, superficial, but highly viral and jammed full of flexible and changing semiotic meaning. Whereas our content tends to be the opposite – deep, complex and relatively fixed.

Categories
Social networking Web 2.0 Web metrics

Social media measurement – brand awareness and trust in the cultural sector

There has been a flurry of activity amongst web analytics companies and in the marketing world to devise complex ways of measuring social media activity. As much of this interest in devising a way of measuring and comparing social media ‘success’ comes down to monetising social media activity through the sale of advertising, these measures don’t easily translate to the cultural sector. Advertisers are after a ‘ratings’ system to compare the different ‘value’ of websites but as we know from old media (TV and radio), ratings don’t work well for public and community broadcasters who don’t sell advertising and have other charters and social obligations to meet.

We know that visits, page views and time spent aren’t the best ways of understanding our audiences or their levels of engagement with our content, and with social media it is all about engagement. If we aren’t selling advertising space to all those eyeballs focussing their attention on our rich and engaging content, then what are we trying to do?

I’d argue that it is about brand awareness. Not just brand awareness in terms of being top of mind when geographically close audiences are thinking of a cultural activity to do in their leisure time, but about linking the perceived authenticity of the information contained on your website to your brand. There is a growing body of research into how museums are perceived as ‘trusted’ information sources and, importantly, politically impartial sources. But this perception relies upon an awareness on the part of the online visitor that they are indeed on a museum website.

This user awareness is, I argue, not a given, especially now that such a large proportion of our online traffic comes via search. Looking into the future, search will be an even greater determinant of traffic, even if your real-world marketing prominently displays your URL (as it should be doing by now!). If you look at your real-world marketing campaigns around your URL, you will probably find a spike in direct traffic but a similarly sized spike in brand name searches – we are finding this with the Sydney Design festival at the moment. The whole of Sydney is covered with street advertising, from bus shelter posters to street banners, all promoting the URL. The resulting traffic is a mix of direct and brand name search based.

The problem is that the brand is now no longer represented online only on our own websites.

One of the first things I talk about in my workshops and presentations is that even if your organisation is not producing social media about yourself, then your audiences almost certainly are. If you aren’t aware of what your audiences are saying about you, what they are taking photos of, or recording on their camera phones, then you are missing a unique opportunity to understand this generally highly engaged tip of your audience.

It is likely that those who blog about their experience in your organisation and upload their photos and videos are going to be your most (commercially) ‘valuable’ customers – high disposable income, high levels of interest, and a desire to participate and communicate/advocate to others about your organisation.

They are probably the most likely to climb the ‘ladder of engagement’ from potential visitors through regular visitors to members and finally donors/sponsors. They may not always have positive things to say, but by hearing their gripes and grizzles, you are able to understand and address issues that impact how your organisation is going to be promoted through word-of-mouth. And word-of-mouth is going to almost always be the most ‘trusted’ type of marketing recommendation.

So how do we track these conversations that occur publicly but not on your organisation’s website?

Mia Ridge recently pointed to a great summary of the easiest to use ‘ego search’ tools and methods by which you can easily keep track of your audience conversations. Another favourite of mine for small scale tracking is EgoSurf.

Sixty Second View has compiled an ‘index’ of how these kinds of ego search results might be combined to generate a figure to compare with competitors and other organisations. Their methodology, whilst very complex, focusses on assessing how connected the people who are talking about you actually are – this allows for a determination of effective reach, and of the trust that may accrue to those in the conversation.

(top-level summary is my own)

a) Blogs that are talking about you – what are their Technorati rankings, how high are their Google PageRanks, how many Bloglines subscribers do they have, etc.

b) Multi-Format conversations – how popular/connected are the Facebook and MySpace people who are talking about your organisation

c) Mini-Updates – frequency and reach of Twitters

d) Business Cards – LinkedIn connectedness

e) Visual – Flickr influence and popularity can be used to determine how connected and visible the posters of images of your organisation are. This can be applied to YouTube as well.

f) Favourites – Digg, Del.icio.us connectedness

This approach is useful as it provides a detailed analysis of the spectrum of social media in which your organisation is probably already represented. It can reveal areas where your users aren’t talking about you, and it can illuminate areas of your own site that receive unexpected user attention. Not only that, it focuses on who is talking about you. On the downside, it is a lot of work – but undertaking even a cut-down version of this methodology will force you to examine the different impacts of different types of social media.
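A cut-down version of such an index might look like the following sketch. The channels, weights and the 0–1 ‘reach’ normalisation are all my own invented placeholders – Sixty Second View’s actual methodology is far more involved – but the shape is the same: every mention is weighted by channel and by how connected its author is, then summed.

```python
# Hypothetical per-channel weights for a composite "influence" index.
WEIGHTS = {"blog": 3.0, "social_profile": 2.0, "micro_update": 1.0, "favourite": 0.5}

def influence_score(mentions):
    """`mentions` is a list of dicts: {"channel": ..., "reach": ...}.

    `reach` stands in for whatever connectedness proxy the channel offers
    (Technorati rank, subscriber count, friend count, etc.), normalised
    to 0..1 before it gets here.
    """
    return sum(WEIGHTS.get(m["channel"], 1.0) * m["reach"] for m in mentions)

mentions = [
    {"channel": "blog", "reach": 0.9},          # high-authority blog post
    {"channel": "blog", "reach": 0.1},          # low-traffic blog post
    {"channel": "micro_update", "reach": 0.4},  # a Twitter mention
]
print(influence_score(mentions))  # roughly 3.4
```

Even this toy version captures the key point of the next paragraph: two blog posts about you are not worth the same amount, because reach differs.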

For example, are all blog posts about your organisation equal? When you check the Technorati rankings of the commenting blogs you will find that some have greater reach and authority than others. The real world equivalent here is the different weightings your marketing team probably already gives to print media mentions in national broadsheets versus local weeklies; or the difference between a TV editorial and a local radio mention.

Is this really the job of the web team?

Unless your organisation has a marketing team that is expert in online marketing then the answer must be yes. Web analytics in five years time will be all about measuring offsite activity.

Categories
Conceptual Interactive Media Social networking Web 2.0

Open vs closed

As I have been thinking about my upcoming presentation at Web Directions South there have been a lot of interesting manoeuvres out in the commercial web space.

First, a while back Facebook opened their platform to developers allowing content from other providers to interact with Facebook profiles in Facebook. This, coupled with the enormous media coverage that this move got, is in part the driver for Facebook’s phenomenal growth of late. ‘Facebook as a platform’ has made Facebook ‘sticky’ and given everyone who uses Facebook more reasons to go back to look at their profiles on a very regular basis. Whilst most of the Facebook applications are only marginally interesting (do we really need another chain letter-style application?), the best ones are those that turn a Facebook page into an aggregator of personalised content from other sites – Flickr, travel maps, Last.Fm, RSS feeds etc. People who have completely tricked-out Facebook profiles could (and this is what Facebook hopes) feasibly use Facebook as their home page and access everything they need via Facebook.

Now, Netvibes has come along and flipped this. Netvibes is a personalised aggregation portal, a bit like iGoogle (formerly the Google Personalized Homepage). We have been experimenting with it for the Culturemondo network (see the public Culturemondo Netvibes aggregation as an example).

Netvibes has developed a very nice Facebook widget to bring Facebook’s own data into its network, meaning that Facebook-specific data – notifications, friends and data from your profile – can now be aggregated into Netvibes, making Netvibes the one-stop ‘attention’ shop for tricked-out Netvibes/Facebook users. As Mashable points out –

Facebook is now one of Netvibes’ biggest rivals. Before Facebook, who offered to aggregate your friend’s Flickr photos, YouTube videos, blogs and the rest? Netvibes, of course. In fact, Facebook profiles are now a lot like a Netvibes startpage.

So now that Facebook has stolen some of that sheen, they’d [Netvibes] obviously like to create a mini-Facebook within Netvibes, rather than losing users in the other direction. They want you to use Netvibes as your homepage, and visit Facebook only incidentally, rather than aggregating all your stuff at Facebook and never returning to Netvibes.

The tension is indicative of what’s happening with aggregators: they’re all motivated to keep you on their own platforms for as long as possible, rather than giving you absolute freedom to take your identity wherever you like. Right now, it’s hard to make money without owning the user’s identity in some way; user lock-in remains the strongest business model, even though superficially they exist to hand more control to you.

What is interesting here is that this is much more than a battle over attention between two competitors. Facebook can close access to Netvibes, but would then risk a small proportion of users leaving its network (mostly the super tech-savvy, bleeding edge experimenters). On the other hand, Facebook’s own survival likely relies on it further opening its network – the initial steps towards openness, coupled with usability, are at least part of the reason why some users have started migrating profiles from MySpace, which remains defiantly closed.

The reality is that, in the future, Facebook, Netvibes and MySpace are all better off letting their users move freely between networks – that way each remains at least partially relevant, albeit in deep competition. Otherwise the new audiences that are so critical to their business models and so desirable to their advertisers will, as Fred Stutzman points out, go to whichever is ‘coolest’ at the time. Ross Dawson comments on this too, spotting a trend ‘towards openness’ and pulling recent moves by Plaxo into the picture as well.

I’ll be coming back to this theme over the coming weeks. There are enormous opportunities for the cultural and non-profit sector here – if we can all adapt fast enough. Ideas of attention and brand are just as relevant for us as for anyone – possibly more so, given the limited budgets in our sector.

Categories
Social networking Web 2.0

Facebook group for museum web folk

Everywhere seems to be bubbling over with Facebook action at the moment – largely as a result of them opening up their system as a platform for developers. Most applications, so far, have been quite gimmicky but no doubt there will be some interesting ones to emerge in coming months.

If you have been pulled into the procrastination vortex that is Facebook then you may want to join the ‘International museum web professionals’ group.

Categories
Collection databases Copyright/OCL Developer tools Interactive Media Metadata Social networking UKMW07 Web 2.0

UK Museums on the Web 2007 full report (Leicester)

Museums on the Web UK 2007 was held at the slightly rainy and chilly summer venue of the University of Leicester. Organised by the 24 Hour Museum and Dr Ross Parry with the Museums Computer Group, the event was attended by about 100 museum web techies, content creators and policy makers.

As a one day conference (preceded by a day long ‘museum mashup’ workshop) it was very affordable, fun and entertaining (yes, in the lobby they had a demo of one of those new Philips 3D televisions . . . disconcerting and very strange).

Here’s an overview of the day’s proceedings (warning: long . . . you may wish to print this or save to your new iPhone)

The conference opened with Michael Twidale and myself presenting the two conference keynote addresses. I presented a rather ‘sugar-rush, no-holds barred view from the colonies’ of why museums should be thinking about their social tagging strategies. (I’ll probably post my slides a little later). I had been quite stressed about the presentation coming off very little sleep and a long flight from Ottawa to London the night before. But I’ve been talking about these and related topics almost non-stop for the past two weeks so it was actually a good feeling to get it done right at the beginning.

After my presentation Michael Twidale from the University of Illinois reprised the joint presentation about museums making tentative steps into Second Life that his colleague and co-author Richard Urban had presented at MW07 in San Francisco. Michael (like Richard before him) certainly piqued the interest of some in the room who, I had the feeling, had barely thought about Second Life before – although I notice that the extremely minimally staffed Design Museum in London has just been running an architecture event and competition in Second Life (see Stephen Doesinger’s ‘Bastard Spaces’).

Mike Ellis from the Science Museum followed the tea break with a presentation that looked at the outcomes of letting a small group of museum web nerds loose for a day without the pressures of a corporate inbox. Using a variety of public feeds the outcomes of such a short period of open-ended collaborative R&D were quite amazing. In many ways Mike’s presentation ended up challenging the audience to think about new ways of injecting innovation and R&D into their museum’s web practices. Amongst the mashups were a quick implementation of the MIT Simile Timeline for an existing project at the Cambridge University Museum tracking dates; a GoogleMaps mashup of all known museum locations and websites in the UK (something that revealed that current RSS feeds of this data are missing the crucial UK postcode information); a date cleaning API to allow cross-organisational date comparison built by Dan Z from Box UK; and an exciting mashup using Spinvox‘s voice to text service to allow museum visitors to call a phone number and be SMSed back information about locations, services or objects.

These were all really exciting prototypes that had come out of a very small amount of collaborative R&D time – something every museum web team should have. Apart from this, a couple of problems facing museum mashups were revealed – stability issues and reliance on other people’s data – but, as Mike pointed out, how does this really compare to the actual stability of your existing services?
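The date cleaning idea is worth dwelling on, because messy free-text dates are a problem every collection database shares. The sketch below is purely illustrative of the problem such an API solves (I have no details of the actual Box UK implementation): map a few common free-text forms onto comparable year ranges so records can be lined up across organisations.

```python
import re

def clean_date(text):
    """Return (start_year, end_year) for a few common free-text date forms."""
    text = text.strip().lower()
    m = re.fullmatch(r"c\.?\s*(\d{4})", text)                   # "c. 1860"
    if m:
        y = int(m.group(1))
        return (y - 5, y + 5)  # treat "circa" as +/- 5 years (a choice, not a rule)
    m = re.fullmatch(r"(\d{4})\s*-\s*(\d{4})", text)            # "1850-1900"
    if m:
        return (int(m.group(1)), int(m.group(2)))
    m = re.fullmatch(r"(\d{1,2})(st|nd|rd|th) century", text)   # "19th century"
    if m:
        c = int(m.group(1))
        return ((c - 1) * 100, c * 100 - 1)
    m = re.fullmatch(r"(\d{4})", text)                          # "1887"
    if m:
        y = int(m.group(1))
        return (y, y)
    return None  # unparseable -- better to flag than to guess

print(clean_date("c. 1860"), clean_date("19th century"))
```

Once every record yields a year range, cross-organisational comparison (and timeline plotting, as in the Simile mashup) becomes a simple interval query.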

Nick Poole from the MDA presented Naomi Korn’s slides on the rights issues (moral, ethical and copyright) involved in museums implementing Web 2.0 applications. Nick’s presentation was excellent and had two main points to make. The first was that the museum sector has already been moving towards increased audience focus and interaction in real-world policy for at least the past decade, so why should the web be any different? Further, the recent political climate in which UK museums exist has positioned the cultural sector as a leader in enhancing social cohesion and the sharing of cultural capital. Secondly, Nick emphasised that as museums “we have a social responsibility to the population to exploit any and all methodologies which makes it easier for them to engage with and learn from their (cultural) property”, concluding that despite the potential legal issues, Web 2.0 offers a “set of mechanisms by which we can enhance accountability and effectiveness in a public service industry”. Excellent stuff.

Alex Whitfield from the British Library then presented an interesting, albeit extreme, example of the tensions involved in implementing Web 2.0 technologies with certain exhibition content. Alex demonstrated some of the website for the Sacred exhibition, which shows some of the key religious manuscripts of Christianity, Islam, and Judaism. The online exhibition shows 66 of 152 texts and includes a GoogleMaps interface, expert blogs, podcasts and some nice Flash interactives (yes, I did ask why Flash – apparently because it was a technology choice encouraged by the IT team). Alex then looked at a few examples of where tagging and digital reproduction can cause community offence, or at the very least controversy, before closing with a reference to Susan Sontag’s ‘On Photography’, in which Sontag claims that photography reduces ‘the subject’ (see an interview with Sontag where she explains this concept). Alex’s example was certainly provocative and reminded me, again, that the static web and the participatory web each carry their own particular set of implicit politics (individualistic, pro-globalisation, and pro-democracy, although to differing depths of democracy).

After a light lunch Frances Lloyd-Baynes from the V&A gave an overview of some of the work they have been doing and some of the challenges ahead. She reported that the V&A has 28% of their collection online but that the figure reduces to 3% once bibliographic content is excluded. Of course they have been working on other ‘collections’ – those held by the community – for quite a while as evidenced by their Every Object Tells A Story and the new Families Online project.

She also mentioned the influence of the MDA’s ‘Revisiting Collections‘ methodology which focuses on making a concerted effort to engage audiences and bring user/public experiences to museum collections content. This and other concepts have become a key part of the V&A’s strategic policy.

In terms of user-generated content she highlighted problems that many of us are starting to face. What UGC gets ‘kept’? For how long, and how much? What should be brought into the collection record? Should it be acknowledged? How? How should museums respond to, mediate and transform content? Or should it remain unmediated? And how do we ensure that there is clarity and distinction between the voice of the museum and the voice of the user?

Mia Ridge – a fellow Australian, now an ex-pat working as a database developer at the Museum of London – gave a practical overview of how Web 2.0 can be implemented in museums. She covered topics like participation inequality, RSS and mashups, and the need to be transparent with acceptable use and moderation policies. It was a very practical set of recommendations.

Paul Shabajee from HP Labs then gave a very cerebral presentation on the design of the “digital content exchange prototype” (DCX) for the Singapore education sector. The DCX allows for the combination of multiple data and metadata sources spread across multiple locations, as well as faceted browsing and searching for teachers and students, allowing for dynamic filtering by type, curriculum subject area, format, education level, availability, text search, etc. It was a great example of the potential of the Semantic Web. He then went on to explain the CEMS thesaurus model of curriculum and the taxonomies of collection, and how actual users wanted to do things in more complex ways, such as finding a topic for a class, then finding real-world events and mapping them against topics. And because everything had been semantically connected, building new views in line with user needs did not mean massive re-coding. More information on the project can be gleaned from Shabajee’s publications.
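To make the faceted filtering idea concrete, here is a toy sketch. The field names and records are invented, and the real DCX is built on Semantic Web technologies rather than Python dicts, but the behaviour users see is the same: facets combine with AND, while multiple values within one facet combine with OR.

```python
# Invented stand-ins for curriculum resources and their metadata.
RESOURCES = [
    {"id": 1, "format": "video", "subject": "science", "level": "primary"},
    {"id": 2, "format": "text",  "subject": "science", "level": "secondary"},
    {"id": 3, "format": "video", "subject": "history", "level": "secondary"},
]

def facet_filter(resources, **facets):
    """facets: field -> set of acceptable values, e.g. format={"video"}.

    A resource survives only if, for every facet given, its value for
    that field is one of the acceptable values (AND across facets,
    OR within a facet).
    """
    return [
        r for r in resources
        if all(r.get(field) in values for field, values in facets.items())
    ]

print(facet_filter(RESOURCES, format={"video"}, level={"secondary"}))
```

Each click in a faceted interface just adds or removes a value from one of these sets and re-runs the filter, which is why the browsing feels ‘dynamic’.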

Then after some very tasty micro-tarts (chocolate and raspberry, of which I must have partaken of five or six . . ), we moved on to the closing session from Brian Kelly of UKOLN. Brian is a great presenter, although his slides always seem so lo-fi because of his typographic choices. Brian managed to make web accessibility for Web 2.0 a compelling topic, and his passion for reforming the way we generally approach ‘accessibility’ is infectious.

Brian is a firm believer that accessibility is not about control, rules, universal solutions, or treating it as an IT problem. Instead he asks: what does accessibility really mean for your users? And, rather cheekily, ‘how can you make surrealist art accessible?’ Accessibility, for Brian, is about empowering people, contextual solutions, widening participation, blended solutions – all the things that Nick Poole and Frances Lloyd-Baynes (and the rest of us) were pushing for earlier in the day.

Brian has come up with a model of approaching accessibility that uses as a metaphor the tangram puzzle (for which there is no single ‘correct’ solution) rather than a jigsaw. He advised that we should focus on content accessibility because a mechanistic approach doesn’t work. How do you make the 3D model in an e-learning resource accessible? It is just not possible; instead we should focus on making the learning objectives/outcomes accessible. If we see things in this way then there is no technical barrier to doing museum projects in, say, Second Life on the grounds that it isn’t ‘accessible’ to some disabled users – we should instead also provide alternatives that achieve or demonstrate similar outcomes for those users. Michael Twidale also offered the example of the paralysed Second Life user who can, in his virtual world, fly when in the real world he cannot walk.

Brian closed by advising that at a policy level we should be saying things like “museum services will seek to engage their audiences and attract new and diverse audiences. The museum will take reasonable steps to maximise access to its services”. By applying principles of accessible access across the whole portfolio of what the museum offers (real and virtual) we can still implement experimental services, rather than using accessibility as a preventative tool. After all, as he points out, the BBC maintains a portfolio of services for impaired users rather than ensuring access on every single service.

Categories
Copyright/OCL Social networking Web 2.0

Potential of social networking / Peer to Patent

How do we re-build our patent system in light of the technology that enables the crowd-sourcing of scientific information?

A very interesting and wordy post from Beth Noveck on Peer to Patent, a pilot project that aims to examine how social networking may offer new possibilities for analysing the enormous backlog of US Patent Office claims and use the community’s aggregated knowledge to quickly strike out claims from patent trolls.

. . . what we are seeing is the deconstruction of the notion of expertise – or at least the sociological organization of expertise – and we need to understand how this changes our institutions and might impact their legitimacy.

Whereas once expertise meant strictly a body of knowledge accumulated by a single person in a professional capacity, increasingly it also means the aggregation of discrete bits of knowledge into collective databases impelled by the new social networking tools, such as friend-of-a-friend (FOAF) social networking sites like Dopplr or LinkedIn, or driven by rating and reputation techniques, such as those used by eBay, Amazon and Slashdot, and visual tools like Second Life and There.com that make social practices transparent, as well as other Web 3.0 (I think 2.0 was last year) tools to organize that information.

These suggest that: ordinary people, regardless of institutional affiliation or professional status, possess information that could enhance decision-making and improve governance. Participating in a social network not only aggregates the wisdom of the crowd – summing up individual parts a la Surowiecki’s jelly bean jar – but it can also structure information into manageable knowledge and help build expertise through participation over time.

Categories
Developer tools Interactive Media Social networking Web 2.0

Museums on the Web UK 2007 – Friday June 22 – register now

If you happen to be one of our UK or European readers then you may be interested in Museums on the Web UK 2007 which happens on Friday June 22. It is organised by the Museum Computer Group, 24hr Museum and the University of Leicester.

The Web is changing – faster, smarter, more personal, more social. The software that drives it and the usage that shapes it are evolving at a rapid pace. Is the museum sector responding to this evolution? And as visible and trusted providers of rich and unique content might museums have, in fact, an opportunity to influence the future Web?

Is it time to become more ‘Web adept’?

From Web ethics, to user-generated content, and from the implications and possibilities of mashed-up content, to the need for new values and holistic approaches to accessible design…this year’s conference will explore the many ways the Web is being transformed around us, and how museums can respond to – and perhaps lead – this change.

UKMW will, as in previous years, be an accessible and affordable event welcoming around 100 delegates. It aims to bring together a programme of high quality speakers with national and international perspectives, from inside and outside the sector, offering creative, leading edge thinking relevant to anyone working with museums and the Web today.

I am giving one of the keynotes on social tagging and the future of collections online. The other keynote is Michael Twidale speaking about Second Life. Other speakers include Mike Ellis, Naomi Korn, Jon Pratty, Jeremy Keith (Clearleft), Paul Shabajee (HP Labs) and Brian Kelly. It is a low cost single day event and should be excellent.

Register online over at the UK Museums Computer Group.

I hope to see you there.

Categories
Imaging Interactive Media Social networking Web 2.0

Visualising sound and music – Last.fm visualisation tools

The big news around the internet at present, apart from Microsoft’s Surface, is that Last.fm has been bought by CBS. Hopefully that isn’t going to mean the closing down of their current open policy towards data sharing and use.

One of the coolest data visualisation applications for Last.fm is one that creates a rather stunning layered histogram of your tracked listening habits. Originally this popped up as an art project by Lee Byron at Carnegie Mellon, but now you can create your own (albeit slightly rougher) visualisations via a nifty little program written by a 23 year old.

Here are my listening habits based on my Top 50 most listened-to artists, averaged monthly, for the last 12 months.

Top 50 Last.fm for the last 12 months


I’m very excited about generating one of these layered histograms based on object usage in our collection database . . . . stay tuned.
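For anyone wanting to try the same thing against collection usage data, the stacking itself is simple. Here is a bare-bones sketch (artist names and play counts are invented; actual rendering is left to whatever charting tool you prefer) of how each band in a layered histogram is computed: every series sits on top of the cumulative total of the series below it.

```python
def stack_layers(series):
    """series: {name: [count per month]} -> {name: (bottoms, tops)}.

    Each layer's bottom edge is the running total of all previous
    layers; its top edge adds its own counts on top of that.
    """
    months = len(next(iter(series.values())))
    baseline = [0] * months
    layers = {}
    for name, counts in series.items():
        tops = [b + c for b, c in zip(baseline, counts)]
        layers[name] = (list(baseline), tops)
        baseline = tops  # next layer stacks on top of this one
    return layers

plays = {"Artist A": [10, 5, 0], "Artist B": [2, 8, 4]}
print(stack_layers(plays))
```

Swap artists for objects and play counts for page views, and the same stacking gives a layered histogram of object usage in a collection database.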

Categories
Interactive Media Social networking Web metrics

Ubiquitous system ethics

Coming hot on the heels of all this talk of tracking user behaviour, Adam Greenfield proposes five ethical guidelines for ubiquitous systems in a recent keynote:

(1) all ubiquitous systems should default to harmlessness.

(2) ubiquitous systems should be self-disclosing (e.g. be clearly perceptible; ‘seamlessness’ must be an optional mode of operation). He proposes five different graphical icons to disclose the capabilities of an object.

(3) be conservative of face, so that ubiquitous systems do not unnecessarily embarrass, humiliate or shame their users.

(4) ubiquitous systems should be conservative of time, not introduce undue complications into ordinary operations.

(5) ubiquitous systems should be deniable, offering users the ability to opt out, always and at any point.

(via the rather excellent Information Aesthetics)