Picnic is a large ‘creativity’ conference held annually in Amsterdam. I’ve been here as a guest of n8 talking about the notion of ‘open museums’, or as my presentation was called, “Sorry we’re open: the open, collaborative museum”.
Set in the reclaimed grey zone of Westergasfabriek, Picnic08 was quite a remarkable thing to behold. On arrival the place looked more like a rave than a conference – in fact the venue is used for huge parties as well. There were fake sheep stationed around the place, and the opening ceremony featured a donkey loaded with a video camera, laptop and wifi leading a brass marching band. On day two I was greeted by people dressed up as poodles and crawling around on all fours. And the ‘picnic’ theme extended through to the ‘discussion’ sessions, which had the speakers seated at a picnic bench on stage.
Here is the first set of notes (with only a minor cleanup for the sake of timeliness). More to follow.
Charles Leadbeater opened the event with a brief introduction to the opportunities that ‘collaboration’ has now made possible. Much in the vein of his We-Think book, he sees great potential in a change he feels we are only in the first decade of. As he says, “there are another 50 years to run . . . we’re only 10 years in”. Using the metaphor of pebbles and boulders on a beach, Leadbeater sees the mass creative production of YouTube as millions of people placing pebbles on a beach that previously only supported the boulders of mass media. For this collaborative creativity to have maximum effect it needs to be understood that “diversity drives this sort of creativity”.
Could we imagine learning, politics and media that were “with you” rather than “at you”? How do we get to the “with”?
Clay Shirky’s keynote problematised Leadbeater’s high-level optimism.
Managing communities around social objects is hard. These are design problems that plague everything from Flickr to old-style forums. If the computer is a box, then lots of ‘features’ and complexity matter; but if the computer becomes a communicative/social device, then simplicity matters, because ‘synchronisation between the mental models of the users’ matters more (communicative intent is foregrounded).
Linus’ law – “Many eyes make all bugs shallow”. The best example of this is Pluto on Wikipedia. Wikipedia gets better because it is argumentative. There is no ‘hive mind’ here. What happens is that a very small group cares much more than everyone else. What Wikipedia does in this instance is allow broad participation, rather than apply traditional managerialism which, in the name of efficiency, drops off the 80% of users who only minimally participate (the 80/20 rule), keeping only those who ‘most participate’. In the digital space there is no need to optimise like this and lose so many; instead, a lot of people can each contribute a little.
But then there is “the Galileo problem” – the semi-locked article. Essentially this is the manifestation of a 500-year-old flamewar involving the Catholic Church. This indicates that it is probably necessary to stop thinking about ‘users’ in the generic and start to consider users as inevitably unequal and unequally motivated. This then allows systems to build in defence mechanisms that let the head users fight off the tail users when necessary – much like governance in a democracy.
Shirky then moved on to discuss the unequal participation and unequal motivations behind Aaron Koblin’s Ten Thousand Cents. This is a collective artwork created by 10,000 ‘workers’ on Amazon’s Mechanical Turk.
Now just who were these 10,000 ‘turkers’ who co-created this art? Koblin’s own research showed that turkers’ return rates and time spent working on creating the tiles of the art varied greatly. Chinese turkers had a 90% return-visit rate and spent 24 minutes on average; Egyptian turkers 97% and 32 minutes; whilst turkers from the USA had only a 17% return-visit rate and spent far less time ‘turking’ – just 3 minutes.
This revealed enormously different global motivations for participation. 10,000 cents and 10,000 ‘turkers’ meant that the distribution of labour was global, and what 1 US cent ‘meant’ varied greatly. This was a spontaneous, voluntary division of labour – and a spontaneous division of motivation: some are doing it for love and others are definitely doing it for money.
This makes planning incredibly hard and outcomes almost impossible to forecast. In these social collaborative spaces you cannot ‘recruit’, only ‘invite’ – and it is this that makes them incredibly difficult to handle, especially for large established organisations.
Shirky then closed by posing the question: “why is almost all online collective action about ‘stopping’ things?” Perhaps we need a new governmental licence model like the GPL – rather than the heavy ‘incorporation’ model – which would make ‘group collaboration’ legal and thus able to get things like bank accounts. He then showed some initial attempts to get towards this: virtual companies (Vermont), Community Interest Companies (UK), and MeetUp (USA).
Genevieve Bell, an anthropologist working at Intel, gave an interesting talk titled ‘Secrets and Lies’. It explored the challenges of identity, reputation and trust online (and offline).
Bell explained that lies are at the basis of everyday life. They come in many varieties, and even the strictest religious codes don’t blanket-ban all variants. For children, lies can be play, boundary testing, working through rules and identity. For adults, lies can be a conscious prevention of ‘reality’.
Secret knowledge is essential to rituals, and in many cultures something is always held back. But now we have taken our lying online, and we are forced to openly operate between a cultural practice (lying is everyday) and a cultural ideal (lying is bad).
With the changing online environment we also find new ‘technologies of lying’ emerging. Tracking services are defeated by new alibi services; cellphone tracking is defeated by the phone being left in a drawer at work. The problem is that whilst people know how to lie, our devices don’t.
Data trails, location-aware devices and satellite navigation don’t know how to lie when you need them to. Governments, regulators, companies and researchers can potentially ‘know all’, and this is not optimal. For example, all our devices knew I was at Picnic the whole time: the RFID in the conference pass tracked me, uniquely, through the space and captured all my interactions.
So technology needs to bring back the imprecision and blurring of truth that we as a society need to function. What are the implications, then, for e-government? For reputation indices? Where does secrecy fit?
(part two of my notes will be posted soon)