
The museum website as a newspaper – an interview with Walker Art Center

There’s been a lot of talk following Koven Smith’s (Denver Art Museum) provocation in April – “what’s the use of the museum website?”. Driven partly by the rapid uptake of mobile and partly by the existential crisis Koven brought on, many in the community have been thinking about how to transform the digital presence of our institutions and clients.

At the same time Tim Sherratt has been on a roll with a series of presentations and experiments that are challenging our collections and datasets to be more than just ‘information’ on the web. He calls for collecting institutions “to put the collections themselves squarely at the centre of our thoughts and actions. Instead of concentrating on the relationship between the institution and the public, we can focus on the relationship we both have with the collections”.

Travelling back in time to 2006: at the Powerhouse we made a site called Design Hub. Later the name was reduced to D’Hub, but the concept remained the same. D’Hub was intended to be a design magazine website, curated and edited by the museum and drawing upon the collection to engage with and document design events, people and news from that unique perspective. For the first two years it was one of the Powerhouse’s most successful sites – traffic was regularly 100K+ visits per month – and the content was as continuous as the resourcing allowed. After that, however, with editorial changes the site began to slip. It has just relaunched with a redesign and new backend (now WordPress). Nicolaas Earnshaw at the Powerhouse gives a great ‘behind the scenes’ teardown of the recent rebuild process on their new Open House blog.

It is clear that the biggest challenge with these sorts of endeavours is the editorial resourcing – anything that isn’t directly museum-related is very easily rationalised away and into the vortex, especially when overall resources are scarce.

So with all that comes the new Walker Art Center website. Launched yesterday, it represents a potential paradigm shift for institutional websites.

I spoke to Nate Solas, Paul Schmelzer and Eric Price at the Walker Art Center about the process and thinking behind it.

F&N: This is a really impressive redesign and the shift to a newspaper format makes it so much more. Given that this is now an ‘art/s newspaper’, what is the editorial and staffing model behind it? Who selects and curates the content for it? Does this now mean ‘the whole of Walker Art Center’ is responsible for the website content?

Paul Schmelzer (PS): The Walker has long had a robust editorial team: two copy editors, plus a managing editor for the magazine, but with the content-rich new site, an additional dedicated staffer was necessary, so they hired me. I was the editor of the magazine and the blogs at the Walker from 1998 until 2007, when I left to become managing editor of an online-only national political news network. Coming back to the Walker, it’s kind of the perfect gig for me, as the new focus is to be both in the realm of journalism — we’ll run interviews, thinkpieces and reportage on Walker events and the universe we exist in — and contemporary art. While content can come from “the whole of the Walker Art Center,” I’ll be doing a lot of the content generation and all of the wrangling of content that’ll be repurposed from elsewhere (catalogue essays, the blogs, etc) or written by others. I strongly feel like this project wouldn’t fly without a dedicated staffer to work full-time on shaping the presentation of content on the home page.

F&N: The visual design is full of subtle little newspaper-y touches – the weather etc. What were the newspaper sites the design team was drawing upon as inspiration for the look and feel?

Nate Solas (NS): One idea for the homepage was to split it into “local, onsite” and “the world”. A lot of the inspiration started there, playing with the idea that we’re a physical museum in the frozen north, but online we’re “floating content”. We wanted to ground people who care (local love) but not require that you know where/who we are. “On the internet, nobody knows you’re a dog”.

The “excerpts” of articles were another hurdle we had to clear to make it feel more “news-y”. I built a system to generate nice excerpts automatically (aware of formatting, word endings, etc), but it wasn’t working to “sell the story” in most cases. So almost everything that goes on the homepage is touched by Paul, but we use the excerpt system for old content we haven’t manually edited.
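To make that concrete, here’s a rough sketch of what a formatting-aware excerpt generator can look like – not the Walker’s actual code, just a minimal Python illustration of stripping markup and truncating at a word boundary:

```python
# A minimal sketch of a formatting-aware excerpt generator; assumes
# HTML source content. Not the Walker's actual implementation.
import re

def make_excerpt(html, max_chars=300):
    """Reduce an HTML fragment to a plain-text excerpt that ends
    on a word boundary rather than mid-word."""
    text = re.sub(r'<[^>]+>', ' ', html)        # drop tags, keep text
    text = re.sub(r'\s+', ' ', text).strip()    # collapse leftover whitespace
    if len(text) <= max_chars:
        return text
    # Cut at the last full word inside the limit, then add an ellipsis.
    cut = text.rfind(' ', 0, max_chars)
    return text[:cut if cut > 0 else max_chars] + '...'
```

As Nate notes, automatic excerpts like this rarely “sell the story” on their own, which is why a human editor still touches most homepage items.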

PS: Yeah, the subtle touches like the weather, the date that changes each day, and the changing hours/events based on what day it is all serve as subtle reminders that we’re a contemporary art center, that is, in the now. The churn of top stories (3-5 new ones a week) and Art News from Elsewhere items (5-10 a day, ideally) reinforces this aspect of our identity. The design team looked at a wide range of news sites and online magazines, from the New York Times to Tablet Magazine to GOOD.

Eric Price (EP): Yeah, NYTimes, Tablet, and Good are all good. I’d add Monocle maybe. Even Gawker/Huffington Post for some of the more irreverent details. We were also taking cues from print – we’re probably closest in design to an actual printed newspaper.

F&N: I love the little JS tweaks – the way the article recommendations slide out at the base of an article when you scroll that far – the little ‘delighters’. What are you aiming for in terms of reader comments and ‘stickiness’? What are your metrics of success? Are you looking at any newspaper metrics to combine with museum-y ones?

NS: It’s a tricky question, because one of the driving factors in this content-centric approach is that it’s ok (good even) to send people away from our site if that’s where the story is. We don’t have a fully loaded backlog of external articles yet (Art News from Elsewhere), but as that populates it should start to show up more heavily in the Recommendation sections. So the measure of success isn’t just time on site or pageviews, but things like – did they make it to the bottom of the article? Did they stay on the page for more than 30 seconds (actually read it)? Did they find something else interesting to read?

My dream is for the site to be both the start of, and the links in, a chain of Wikipedia-like surfing that leads from discovery to discovery, until suddenly an hour’s gone by. (We need more in-article links to get there, but that’s the idea.)

So, metrics. I think repeat visitors will matter more. We want people to be coming back often for fresh & new content. We’ll also be looking for a bump in our non-local users, since our page is no longer devoted to what you can do at the physical space. I’m also more interested in deep entrance pages and exit pages now, to see if we can start to infer the Wikipedia chain of reading and discovery. Ongoing.
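Nate’s “actually read it” test is easy to express in code. Purely as illustration – the event field names below are assumptions, not the Walker’s analytics schema – a classifier over per-pageview records might look like:

```python
# Illustrative only: classify whether a pageview counts as a genuine
# read, using the thresholds Nate describes. Field names are
# assumptions, not the Walker's actual analytics schema.
def was_read(view):
    """A view counts as 'read' if the visitor reached the bottom of
    the article and stayed on the page for more than 30 seconds."""
    return view.get('scrolled_to_bottom', False) and \
           view.get('seconds_on_page', 0) > 30

views = [
    {'scrolled_to_bottom': True, 'seconds_on_page': 95},   # a real read
    {'scrolled_to_bottom': False, 'seconds_on_page': 12},  # a bounce
]
read_rate = sum(1 for v in views if was_read(v)) / float(len(views))
# read_rate == 0.5
```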

F&N: How did you migrate all the legacy content? How long did this take? What were the killer content types that were hardest to force into their new holes?

NS: Content migration was huge, and is ongoing. We have various microsites and wikis that are currently pretty invisible on the new site. We worked hard to build reliable “harvesting” systems that basically pulled content from the old system every day, but were aware of and respected local changes. That worked primarily for events and articles.
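The “respect local changes” part of that harvesting pattern is the interesting bit. Here’s a hypothetical Django sketch of the idea – the model, the legacy client and the locally_modified flag are all assumptions, not the Walker’s code:

```python
# A sketch of 'harvest daily, but respect local edits'. The Event model,
# fetch_events() client and locally_modified flag are all hypothetical.
from myapp.models import Event          # hypothetical Django model
from myapp.legacy import fetch_events   # hypothetical legacy-system client

def harvest_events():
    for legacy in fetch_events():       # one dict per legacy record
        obj, created = Event.objects.get_or_create(
            legacy_id=legacy['id'],
            defaults={'title': legacy['title'], 'body': legacy['body']},
        )
        # If someone has edited the record on the new site since the
        # last harvest, leave their changes alone.
        if not created and obj.locally_modified:
            continue
        obj.title = legacy['title']
        obj.body = legacy['body']
        obj.save()
```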

A huge piece of the puzzle is solved by what we’re calling “Proxy” records – a native object that represents pretty much anything on the web. We are using the Goose Article Extractor to scrape pages (our own legacy stuff, mostly) and extract indexable text and images, but the actual content still lives in its original home. We obviously customized the scraper a bit for our blogs and collections, but by having this “wrapper” around any content (and the ability to tag and categorize it locally) we can really expand the apparent reach of the site.
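As a sketch of how a Proxy record might hang together – the Proxy model and its fields below are hypothetical, and I’m assuming the python-goose port of the extractor purely for illustration:

```python
# A hypothetical Proxy record: a native object wrapping any URL on the
# web. Assumes the python-goose port of the Goose Article Extractor;
# the Proxy model and its fields are illustrative, not the Walker's code.
from goose import Goose
from myapp.models import Proxy   # hypothetical Django model

def proxy_url(url):
    """Wrap a web page as a local, taggable, indexable record. Only the
    extracted title/text are stored for the search index; the content
    itself stays at its original home."""
    article = Goose().extract(url=url)
    return Proxy.objects.create(
        url=url,
        title=article.title,
        indexed_text=article.cleaned_text,  # feeds Solr, never displayed
    )
```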

F&N: How do you deal with the ‘elsewhere’ content? Do you have content sharing agreements?

NS: [I am not a lawyer and this is just my personal opinion, but] I feel pretty strongly that this is fair use and actually sort of a perfect “use case” for the internet. Someone wrote a good thing. We liked it, we talked about it, and we linked right to it. That’s really the key – we’re going beyond attribution and actually sending readers to the source. We do scrape the content but only for our search index and to seed “more like this” searches, we never display the whole article.

That said, if a particular issue comes up we’ll address it responsibly. We want to be a good netizen, but part of that is convincing people this is a good solution for everyone.
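The “more like this” seeding Nate mentions maps naturally onto Solr’s standard MoreLikeThis handler. A rough sketch – field names and the Solr URL are assumptions, and the handler must be enabled in solrconfig.xml:

```python
# Illustrative query against Solr's standard MoreLikeThis handler,
# which must be enabled in solrconfig.xml. Field names and the Solr
# URL are assumptions, not the Walker's configuration.
import json, urllib, urllib2

SOLR = 'http://localhost:8983/solr'

def more_like_this(doc_id, field='indexed_text', count=5):
    params = urllib.urlencode({
        'q': 'id:%s' % doc_id,   # the seed document
        'mlt.fl': field,         # field(s) to mine for similar terms
        'mlt.mintf': 1,          # minimum term frequency
        'mlt.mindf': 1,          # minimum document frequency
        'rows': count,
        'wt': 'json',
    })
    resp = urllib2.urlopen('%s/mlt?%s' % (SOLR, params))
    return json.load(resp)['response']['docs']
```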

F&N: What backend does the new site run on? Tech specs?

Ubuntu 11.04 VMs
LibVirt running KVM/QEMU hypervisor
Django 1.3 with a few patches, Python 2.7.
Nginx serving static content and proxying dynamic stuff to Gunicorn (Python WSGI).
Postgres 8.4.9
Solr 3.4.0 (Sunburnt Python-Solr interface)
Memcache
Fabric (deployment tool)
ImageMagick (scaling, cropping, gamma)
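Since Fabric is on that list, here’s a flavour of what deployment might look like – a minimal sketch only, with host, paths and service names all assumed rather than taken from the Walker’s setup:

```python
# fabfile.py - a minimal deployment sketch for the stack above.
# Host, paths and service names are assumptions, not the Walker's setup.
from fabric.api import cd, env, run, sudo

env.hosts = ['www.example.org']   # placeholder host

def deploy():
    with cd('/srv/walker/site'):                         # hypothetical path
        run('git pull origin master')                    # fetch latest code
        run('python manage.py collectstatic --noinput')  # statics for Nginx
        run('python manage.py syncdb')                   # Django 1.3-era schema sync
    sudo('service gunicorn restart')                     # reload WSGI workers
```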

F&N: What are you using to enable search across so many content types from events to collections? How did you categorise everything? Which vocabularies?

NS: Under the hood it’s Apache Solr with a fairly broad schema. See above for the trick to index multiple content-types: basically reduce to a common core and index centrally, no need to actually move everything. A really solid cross-site search was important to me, and I think we’re pretty close.
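Reducing everything to a common core might look something like this with the Sunburnt interface from the tech list – the shared schema fields here are assumptions:

```python
# A sketch of 'reduce to a common core and index centrally' using
# Sunburnt. The shared schema fields are assumptions.
import sunburnt

si = sunburnt.SolrInterface('http://localhost:8983/solr/')

def index_anything(obj):
    """Map any content type (event, article, proxy record, ...) onto
    one broad shared schema so a single search covers them all."""
    si.add({
        'id': '%s-%s' % (obj.content_type, obj.pk),
        'content_type': obj.content_type,     # 'event', 'article', ...
        'title': obj.title,
        'text': obj.indexed_text,
        'url': obj.get_absolute_url(),
    })
    si.commit()
```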

We went back and forth forever on the top-level taxonomy, and finally ended with two public-facing categories: Genre and Type. Genre applies to content site-wide (anything can be in the “Visual Arts” Genre), but Type is specific to the kind of content (Events can be of type “Screenings”, but Articles can’t). The intent was to have a few ways to drill down into content in a cross-site manner, but also keep some finer resolution in the various sections.
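One way to make the Genre/Type distinction concrete is in the data model. An entirely hypothetical Django sketch, not the Walker’s schema:

```python
# Hypothetical Django models illustrating the Genre/Type split;
# not the Walker's actual schema.
from django.db import models

class Genre(models.Model):
    # Site-wide: any content can belong to a Genre ("Visual Arts", ...).
    name = models.CharField(max_length=100)

class EventType(models.Model):
    # Type is scoped to one kind of content: Events can be
    # "Screenings", but Articles can't.
    name = models.CharField(max_length=100)

class Event(models.Model):
    title = models.CharField(max_length=200)
    genres = models.ManyToManyField(Genre)
    event_type = models.ForeignKey(EventType)
```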

We also internally divide things by “Program” – the programming department – and this is used to feed each department’s section of the site and inform the “VA”, “PA”, etc tags that float on content. So I guess this is also public-facing, but it’s more of a visual cue than a browsable taxonomy.

Vocabularies are pretty ad-hoc at this point: we kept what seemed to work from the old site and adjusted to fit the new presentation of content.

The two hardest fights: keeping the list short and public-facing. This is why we opted to do away with “programming department” as a category: we think of things that way; no one else does.

F&N: Obviously this is phase one and there’s a fair bit of legacy material to bring over into the new format – collections especially. How do you see the site catering for objects and their metadata in the future?

NS: Hot on the heels of this launch is our work on the Online Scholarly Catalogue Initiative from the Getty. We’re in the process of implementing CollectionSpace for our collections and sorting out a new DAMS, and will very soon turn our attention to building a new collections site.

An exciting part of the OSCI project for me is really opening up our data and connecting it to other online collections and resources. This goes back to the Wikipedia surfing wormhole: we don’t want to be the dead-end! We want to offer our chapter of the story and give readers more things to explore. (The Stedelijk Museum is doing some awesome work here, but I don’t think it’s live yet.)

F&N: When’s the mobile version due?

NS: It just barely didn’t make the cut for launch. We’re trying to keep the core the same and do a responsive design (inspired by but not as good as Boston Globe). We don’t have plans at the moment for a different version of the site, just a different way to present it. So: soon.

Go and check out the new Walker Art Center site.
