Journalism, writing

Tips for (science) writing

200_s

The University of the West of England are running an interesting science writing competition and asked a bunch of writers for their tips for potential entrants. Here are mine.

When you’re really into a topic, it’s easy to assume everyone else will be too, and hard to narrow it down to just a few aspects – everything is so fascinating, and you feel you have to cover all the bases to do it justice. (This counts double if you’re coming from an academic background.)

Take a few moments before you start writing to think: what is my story really about? What is the one take-home message I’d want a reader to get from my story, even if they only skimmed it or read a bit of it? If you were to sum up your story in a sentence or two for a friend or your mum, what would that be?

In film and business, they talk about the ‘elevator pitch’ – you catch the executive in the lift, and in the one minute between floors you succinctly pitch your idea: enough to give a taste of what the story is about, what’s fascinating about it, why it’s important and how you’re approaching it – why it’s worth them investing. It’s the same principle in writing.

More than anything, it helps you, the writer, stay focused and clear on the purpose and point of your story, in your writing, research and interviews. Your readers – and editors – will be thankful for it.

Journalism, Social media

What does success look like in the digital magazine age? (Or How to track qualitative metrics using Pinterest and Upwork)

 


I work for a funny little magazine. It’s online only, and published by a non-profit charitable foundation. Moreover, it purposefully gives all its content away for anyone to republish, for free – we don’t care whether someone reads it on our website or elsewhere, just that they read and engage with it at all, wherever they like. And it does go everywhere.

When your model is like this, what on earth does success look like? How do you measure that? When your content is everywhere, and the subsequent conversations too, how can you possibly know how well it’s doing or what people think?

Three years ago, this is what we at Mosaic wondered. So we came up with a system.

  1. Quantitative metrics: the straightforward stuff – unique pageviews, average time on page, referrals. All things you can get from Google Analytics. We track our own website, and also have a republish tracker – built on an open-source ‘tracking pixel’ that can be embedded into syndicators’ sites and pings data back to us. Not everyone is able to install this, though, so we also gather what we can from known republishers who are kind enough (we ask nicely) to share their stats, confidentially of course. We also use HotJar to generate heatmaps of our stories – a snapshot of how much and where people are engaging (i.e. how far down they scroll and where they drop off). Elsewhere, we track metrics from Facebook and YouTube insights, particularly for our videos, and in the last year have added podcast metrics via our host, Libsyn.

  2. Qualitative metrics: “Engagement” – comments, likes/reactions, shares, retweets, sentiment – sometimes seen as the softer stuff, but I actually think this is more important, particularly to us.
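For the curious, the republish tracker described in point 1 can be sketched roughly as below. A tracking pixel is just a tiny invisible image whose URL carries identifying parameters; when a reader loads a syndicator’s page, their browser fetches the image and that request ‘pings back’ to your server. This is a minimal illustration of the general technique, not Mosaic’s actual implementation – the endpoint and parameter names are invented:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical endpoint – the real tracker's details aren't public.
PIXEL_ENDPOINT = "https://tracker.example.org/pixel.gif"

def embed_snippet(story_id: str, publisher: str) -> str:
    """Build the HTML snippet a republisher pastes into their page.
    When a reader's browser fetches this invisible 1x1 image, the
    request itself reports the story and publisher to our server."""
    params = urlencode({"story": story_id, "publisher": publisher})
    return f'<img src="{PIXEL_ENDPOINT}?{params}" width="1" height="1" alt="">'

def parse_ping(request_url: str) -> dict:
    """Server side: recover which story was viewed, and on whose site,
    from the query string of an incoming pixel request."""
    qs = parse_qs(urlparse(request_url).query)
    return {"story": qs["story"][0], "publisher": qs["publisher"][0]}

snippet = embed_snippet("cycling-safety", "partner-site")
hit = parse_ping(f"{PIXEL_ENDPOINT}?story=cycling-safety&publisher=partner-site")
```

In practice the server would log each such hit with a timestamp and aggregate them per story and per republisher.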

Yes, the quantitative stuff is important. Everybody likes to see numbers and graphs going up (especially higher-ups). We watch those just as carefully as everyone else does. And we have to take them with a pinch of salt too. No web metrics are perfect, and it’s even harder to integrate them when your data comes from different systems and sources – how do we know that what one publisher counts as a ‘view’ is the same as what another does, or that it matches our own data?
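To make that pinch-of-salt point concrete: when different sources define a ‘view’ differently, it is safer to keep each source’s figure under its own metric name than to sum everything into one headline number. A rough sketch of that approach – the field names and figures here are invented for illustration:

```python
# Invented example figures – one row per stats report we receive.
reports = [
    {"source": "our-site",  "metric": "unique_pageviews", "value": 42000},
    {"source": "partner-a", "metric": "pageviews",        "value": 15500},
    {"source": "partner-b", "metric": "sessions",         "value": 8200},
]

def summarise(reports):
    """Group raw figures by each source's own metric definition, so
    unlike numbers are never silently added together."""
    by_metric = {}
    for r in reports:
        by_metric.setdefault(r["metric"], []).append((r["source"], r["value"]))
    return by_metric

summary = summarise(reports)
```

The design choice is simply to preserve each publisher’s own definition alongside its number, rather than pretend the definitions match.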

But given our publisher and mission statement, it was clear from the start that the qualitative stuff was going to be key.

When we launched in March 2014, we were – to be honest – taken aback at how quickly other publishers took to our Creative Commons offering. Though we’d done a lot of work to talk to potential partners about what we were doing and to convince them of the quality and rigour of our work, we thought they’d be pretty cautious at first – fishing for more of a sense of exclusivity, waiting to see what the content was like and how readers reacted to it. Instead we got a plethora of republishes right from day one, including the BBC, Gizmodo, Digg, the Guardian and CNN.

With topics such as the safety of cycling and the legitimacy of female condoms, heated discussions started exploding all over the place – comments under Gizmodo and the Guardian, on forums such as Hacker News and Reddit, under other pages’ Facebook posts, and of course on Twitter – in reaction to our tweets and those of our republishers’ links.

It was a real buzz to see how big a response our stories were getting, and I became somewhat obsessed with reading and tracking every single thing. I lapped it up, didn’t want to miss anything, and was convinced we needed to log it all – in the social media age, if you don’t capture something on the day, it’s gone (or at least very, very hard to find) tomorrow. We were just starting out, and I sensed we needed to demonstrate this impact to our managers and board, to show just how viable the model was and, hopefully, to keep their support – financial and in faith.

Initially, I favourited everything I could find on Twitter and then dumped it all into a massive Storify for each week (we publish a new longform story each week, a bit like launching a new book every week). I’d then add anything I saw on Facebook – on our page, but also on republishers’ pages: if I saw a republish on, say, Digg, I’d be sure to look up their post on their page, then log every single (meaningful) comment – good or bad – from it. I’d even click on the ‘Shares’ under the post to see if any public shares had meaningful comments alongside them. I did the same for every forum and comments thread I could find across the web, for every one of our stories. I called all this ‘going down the rabbit hole’.

I dumped all of these into Storify, but also started a spreadsheet on Google Docs, pasting in each comment or tweet with links and dates, along with notes if a share, comment or retweet came from someone particularly ‘influential’ (e.g. noting the biog in their profile and their follower count). This, I felt, was a good way of future-proofing the data and making it meaningful at a glance – for our Editor, the rest of the team, our Board and anyone else to see how we were doing and what readers thought of our work, even if they only had a few minutes to look it over.

I also started somewhat obsessively pinning every republish to our Pinterest boards – one for every story – a nice visual way of tracking republishes, while also maintaining a presence on a popular social network.

We still do this – it’s been extremely useful for everyone: writers, editors, my Editor when he’s writing a feedback report, us when we’re compiling Award entries. It’s also reasonably transparent – anyone can look at our Pinterest or Storify (the spreadsheet, for now, remains team eyes only).

I don’t do this all myself anymore. Frankly, I burned myself out trying – doing all that while running social media, production AND commissioning and editing the stories themselves is impossible. So after a year we took on a freelancer to help.

At the suggestion of a colleague, I tried Upwork, which lets you recruit people all over the world for web-based tasks. It’s mostly software developers and marketers – nobody did exactly what we were looking for – so I ran a few searches and approached a few listees who seemed to have the skills in web marketing and keyword research to handle the job. We found someone, and she’s great. Her work is invaluable to us and she’s a key part of our team.

Mosaic is now in its third year, and this system of qualitative and quantitative tracking is working well for us.

Going forward, I’d love to see some more investment (hint hint, boss) in added resource for tracking, and to free up time for analysis in particular, so we can really use the feedback and insights to improve our content. I’d love to afford a service like Chartbeat, which can integrate a lot of the quantitative metrics – website and social – into dashboards that give you at-a-glance insights that are actually meaningful. (That said, I’ve reviewed a lot of services that people have pitched at us – lots of them are pants, and offer nothing you can’t do yourself if you just take the time to look at the free metrics Google, Facebook and others give you.)

I don’t think there’s any shortcut to the qualitative tracking, though – especially with a content and publishing model like ours. If we were a commercial publisher it might be a little easier: you’d only have to track your own links, and could tie those more neatly into a connected dashboard like Chartbeat.

Someone said to me recently that our way of doing things is a bit unusual and interesting – hence my motivation to share it in this post. I hope it’s helpful to others in some way. If anyone needs me, I’ll be down the rabbit hole.

Journalism

The future of journalism

Last week I went to The Guardian to hear its Editor-in-chief Alan Rusbridger read his 2010 Hugh Cudlipp lecture, which he delivered earlier this year (you can read the whole thing here), and take part in an audience discussion.

The topic was the future of journalism and whether ‘journalism’ even exists anymore. Top billing was the free vs pay debate, highly topical given that The Times went behind its paywall just a few weeks ago. Rusbridger made the fair point that paywalls are not necessarily all bad or all good — they may be right for some but not others, they may be the right idea, but wrong at this moment in time.

How will digital paywalls change journalism, he wondered. Rusbridger said the debate marked the first fork in the road for journalism and represented a wider one about open vs closed journalism, and ‘us’ (journalists, special) vs ‘them’ (non-journalists, not special). He wondered if the key might be the value of specialist knowledge or information, as opposed to the general information that will be freely available.

He also touched on the technology debate. Would charging for mobile access be the way forward, with everything else free? Screens give us more than just words, said Rusbridger: “We are in an age where most under-25s can’t remember a time without them.” He argued that some stories work best with a combination of links, embedded video and evolving content, while others are best as a pure snapshot. “Journalists have never before been able to tell stories so effectively,” he said.

Most interesting to me (though obvious) was the effect of all this on the scoop. In a 24/7 news environment, he said, it’s difficult to break stories. A scoop has a lifespan of just 3 minutes in the Twitter age. Those 3 minutes are still a commodity to those in a market sensitive environment (like the Financial Times) but it changes the game for the others. Most people will be prepared to wait until it is free elsewhere, rather than pay to read it first. The fact is, he said in the discussion later, the speed information travels makes it difficult to tell who breaks which story these days — in 45 minutes it’s appeared on other media outlets and aggregators and most readers won’t have a clue it came from you originally.

In the Q&A, Rusbridger pointed out that speed vs accuracy is not a new problem: wire services have been dealing with it for decades. The trick, he said, is to file quickly, and repeatedly, reporting on what you do know for sure, not what you don’t. He also gave a nod to the Guardian’s story trackers when he said that stories don’t end with publication, and remarked that there is no excuse for failing to add, clarify and correct afterwards. This constant addition and clarification leads journalists to act in different ways, he said. To quote Rusbridger, quoting CP Scott: “What a chance for the world, what a chance for the newspaper.”

Event, Journalism

To tell or not to tell?

Last week I attended an evening seminar on health journalism organised by the Patient Information Forum and held at the Trust. Jo Brodie’s written a decent summary of the proceedings on her blog.

The presentations were interesting, though largely of a familiar ilk: both press officers and journalists are partially at fault, fitting responsible health information with news values is a difficult task, the public needs to be more critical of what they read etc. etc.

However, one point got me thinking. Ginny Barbour, Chief Editor of the journal PLoS Medicine, gave a talk about how scientists can help journalists and how journal editors work to get their stories picked up by the press. Usually, she said, they encourage researchers to write more easily understandable titles and abstracts, so that non-specialists can make sense of them. But in one slide she gave the example of a paper they had received on suicides in Taiwan, which found that media coverage of charcoal-burning suicides was fuelling a steep rise in Taiwanese suicides.

Among the authors’ recommendations is “introducing and enforcing guidelines on media reporting” to deal with the problem. In keeping with this, Barbour and colleagues were happy to stick with the wordy title, ‘The Evolution of Charcoal-burning Suicide in Taiwan: A spatial and temporal analysis’, rather than push for a change (admittedly, the title and abstract aren’t actually that bad for this paper). Barbour argued that in this case it was of more benefit to society for the paper not to be covered in the media. And her team patted themselves on the back when, sure enough, the paper received zero press coverage.

I can certainly see their point of view, and it’s not as if this is uncommon – there are guidelines on reporting suicides in many countries for the same reasons. However, I had to ask myself whether it is really in the spirit of press freedom not to report findings such as these, particularly if the results could be useful to others. Would knowledge of these results raise awareness and help policymakers prevent future suicides? Or would it give people ideas on how to kill themselves, as Barbour and colleagues feared?

It reminded me of something an African colleague said to me at a recent meeting. She mentioned that some of her researchers (she’s a communications officer) were unwilling to publicise a paper containing some quite important findings about HIV in the men who have sex with men (MSM) community. The reason? Homophobic activists had trashed one of their labs a few weeks earlier and the scientists were afraid of a repeat. But what is the point of doing such important research if you don’t tell anyone about it, or if the only people who find out are those who stumble across your paper in a literature review years later?

This isn’t the same as reporting suicides of course, but it got me thinking about the responsibility to report scientific findings and when social responsibilities, individual responsibilities and journalistic responsibilities clash. Thoughts?