To sprinkle some star dust into the contents of my book I've been doing a few interviews with various professionals from data visualisation and related fields. These people span the spectrum of industries, backgrounds, roles and perspectives. I've only scratched the surface with those I have interviewed so far; there's a long wish list of other people I haven't yet approached but will be. My aim is to publish a new interview each week through to the publication of my book next year, so look out for updates!

I gave each interviewee a selection of questions from which to choose six to respond to. This latest interview is with Alyson Hurt, Graphics Editor for NPR. Thank you, Alyson!


Q1 | What is the single best piece of advice you have been given, have heard or have formed yourself that you would be keen to pass on to someone getting started in a data visualisation/infographics-related discipline?

A1 | I'll offer two, both of which I've heard/seen in the past year and keep coming back to:

1) From Nigel Holmes, on *editing* as the distinction between datavis and infographics: "So instead of asking 'what's the data?' [data visualizers] are trying to humanize their work by asking 'what's the story?' More often than not, that means editing the data."

2) From Kat Downs of the Washington Post (paraphrased from a talk she gave at SND): Once you've processed your data, made your sketches, identified your story and sat down to make the final graphic, *start with the headline.* What is the key thing (or things) you want your users to take away from this piece? Defining that up front will help lend focus to your design.

Q2 | When you begin working on a visualisation task/project, typically, what is the first thing you do?

A2 | At the beginning, there's a process of "interviewing" the data — first evaluating their source and means of collection/aggregation/computation, and then trying to get a sense of what they say (and how well they say it) via quick sketches in Excel with pivot tables and charts. Do the data, in various slices, say anything interesting? If I'm coming into this with certain assumptions, do the data confirm them — or refute them?

Q3 | With deadlines looming, as you head towards the end of a task/project, how do you determine when something is 'complete'? What judgment do you make to decide to stop making changes?

A3 | I work backwards a little bit and ask the question: What is the LEAST this can be? What is the minimum result that will 1) be factually accurate, 2) present the core concepts of this story in a way that a general audience will understand, and 3) be readable on a variety of screen sizes (desktop, mobile, etc.)? And then I judge what else can be done based on the time I have. Certainly, when we're down to the wire it's no time to introduce complex new features that require lots of testing and could potentially break other, working features. But it's not uncommon for there to be lots of minor fiddling up to — and even for a short time after — the moment the piece is published online.

Q4 | What advice would you give to anyone working under pressure of timescales: What are the compromises you are willing to make vs. those you are not? How do you juggle an ambition to innovate within the constraints you face?

A4 | At minimum, every project we produce must be:

  1. Accurate
  2. Understandable to a layperson
  3. Readable/functional on a variety of screen sizes (desktop to mobile)
  4. Up to our editorial/visual standards

So, starting from that baseline, we can consider more or less ambitious treatments based on the material and time we have.

We also have a number of templates and standardized practices for our projects (large-scale and small), which means that when we start a new project, a good bit of the baseline work — the base HTML file and JS libraries, starter code for various types of charts, a hook into Google Spreadsheets, etc. — is already set up. Starting from there buys us a little more time to weigh greater "ambition" with a given project.
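The scaffolding idea described above can be sketched in code. This is a minimal, hypothetical illustration (the file names and starter contents here are invented for the example, not NPR's actual templates): a small function that copies a set of baseline files into a fresh project directory so each new graphic starts from the same foundation.

```python
from pathlib import Path

# Hypothetical baseline files a new graphic project might start from.
TEMPLATE_FILES = {
    "index.html": "<!doctype html>\n<title>New graphic</title>\n",
    "js/base_chart.js": "// starter code for a basic chart\n",
    "js/spreadsheet.js": "// hook for pulling data from a published spreadsheet\n",
}

def scaffold(project_dir: str) -> Path:
    """Write the baseline template files into a fresh project directory."""
    root = Path(project_dir)
    for rel_path, contents in TEMPLATE_FILES.items():
        target = root / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(contents)
    return root
```

The payoff is exactly what the answer describes: the repetitive setup work is done once, in the template, and each new project begins at the point where design decisions actually start.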

Q5 | What advantages do you think working in a journalistic setting introduces to your visualisation/infographic work?

A5 | Firstly, the sheer variety and quantity of work means that I get to learn a little bit about quite a few things, and there are many opportunities to apply design/technical/workflow lessons learned from one project to the next. I'm constantly learning, and it's fantastic. Secondly, being in a newsroom, there's an emphasis on *story* rather than *data*. (Nigel Holmes's comments on editing re: datavis are quite apt.) This lends a certain amount of focus to how we frame our work — identifying key concepts rather than including every possible thing.

Q6 | As you will fully appreciate, the process of gathering, familiarising with, and preparing data in any visualisation/infographic design task is often a sizeable but somewhat hidden burden: a task that can occupy so much time and effort yet is perhaps invisible to the final viewer. Obviously, pressures during this stage can come in the shape of limited timescales, data that doesn't quite reveal what you expected, and/or substantial data that offers almost too many possibilities. Have you got any stand-out pieces of practical advice to share about your practices at this stage?

A6 | My main advice is not to be disheartened when this happens. Sometimes the data don't show what you thought they would, or they aren't available in a usable or comparable form. But sometimes that research still turns up threads a reporter could pursue and turn into a really interesting story — there just might not be a viz in it. Or maybe there's no story at all. And that's all okay. At minimum, you've still hopefully learned something new in the process about a topic, or a data source (person or database), or a "gotcha" in a particular dataset — lessons that can be applied to another project down the line.