The little of visualisation design: Part 44

This is part of a series of posts about the ‘little of visualisation design’, respecting the small decisions that make a big difference towards the good and bad of this discipline. In each post I’m going to focus on just one small matter – a singular good or bad design choice – as demonstrated by a sample project. Each project may have many effective and ineffective aspects, but I’m just commenting on one.

The ‘little’ of this next design concerns a nice technique for elegantly displaying chart gridlines. The example of this approach comes from an article published in the Guardian about the affordability of homes around the UK.

This is a very simple approach whereby the gridlines take the same colour as the chart background. In this case white is used, which means you avoid the visual clutter of visible lines crossing the entirety of the chart space but, as a reader, you still gain the benefit of a subtle guide to help judge the size of the bars.
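For anyone wanting to try this themselves, here is a minimal sketch of the idea using matplotlib. The data values are invented purely for illustration; the Guardian piece would have been built with different tooling:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

values = [3, 7, 5, 9]  # invented data, purely for illustration
fig, ax = plt.subplots()
ax.bar(range(len(values)), values, color="#4c72b0", zorder=2)

# The trick: gridlines in the same colour as the chart background (white),
# drawn on top of the bars so they read as subtle gaps rather than
# lines crossing the whole chart space.
ax.set_facecolor("white")
ax.set_axisbelow(False)  # draw the grid above the bars
ax.yaxis.grid(True, color="white", linewidth=1.5, zorder=3)

fig.savefig("white_gridlines.png")
```

Because the lines only become visible where they intersect the bars, the reader gets the judging aid without the clutter.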

The problems with b’arc charts

Almost one month ago I tweeted* my contempt for radial bar charts, having come across a particularly egregious case. It seemed to gain a bit of traction on social media but was also met by genuine queries as to my reasoning for this disdain**.

In response, I compiled a quick illustration to visually explain the distortion at play when you decoratively stretch bars into arc lengths. As this was buried in a passing tweet, I have belatedly decided it was worth committing to a blog post, as much for my own reference for illustrating this point in the future.

If you are still confused, think about why athletes in the 200m or 400m races start from staggered positions…
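The distortion is easy to demonstrate numerically. In a radial bar chart a value is mapped to an angle, but what the eye actually reads is arc length, which also scales with radius (arc length = radius × angle). A quick sketch, with invented numbers:

```python
import math

def arc_length(value, radius, max_value=100, max_angle=2 * math.pi):
    """Length of the arc a 'bar' sweeps when its value is mapped to an angle."""
    angle = (value / max_value) * max_angle
    return radius * angle

# Two identical values drawn at different radii (an inner and an outer track):
inner = arc_length(50, radius=1.0)
outer = arc_length(50, radius=2.0)

# The outer arc is twice as long despite encoding exactly the same value --
# the same distortion that staggered starting positions in a 400m race
# exist to correct for.
```

The further a bar sits from the centre, the longer it looks for the same underlying value, which is precisely the perceptual problem.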

* I was particularly happy with myself for inventing, in this tweet, the new description of this type of chart as the B’arc Chart. Work wise, this was one of my strongest contributions during the month of August.

** It is also worth checking this post from Alberto Cairo about a deeper discussion around the same general issues, this time about a chart plotting duration on a radial timeline.

Talk slides from second Tableau 2017 webinar

Earlier today I had the pleasure of delivering a second webinar presentation for Tableau this year. The title of my talk was ‘The Design of Nothing’. This was about my latest thinking regarding the challenges of displaying nothing (null, zero values), the opportunities of utilising emptiness and the invisibilities connected with the experience of consuming a visualisation. This is an enhanced, updated and expanded version of the same talk I gave originally at OpenVisConf in 2014.

The slides are now available, published on SlideShare, and can be viewed below. Note that due to the amount of detailed images included in this deck some of the resolution has been compromised in appearance during the uploading process. If you want to watch the video of the talk, you can register for free on Tableau’s website to obtain the link via here.

Think like a journalist

One of the standard pieces of advice I (and most others in a similar position) give to people looking to develop their data visualisation skills is to constantly seek opportunities to practice. Every task is different, every dataset presents new challenges. From small and simple to large and complex, any chance to practice will sharpen the eye and broaden your repertoire of capabilities.

There are of course practice opportunities in the workplace, developing your experiences through working on tasks with fresh thinking, challenging what you have done in the past and being more discerning in your choices going forward. There are practice opportunities outside of the workplace context possibly through pursuing passion projects: acquiring and exploring data about subjects you might be personally interested in, pushing yourself beyond your creative/technical comfort zone and embracing the valuable learning that comes from making mistakes without too much consequence.

One particular strategy I like to employ (but, hands up, rarely actually find the time to do) is to challenge oneself to occupy the mindset of a journalist. Look at the current news stories and try to imagine what data-driven visuals could be useful to substantiate or supplement a story. You can do this to different degrees of involvement, whether it is just thinking “if that was my story I would have included a graphic to show…” right the way through to finding the angle, sourcing the data and building a solution. If it is the latter, maybe consider imposing artificial constraints on yourself such as fixing your timeframe to a maximum of 6 hours or adopting the constraints of space limitations.

Here’s a small example of this thinking, just showing some notes I applied to a screengrab of the BBC news website yesterday (click on it to see a larger view).

Good data visualisation is trustworthy

It is approaching the one year anniversary of my book being released in the wild. Though I will be doing something to mark this milestone in a couple of weeks, after returning from a much-needed holiday, I wanted to share a one-page extract concerning a topic that has become ever more important since I wrote it.

I’m not alone in growing weary at this suffocating and rather inescapable era we now seem to be in where lies and lying, whether intended or accidental, have become an accepted norm. OK, still not acceptable but somehow it is no longer the exception as relentlessly stupid and/or bad people are dominating our attention.

Anyway, this is a section I wrote to introduce one of my key principles of good data visualisation design: that it should be trustworthy.

Bivariate choropleth maps

Yesterday, I came across an example of a bivariate choropleth map produced by Quartz. It isn’t an approach you see deployed too often so I was curious about what people thought about them on Twitter.

There were some interesting replies, with as many fans as there were doubters, and also several who mentioned they’d never come across this approach before. The previous most recent example I’d come across was this from the Washington Post.

I thought I’d compile a quick post to share this technique with a wider audience and, principally, to point you towards a fantastic article written by Josh Stevens from 2015 in which he brilliantly explains their role and value (as well as a ‘how-to’ guide). As Josh explains, this technique involves simultaneously plotting two quantitative variables on a map. To achieve this, the two variables are split into (typically three) intervals/bins and then a matrix colour scheme is formed out of mixing the two univariate variable scales. Typically, you’ll see these as a 3×3 grid, as shown below.
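A minimal sketch of the binning-and-mixing step Josh describes might look like this. The break points and hex values here are illustrative placeholders, not taken from his article or the Quartz piece:

```python
def tercile(value, breaks):
    """Return 0, 1 or 2 depending on which of three bins the value falls into."""
    lo, hi = breaks
    if value < lo:
        return 0
    if value < hi:
        return 1
    return 2

# A hypothetical 3x3 scheme mixing a blue-ish scale (rows, variable y)
# with a red-ish scale (columns, variable x); hex values are illustrative.
SCHEME = [
    ["#e8e8e8", "#e4acac", "#c85a5a"],  # low y
    ["#b0d5df", "#ad9ea5", "#985356"],  # mid y
    ["#64acbe", "#627f8c", "#574249"],  # high y
]

def bivariate_colour(x, y, x_breaks, y_breaks):
    """Pick the cell of the colour matrix for a region's pair of values."""
    return SCHEME[tercile(y, y_breaks)][tercile(x, x_breaks)]
```

Each region on the map then takes the colour of the matrix cell its two binned values point to, so a region low on both variables lands in the pale corner and a region high on both lands in the dark corner.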

My immediate feeling is that each time I have encountered a bivariate choropleth I have found it a challenge. Though I admire the intent, I find the task of parsing the mix of colours quite effortful. That said, as with many charts, simplicity and immediacy are not always relevant aims. The audience characteristics, the communication setting, the subject matter, and the complexity of the phenomenon being plotted are just some of the classic circumstantial factors that will have more influence on whether requesting effort from your audience could be proportionate and justified.

So, how can you make these as accessible as possible? I feel the key design factors include the following:

1) The two variables plotted should have a reasonable/logical relationship to make the overlapping colour states meaningful (e.g. dominance of voting % for Democrats vs Republicans, where the mixed state demonstrates an even split)

2) As the eye struggles to contend with many colour classes, it seems important that the colour combinations used are supported by written explanations to describe what each colour/mix means

3 a) Providing interactive filtering to emphasise individual colour mix components

3 b) Providing interactive annotation through tooltips can be a useful aid to help orientate your eye for the colour meaning of the values portrayed in each location (A couple of examples: this and this)

4) Accompanying the main display with separated small multiples of the univariate plots is advantageous

Talk slides from first Tableau 2017 webinar

Earlier today I had the pleasure of being invited back by Tableau to deliver a webinar presentation. The title of my talk was ‘The Seven Hats of Visualisation Design: A 2017 Reboot’. This was about my latest thinking regarding the skillsets, mindsets, knowledge and attitudes you need to possess in order to be a truly all-round superstar. It is badged as a reboot because the previous iteration of this talk, compiled in 2012, was titled the ‘eight hats’ but I have since rationalised a modified grouping of competencies.

The slides are now available, published on SlideShare, and can be viewed below. (Note that due to the amount of detailed images included in this deck some of the resolution has been compromised in appearance during the uploading process).

(You should also be able to register for free on Tableau’s website to watch back a video of this talk via here)

Talk slides from DataFest17 ‘Data Summit’

Yesterday I had the pleasure of giving a short talk at the Data Summit 17 conference, part of the Data Fest 2017 festival in Edinburgh. My talk was titled ‘The Good, the Bad and the Greys of Data Visualisation Design’ and was aimed at providing conference delegates with a resume of things they should do more of, others they should do less of and matters they need to be carefully discerning about.

The slides are now available, published on SlideShare, and can be viewed below. (Note that due to the amount of detailed images included in this deck some of the resolution has been compromised in appearance during the uploading process).

The fine line between intent and perception

I want to reflect on another occasion where the innocent intent of a chart has been at odds with the perceived incompetence claimed by a snarling audience. This is something of a companion piece to a similarly-motivated post I published almost three years ago. I changed the title this time from ‘CONFUSION AND DECEPTION’ to ‘INTENT AND PERCEPTION’ as I don’t think the circumstances have triggered quite the same DEFCON 1 level of reaction, mainly because this is about pizza toppings rather than gun deaths.

To set the scene, below is the piece in question – let’s consider some of the different issues surrounding this piece and its fallout:

1. Subject matter metaphors and imagery

“We need to do a chart about pizza toppings.” Imagine being verbally given this requirement for a visualisation task: instinctively, what imagery comes to mind? 100% of normal people are going to say an image of a pizza. I spoke about the pitfalls and benefits of exploiting visual metaphors associated with subjects in my book. There’s no right or wrong, rather it is entirely subjective.

Sometimes it works as an immediate thematic cue for the subject, sometimes it does not work, usually through the misjudging of negative connotations or overly-stretched metaphors. As a designer, it is a natural place to go: as a reader it is a natural place to expect the designer to have gone. If you’re reading about pizza toppings you’re probably going to be surprised if it does not use a pizza as a graphic apparatus to host this analysis.

2. People do not read intros/instructions particularly well, if at all.

If you publicly call out a perceived mistake in a chart and you haven’t read a provided explanation that might offer some context or clarification, then you should feel a bit of a plonker. We all do it, and I certainly do it as much as anyone, but sometimes we just have to take a moment to avoid fixing our attention solely on the bright, colourful, shiny thing in the middle of the screen and check that we have read it all.

Ask yourself “am I jumping to conclusions here or is there something I’ve misread or failed to read here that might help overcome my initial confusion?”. Some argue that if you have to explain something it is already a failure. I strongly disagree. Sometimes, sure, but often things merit explanations because many people who could benefit from *the thing* might need explanations. Not all, but many do.

In this case, the intro makes clear that these results are not individual parts-of-a-whole, rather the relative popularity of toppings amongst a series of options from which people surveyed could choose several. Knowing this before twitterising some outrage about the ‘misleading’ pie chart would maybe have allayed some of the noise.

3. Muscle memory of assuming a familiar chart archetype

The key cause of people instinctively misreading this chart as a pie chart is the exploded appearance of the whole pizza, split into six distinct pieces, each slice separated from the others by white/empty space. This understandably would have triggered people to *initially* read the slices as parts of a whole and then tempted them into associating these with the percentage callout labels seen around the display. As mentioned above, this could have been overcome upon realising the analysis wasn’t offering a part-to-whole portrayal but, nevertheless, there would have been some initial incongruence in perceiving the values displayed against the apparently equal-looking 16.67% splits of the parts.

That said, once you see that the pizza slices are of equal size (just like a normal pizza you order might be pre-sliced and separated out) AND that the call-outs are pointing to individual ingredient pieces, your initial sense of it being a pie chart should be replaced by an awareness that the pizza image is playing more the role of a map, offering a board of different ingredient images to hang the annotated values from.

In this case, the visible slices were pre-existent in the stock photography used for the main image. Had this pizza image been displayed as a whole, complete circle, with no separated slices, or if the slices were more chaotically detached and presented in isolation from the whole, I don’t think we’d be here now.

This is just another example of how little design matters can have a big impact (both positively and negatively) in the eye of the reader.

4. Social media reaction: ‘all pile on!’ outrage-mongery

Not surprisingly, the publication of the original graphic on social media was met with the famously balanced, nuanced and kind manner with which this platform is commonly associated: a stream of pitch-fork wielding henchfolk wanting to be the first to get THE viral tweet that secures them some envy-inducing #big #social #media #numbers.

Let’s face it, people love nothing more than a bad pie chart. There is no greater open-goal opportunity on data visualisation twitter than the chance to beckon the masses to have a look at the latest ‘dumb pie’.

Have a look at the original tweet’s replies, though, as well as the follow up clarification. Even IF this had been a badly conceived pie chart, imagine these are aimed at you: you’d never emerge out from under your duvet! We’ve got to do better than this. We’ve got to be better than this. Let’s do less of this both within and outside the field. Rather than empty your anger reservoirs at this chart design, aim it towards more important matters happening around the world right now: there are British people voluntarily putting mushrooms on their pizzas and they think they are right! Bloody animals.

Gauging election reaction

Of all the graphics produced before, during and after the election (a selection here) it is fair to say the decision by the New York Times team to use ‘gauge’ charts caused quite a stir. For data visualisation purists, the gauge chart represented, on the face of it at least, a fairly controversial and surprising choice.


One of the primary arguments against using gauges is that they represent a faux-mechanical means of showing just one number against a scale, taking up a disproportionate amount of space to do this. I would certainly acknowledge that I am not a fan of the gauge as a charting device; it always feels to me like the product of a desire to try something cool, to create a skeuomorphic solution just because people have worked out how, not because they should.


However, let’s pause and think about why the gauge was not just a valid choice but actually an extremely astute one, a decision amplified by the story that unfolded on the night. This was perhaps the best possible case for using one.

Unlike most applications of gauges, used to display a fixed single value, those used on election night were used to show live data. They were also used to convey the notion of swing (the very essence of the evening’s proceedings), to show progressive movement along the scale and to also indicate levels of uncertainty. (You could argue that a linear gauge might have been more efficient in its use of space but I would guess that the mechanics of the radial gauge probably facilitate a smoother needle motion).

The main backlash appears to be concentrated on the jitter effect used to display uncertainty (see here, here, here).

Speaking to Gregor Aisch (one of the co-authors alongside Nate Cohn, Amanda Cox, Josh Katz, Adam Pearce and Kevin Quealy), he was able to clarify/confirm the thinking behind this jitter. I’m paraphrasing his response but here it is in a nutshell:

So what we are seeing here is uncertainty expressed through motion. A single value would have implied certainty so it was more responsible to show a significant sense of the uncertainty. Of course methods like banded colour regions around the needle could have been employed but that would have probably left something of a mixed visual metaphor mishmash.
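As a rough sketch of the mechanics (not the NYT team’s actual implementation, and with made-up function and parameter names), each animation frame can re-sample the needle position from within the forecast’s uncertainty band rather than pinning it to a single, falsely precise value:

```python
import random

def gauge_angle(value, vmin=0.0, vmax=1.0, a_start=-90.0, a_end=90.0):
    """Map a value on [vmin, vmax] to a needle angle in degrees."""
    frac = (value - vmin) / (vmax - vmin)
    return a_start + frac * (a_end - a_start)

def jittered_angle(forecast, margin, rng=random):
    """One animation frame: sample a value from within the uncertainty
    band around the forecast, then convert it to a needle angle."""
    sample = forecast + rng.uniform(-margin, margin)
    return gauge_angle(sample)
```

Redrawn many times a second, the needle visibly trembles in proportion to the width of the band, so a confident forecast sits nearly still while an uncertain one swings.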

As Gregor observed, ‘peak freak out’ about the gauge coincided with the 9pm movement away from Clinton and towards Trump:

“When it comes to important events like this particular election, uncertainty makes us feel very uncomfortable. A display that shows a “narrow but persistent” lead for Clinton makes people feel safe. But one that keeps swinging back and forth into a Trump win (and thus truly shows how close this race really is) does not.”

My own view is that the negativity expressed towards the jitter was a visceral reaction to the anguish caused by the increasing uncertainty of the outcome, heightened by the shocking twist in events during the night. From a visualisation practitioner perspective as well as that of the anxious viewer, I found it an utterly compelling visual aid. When they make the movie about this past week, this will surely be one of the absolute go-to displays to recreate the story of the night.

There’s also something here (picked up by many others so not an original proposition) being exhibited about the widespread misunderstanding of probabilities. Perhaps visualisations create an implication (and even an expectation) of precision, regardless of any measures to provide caveats about the underlying models they portray. Perhaps we need to redouble our efforts to think about how we show fuzziness in numbers. We’re always so caught up in discussions about which encoding methods lead to the greatest precision in data discernibility yet that is not always the goal of the visual.

Update: Gregor has compiled his own post offering reasoning for the gauge chart: ‘Why we used jittery gauges in our live election forecast’.