Talk slides from Tableau webinar

Earlier today I had the pleasure of being invited back by Tableau to deliver a webinar presentation. The title of my talk was ‘The Seven Hats of Visualisation Design: A 2017 Reboot’. This was about my latest thinking regarding the skillsets, mindsets, knowledge and attitudes you need to possess in order to be a truly all-round superstar. It is badged as a reboot because the previous iteration of this talk, as compiled in 2012, was titled the ‘eight hats’ but I have since rationalised a modified grouping of competencies.

The slides are now available, published on SlideShare, and can be viewed below. (Note that due to the amount of detailed images included in this deck some of the resolution has been compromised in appearance during the uploading process).

(You should also be able to register for free on Tableau’s website to watch back a video of this talk via here)

Talk slides from DataFest17 ‘Data Summit’

Yesterday I had the pleasure of giving a short talk at the Data Summit 17 conference, part of the Data Fest 2017 festival in Edinburgh. My talk was titled ‘The Good, the Bad and the Greys of Data Visualisation Design’ and was aimed at providing conference delegates with a resume of things they should do more of, others they should do less of, and matters they need to be carefully discerning about.

The slides are now available, published on SlideShare, and can be viewed below. (Note that due to the amount of detailed images included in this deck some of the resolution has been compromised in appearance during the uploading process).

The fine line between intent and perception

I want to reflect on another occasion where the innocent intent of a chart has been at odds with the perceived incompetence claimed by a snarling audience. This is something of a companion piece to a similarly-motivated post I published almost three years ago. I changed the title this time from ‘CONFUSION AND DECEPTION’ to ‘INTENT AND PERCEPTION’ as I don’t think the circumstances have triggered quite the same DEFCON 1 level of reaction, mainly because this is about pizza toppings rather than gun deaths.

To set the scene, below is the piece in question – let’s consider some of the different issues surrounding this piece and its fallout:

1. Subject matter metaphors and imagery

Imagine you are verbally given this requirement for a visualisation task: “We need a chart about pizza toppings”. Instinctively, what imagery comes to mind? 100% of normal people are going to say an image of a pizza. I spoke about the pitfalls and benefits of exploiting visual metaphors associated with subjects in my book. There’s no right or wrong; rather, it is entirely subjective.

Sometimes it works as an immediate thematic cue for the subject, sometimes it does not work, usually through the misjudging of negative connotations or overly-stretched metaphors. As a designer, it is a natural place to go: as a reader it is a natural place to expect the designer to have gone. If you’re reading about pizza toppings you’re probably going to be surprised if it does not use a pizza as a graphic apparatus to host this analysis.

2. People do not read intros/instructions particularly well, if at all

If you publicly call out a perceived mistake in a chart and you haven’t read a provided explanation that might offer some context or clarification, then you should feel a bit of a plonker. We all do it (I certainly do it as much as anyone) but sometimes we just have to take that moment to avoid fixing our attention solely on the bright, colourful, shiny thing in the middle of the screen and check that we have read it all.

Ask yourself “am I jumping to conclusions here, or is there something I’ve misread or failed to read that might help overcome my initial confusion?”. Some argue that if you have to explain something it is already a failure. I strongly disagree. Sometimes, sure, but often things merit explanations because many of the people who could benefit from *the thing* need them. Not all, but many do.

In this case, the intro makes clear that these results are not individual parts-of-a-whole, rather the relative popularity of toppings amongst a series of options from which people surveyed could choose several. Knowing this before twitterising some outrage about the ‘misleading’ pie chart would maybe have allayed some of the noise.

3. Muscle memory of assuming a familiar chart archetype

The key cause of people instinctively misreading this chart as a pie chart is the exploded appearance of the pizza, split into six distinct pieces, with white/empty space effectively separating each slice from the others. This understandably would have triggered people to *initially* read the slices as parts of a whole and then tempted them into associating these with the percentage callout labels seen around the display. As mentioned above, this could have been overcome upon realising the analysis wasn’t offering a part-to-whole portrayal but, nevertheless, there would have been some initial incongruence in perceiving the values displayed against the apparently equal-looking 16.67% splits of the parts.

That said, once you see that the pizza slices are of equal size (just like a normal pizza you order might be pre-sliced and separated out) AND that the call-outs are pointing to individual ingredient pieces, your initial sense of it being a pie chart should be replaced by an awareness that the pizza image is playing more the role of a map, offering a board of different ingredient images to hang the annotated values from.

In this case, the visible slices were pre-existent in the stock photography used for the main image. Had this pizza image been displayed as a whole, complete circle, with no separated slices, or if the slices were more chaotically detached and presented in isolation from the whole, I don’t think we’d be here now.

This is just another example of how small design details can have a big impact (both positively and negatively) in the eye of the reader.

4. Social media reaction: ‘all pile on!’ outrage-mongery

Not surprisingly, the publication of the original graphic on social media was met with the famously balanced, nuanced and kind manner with which this platform is commonly associated: a stream of pitchfork-wielding henchfolk wanting to be the first to get THE viral tweet that secures them some envy-inducing #big #social #media #numbers.

Let’s face it, people love nothing more than a bad pie chart. There is no greater open-goal opportunity on data visualisation Twitter than the chance to beckon the masses to have a look at the latest ‘dumb pie’.

Have a look at the original tweet’s replies, though, as well as the follow-up clarification. Even IF this had been a badly conceived pie chart, imagine these are aimed at you: you’d never emerge out from under your duvet! We’ve got to do better than this. We’ve got to be better than this. Let’s do less of this both within and outside the field. Rather than empty your anger reservoirs at this chart design, aim it towards more important matters happening around the world right now: there are British people voluntarily putting mushrooms on their pizzas and they think they are right! Bloody animals.

Gauging election reaction

Of all the graphics produced before, during and after the election (a selection here) it is fair to say the decision by the New York Times team to use ‘gauge’ charts caused quite a stir. For data visualisation purists, the gauge chart represented, on the face of it at least, a fairly controversial and surprising choice.


One of the primary arguments against using gauges is that they represent a faux-mechanical means of showing just one number against a scale, taking up a disproportionate amount of space to do so. I would certainly acknowledge that I am not a fan of the gauge as a charting device; it always feels to me like the product of a desire to try something cool, to create a skeuomorphic solution just because they’ve worked out how, not because they should.


However, let’s pause and think about why the gauge was not just a valid choice but actually an extremely astute one, a decision amplified by the story that unfolded on the night. This was probably the best possible case for using one.

Unlike most applications of gauges, which display a fixed single value, those used on election night showed live data. They were also used to convey the notion of swing (the very essence of the evening’s proceedings), to show progressive movement along the scale and to indicate levels of uncertainty. (You could argue that a linear gauge might have been more efficient in its use of space but I would guess that the mechanics of the radial gauge probably facilitate a smoother needle motion.)

The main backlash appears to be concentrated on the jitter effect used to display uncertainty (see here, here, here).

Speaking to Gregor Aisch (one of the co-authors alongside Nate Cohn, Amanda Cox, Josh Katz, Adam Pearce and Kevin Quealy), he was able to clarify/confirm the thinking behind this jitter. I’m paraphrasing his response but here it is in a nutshell:

So what we are seeing here is uncertainty expressed through motion. A single value would have implied certainty so it was more responsible to show a significant sense of the uncertainty. Of course methods like banded colour regions around the needle could have been employed but that would have probably left something of a mixed visual metaphor mishmash.
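To make the idea concrete, here is my own illustrative sketch of the technique (not the Times’ actual implementation; the function name and all figures are hypothetical): each animation frame simply draws the needle at a fresh sample from the forecast’s error distribution, so the visible wobble widens and narrows with the model’s uncertainty.

```python
import random

def jittered_needle(forecast, stderr, lo=0.0, hi=100.0):
    """Sample a needle position from the forecast's uncertainty band
    instead of pinning it to a single, falsely precise value.
    `forecast` and `stderr` stand in for hypothetical model outputs."""
    sample = random.gauss(forecast, stderr)
    return min(max(sample, lo), hi)  # keep the needle on the dial

# Redrawing the needle at a fresh sample every frame produces the
# jitter effect: a closer race (larger stderr) wobbles visibly more.
random.seed(7)
frames = [jittered_needle(45.0, 3.0) for _ in range(200)]
```

Banded colour regions around the needle would encode the same interval statically; the motion-based version trades precision for an immediate, visceral read of how unsettled the forecast is.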

As Gregor observed, ‘peak freak out’ about the gauge coincided with the 9pm movement away from Clinton and towards Trump:

“When it comes to important events like this particular election, uncertainty makes us feel very uncomfortable. A display that shows a “narrow but persistent” lead for Clinton makes people feel safe. But one that keeps swinging back and forth into a Trump win (and thus truly shows how close this race really is) does not.”

My own view is that the negativity expressed towards the jitter was a visceral reaction to the anguish caused by the increasing uncertainty of the outcome, heightened by the shocking twist in events during the night. From a visualisation practitioner perspective as well as that of the anxious viewer, I found it an utterly compelling visual aid. When they make the movie about this past week, this will surely be one of the absolute go-to displays to recreate the story of the night.

There’s also something here (picked up by many others so not an original proposition) being exhibited about the widespread misunderstanding of probabilities. Perhaps visualisations create an implication (and even an expectation) of precision, regardless of any measures to provide caveats about the underlying models they portray. Perhaps we need to redouble our efforts to think about how we show fuzziness in numbers. We’re always so caught up in discussions about which encoding methods lead to the greatest precision in data discernibility yet that is not always the goal of the visual.

Update: Gregor has compiled his own post offering reasoning for the gauge chart: ‘Why we used jittery gauges in our live election forecast’

Visualisation lessons from Jaws

This lesson draws from a 2011 article in The Atlantic ‘Remembering Bruce, the Mechanical Shark in Jaws’ and was referred to in the third chapter of my new book, where I discuss the matter of establishing your task or project’s ‘circumstances’ – an understanding of the freedoms and constraints that exist within and around your process of creating a data visualisation.

One of the points I made in this passage in my book was about recognising the value, in many cases, of the existence of limitations and constraints in your process. Often such restrictions can prove to be a positive influence. To illustrate this argument I discussed the circumstances faced by Director Steven Spielberg during the filming of Jaws.

The early attempts to create a convincing-looking shark model proved to be so flawed that, for much of the film’s scheduled production, Spielberg was left without a visible or viable shark prop to work with. Such were the diminishing time resources that he could not afford to wait for a solution to film the action sequences, so he had to work with a combination of props and visual devices. Objects being disrupted, like floating barrels or buoys and, famously, a mock shark fin piercing the surface, were just some of the tactics he used to create the suggestion of a shark rather than actually show a shark.


Eventually, a working shark model was developed to serve the latter scenes but, as we all now know, in not being able to show the shark for most of the film, the suspense was immeasurably heightened. Indeed we don’t even see Jaws until that famous moment when Chief Brody is casually throwing bait into the sea and comes face-to-face with his nemesis, leading him to the conclusion that they might need a bigger boat.


The necessary innovation and improvisation that emerged from the limited resources and increasing pressure led to a solution that made Jaws one of the most enduring films of its generation. Indeed, it was arguably the BEST solution, one that would surely have transcended all other options even if there had been freedom from any sort of constraint.

The key message here for data visualisation practitioners is to embrace the constraints you face. Exhaust the options that exist in any reduced possibility space you work within. I often hear frustration voiced by delegates or students that they don’t have tool X or talent Y to truly express their work in the creative way they would desire but, in my mind, true creativity is heightened when obstacles are present. I’m sure most of us face restrictions in most of our work but, for the sake of practice, consider working on personal projects within artificial constraints: perhaps fixed-size layouts, print-only conditions, or limited colour palettes.

Featured image taken from ‘Screen Prism’

Update: This was going to be the first part of a series of posts about how data visualisation can be inspired by/draw lessons from the movies. I’m killing the series after just this single pilot episode because, whilst I have *lots* of ideas around this theme, I feel it lends itself better to a talk/presentation format rather than written blog posts.

Six questions with… Mirko Lorenz

This is the latest in a series of short articles titled ‘six questions with…’. The purpose of this growing collection of interviews is to provide a conveniently sized platform to offer perspectives about data visualisation-related topics from professionals within, around or outside of the field, spanning different industries, backgrounds, roles and standpoints. The first ‘series’ of interviews emerged primarily from research on my book, whereas this new series is much more open in scope so I will be accessing perhaps a more varied set of voices.

This latest interview is with Mirko Lorenz, a journalist turned information architect, thinking about and actively working on software to be used in newsrooms. Included on his CV is his role as co-founder of Datawrapper.

Q1 | Datawrapper has been around for a while now. Thinking back about the beginning: How did the idea and desire to create the tool first come about?

A1 | The initial idea was simple; getting to what the tool can do today needed some stamina. The idea was to simplify how anyone creates charts. At the time we approached the project with a mix of missionary zeal and not knowing how complex simplicity is. Back then, most journalists had few options to create charts or maps themselves, at least not under the ever-present time pressure of a newsroom. Almost everywhere they relied on specialized graphic editors, yet most newspapers did not have a full graphics team on staff. Today Datawrapper is a real company, with clients around the world.

Q2 | As a journalist, how would you explain why to use Datawrapper today? In the context of today’s myriad offering of different tools and applications – a market still somewhat characterised by constant flux?

A2 | Datawrapper aims to be a quick, simple Swiss Army knife for charts and maps. Datawrapper doesn’t have myriad chart types or options, but a focus on the most common ones. Plus we really care about making the tool easy to use. This is why we put a lot of work into basic chart types, because those are the ones journalists will actually use every day. For example, an annotated line chart that shows the development of the revenue of a company. Or a map that compares GDP growth for different countries, produced in less than five minutes with a bit of experience. Our goal for the next phases is to step by step increase the quality even further, to eventually help produce the best line charts and bar charts possible. With Datawrapper, at least so far, we have managed to keep it simple to use. You don’t need a big manual to create a chart or even a responsive map.

Q3 | What has been the best demonstration of Datawrapper’s value? Has there been a situation where you thought ‘that single use case justifies all the hard work we put into creating this tool’?

A3 | That single demonstration is the numbers we have achieved by now: roughly 60,000 registered users from all around the world have created more than 600,000 charts. Some have created hundreds of charts in less than a year. And because quite a few of the charts are published on news sites, the number of chart views has skyrocketed. Although we are a super-small company, we serve 40-50 million chart views per month. The value is simple: if you build a tool that is usable, the impact can be huge.

Q4 | With your data journalist hat on, what do you see as being the biggest changes you have seen over the past few years?

A4 | There has been a 10x moment in data visualization. Today, there are ten times as many journalists creating charts as just five years ago. We had a 10x moment in journalism before: when the internet first started, all of a sudden there was an order of magnitude more people who could create and publish articles. And, most importantly, actually reach a sizable audience. The same thing is happening with charts. Just a few years ago, only very few experts worked on data visualizations. The range of what we can do today is amazing: from quick chart production to big one-off visualization projects. Data is gaining importance as a part of reporting – and that’s a good development.

Secondly, I get motivation and joy from seeing people do good work. Julius Tröger from Germany is one example, John Burn-Murdoch from the FT another. Giorgia Lupi and Stefanie Posavec. The amazing work of Gregor Aisch at the NYT. I am aware I am doing an injustice to quite a few more people who are contributing equally. But it’s just to illustrate how I try to keep track of changes: by watching people do good work.

Q5 | We appear to be experiencing (what some have termed) the post-truth era. In your view, what responsibility should data journalism take (if any) for what has happened and what role should it take going forward to address some of the problems we see?

A5 | I think that data and data visualizations can help to get facts straight. But just like all other forms of communication, you can use them to distort, to hide, to twist the narrative. The “New York Times” had this outstanding example of how people with different views and political orientations can look at the same data.

I’m happy we can have a positive impact here: with Datawrapper, we try to make it as hard as possible to take data and create misleading charts with it. For example, it’s not possible to create bar charts with cropped axes. From time to time, users ask us to add this feature, but we never have and we never will.
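As a sketch of what such a guard-rail might look like (my own hypothetical illustration, not Datawrapper’s actual code; the function name is invented), a chart tool can simply refuse to let a bar chart’s value axis start anywhere other than zero:

```python
def bar_axis_range(values):
    """Return (bottom, top) for a bar chart's value axis, always
    anchored at zero so differences can't be exaggerated by cropping.
    Negative bars still extend below zero; the baseline never moves."""
    top = max(max(values), 0)
    bottom = min(min(values), 0)
    return bottom, top

# With a cropped axis starting at 30, bars for 32 and 35 would differ
# in length by 2.5x; anchored at zero, bar length stays proportional
# to the underlying value.
```

The design choice is deliberate friction: because bar charts encode value as length from a baseline, any non-zero baseline breaks the proportionality that makes them readable.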

Q6 | What is the single best piece of advice you have been given, have heard or have formed yourself that you would be keen to pass on to someone getting started in a data journalist or visualisation-related discipline?

A6 | Use data visualizations like a camera. Do not approach every visualization like a big project. Instead, take many “shots” and try to find the best perspective. Often there are many. Eventually you create something new, revealing. This is true for writers, for photographers, for filmmakers and equally for the day-to-day work of visualization.

Six questions with… John Gray

This is the latest in a series of short articles titled ‘six questions with…’. The purpose of this growing collection of interviews is to provide a conveniently sized platform to offer perspectives about data visualisation-related topics from professionals within, around or outside of the field, spanning different industries, backgrounds, roles and standpoints. The first ‘series’ of interviews emerged primarily from research on my book, whereas this new series is much more open in scope so I will be accessing perhaps a more varied set of voices.

This latest interview is with John F. Gray. Based in Vancouver, BC, John, a freelance writer and former West Coast editor of @BetaKit, is the co-founder of Mentionmapp.

Q1 | What was your entry point into the field: From what education/career background did you transition into the world of data visualisation/infographics?

A1 | I trace it back to my “bookwormish” childhood. Our family had limited television programming access, with the rabbit ears often being deaf to the airwaves. National Geographic was my window on the world and the universe (it still is today). The photography and, of course, the infographics & maps always left me with a sense of awe.

Professionally, it was meeting Niel McLaren in 2009. The journey traversed business mentorship, to friendship, to us eventually working together. He’d developed his own proprietary dataviz application and was doing some great personal projects, including the first version of Mentionmapp, as well as custom client work. By 2010, he and my current co-founder (Travis) were working on custom dataviz projects for clients and I was managing the business.

I have to include Manuel Lima’s Visual Complexity (Mentionmapp is on page 153); and of course you, Andy *: @visualisingdata was one of the first Twitter profiles we started following, and we have always found it a treasure trove of thought leadership.

Fair to say it just happened. There was no intent to call data visualization a career. The way it worked out, I couldn’t be happier to have fallen into something I care so much about.

Q2 | We are all influenced by different principles, formed through our education, experience and/or exposure to others in the field – if you had to pick one guiding principle that is uppermost in your thoughts when working on a visualisation what would it be?

A2 | It’s never easy picking just one… I’m guided by a combination of curiosity about the world in general, and having a meaningful consideration of our human condition. It’s a process of grappling with the tensions between what we know versus what we don’t know; and what we see versus what we don’t see.
I think this Edward Tufte quote nails it: “The commonality between science and art is in trying to see profoundly – to develop strategies of seeing and showing.”

Q3 | What is the single best piece of advice you have been given, have heard or have formed yourself that you would be keen to pass on to someone getting started in a data visualisation-related discipline?

A3 | Be intellectually honest. Take care in thinking about how you approach the data and the questions you’re asking. It’s not about what you want the data to tell you, and it shouldn’t simply be an exercise in confirming your own bias.

Q4 | In your view, what is the problem, curiosity or need that draws somebody to use Mentionmapp: Give me your USP or elevator pitch?

A4 | I look at Twitter, Hootsuite and Tweetdeck and see the same thing… a linear feed of information. The problem is that we’re not really wired as linear thinkers. Our relationship with time and space isn’t linear. Mentionmapp presents a non-linear worldview of Twitter. We’re unfolding a network of knowledge, ideas, opinions and conversations as they really are: something that’s fluid, dynamic and organic. I also don’t think software can replace our curiosity. And it’s curiosity that leads to serendipitous moments, which in turn are the spark of creativity and innovation.

There’s so much happening on Twitter right now. It’s like the 5000-channel universe without syndication or re-runs, or the old AP Wire service (on steroids) typing out stories in the pre-internet newsrooms. It’s so easy to blink and miss something, plus it’s still hard to find unique and interesting voices. Mentionmapp’s helping people visually explore the network to discover new connections and conversations. We’re also providing new tools to archive and create reports for this visual conversation layer, in ways that spreadsheets, pie charts, and bar graphs can’t.

Ask the question – “what can I learn from seeing the world through a different lens?” I hope Mentionmapp’s a lens that will help people see social conversations differently. Maybe they’ll feel challenged into asking a few never before asked questions of the world around them.

Q5 | What has been the best demonstration of the tool’s value – things you have created yourself and things you have seen others use it for?

A5 | I’ve seen such a diversity of use cases; it’s one of the reasons I enjoy doing this so much. But spotting this project was a wow moment for sure: (translated) “It works a botnet on Twitter designed to promote a means”. Splicing multiple maps together like this and tracing out a network of spam is impressive and valuable.
However, being included in this presentation at the International Journalism Festival is a significant highlight. First Draft News offers this great summary: 4 verification case studies from #IJF16

Q6 | I know you are a keen learner as both participant and witness to the evolution and growth of the data visualisation field. What are the biggest changes you have seen over the past few years? What are your hopes or expectations for where it goes next?

A6 | I remember our frustrations trying to redevelop Mentionmapp from Flash to HTML in early 2011. It didn’t go too well. We couldn’t achieve the spring physics and that organic flow we wanted. As the non-technical guy, today I look at D3 with wonder, for instance. But try to imagine how new interfaces of the future will change how we see and interact with data. I close my eyes and see scenes from Minority Report or Iron Man and think that the gap between science fiction and science fact isn’t a massive chasm. And considering how VR & AR technologies will impact how we present information is a conversation we need to be having now. Beyond the technology and the tools (which are only as good as the craftsperson using them), it’ll always keep coming back to crafting compelling narratives; the big aim is being able to transcend theory and talk into action.

Header image taken from a screenshot of my Mentionmapp account.

(* Let me assure you that John stated that without any form of coercion or remuneration!)

Talk slides from Bright Talk webinar

Earlier today I had the pleasure of doing a live webinar for Bright Talk. The talk was titled ‘Separating Myth from Truth in Data Visualisation’ during which I dispelled and acknowledged some of the ‘always and nevers, mostlys and rarelys’ that exist in data visualisation design.

The slides are now available, published on SlideShare. (Due to the amount of detailed images included in this deck, as ever, some have been compromised in their appearance.) Those who registered for the webinar will receive a link to a recording of the session.

(If you prefer to watch the slides being presented with my audio from today’s session, you can sign up and get free access here).

Talk slides from second Tableau 2016 webinar

Earlier today I had the pleasure of being invited back by Tableau to deliver a second webinar of this year. The talk was titled ‘Bringing Method to the Madness’ during which I discussed the aims of my new book and profiled the design process behind my recent visualisation project called Filmographics.

The slides are now available, published on SlideShare. (Due to the amount of detailed images included in this deck, it looks like some have unfortunately been compromised in definition terms.) Those who registered for the webinar will receive a link from Tableau to a recording of the session.

Andy Kirk's Webinar for Tableau (July 2016) from Andy Kirk

Six questions with… Santiago Ortiz

In order to sprinkle some star dust into the contents of my book I’ve been doing a few interviews with various professionals from data visualisation and related fields. These people span the spectrum of industries, backgrounds, roles and perspectives. I gave each interviewee a selection of questions from which to choose six to respond to. This latest interview is with Santiago Ortiz, Head at Moebio Labs. Thank you, Santiago!

Q1 | What was your entry point into the field: From what education/career background did you transition into the world of data visualisation/infographics?

A1 | As a kid (at least since I was 5) I had a deep interest in maps. I tried to create my own maps of imagined places. My parents report I said “I want to be a cartographer” when I was 5. Also as a kid, but later, I started coding. Like many, I developed my own simple video games, but I also played with graphs driven by simple algorithms. Later, fractals and cellular automata blew my mind, and then genetic algorithms. I studied mathematics while continuing to use code as a creative tool. My math thesis was a model for evolution and genetic algorithms. I have to say that at that time I kind of hated applied maths, so statistics didn’t interest me in the least. That came later.

In 1999 I co-founded a web design company in Colombia, called Moebio. Although websites tended to be very conventional, I continued being interested in the creative capabilities of the digital medium, especially the hyper-connected medium of the internet. Soon, I started using data to drive visual outcomes. In 2003, while living in Spain and working in collaboration with an awesome protein laboratory in Madrid, run by Alfonso Valencia, which was doing pioneering research on data crawling and data crowdsourcing for molecular biology data, we created Gnom, the first serious (although pretty much experimental) data visualization project I was involved in.

In 2005 I co-founded Bestiario, a company devoted to interactive data visualization (I left the company in 2012). Now I lead Moebio Labs, a small data consultancy team. We combine interactive data visualization with data science.

Q2 | What is the single best piece of advice you have been given, have heard or have formed yourself that you would be keen to pass on to someone getting started in a data visualisation/infographics-related discipline?

A2 | You should pursue a career in data visualization only if you’re more interested in what you visualize than in data visualization itself. As a corollary, for each datavis book you read, you should read nine others about a variety of other subjects such as psychology, economics, mathematics, genetics, sociology, statistics… (my personal rate is actually 1/30).

Q3 | When you begin working on a visualisation task/project, typically, what is the first thing you do?

A3 | I do two things in parallel: I explore the data focusing on structure, with a domain-agnostic approach; and I also talk with the client a lot. I ask lots of questions; I try to understand the client’s landscape of pains and opportunities, their expectations towards the data and project outcomes, and the client’s general major problems and challenges.

Q4 | At the start of a design process we are often consumed by different ideas and mental concepts about what a project ‘could’ look like. How do you maintain the discipline to recognise when a concept is not fit for purpose (for the data, analysis or subject you are ultimately pursuing)? How should one balance experimentation with the pragmatism of what is a data driven and often statistically influenced process?

A4 | “At the start of a design process we are often consumed by different ideas and mental concepts about what a project ‘could’ look like”… Actually I don’t let that happen. As mentioned before, I focus on structural exploration on one side and on the reality and the landscape of opportunities on the other. Then we start playing (it’s usually me, a data scientist and a visualization designer+developer), building fast prototypes (we create our own tools for that). By combining and exploring options, forking, pivoting, trying… we end up with good candidates, which we execute and wrap into something we can send to the client. That’s the result of the first iteration. Then come lots of conversations with the client and then the tests… tests by us and by the client. We talk again; we take into consideration the client’s feedback, guided by questions. We find what is useful in the first iteration, what is really starting to give some insights to the client, what helps them make faster and better decisions, etc. The next iteration will then expand what works, remove what doesn’t, and add more experimental approaches to be tested. Typically by the fourth iteration or so the client has a set of tools, some training and gained knowledge that allows them to take real advantage of the data. So, again, we try not to impose any early ideas of what the result will look like, because that will emerge from the process. In a nutshell, we first activate data curiosity and client curiosity, and then visual imagination in parallel with experimentation.

Q5 | Beyond the world of infographics/visualisation what other disciplines/subject areas/hobbies/interests do you feel introduce valuable new ingredients and inspire ongoing refinement of your techniques?

A5 | These are some other fields of knowledge that we consider very important, because they provide ideas and tools for the stuff we build, or because they are typical sources of information we might end up analyzing and visualizing… in most cases it’s both! We also often have to deal with experts in those fields.

Of course none of us has deep knowledge of all those fields. We are dilettantes, but we all spend a great deal of time studying and reading.

Q6 | In your experience, what has proven to be the most valuable approach to evaluating your work (post completion), or what methods have you seen others take that you felt were especially smart? There will always be a balance between effort and reward but I am very keen to learn of any specific effective tactics.

A6 | Evaluation of work is key for us, as previously explained. Evaluation means improving; it’s a constructive concept, not a passive one. Evaluation is also, in our case, a collaborative process, something we do along with the client. We use concepts and methodologies borrowed from data science. We test against reality; we lean towards real return. Although the perceptual, psychological and usability aspects are certainly important and are also assessed, we don’t share the general academic approach to evaluation, which tends to focus on those reductive aspects. We have a holistic approach in which being able to read or memorize values from a chart is really not so important; instead we aim for sense-making, insight, complexity of tasks, the capability to inform decision-making, the capability to provide vision, and, in certain projects that might contain more sophisticated analysis, including prediction, the accuracy and value of specific answers.

Header image taken from Santiago’s portfolio of incredible work.