Lazy pun, I know, but it’s hard not to be lethargic after the fantastic weather we’ve had here in England. According to records it is the first consecutive Friday, Saturday and Sunday sequence where we have had nice weather since 1836. That’s not entirely true but it feels like it.
Anyway, whilst this post is going nowhere, I am going somewhere, for a holiday to be precise. So there will be no updates for the next 10 days or so, during which time the only information I’ll be bothered about will be…
Having picked up a recent blog post by Stephen Few about a graph that had been promoted as an example of good practice by Oracle’s BI suite, I was searching around for reports on BI vendors and came across this decision matrix report by DataMonitor from 2007. The report I unearthed was on the SAS site, presumably because SAS came out very well in the recommendations.
My reason for sharing this is simply that it has ironic connotations similar to another report I came across: it attempts to make a credible assessment of the BI industry but entirely undermines this (in my eyes at least) through its ineffective use of graphs throughout the report.
The first graph to comment on has one key flaw – its integrity is undermined by axis labels that do not start at zero (even though the label says scale 0-10), which creates a skewed view of the plotted proximity of the various BI vendors.
The next graph type (of which there are three) is simply a shocker. It is a multi-variate radar graph plotting assessments of the 10 vendors across 12 different characteristics. Once again the scales don’t begin from zero. The more effective alternative would have been to plot small multiples of bar charts for each vendor displaying their scores against each characteristic. Jon Peltier gave this graph similar treatment.
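To make the small-multiples suggestion concrete, here is a minimal matplotlib sketch – the vendor names and scores are invented placeholders, not figures from the report, and I've used five characteristics rather than twelve to keep it short:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Placeholder data: 4 hypothetical vendors scored 0-10 on 5 characteristics
characteristics = ["Ease of use", "Scalability", "Support", "Price", "Features"]
vendors = {
    "Vendor A": [7, 5, 8, 6, 7],
    "Vendor B": [6, 8, 5, 7, 6],
    "Vendor C": [9, 6, 7, 5, 8],
    "Vendor D": [5, 7, 6, 8, 5],
}

# One small bar chart per vendor, all sharing a common 0-10 scale
fig, axes = plt.subplots(1, len(vendors), figsize=(12, 3), sharey=True)
for ax, (name, scores) in zip(axes, vendors.items()):
    ax.bar(range(len(characteristics)), scores)
    ax.set_title(name)
    ax.set_ylim(0, 10)  # the scale starts at zero, unlike the radar charts
    ax.set_xticks(range(len(characteristics)))
    ax.set_xticklabels(characteristics, rotation=45, ha="right")
fig.tight_layout()
fig.savefig("small_multiples.png")
```

Because every panel shares the same zero-based axis, comparing one vendor's profile against another is a matter of scanning across, rather than mentally untangling overlapping radar polygons.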
The final set of graph types – there are 12 of these – once again plots measurements of various activities, against various categories, referring to various types… I would make a greater attempt to accurately describe the purpose of these but, as I say, I got Radar’d to death.
I’ve been sent a link to an interesting new service offered by 3M (who despite apparently having a portfolio of 55,000 products I still think of as the Post-it Note company). It is called the Visual Attention Service and is pitched around the tag line “You’ve got seconds to engage a viewer. Be sure they count”:
As a designer, your challenge is to maximize visual impact within a visually cluttered world. That’s why we built 3M Visual Attention Service (VAS). 3M VAS uses science-based modeling to help you increase the likelihood people will notice key design elements during the first 3–5 seconds of exposure.
The idea is that, for a certain cost per image (ranging from $20 to $11 depending on bulk), you can upload your design, mark out the important elements you want analysed and then run the VAS analysis. This employs visual algorithms to present an assessment of the probability that viewers will notice those elements within the first 3-5 seconds. It will also indicate other visual properties that catch the eye, regardless of whether you indicated these as key elements.
The output is initially presented using graded, coloured boxes around the prominent properties indicating the probability levels. As you then approach a decision about redesigning the image you can view a heatmap representation to gain further insight into where the viewer is likely to focus their attention.
Here is an example from the 3M site showing an original design on the left, the initial probability analysis and the corresponding heatmap, and then the revised version and the resulting improvements in the viewing patterns.
I have a suspicion that the design flaws used in their gallery examples are so obvious that they have been reverse engineered to present the service in the most positive light. I have signed up for a trial of the service to see how some of my own images might be assessed through this system, but the emailed account activation has yet to arrive, an hour after I completed the form. If it does ever arrive I will give it a go and publish the results.
In summary, it is easy to default to a cynical position about such tools, perhaps instantly passing something like this off as nothing more than a gimmick. However, I do think VAS offers a useful and accessible service, especially for those customers who might not have access to the expertise or design skills necessary to get it right first time. I would expect, however, that trained designers would rarely create the kinds of design shortcomings this service helps to identify.
Not sure how and why I’ve missed this publication for so long but today I’ve come across Information Age magazine having unearthed links to a couple of very interesting articles which I recommend you read.
Firstly, one published today entitled “The Analytical Eye” talks about the as-yet untapped potential of data visualisation across business. It contains discussions with Stephen Few, Jock Mackinlay and Pat Hanrahan so you can be sure it is a sound and well-informed read. Here are a couple of excerpts:
Tools that exploit the human visual system’s ability to process information can help unlock workers’ untapped powers of analysis
…while histograms, line charts and scatter plots are to be found everywhere in working life, the full potential of the visual system not only to understand but also to analyse information is still underused by business information systems, from desktop spreadsheet software to million-pound business intelligence architectures.
Secondly, “The Shape of Things to Come” describes Gartner’s latest focus which involves using pattern seeking approaches to support proactive strategic decisions:
There is, perhaps unsurprisingly, a long list of technologies that Gartner claims can help organisations achieve and execute a pattern-based strategy. “Visualisation techniques enable humans to recognise patterns, because pattern recognition is very visual in nature,” explains Genovese. Also on the list is social media analysis, which allows businesses to use patterns in customer sentiment as a lead indicator for changes in commercial activity, and predictive analytics.
Via The Big Picture website I’ve come across a New York Times graphic sequence explaining the national debt levels across European states. The sequence presents seven different graphics that build a story of the debt crisis across the continent and I want to comment on a couple of aspects of these displays which I feel undermine its effectiveness.
My first issue relates to the colour schemes used to encode the value ranges. As you can see in the graphic below the coloured value ranges begin with a dark green and end with a shade of red. Whilst I understand the implication of these choices (green = good, red = bad) the resulting display, in my opinion, doesn’t efficiently help the viewer pick out the true message of debt/GDP % levels across the countries.
This is because the visual representation of the lower value levels (in dark green and the murky pea-soup colour) actually creates a stronger emphasis than the higher debt/GDP % level band 60-75%, which has a much more subtle, pale sandstone colour. This creates an instinctive impression that the 60-75% range is somehow of least importance and lowest value when in fact it is neither, and causes an unnecessary amount of cross-referencing with the legend key. A more effective approach would have been to use a single colour with decreasing shade bands for each value range. This would enable the viewer to more naturally pick out the prominence of each country in relation to the size of their debt/GDP %.
A further observation about this matter concerns the inconsistency of colour ranges used in the graphics across this series. This creates an unnecessary amount of re-familiarisation with each graphic display and associated legend key and so interrupts the process of learning at each stage.
My other issue concerns the attempt to present the countries in revised shapes and sizes according to their GDP, with the colour relating to the budget deficit share (example below). Put simply, I just don’t think this works well at all. It reminds me of playing Daley Thompson’s Decathlon on the Commodore 64 in the 80’s. It looks clunky and is very awkward to read. For this presentation I would have dropped the attempt to sustain a geographical relationship and instead focused just on the two variables of GDP size and budget deficit %.
The trickle of government bodies and large-scale organisations freeing up their data for transparency, scrutiny and creative exploration is quickly turning into something of a flow.
The FutureGuv website has published details of the latest large institution to follow the open data initiative and make its data freely and publicly accessible. The World Bank has created a portal which will enable “policymakers to access more than 2000 financial, business, health, economic and human development statistics, information that was previously exclusive to paying subscribers”. Access to the data-sets can be found here.
“The real power of open data is the opportunity to turn data into knowledge and useful applications to enhance transparency and accountability of all actors in development. Free and open access to data will empower citizens to get more directly involved in the development process.” Aleem Walji, the World Bank Institute’s manager of new Innovation Practice
The article also explains how, as well as making the data available, the World Bank portal also intends to offer a platform on which developers will be able to create visualisation tools, helping to make the data more accessible and useful insights easier to extract.
Once again this presents high-profile evidence of the prominence and importance of the emerging visualisation field as the world’s data becomes more and more accessible.
Here are some of the most relevant, interesting and useful articles I’ve come across during April 2010. I don’t necessarily agree with all the principles, opinions or advice presented in these links but sometimes consuming such information can only help enhance your knowledge on a subject:
Peltier Tech Blog | Guest post on Jon Peltier’s site by Naomi B Robins commenting on dot plots | Link
Smashing Magazine | An article observing how the fundamental skills and craft of design have started to take a back seat | Link
Visual Business Intelligence | Stephen Few talks about how ‘doing the unprecedented’ is overrated | Link
Flowing Data | A data visualisation tutorial in Processing | Link
Eager Eyes | Robert’s passionate article about ‘cargo cults’ of visualisation | Link
Juice Analytics | Useful article which goes into detail about the advantages of small multiples in visualisation | Link
Smashing Magazine | Providing five tips for making ideas happen | Link
Eager Eyes | Considering whether chart junk is useful after all? | Link
Seeper | Video promoting Iron Man 2 using architectural projection mapping of AC/DC on to a 900 year old castle (click on first project image) | Link
Juice Analytics | Details the benefits of parallel coordinates | Link
Site Sketch 101 | 5 tips for becoming a better blogger | Link
Information Aesthetics | An illustrated animation focusing on remarkable geometrical and mathematical properties | Link
Flowing Data | Infochimps release a huge amount of Twitter data | Link
Ted Talks | Video of George Whitesides’ Ted2010 talk entitled ‘Toward a science of simplicity’ | Link
Visual Business Intelligence | Stephen Few asks Oracle if they ‘have no shame’ | Link
Information Aesthetics | With echoes of the infamous ‘Chernoff Faces’ work from the 70’s, this article details a project aimed at mapping environmental impact as humanoids | Link
VisWeek 2010 | Details of the VisWeek event in Salt Lake City during October | Link
BBC | How volcano chaos unfolded in graphics | Link
Online Journalism Blog | Data journalism part 4: visualising data | Link
Drawar | The Differences Between Good Designers and Great Designers | Link
Guardian Election 2010 | National carbon calculator: Can you cut UK emissions? | Link
Guardian Election 2010 | The Datablog editor on how number crunchers are driving this election debate | Link
Health Services Journal (from Nov 2009) | An article about demystifying NHS data published in November 2009 in the Health Service Journal (you can subscribe for ‘today’ to access the content) | Link
Ushahidi | Ushahidi allows anyone to gather distributed data via SMS, email or web and visualize it on a map or timeline | Link
VisitMix (from Mar 2009) | The 7 1/2 steps to successful infographics | Link
I’ve come across an infographic today (via cool infographics) that was originally published in February on Focus.com, a business expertise exchange and research service. I’ve shown the full length of the graphic below, a larger version can be accessed here.
In this post I wanted to discuss my reaction to this graphic and offer an assessment of its design. Unfortunately, I think it contains many flaws but I hope this item will present criticism alongside justification – after all, nobody sets out to produce bad visualisations. It can be a very tricky challenge trying to ensure criticism is fair, constructive and does not step into becoming insulting. It is simply about sharing suggestions for better practice.
Dimensions – my first comment relates to the dimensions of the artwork. Many infographics do tend to be large pieces which grow beyond the normal confines of A-series paper sizes. It is nice to consume information like this in different shapes and sizes but my feeling is that this one is unnecessarily long and thin when it could have been presented using a rectangular layout. It also makes it that bit harder to consume on a screen, which is likely to be the logical, native platform for this piece.
Huge title &amp; dots explanation – the first thing to grab you is the enormous title and explanation for what the dots represent. If it was to be printed and presented as a poster I could understand the need for the large-sized title font, but on the screen it just delays the process of getting to the actual information. The explanation key takes up so much space, should not be placed in such a prominent location and, fundamentally, is unnecessary because the dots are self-explanatory. At most this should be a footnote item.
Mixed font – throughout the infographic the font use is inconsistently employed in terms of size, colour and case. This makes it unnecessarily hard to automatically pick out the sections and topic changes as we read through the “story”.
Colours – the design is significantly hindered by the use of a black background, which restricts the potential colour palette that could be deployed to effectively visualise the information. For example, in the first image below showing the % of women using the Internet, against a black background the white dots are just as prominent to the eye as the pink dots, when the emphasis should be on the value not the remainder. In the second image we see a statistic about the % of broadband access by income bracket, and the value is represented by brown dots which are less prominent to the eye than the white remainder dots. As shown in the third image, with a white background the possibilities of presenting appropriate emphasis are greater, combining strong colours in contrast with a subtle colour scheme (such as softer greys or pastels). These choices help the visual system pick out the most important elements of the design and so display the data more efficiently.
Dots – the use of 100 symbols in a 10×10 grid, such as dots, to portray proportions is proving to be a popular design alternative to the more common pie chart or bar graph. I don’t necessarily dislike these designs in principle; however, there are two main visual problems with using them in certain ways. Firstly, as you can see from the dot diagrams above, the matrix of dots has a visually draining ‘noise’ effect created by the shaped gaps in between each dot. Secondly, they occupy a fairly sizable space given they only encode a single value. To overcome this I would normally prefer to see a simple horizontal bar which can then be presented in line with other values to aid comparison. However, if the 10×10 grid approach is to be used, ideally the shapes would be presented in a way that absolutely minimises the space in between each, preferably using squares that tessellate better than circles, as suggested in the compact fourth image above.
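The space argument is easy to demonstrate even in plain text: a horizontal bar encodes the same single percentage in one line that a 10×10 dot grid spreads over ten. A quick sketch, using invented placeholder values rather than the infographic's figures:

```python
def text_bar(label, pct, width=50):
    """Render a percentage as one horizontal bar of fixed total width.
    Occupies a single line, versus the ~10 lines of a 10x10 dot grid."""
    filled = round(pct / 100 * width)
    return f"{label:<12}|{'#' * filled}{'.' * (width - filled)}| {pct}%"

# Hypothetical values, for illustration only
lines = [text_bar("Women", 74), text_bar("Men", 71)]
print("\n".join(lines))
```

Stacking the bars on a common baseline like this also makes the comparison between the two values immediate, which the separated dot grids never achieve.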
Data density – building on the comments about the space occupied by each dot diagram, their usage means the data density of the infographic is very low – a lot of space is taken up, a lot of graphical content is applied to present very few data items. The first 25% of space at the top of the graphic only presents two facts about the % of each gender who use the Internet.
Age groups using the Internet – there are some technical issues with this row of facts, such as the inconsistent label font sizes (they seem to imply something of significance – what, I don’t know) and the varied spacing between each graphic which is clumsy. However, the biggest fault here lies with the missing row of white dots (representing 91-100) for the 50-64 group.
How often do people use the Internet at home – these graphs are no better than pie charts and in fact are even less useful, with the eye having to rapidly reference between the graph, the colour coding and the legend key labels (which incidentally present the figure anyway). Furthermore, given that they are trying to present a comparison between ‘today’ and ‘June 2005’, it would have been far more useful to use a graph type that facilitates such comparison much more clearly.
People with desktops/notebooks – after the ubiquitous use of dot diagrams, how come all of a sudden we revert back to pie charts?
Countries with most Internet penetration – this is doomed by the decision to use massive labels listing the top 5 countries whilst squeezing the world map the labels point towards into a reduced area. This means the destination of the arrows cannot be clearly seen, especially with 4 out of the top 5 being northern European countries. Could we not have also seen some numbers to get a feel for the relative levels of each country?
Average broadband speeds – this bar graph would be fine if it was properly labelled and flipped onto its side so the columns were horizontal (facilitating clearer labelling). However, the labels that have been used are horrendous and simply hijack the potential merit of the graph.
How fast is mobile broadband – in isolation, this is the least interesting piece of data on the graphic, comparing the speed of three anonymous providers. It would have been more useful to convert the units to Mbps (like the previous graph) and plot it against the standard broadband speeds already presented.
Lack of coherence – infographics are most powerful when the reader is engaged in a sequence of insights that build a logical and incremental story that helps to increase the understanding of a subject area. In this infographic we really only have a random collection of Internet user demographic facts. To my mind these are only snippets of information and offer a very narrow glimpse into the apparent purpose of the graphic’s message, which is to present the state of the Internet.
It is always far easier to pull apart and identify problems with other people’s work once it has been created. It is the classic “I don’t know what I want until I see what I don’t want”. Furthermore, it is never satisfactory to present criticisms without providing actual examples of alternatives – had I more time I would have made the effort to back up my observations with a re-worked example.
What I’ve tried to do here, at least, is to provide a fairly detailed justification of my criticisms about the infographic, rather than just offer an empty, unsubstantiated insult which can typically be found within comments posted about such examples.
The Guardian has put together a photo gallery showing shots of some of the election night TV coverage through the years right up to some of the technology likely to be on show tonight.
The coverage choice for UK viewers this evening and throughout most of tomorrow will be between BBC1, ITV, Channel 4 and Sky. Sky TV are launching the UK’s first High Definition news channel and will most likely have an unprecedented number of correspondents reporting from all around the country. However, with the Beeb having a £10M budget for their hi-tech coverage there should be a great deal of visualisation fun for viewers to digest, particularly in the shape of Jeremy Vine doing his virtual green-screen Gollum-like routine.
There does, however, appear to be an air of caution with some broadcasters with regards to the more imaginative presentation approaches:
All the broadcasters stress there will be fewer gimmicks this election – apparently the lessons have been learnt from Vine’s ill-advised comedy cowboy routine during the US presidential election in 2008 – and the emphasis is on clarity.
“There’s almost an arms race about how you cover election programmes. People are not knocked out any more if they’ve seen things like Avatar. I think what people want is clarity,” said Craig Oliver, the BBC’s election night editor.
As Oscar Wilde never said, ‘there’s a fine line between ill-advised and comedy genius’…
Visitors to the site will notice a new collection of icons and links on the right sidebar that connect to the Visualising Data social web accounts and channels. These provide a range of different platforms to digest and access the opinion, information, examples and resources that I publish on this site.
The RSS feeds have been available since the launch of the site in February and the Facebook page has been around since March – there is much more I need to do to enhance this page over the coming weeks.
Today I have launched a Twitter account (username visualisingdata) through which I will be synchronising feeds, updates and connections to other visualisation-related tweets.
In the coming weeks I will be expanding my social web portfolio further to extend the opportunities for readers to consume this site’s offerings.