VisWeek updates by Jérôme Cukier: Day 5

The IEEE VisWeek Conference 2011 is taking place in Providence, RI this week (23rd to 28th). VisWeek 2011 is the premier forum for visualization advances for academia, government, and industry, bringing together researchers and practitioners with a shared interest in tools, techniques, technology and theory.

The week is organized around three separate conferences: IEEE Visualization 2011, the venue for all visualization research on data that has an intrinsic spatial component; IEEE Information Visualization 2011, focused on research relating to visual mappings of non-spatial data and interaction techniques; and IEEE Visual Analytics Science and Technology 2011, which concerns the reasoning processes involved in visual analysis and the application of visual environments to generate useful insight about real-world problems.

I’m disappointed to not be able to attend the event this week but am delighted that Jérôme Cukier has very kindly agreed to provide updates of his discoveries, reactions and experiences. I’m particularly pleased to provide a platform for Jérôme’s updates because I consider him to be one of the most astute and thoughtful observers within the visualisation field.

 

Day Five – Thursday 27th October

Remarkable Presentations

Georgia Albuquerque and colleagues from TU Braunschweig presented an incredible tool in a paper called “Synthetic Generation of High-Dimensional Datasets”. The tool and the accompanying paper are available here. Simply put, it lets a user “paint” a dataset with a few mouse strokes. Suppose you want a dataset following a certain pattern, or with certain correlations: with this interface you can easily generate data that approaches the form you want. This is plotting data… but in reverse.
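
To make the idea concrete, here is a minimal sketch, assuming nothing about the authors’ actual implementation: rather than painting with mouse strokes, it synthesizes points whose pairwise correlations approach a specified target matrix, which is the kind of “reverse plotting” the tool automates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical target: a 3-dimensional dataset with these pairwise correlations.
target = np.array([
    [ 1.0, 0.8, -0.3],
    [ 0.8, 1.0,  0.1],
    [-0.3, 0.1,  1.0],
])

# The Cholesky factor turns independent normal samples into correlated ones.
L = np.linalg.cholesky(target)
points = rng.standard_normal((1000, 3)) @ L.T

# The synthesized data's correlations approach the specified target.
print(np.corrcoef(points, rowvar=False).round(2))
```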

In the applications section there were two papers that I really liked. The first, ‘BallotMaps: Detecting Name Bias in Alphabetically Ordered Ballot Papers’, struck me as an example of how visualization techniques can contribute actively to UK democracy. The authors, led by Jo Wood, wanted to see whether the order in which names appear on a voting ballot has an influence on voters’ choices. Their findings confirmed this, and also that candidates with non-English/Celtic names garnered fewer votes than expected.

The image above shows one of the many visuals of the BallotMaps research. If there were no bias, green and purple squares would be spread evenly across the screen. The fact that the darker squares sit towards the top means that names appearing towards the top of the ballot are favored. The presentation of the study was aesthetically pleasing and convincing, and with these results the researchers have approached the British Electoral Commission with good hopes that its practices will change.

Speaking of aesthetics, Danielle Albers presented Sequence Surveyor, a very impressive tool to explore and compare genomes. I remember seeing an earlier version as a poster last year that struck me as a balance of design and function. And for those of us who are not obsessed with genetics, the last slides suggested the tool could be used for literature analysis or to study datasets like the Google n-grams.

I also really liked two of the papers in the Evaluation track, and for similar reasons. By this I mean that I’m not necessarily talking about the most innovative techniques, or the ones likely to be most cited – judging that would be way beyond my capacities. Rather, what interests me at VisWeek in general are techniques that relate to the real world, either because they try to tackle a practical problem, or because they could easily be adapted to real-world use.

This is why I was very impressed by Michelle Borkin’s presentation on “Evaluation of Artery Visualizations for Heart Disease Diagnosis”. The technique presented was a new way to detect heart disease.

In fact, there were three remarkable things in the presentation. First, the subject and motivation: heart disease is the leading cause of mortality in all western countries, so anything that can help here has tremendous potential. The presentation style was also one of the best of the conference; while the subject was technical, Michelle made her presentation very accessible, and I think everybody understood the problem, the contribution of the technique and its impact. And of course, the results were incredible. With state-of-the-art tools, doctors are able to detect 39% of the zones affected by disease. With the new technique, the rate jumps to 91%! On top of this, the paper is emblematic of the VisWeek discussions, as one key change was the move away from the rainbow colormap, universally decried by the Vis community.
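
As a side note, here is a minimal matplotlib sketch (my own illustration, not material from Borkin’s study) of why the rainbow colormap draws such criticism: rendered with ‘jet’, a perfectly smooth field appears to contain sharp boundaries, while a perceptually ordered map such as ‘viridis’ shows the gradient faithfully.

```python
import numpy as np
import matplotlib.pyplot as plt

# Render the same smooth scalar field with both colormaps.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
field = np.exp(-(x**2 + y**2) / 4)  # smooth, no real boundaries

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, cmap in zip(axes, ["jet", "viridis"]):
    im = ax.imshow(field, cmap=cmap)
    ax.set_title(cmap)  # 'jet' shows spurious bands; 'viridis' does not
    fig.colorbar(im, ax=ax)
plt.show()
```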

The following presentation was from Lyn Bartram, on Ambient and Artistic Visualization for Residential Energy Use Feedback – in other words, on devices that show power consumption, with a catch. Those in the study were unobtrusive, designed not to stick out in the house, and artistic, so more focused on aesthetics than on conveying exact information. The image below shows one example of an ambient power consumption display, where the more waves on the backsplash, the more power the house is using.

In fact, in none of the displays presented could users read data directly. But they could tell whether there was a power draw, and whether their current activity was causing high consumption.

The Capstone Talk

We were very fortunate to welcome Amanda Cox to close this amazing week. Amanda is a very cool choice for a capstone. For one, everybody knew her work. Also, while what she does is closely related to VisWeek-style visualization, it requires very different qualities. Visualization for storytelling is not the main focus of InfoVis research; and conversely, that research isn’t always constrained by strict deadlines or design. Yet she and the researchers have a lot of concepts in common.

With that in mind, she helped us understand how she works. The one point she really wanted to carry across is that, as journalists, they don’t just show a bunch of data; they first find an angle, which they then illustrate with data. In her words: what if the front page of the Times read “Here are some words. We hope you’ll find some of them interesting”? Instead, articles have carefully crafted, engaging titles.

To draw the user in, the NYT graphics team relies on two main techniques. First, they don’t try to show every data point they have (more on that shortly). Second, they rely on the annotation layer – and interestingly, I notice that virtually none of the VisWeek applications have any form of annotation layer. By this she means notes and text that appear close to the actionable parts of the visualization. One common framework used in NYT visualizations is the “stepper”, their “interactive slides” system: you often see, in one corner of their designs, consecutive numbers next to one another with a “next” button, so the user can make their way through the explanation or go back to any previous point. This stepper model allows the team to provide new explanations with each slide.

Since they are so comfortable with annotations, the NYT team is OK with using unique, non-recyclable visualizations. I really appreciated what she confirmed when answering questions: that they don’t worry much about whether a visualization is a novel form most readers have never seen, or a very traditional one. Amanda claims that when they use innovative (but annotated) forms they do lose some readers, but “the net joy remains positive”. They are very mindful of finding clever, relevant ways to represent the data, rather than defaulting to either familiar templates or bleeding-edge forms.

Finally, she insisted that design is hierarchy, that is, knowing what is important and what is not as important. This goes beyond the choice not to show every data point, which we alluded to earlier; it’s also about what to highlight and what to play down. Mastering this balance is critical to achieving the signature NYT style.

Google+ launches ‘Ripples’ visualisation

After what appears to have been a rapid and widespread take-up of its Google+ social media service, Google has now announced its ‘Ripples’ feature. Significantly, news of the release came from Fernanda Viegas, one of the most prominent names in the field and co-leader, with Martin Wattenberg, of Google’s “Big Picture” data visualization group. (Update: the Ripples development team also included Jack Hebert).

The visualisation plots a network diagram to display how a post spreads as users re-share it on Google+. The direction of the sharing activity is indicated through arrows. Clustered circles show the spread of the original post within and amongst each subsequent user’s own circle of friends and followers.

A great feature is the timeline device which allows you to watch the post grow organically from its originator and spread over time, with new clusters of users’ sharing activity emerging to demonstrate the speed and popularity of the post. The visualisation comes with complete interactivity, enabling you to pan and zoom within the network and explore the ‘journey’ of the post in close-up. Summary details at the bottom draw out the headline statistics of key influencers, describe the post’s ‘chain length’ and plot the proportion of different user languages via a rather ineffective pie chart.
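
For the curious, the ‘chain length’ statistic is easy to picture with a toy model. This sketch is my own illustration, not Google’s implementation: share events form a tree rooted at the original post, and chain length is the longest run of successive re-shares.

```python
# Toy share log: each event records who re-shared a post from whom.
shares = [
    ("origin", "ann"), ("origin", "ben"),
    ("ann", "cho"), ("cho", "dee"),  # a chain of re-shares
]

children = {}
for src, dst in shares:
    children.setdefault(src, []).append(dst)

def chain_length(user: str) -> int:
    """Longest re-share chain starting at `user` (the 'chain length' stat)."""
    return 1 + max((chain_length(c) for c in children.get(user, [])), default=0)

print(chain_length("origin"))  # 4: origin -> ann -> cho -> dee
```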

You can see more in the YouTube video below:

VisWeek updates by Jérôme Cukier: Day 4

The IEEE VisWeek Conference 2011 is taking place in Providence, RI this week (23rd to 28th). VisWeek 2011 is the premier forum for visualization advances for academia, government, and industry, bringing together researchers and practitioners with a shared interest in tools, techniques, technology and theory.

The week is organized around three separate conferences: IEEE Visualization 2011, the venue for all visualization research on data that has an intrinsic spatial component; IEEE Information Visualization 2011, focused on research relating to visual mappings of non-spatial data and interaction techniques; and IEEE Visual Analytics Science and Technology 2011, which concerns the reasoning processes involved in visual analysis and the application of visual environments to generate useful insight about real-world problems.

I’m disappointed to not be able to attend the event this week but am delighted that Jérôme Cukier has very kindly agreed to provide updates of his discoveries, reactions and experiences. I’m particularly pleased to provide a platform for Jérôme’s updates because I consider him to be one of the most astute and thoughtful observers within the visualisation field.

 

Day Four – Wednesday 26th October

Today the InfoVis part of VisWeek started, and with a fanfare. The first session of the day, Theory and Foundation of InfoVis, put together by Jeff Heer, had two presentations which would easily rank among the five most interesting I have seen in my four years of VisWeek, both by the same person: Jessica Hullman. A lot of what we see at VisWeek is built with a very specific problem in mind. It so happens that sometimes we find a more general use for a technique (for instance, treemaps – and more on that in a moment) but most of the time this is not the point, and we, non-scientists, have little chance of seeing a direct application. On top of that I feel a divide, probably unintentional, between researchers on one side and practitioners and designers on the other. Discuss David McCandless’s work with an InfoVis researcher, for instance, and you will not get a nuanced reaction. Still, designers need research: they need people who push the envelope of representation, and they need to have their intuitions conceptualized and critiqued so they can move forward. In that respect, Jessica’s work is a very important attempt to bridge this gap in a positive and constructive way.

The first presentation, “Benefiting InfoVis with Visual Difficulties”, challenged the idea that the best representations are always the most immediate to understand. This is, however, the dominant view: do not use pie charts because comparing angles is more difficult than comparing lengths or areas; do not add “chart junk”; make your chart as fast to process as possible. The fancy term for that is making it as cognitively efficient as possible. It so happens, however, that these stumbling blocks have positive effects which are undervalued: the associated charts may be slower to read, but they are easier to understand or to remember. There have been papers on the subject before, such as “Useful Junk?“ or “Fortune favors the bold (and the italicized)“. What I liked about Jessica’s approach is that it led to usable, actionable points.

The second paper, with Nick Diakopoulos, is about “visualization rhetoric”. De-academicized, this means using visualization to convince or persuade. I have always thought that this was an area where visualization had great potential, and again it contrasts with the view that visualization should be used to give an “honest”, “truthful” representation of data, “like it is”. Again, I feel this is the most common view here. The paper looks at minute aspects of visualizations and describes how they can influence the way users form opinions. Then again, it’s only hugely relevant to relatively few people: politicians, advocacy groups, journalists, communication and advertising agencies, or those businesses that try to sell things.

I won’t go into much detail about the next session of the morning, on InfoVis techniques, chaired by Jason Dykes. I am confident, however, that such techniques, especially those of the first and last presentations, will find their way into consumer-facing visualizations soon. The first one, context-preserving visual links, is a mouthful, and the thinking behind it and the implementation are quite sophisticated, but it is one of those things that looks brilliantly simple when you see it. The last technique, asymmetric relations in longitudinal social networks, doesn’t have that immediacy; it took me the whole talk to get my eyes attuned to it, but I was sitting with Kim Rees from Periscopic, who really liked it and whose expert eye could see how it could be touched up and put to good use.

In one of the afternoon sessions, Mike Bostock presented d3. I gather that most people interested in building visualizations know d3 and have probably seen the slides of this presentation, which have made the rounds on Twitter. Well, there’s more to the presentation than the slides, and Mike’s explanation put it in context nicely and justified the motivation to break from Protovis to build d3. So no, Mike didn’t commit to d3 never being replaced by a new framework somewhere down the road, although it’s fair to say that d3 has much more room to grow than Protovis, so that won’t happen anytime soon. Mike argued, though, that he felt d3 is easier than Protovis. Offline, he agreed that you could take that with a grain of salt. The d3 learning curve starts off steep, but I agree with him that once you get over the initial difficulties it is more efficient to make things with d3 than with Protovis. The good news is that a Mike Bostock book on d3 is coming.

I can’t report much of the last session as I took some time to talk with the exhibitors and artists. I didn’t attend TCC11, so I spent some time playing with Tableau 7 and chatting with Jock Mackinlay and Lori Williams about the features that would be nice to have. In the art exhibit I really liked the Fluid Automata iPad app by Angus Forbes.

Let’s see what we’ll have tomorrow!

Stephen Few’s criticism of information visualisation research

Stephen Few has published an article today criticising the state of research in the information visualisation field. I’m a great admirer of Stephen. His work was largely the reason I discovered this subject, he’s a terrific guy to meet in person and I have huge respect for the clarity and conviction of his principles and his willingness to speak out. However, I can’t agree with the essence of some of his arguments today about research.

Stephen refers to his standing complaint about the quality of research and cites an example of this through the “InfoVis Best Paper” award that was given at this year’s conference for “Context-Preserving Visual Links”:

…these visual links are almost entirely useless for practical purposes.

Before I worked in a university environment (albeit not as an academic/researcher) I had the uninformed view that research appeared extremely disorganised, inefficient and not greatly consequential – ‘all this valuable charity or government money going on people’s elaborate hypotheses or minutiae’ – and so, in the past, I would probably have agreed with this sentiment:

I’ve often complained that much information visualization research is poorly designed, produces visualizations that don’t work in the real world, or is squandered on things that don’t matter.  To some degree this sad state of affairs is encouraged by the wayward values of many in the community.

However, what I didn’t appreciate was the importance of the research ‘system’, the bigger picture of this complex beast beyond the isolation of individual studies and projects, and their fundamental interconnectedness. Regardless of whether a project unearths a paradigm shift in knowledge, it still has an important role as an informing and inspiring agent for future researchers who come across that study.

Ideas, methods and conclusions explored in even the most inconsequential study can spark ideas in somebody else which creates a sequence of knowledge we can’t plan for. It might have determined nothing of intellectual importance or practical value, but the fact that it arrived at that outcome is still of importance to others in a “there’s nothing to see down there, move along” kind of way.

Stephen takes a swipe at the information visualisation research community and its focus on the creation of technology to the detriment of “the purpose of information visualization, which is to help people think more effectively about data, resulting in better understanding, better decisions, and ultimately a better world“:

Technical achievement should be rewarded, but not technical achievement alone. More important criteria for judging the merits of research are the degree to which it actually works and the degree to which it does something that actually matters.

I can’t comment on whether the example he has focused on here will achieve this or not, but if we refer to the definition of information visualization offered by Shneiderman, Card and Mackinlay – “the use of computer-supported, interactive, visual representations of abstract data to amplify cognition” – there is a clear statement about the prominence of technology in this field. Indeed, virtually all visualisation design is carried out using technology, often through a suite of many different packages for each project. The importance and value of pushing the boundaries of technology solutions and concepts will be clear to many practitioners, especially those developers who seek to construct new programming solutions for the projects they work on, and each research development or concept followed will surely enrich this directly or indirectly, regardless of whether it works or matters.

Finally, Stephen expresses discontent with the role of academic supervisors, allowing researchers to direct their efforts in misapplied ways and leading to results that can’t be meaningfully applied to information visualisation:

My intention here is not to devalue the talents of these researchers, and certainly not to discourage them, but to bemoan the fact their obvious talents were misdirected. What a shame. Why did no one recognize the dysfunctionality of the end result and warn them before all of this effort was…I won’t say wasted, because they certainly learned a great deal in the process, but rather “misapplied,” leading to a result that can’t be meaningfully applied to information visualization.

…Information visualization research must be approached from this more holistic perspective. Those who direct students’ efforts should help them develop this perspective. Those who award prizes for work in the field should use them to motivate research that works and matters. Anything less is failure.

It strikes me that much of the research process is incredibly non-linear, unpredictable and serendipitous, despite the best intentions of its design (how many of us would still be around if it wasn’t?). Isn’t part of the wonder and complexity of research the fact that it can’t be boxed into a simplified, binary state of success or failure?

What do you think?

VisWeek updates by Jérôme Cukier: Day 3

The IEEE VisWeek Conference 2011 is taking place in Providence, RI this week (23rd to 28th). VisWeek 2011 is the premier forum for visualization advances for academia, government, and industry, bringing together researchers and practitioners with a shared interest in tools, techniques, technology and theory.

The week is organized around three separate conferences: IEEE Visualization 2011, the venue for all visualization research on data that has an intrinsic spatial component; IEEE Information Visualization 2011, focused on research relating to visual mappings of non-spatial data and interaction techniques; and IEEE Visual Analytics Science and Technology 2011, which concerns the reasoning processes involved in visual analysis and the application of visual environments to generate useful insight about real-world problems.

I’m disappointed to not be able to attend the event this week but am delighted that Jérôme Cukier has very kindly agreed to provide updates of his discoveries, reactions and experiences. I’m particularly pleased to provide a platform for Jérôme’s updates because I consider him to be one of the most astute and thoughtful observers within the visualisation field.

 

Day Three – Tuesday 25th October

VisWeek, take 2.

Why wasn’t there a VisWeek Day 2 post? Yesterday’s sessions were more technical and specialized than the workshop I attended on the first day, so it’s difficult to summarize them honestly in a few sentences. I really enjoyed Daniel Keim’s overview of text analytics, though. Daniel explained the main difficulties of text visualization: that text is only loosely structured; that text analytics is imprecise at best – if humans fail to understand irony, how can we get computers to succeed? – and that, on top of that, it’s difficult to visualize alphanumeric data. Still, he showed us compelling examples of what can be done in the analysis of literature, opinion, readability, documents and news streams.

The other reason why I didn’t write anything on Monday is that I couldn’t get a network connection all day, which is a very sobering experience. Last week, when my Twitter timeline was full of #tcc11 mentions, I wrote that I was hoping to see as many #visweek hashtags this week, but it’s pretty difficult to do anything internet-related without the internet.

This brings us to today, Tuesday, and the VisWeek keynote by Paul Thagard. Of all the VisWeek keynotes I’ve seen so far, this was the one I felt was the least related to visualization. I found it an interesting talk, yet a strange choice for the keynote. The subject was visual thinking in discovery and invention, and the main idea was that creativity happens when the innovator is able to combine several existing concepts and get a new notion out of them. The speaker looked at a list of the 100 greatest technological inventions and the 100 greatest scientific discoveries and reported that in all 200 cases, this is what happened. For instance, Copernicus, by combining the existing concepts of sun, earth and revolving, came up with the proposition that the earth revolves around the sun. Or Edison, by considering the existing ideas of bulb, carbon and filament, came up with the light bulb. Paul showed a neural simulation of how this combination happens in the brain, that is, how perceiving two different stimuli differs from perceiving the sum of those two stimuli. In the last part of his talk he elaborated on how cognitive science can help social sciences, notably with a tool called Empathica, but I find it difficult to report on that concisely, so I invite you to check the tool out.

In the afternoon I liked the session on tree, network and social network analysis. Two cool systems were presented to represent “regular” tabular data (i.e. rows and columns) in network form (nodes and edges): Ploceus and Orion, which worked wonderfully well on the examples presented.

The last paper of that session was the one that impressed me most that day; it was about creating (and exploring) a social network in the enterprise, based on the explicit or implicit relations that exist there. Explicit meaning: I say that this person is my friend; implicit could be that the two of us sit near each other, or that I commented on her blog post, etc. Adam Perer, the speaker, is from IBM, so their data was the 450k IBM employees and the 73m relations they found among them. Once the system is built they are able to query it, for instance to return the people most skilled on a topic, or to learn more about someone.
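
A rough sketch of the underlying transformation these talks rely on (my own illustration – not Ploceus, Orion or IBM’s system, and the rows and query are invented): tabular relations become an adjacency structure, which can then be queried.

```python
from collections import defaultdict

# Tabular input: one row per (person, relation, person) -- explicit
# ("friend") or implicit ("commented_on", "sits_near") ties.
rows = [
    ("alice", "friend",       "bob"),
    ("alice", "commented_on", "carol"),
    ("bob",   "sits_near",    "carol"),
    ("carol", "friend",       "dave"),
]

# Nodes-and-edges form: adjacency list keyed by person.
graph = defaultdict(set)
for a, _relation, b in rows:
    graph[a].add(b)
    graph[b].add(a)

# A toy query in the spirit of "who is most connected":
# rank people by degree (number of distinct ties).
ranking = sorted(graph, key=lambda p: len(graph[p]), reverse=True)
print(ranking)  # ['carol', 'alice', 'bob', 'dave'] -- carol has the most ties
```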

I ended the day at the Birds-of-a-Feather meeting on blogging. For those of us who didn’t know what that was, let’s just say it’s a fancy name for a meeting on a specific topic. It was organized by Robert “@eagereyes” Kosara and Enrico “@filwd” Bertini; a dozen people participated, and we had a good, interesting discussion on at least three topics.

First, on having opinions and standing by them. The overall consensus was that it’s a good thing as bloggers to speak your mind, even if many disagree with you. Robert, for instance, has harshly criticized a number of visualizations. Some of us thought there was a limit to what you could say, and of course the David McCandless issue was brought up. To be fair, most people around the table thought the huge number of negative reviews he has received for his work was more than justified, although I personally strongly disagree with that. I’m sorry to report that we ended up spending a lot of time discussing it.

Then came the question: why target a wide audience versus the vis community? Does it even make sense? We agreed that visualization blogs can hold helpful insight that can be picked up through a Google search long, long after the publication date. This is helpful, as few people outside the visualization community know proper visualization techniques.

One other point on which there was a fair consensus is that while it’s unrealistic to attempt to compete with flowingdata.com or infosthetics.com, it is valuable to keep blogging, be it to write reviews and opinions, publish tutorials, launch discussions or just cover a specific field. Biological visualization, for instance, seems pretty much unblogged at the moment…

And that concludes today’s report. Tomorrow will be the first day of InfoVis, which I am very much looking forward to!

See you tomorrow!

Experimental isarithmic maps visualise electoral data

David B. Sparks, a fifth-year PhD candidate in the Department of Political Science at Duke University, has today published a fascinating set of experiments using ‘isarithmic’ maps to visualise US party identification. Isarithmic maps are essentially topographic/contour maps and offer an alternative to choropleth maps for plotting geo-spatial data. This is a particularly interesting approach for the US, with its extreme population patterns.

David uses hue to depict the strength of party identification, using data from the 2008 Cooperative Congressional Election Study. Strong reds indicate a dominance of support for Republicans, strong blues indicate stronger support for Democrats, and purples reflect independence or general parity of opinion. Building on previous work experimenting with this mapping approach last year, he has now extended his visual encoding with an extra layer of data presentation to depict the density of respondents:

… since survey respondents are not distributed uniformly across the geographic area of the United States (tending to be concentrated in more populous states and around cities), I have attempted to convey a sense of uncertainty or data sparsity through transparency. Some early products of this experimentation can be seen below.

This layer is depicted by the ‘lightness’ of a region. In the image above he uses a white-to-dark scale, with the lighter regions representing the sparser populations. He reverses this in the example below.
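
To make the two-channel encoding concrete, here is a minimal matplotlib sketch of the scheme described above; the grid values are invented stand-ins for David’s interpolated survey data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy grids standing in for interpolated survey values (my invention,
# not Sparks's data): party score in [-1, 1], respondent density in [0, 1].
rng = np.random.default_rng(7)
score = np.clip(rng.normal(0, 0.5, (50, 50)), -1, 1)
density = rng.uniform(0, 1, (50, 50))

# Hue: interpolate red (Republican) <-> blue (Democrat) through purple.
t = (score + 1) / 2                      # 0 = pure red, 1 = pure blue
rgba = np.zeros((50, 50, 4))
rgba[..., 0] = 1 - t                     # red channel
rgba[..., 2] = t                         # blue channel
rgba[..., 3] = density                   # transparency conveys sparsity

plt.imshow(rgba)                         # sparse regions fade to background
plt.show()
```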

Interestingly, feedback suggests the use of black to depict the sparse regions is received better by people on an aesthetic level, but the use of white is believed to better facilitate interpretation. I agree with this – it is more intuitive to me to view sparse regions as white ‘empty’ areas.

I think this is an excellent solution to overcome the challenge of effectively presenting a combined view of opinion data with the spread of respondents. Note, it is important not to confuse this approach with the problems demonstrated in this ‘Are the Richest Americans also the Best Educated‘ map which tried to encode three separate variables using hue alone.

You can see further isarithmic approaches to visualise attitudes towards abortion and family income levels on David’s published post.

VisWeek updates by Jérôme Cukier: Day 1

The IEEE VisWeek Conference 2011 is taking place in Providence, RI this week (23rd to 28th). VisWeek 2011 is the premier forum for visualization advances for academia, government, and industry, bringing together researchers and practitioners with a shared interest in tools, techniques, technology and theory.

The week is organized around three separate conferences: IEEE Visualization 2011, the venue for all visualization research on data that has an intrinsic spatial component; IEEE Information Visualization 2011, focused on research relating to visual mappings of non-spatial data and interaction techniques; and IEEE Visual Analytics Science and Technology 2011, which concerns the reasoning processes involved in visual analysis and the application of visual environments to generate useful insight about real-world problems.

I’m disappointed to not be able to attend the event this week but am delighted that Jérôme Cukier has very kindly agreed to provide updates of his discoveries, reactions and experiences. I’m particularly pleased to provide a platform for Jérôme’s updates because I consider him to be one of the most astute and thoughtful observers within the visualisation field.

 

Day One – Sunday 23rd October

Hello everyone! On behalf of Andy, I’ll try my best to cover VisWeek. It’s a tall order, because every day of VisWeek several sessions happen in parallel, so it’s technically impossible to see everything that’s going on here. And the first two days are the worst in that respect: while on the other days there are up to three tracks, today and tomorrow there are no fewer than five!

Eventually, I chose to attend the Telling Stories with Data workshop. Or rather, the workshop chose me as I was invited to talk there, which solved my dilemma.

This is the second year this workshop has run; I really liked it last year and set my expectations to “very high”. We started with Steven Drucker from Microsoft, who presented Rich Interactive Narratives (www.digitalnarratives.net), a platform, not dedicated to data visualization, that aims to help authors develop and structure interactive stories. Not too long ago, the frontier between data exploration and data presentation was more or less unknown territory. Since then we have seen many examples of data-augmented slides, but while what constitutes an interactive story is now better understood, creating such stories remains difficult for authors and requires programming.

The next speaker, Wesley Willett, discussed the importance of sharing and integrating user comments and interaction within the visualization process. Sharing visualizations should no longer be thought of as a byproduct of the experience, but rather as a valuable part of it. Wesley proposed several ways to improve the state of things: make visualizations more linkable and shareable, for instance by being able to link to specific states of a visualization rather than to the home page of a project; better recover discussions about visualizations, through the clever use of Twitter hashtags for instance; and better analyse those discussions. Data derived from how users manipulate a visualization can in turn be visualized, for instance.
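
The “link to specific states” idea is straightforward to sketch. This is my own illustration of the principle, not code from the talk; the state fields and URL are invented.

```python
from urllib.parse import urlencode, urlparse, parse_qs

BASE = "https://example.org/viz"  # placeholder project URL

def state_to_url(state: dict) -> str:
    """Encode a view state (filters, zoom, selection) into a shareable link."""
    return f"{BASE}?{urlencode(state)}"

def url_to_state(url: str) -> dict:
    """Recover the view state from a shared link."""
    query = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in query.items()}

link = state_to_url({"year": 2011, "region": "RI", "zoom": 3})
print(link)                 # https://example.org/viz?year=2011&region=RI&zoom=3
print(url_to_state(link))   # {'year': '2011', 'region': 'RI', 'zoom': '3'}
```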

I happened to be the third speaker of the workshop. My talk was on the merits of interactive models in rhetoric. Suppose you want to get a message across. Sure, you could do an awesome static chart. One limitation of that approach is that if you are discussing a sensitive subject, you will never be able to convince those who are strongly prejudiced against your opinions this way. Instead, it can be more interesting to present your users with a sandbox – a toy of sorts – and an assignment: here is a problem, solve it within the tool I gave you, without any further instructions or constraints. If the tool is designed to subtly orient the conclusions of the user, it can be a much more effective approach. The fourth speaker, Sunah Suh, criticized how some visualizations are presented as intuitive when in fact they require from the user a degree of competence that cannot be ignored. How to read visualizations has to be learned, like the use of any new medium; on top of that, being able to decipher certain cultural representations and a basic understanding of statistics are also required skills.

The fifth speaker of the workshop presented Pandemic 1.0, and I’m not sure of the right word to describe what it is. It’s a film, a book, several games, toys, social media, mobile apps and physical interactions all brought together. That, and a gratuitous dose of data visualization. All of those mediums are used to immerse users in a complex story with which they interact through a variety of means. For instance, finding hidden objects in the physical world makes the story unfold in a certain way.

The next speaker was Brad Stenger from the New York Times, on how they use APIs to communicate data, and how these APIs are used in turn by the great number of writers and developers that depend on either the data or the news produced by the NYT.

And the workshop was concluded by a brilliant presentation from Jo Guldi, a historian whose work revolves around the representation of time and space. She told us about several revolutions that completely changed how people approached maps. One such event was the rebirth of the British road system, which went from a state of complete disarray in the 16th century to become a vital infrastructure in the 18th century. When Britain started to have a usable road network, mapmakers and other visualizers of the time did an amazing job of creating incredible spatial representations, impressive even by today’s standards. It was only decades later that the government of the time officially published the equivalent data. The incredible effort of compiling all of these maps was achieved through the creativity of the practitioners, even though they didn’t have “clean” data to rely on.

That concludes today’s sessions! Tomorrow I’m thinking of the session on interactive visual text analytics, but maybe my dilemma about which track to attend would be best solved by following the one on working with uncertainty! We’ll see tomorrow.

Google Analytics introduces ‘visitor flow’ visualisation

Yesterday, at the Web 2.0 Summit, Susan Wojcicki & Phil Mui unveiled the release of “Flow Visualization” in Google Analytics, which helps web site owners and analysts to enhance their understanding of visitor path and goal analysis – how visitors flow through your site’s pages and how this maps on to your intended goals.
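
Under the hood, a flow view consumes aggregated transitions rather than raw visits. Here is a toy sketch of that aggregation step (my own illustration, not Google Analytics code): individual page paths collapse into weighted edges of the kind a Sankey-style diagram draws.

```python
from collections import Counter

# Invented visit logs: each entry is one visitor's ordered page path.
visits = [
    ["/home", "/products", "/checkout"],
    ["/home", "/products", "/products/42"],
    ["/home", "/about"],
]

# Collapse consecutive page pairs into weighted flow edges.
edges = Counter()
for path in visits:
    for src, dst in zip(path, path[1:]):
        edges[(src, dst)] += 1

for (src, dst), weight in edges.most_common():
    print(f"{src} -> {dst}: {weight}")
# /home -> /products: 2  (the weight becomes the band width in the flow view)
```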

Many observers have compared this approach to Charles Joseph Minard’s famous 1869 depiction of Napoleon’s march on Moscow – and I can see where they are coming from – but I instinctively thought these more strongly resembled Sankey diagrams when I first saw them.

Another point: are we seeing the influence of Martin Wattenberg and Fernanda Viegas here, in their roles as co-leaders of Google’s “Big Picture” data visualization group? Especially if you recall the strong emphasis on flow in their respective/joint visualisation portfolios.

The visitor and goal flow visualisation features will be rolling out to all Google Analytics accounts this week. Intriguingly, it sounds like further visualisation devices will be coming to Google Analytics in the near future.

O’Reilly Radar article: “Why animated geospatial data works”

As I mentioned a couple of months ago, I am delighted to have been approached by Editor Mac Slocum to contribute a series of ‘Visualization deconstructed’ articles for the superb O’Reilly Radar website.

The first of these articles was published today, focusing on why animated geospatial data works so well. In this piece I explore some of the most prominent recent demonstrations of this technique and analyse the design choices that lead to their great effect.

Click on the preview image below to read the full article.

Inspirational design photostream of Prof Michael Stoll

Followers of my Twitter feed may have seen me post a link to a set of Flickr images collated by Professor Michael Stoll, of the Department of Design at the Augsburg University of Applied Sciences. Having browsed around the rest of his incredible collections and sets, I decided it was necessary to capture this work in a more substantive way via this blog post.

The simple piece of advice is to immerse yourself in this wonderful catalogue of vintage and inspirational design collections. Here are a few examples:

Sample pages from Willard Cope Brinton’s second book ‘Graphic Presentation’ (1939):

[Images: three sample spreads from Brinton’s ‘Graphic Presentation’]

Sample pages from ‘The Statistical Atlas of The United States’ (1900):

[Images: plates from the atlas, including ‘Distribution of Population USA 1850’ and ‘Rank in Population of US States and Territories (1790–1900)’]

(Thanks to Benjamin at datavisualization.ch via whom I came across Michael’s photostream, mentioned in this 2009 post)