Research in Information Visualisation

Andreas Loizides
Department of Computer Science
University College London



This page is maintained by Andreas Loizides and points to information about INFOVIS (Information Visualisation) and specifically EVA, a novel method for visualising abstract multivariate data sets.

Information Visualisation Background
Introduction to EVA
Methodology - EVA
Siggraph Sketch 2001
Experiments (under construction)
Links to Infovis related websites

InfoVis Background

Information Visualisation



Formal Description

How Visualisation Amplifies Cognition

Multivariate Information Visualisation

Abstract Visual Structures - Arbitrary Mapping

Abstract Visual Structures - Automatic Mapping

Naturalistic Visual Structures - Arbitrary Mapping


The focus of the thesis is on information visualisation techniques for visualising abstract multivariate data sets. A fundamental requirement for any visualisation technique is that it be applicable to a broad range of data sets, solving different problems for different areas; in other words, a general-purpose visualisation system or tool. The visualisation tool used throughout the thesis is the Empathetic Visualisation Algorithm (EVA), a technique that automatically maps multivariate data sets to naturalistic visual structures, taking into consideration the impact of the visual structure on the emotions of the observer. The reason for this choice will become apparent as we describe different methods. It is therefore important to review previous work related to Information Visualisation, and especially the visualisation of multivariate data sets.

Information Visualisation

Information Visualisation is a new research area that focuses on the use of visualisation techniques to help people understand and analyse data. It can be defined as the process of transforming data, information and knowledge into visual form, making use of humans' natural visual capabilities. Research in decision analysis, cognitive psychology and computer graphics concludes that the human mind assimilates information more efficiently in pictorial form than in raw data form, i.e. in numeric and alphanumeric form. Jern explains the role of visualisation as being at the interface between what machines are good at (data, information) and what humans are good at (knowledge, experience). Visualisation can therefore be seen as a powerful link between the two most powerful information processing systems of our time: the human mind and the modern computer. Hamming correctly identified that ``The purpose of computation is insight, not numbers.'' Likewise, visualisation is more than pretty pictures: ``The purpose of visualisation is insight, not pictures''.


In our everyday life we often use external aids to enhance our cognitive abilities. These are physical representations of abstract information, in what can be called ``external cognition''. For example, if we want to do long division, we use a notation to store the results of each stage of the calculation. This particular example helps us extend working memory. External representations can also be used to make patterns, clusters, relationships, cluster gaps and outliers in the data apparent. Or they can be used for quickly searching vast amounts of data for something specific, by giving an overview and powerful navigation. As Norman says, ``The power of the unaided mind is highly overrated. The real powers come from devising external aids that enhance cognitive abilities. How have we increased memory, thought and reasoning? By the invention of external aids: It is things that make us smart.'' Information visualisation is just about that - exploiting the dynamic, interactive, inexpensive medium of graphical computers to devise new external aids that enhance cognitive abilities. These visual artifacts have profound effects on people's abilities to gather information, compute with it, understand it and create new knowledge.

Let's start by giving a brief historical perspective on visualisation before defining it in a formal way.


Visualising data, and especially scientific data, is not new. The idea of representing data visually has been around for much longer than computer-based visualisation. Legend has it that Archimedes was slain while drawing geometrical figures in the sand. Astronomical charts were produced in the Middle Ages, along with arrow plots of prevailing winds over the oceans and magnetic charts that included isolines.

In ancient Hellas, distinguished scholars like Euclid and Pythagoras visually presented advances in the field of geometry. This area of mathematics not only benefits greatly from visually communicating its findings, but its very basis and significance also lie in objects naturally existing in two- and three-dimensional space. In the sixteenth century (1570), researchers of the subject used paper to construct three-dimensional models of these shapes in an attempt to make these ideas easier to communicate. In the seventeenth century, Galileo used visual reasoning to support his conclusions about the solar system. Quoting Tufte (1990, p.19): ``His argument [Galileo's] unfolds the raw data (what the eye of the forehead registers) into a luminous explanation of mechanism (what the eye of the mind envisions)''. What is common to the above cases is that the information visualised has the property of naturally existing on spatial axes, so its visual mapping is straightforward.

The challenge here is the visualisation of abstract information, which does not have a natural mapping to spatial axes. In this area, ground-breaking advances occurred in the nineteenth century with the linking of the spread of cholera to the water supply in central London. During the 1853-54 cholera outbreak in London, Dr John Snow, a physician, identified a large grouping of cases in the Soho area. He went on to plot the homes of the 500 victims who died in the first 10 days of September 1854 on a map of the area. This simple representation of the data he had collected showed that the grouping of cholera sufferers in this area was centred around a particular water pump. Investigation of this water pump established that it had been contaminated by a leaking cesspool.

It can be seen that the role of visual perception in data understanding has long been appreciated.

Formal Description

Computer visualisation has been with us almost since the first digital computers, but the 1980s saw fundamental changes due to the need for more complex visualisation algorithms and tools to cope with the large amounts of data that sensors and supercomputers supplied. Scientific Visualisation's birth as a discipline is generally placed at the publication of the 1987 Report of the National Science Foundation's (NSF) Advisory Panel on Graphics, Image Processing, and Workstations. The report used the term "Visualisation in Scientific Computing" (ViSC), now generally shortened to "scientific visualisation". The term scientific visualisation was preferred in this context to the more general term "data visualisation", because the latter had connotations of statistical methods that were outside the scope at that time. Since then scientific visualisation has experienced vast growth and emerged as a recognised discipline. Visualisations from this discipline show abstractions, but the abstractions are based on physical space. The data is scientific, and the use of visual images for such information has great benefits in enhancing cognition, given that the data maps naturally to the spatial axes.

Information visualisation, on the other hand, uses graphic images to represent abstract, non-physically based data. Examples of such data include financial data, business information, collections of documents, traffic flows through the internet, statements in a computer program, purchases at a grocery store, and other abstract conceptions. Presenting this kind of information visually poses great challenges, since there is no natural mapping to the spatial axes and thus no right or wrong metaphor for representing it. There is a great deal of such abstract information in the contemporary world, and its mass and complexity are a problem, motivating attempts to extend visualisation into the realm of the abstract. A more detailed description of the difference between scientific and information visualisation can be found in a discussion by Gershon and Eick.

Card, Mackinlay and Shneiderman, in their book ``Readings in Information Visualisation'', define information visualisation as:

``The use of computer-supported, interactive visual representations of abstract data to amplify cognition.''

Table 1 shows a number of definitions adopted from the same book that clarify the relationships among concepts related to information visualisation.


External Cognition: Use of the external world to amplify cognition
Information Design: Design of external representations to amplify cognition
Data Graphics: Use of abstract, nonrepresentational visual representations of data to amplify cognition
Visualisation: Use of computer-supported, interactive visual representations of data to amplify cognition
Scientific Visualisation: Use of interactive visual representations of scientific data, typically physically based, to amplify cognition
Information Visualisation: Use of interactive visual representations of abstract, nonphysically based data to amplify cognition

Table 1: Definitions for concepts related to Info Vis.

External cognition is concerned with the interaction of cognitive representations and processes across the internal/external boundary in order to support thinking. Information design is the explicit attempt to design external representations to better acquire or use knowledge. Data graphics is the design of visual but abstract representations of data for this purpose. Visualisation uses the computer for data graphics. Scientific visualisation is visualisation applied to scientific data, and information visualisation is visualisation applied to abstract data. It is important to note that while we emphasise visualisation, the more general term is perceptualisation. It is possible to design systems for the sonification or tactilisation of data, and there are advantages in doing so. But since vision is the sense with by far the largest bandwidth (in fact, half of the neurons in the human brain are dedicated to vision), visualisation is an obvious place to start. Research in this area is still exploratory.

To put Information Visualisation into context, we have to classify the different kinds of research being performed in this area. We can categorise Information Visualisation either by data type or by the techniques used. A very common taxonomy is that first proposed by Shneiderman, with seven data types: 1-, 2- and 3-dimensional data, temporal and multi-dimensional data, and tree and network data; and seven tasks: overview, zoom, filter, details-on-demand, relate, history and extract. However, we believe that 1-, 2-, 3- and multi-dimensional visualisation should be grouped into a single category called n-dimensional visualisation. Young proposed a different taxonomy based on visualisation techniques: surface plots, cityscapes, fish-eye views, benediktine space, perspective walls, cone trees and cam trees, sphere visualisation, rooms, emotional icons, self-organising graphs, spatial arrangement of data and the information cube.

Although throughout this thesis we are only interested in multivariate visualisations, we will briefly describe each area by data type, since we believe this gives a better overview of the whole field. Table 2 shows these categories with a few visualisation examples for each.

1-D Linear: SeeSoft, Document Lens, TileBars
2-D Map: ArcView, Pad++
3-D World: Volvis
Multi-Dim: Parallel Coordinates, Worlds within Worlds, VisDB
Temporal: Perspective Wall, LifeLines
Tree: Cone/Cam Trees, Treemap
Network: SemNet, Fisheye

Table 2: Taxonomy of Info Vis according to Data Type.

Linear data types include sequential lists, which are often text based. Interface design issues include what fonts, colours and sizes to use, and what overview, scrolling or selection methods can be applied. Tasks include traversing long lists with changeable sort orders, filtering out unwanted data, viewing summary data about many ordered items, and finding important specific elements.

Planar or map data (2D) includes geographic maps, floorplans and newspaper layouts. The user's tasks are to find adjacent items, determine the containment of one item by another, follow paths between items, and perform the basic tasks of counting, filtering and obtaining details on demand.

Real-world visualisation (3D) is used to view real-world objects such as the human body, buildings or molecules for the purpose of extracting information. Volume visualisation is the most widely used form of real-world visualisation and has had a significant impact in medicine.

Temporal information visualisation has a fundamental quality that separates it from 1-dimensional data: items have a start and finish time, and items may overlap. Frequent tasks include viewing and creating historical overviews of events or data, and viewing events or data in sequence.

Hierarchical and network visualisations are seen as a promising medium for information searching. Using tree structures, hierarchies or custom types of networks, the information in digital libraries, documents and the internet can be catalogued and searched more quickly and easily than with conventional techniques.

Multi-dimensional information visualisation represents data that is not primarily spatial, where the number of attributes of a given item in the collection is more than three. Tasks include understanding, or getting an overview of, the whole or a part of the n-dimensional data - for example, finding patterns, relationships, clusters, gaps and outliers in the data. Other tasks include finding a specific item in the data - for example, zooming, filtering and selecting a group or a single item. Multi-dimensional information visualisation is the target of this thesis, as mentioned above, so a more detailed background review focusing on this area will follow.

How Visualisation Amplifies Cognition

Card, Mackinlay and Shneiderman stress that information visualisation has, by its definition, three goals: to aid discovery, exploration and decision making.

So, what helps humans discover, explore and make decisions? How does information visualisation amplify the cognition of its users and observers?

Larkin and Simon compared solving physics problems using diagrams versus non-diagrammatic representations. Their conclusion was that diagrams helped in three basic ways:

  1. By grouping together information that is used together, large amounts of search were avoided.
  2. By using location to group information about a single element, the need to match symbolic labels was avoided, leading to reductions in search and working memory.
  3. Finally, the visual representation automatically supported a large number of perceptual inferences that were extremely easy for humans. For example, with a diagram, geometric elements like alternate interior angles could be immediately and obviously recognised.

Two of these essentially improve the Cost-of-Knowledge Characteristic Function (how much additional information becomes available for each additional amount of time expended on accessing information); the third reduces the cost of certain operations. An important note is that, in order to understand the effectiveness of information visualisation, we need to understand what it does to the cost structure of a task.

Card, Mackinlay and Shneiderman proposed six ways in which visualisation can amplify cognition:

  1. by increasing the memory and processing resources available to the users,
  2. by reducing the search for information,
  3. by using visual representations to enhance the detection of patterns,
  4. by enabling perceptual inference operations,
  5. by using perceptual attention mechanisms for monitoring, and
  6. by encoding information in a manipulable medium.

As can be seen above, Information Visualisation has great potential to alter the way in which we interact with the multitude of information around us. It promises important benefits, especially in decision making. However, it must not be over-estimated: users should recognise it as an aid. Like numerous software and technological tools, it is not a solution to all information overload and complexity problems. Rather, it is a new medium which, when properly and effectively used, can augment our cognitive abilities.

Multivariate Information Visualisation

Sometimes with Human Computer Interaction (HCI) problems, you run an experiment, get feedback for two or three variables and analyse them using spreadsheets and graphs. For such problems, these kinds of visualisations are very effective indeed. But what about cases where you have 5, 10, 20 or 70 variables?

Multiple dimensions (>3) refers to the harder problem of multidimensional visualisation, where data tables (matrices of data in which the rows are cases and the columns are attributes) have so many variables that an orthogonal visual structure is not sufficient. In fact, most interesting visualisation problems are of this kind: they start with multivariate data sets that have too many variables to be encoded directly using 1-, 2- or 3-dimensional visual structures. For this kind of data, graphs and charts lose their effectiveness.

The variables, or attributes, of such multivariate data sets can be divided into three basic types:

N = Nominal (can only be compared for equality: = or ≠ to other values),
O = Ordinal (obeys a < relation), or
Q = Quantitative (arithmetic can be done on them).

A nominal variable N is an unordered set, such as car models (BMW, FIAT, HONDA). An ordinal variable O is a tuple (ordered set), such as film ratings <G, PG, PG-13, R>. A quantitative variable Q is a numeric range such as year [-5000, 1999]. These distinctions are very important for some of the systems that will be described below, since they determine the type of axis that should be used in a visual structure.
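The three variable types can be sketched in code by the operations each one legitimately supports. This is an illustrative sketch only; the example values follow the ones above.

```python
# Sketch of the three basic variable types (N, O, Q) and the
# operations each supports. Data values are illustrative.

# Nominal: only equality/inequality is meaningful.
car_a, car_b = "BMW", "FIAT"
nominal_equal = car_a == car_b            # False: different models

# Ordinal: an ordered tuple; comparisons use position, not value.
ratings = ("G", "PG", "PG-13", "R")
ordinal_lt = ratings.index("PG") < ratings.index("R")   # True

# Quantitative: a numeric range; arithmetic is meaningful.
year_range = (-5000, 1999)
span = year_range[1] - year_range[0]      # 6999 years
```

These distinctions matter below because a quantitative axis admits positions and lengths, an ordinal axis admits only order, and a nominal axis admits only distinct positions.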

The thesis will concentrate on the quantitative variables of multivariate data sets. Of course, we could provide rules to quantify nominal and ordinal variables, but such a process is beyond the scope of the research. In the past, when you had a ton of raw data, the answer was to give it to a statistician, who would massage the data with a sophisticated tool and explore the correlations and trends both graphically and analytically. Things are changing nowadays.

Abstract Visual Structures - Arbitrary Mapping

Below, we describe numerous techniques used today that solve, or attempt to solve, some of the problems of visualising multidimensional data sets, starting from the one that we believe has a major disadvantage - the fundamental problem we are trying to address in this thesis. These techniques are usually not an alternative to statistical analysis but complementary to it.

The very first technique is that of multiple views. The idea is to give each variable its own display. So, if we have n dimensions - n variables - we could have n bar charts, one for each variable. In a way, we break the dimensions down into individual components that can be visualised in one dimension. For some kinds of data, such multiple-view analysis might be reasonable, but it has a major drawback: it is easy to get lost in the details, and hard if not impossible to find trends in the data.

Consider financial information systems, which are multivariate and hence multidimensional. The data components are correlated and their values (or ranges) affect each other with respect to the decision analysis process. As Smith and Taffler put it: ``the assessment depends on the simultaneous effect of several variables in different spheres of activity''. As an example, the turnover of a firm is usually highly correlated with the sales level. When viewing the whole information system and analysing it with respect to another variable, capital expenditure, the former figures introduce a new dimension. From the above it can be argued that financial information systems are complex. In fact, Lux presents the argument that financial information can, figuratively speaking, be described as an iceberg: one can see the tip of the iceberg, but its sheer mass is hidden underwater. Financial information systems are not the only ones that can be described this way; data warehouses, business information, library databases, document collections and others could be described similarly. Therefore, multiple views are not good enough for our purposes.

Bertin developed a direct technique for creating multidimensional visual structures from multivariate data tables, which he called permutation matrices. The technique, developed before computers were used to support visual thinking, involves representing rows of data as bar charts and sorting them. Graphical icons of data values were placed on cards and permuted with metal rods. The goal of the permutations is to form patterns - typically to place the large values on the diagonal of the matrix, thereby clustering similar cases with their representative variables.
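The diagonalisation goal can be sketched computationally. The heuristic below (sort rows by the column of their largest entry) is an illustrative simplification, not Bertin's exact manual procedure, and the data matrix is made up.

```python
# Minimal sketch of Bertin-style permutation: reorder the rows of a
# data matrix so each row's largest value tends to fall on the
# diagonal, clustering similar cases with their dominant variables.

def argmax(row):
    # Index of the largest value in a row.
    return max(range(len(row)), key=lambda j: row[j])

def permute_rows(matrix):
    # Sort rows by the column index of their largest entry.
    return sorted(matrix, key=argmax)

data = [
    [1, 9, 2],   # dominant in column 1
    [8, 1, 1],   # dominant in column 0
    [2, 3, 9],   # dominant in column 2
]
permuted = permute_rows(data)
# Large values now run down the diagonal:
# [8, 1, 1], [1, 9, 2], [2, 3, 9]
```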

When computers became available, permutation matrices became an early example of information visualisation - for example, the TableLens, a data analysis application intended to give non-experts the ability to visually spot trends and correlations in a data set. An example of TableLens can be found in the image below.

TableLens starts with a regular data table and displays the whole table graphically. In essence, TableLens is the graphical equivalent of a relational table, in which the rows represent cases and the columns represent variables. It is best used for numerical and categorical data. For quantitative variables, a graphical bar is used to represent each value. The bars are aligned to the left edge, which may represent a minimum value, zero or a lower boundary. The length of the bar indicates the relative size of the represented value. This visualisation provides a scale advantage, since bars can be scaled to one pixel wide without perturbing relative comparisons, and also an exploration advantage, since large numbers of tiny bars can be scanned much more quickly than a bunch of textually represented numbers.

There is a wide range of manipulators you can use to discover trends in the data set without affecting the underlying data. You can sort any column, break down the display by categories, focus on any of the rows or columns, spotlight rows (i.e. change their colour), filter categories, and create a new column computed from a formula based on other columns. Using these manipulators you can search for patterns and outliers in your multivariate data set. For example, as correctly stated in their paper, sorting can be seen as the first step in looking for correlations among variables: after a variable has been sorted, if another variable is correlated with it then its values will also appear sorted.
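The "sort one column, inspect another" correlation step can be sketched as follows. The column names and values are made up for illustration.

```python
# Sketch of TableLens-style correlation spotting: sort the table by
# one variable and check whether a second variable's values also
# appear sorted. Data is illustrative.

cases = [
    {"turnover": 50, "sales": 48},
    {"turnover": 10, "sales": 12},
    {"turnover": 30, "sales": 29},
]

# Sort the table by one variable, as TableLens does on a column.
by_turnover = sorted(cases, key=lambda c: c["turnover"])

# If a second variable is correlated, its column now looks sorted too.
sales_after_sort = [c["sales"] for c in by_turnover]
looks_correlated = sales_after_sort == sorted(sales_after_sort)  # True
```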

A similar approach is that taken by Becker and Chambers with the Splus system. Splus is an interactive data analysis environment, similar to TableLens, that integrates several data manipulation and viewing techniques as a library of primitive functions that can be performed on the data. In particular, after the data is loaded (an interpreter environment is used for this), a user can invoke a ``brush tool'', commonly known as a scatterplot matrix, which displays a matrix of all pairwise scatterplots. Optionally, a histogram of each variable can be placed at the base of the column of scatterplots associated with that variable. A series of other manipulators is available in the tool to help the user understand more about the data.
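The scatterplot-matrix layout is simply one panel per pair of variables, which also shows why it scales poorly with dimensionality. The variable names below are illustrative.

```python
# Sketch of the scatterplot-matrix ("brush") layout: one panel per
# distinct pair of variables, so n variables need n*(n-1)/2 panels.
from itertools import combinations

variables = ["price", "mileage", "weight", "horsepower"]
panels = list(combinations(variables, 2))
n = len(variables)
panel_count = len(panels)        # 4 variables -> 6 pairwise panels
```

With 20 variables this already means 190 panels, which hints at the scaling problem noted next.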

But the key problems with this approach are similar to those of TableLens.

Later visualisations like HomeFinder and FilmFinder deal with some of these problems. An example of the FilmFinder can be found in this image. Although the interface representation is a 2-dimensional scatter diagram, they offer dynamic queries for interactive, user-controlled visualisation of additional dimensions. Dynamic queries allow users to formulate queries by adjusting graphical widgets, such as sliders, and see the results immediately. By providing a graphical visualisation of the database and search results, users can find trends and exceptions easily. In user testing, eighteen undergraduate students performed significantly faster using a dynamic queries interface than with either a natural language system or paper printouts. The interfaces were used to explore a real-estate database and find homes meeting specific search criteria. However, we believe that in these systems only a small number (probably two) of independent variables can be of significant importance to the user if the visualisation is to work effectively - after all, only two dimensions are plotted directly in the scatter plot.

A different technique for solving multidimensional data problems is that of star plots. This image shows an example of the technique with five different variables. Basically, we map the data variables around a circle at equal angular distances. Each variable is encoded as a line, whose length encodes the value of the variable. The result is a shape like the one shown below. It can be seen even from this simple diagram that as the dimensionality increases (say, beyond 10) the available space narrows so much that the visualisation becomes hard to read. For small sets, it is a good method for comparing the variables.
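The geometry of a star plot is straightforward to sketch: each of the n variables gets an angle, and its normalised value gives the length of a ray from the centre. The data below is illustrative, and the plotting itself is omitted.

```python
# Sketch of star-plot geometry: n variables at equal angles around a
# circle; each normalised value is the length of a ray from the centre.
import math

def star_vertices(values, max_value):
    n = len(values)
    vertices = []
    for i, v in enumerate(values):
        angle = 2 * math.pi * i / n       # equal angular spacing
        r = v / max_value                 # normalised ray length
        vertices.append((r * math.cos(angle), r * math.sin(angle)))
    return vertices

# Five variables, as in the example figure; joining the vertices
# in order produces the star shape.
verts = star_vertices([3, 5, 2, 4, 1], max_value=5)
# The second ray (value 5) reaches the unit circle.
```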

A technique very close to star plots is the Multidimensional Detective, which uses parallel coordinates. This very popular technique involves the parallel placement of axes in 2D. The fact that orthogonality ``uses up'' the plane very fast, and that parallelism does not require a notion of angle, inspired parallel coordinates. In drawing each axis separately, the technique is reminiscent of Bertin's permutation matrices. Each case in the data set is encoded as a line; each line connects the axes that represent process parameters. A line may be colour encoded. Correlated cases often create recognisable patterns between adjacent axes. The challenge of parallel coordinates is to recognise these patterns. Interactivity is provided in the system to help people find these relationships: interaction allows the user to reduce the complexity by limiting the range of an axis or by brushing specific lines, so we can focus on specific data items. The image shows an example of parallel coordinates.
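The parallel-coordinates mapping itself can be sketched in a few lines: each case becomes a polyline whose vertex on axis j is that case's value for variable j, normalised to the axis height. The data and normalisation are illustrative.

```python
# Minimal sketch of the parallel-coordinates mapping: one polyline
# per case, one vertex per variable axis, values normalised per axis.

def polyline(case, mins, maxs):
    # One (axis index, height in [0, 1]) vertex per variable.
    return [(j, (v - mins[j]) / (maxs[j] - mins[j]))
            for j, v in enumerate(case)]

cases = [[10, 200, 3], [20, 100, 5]]
mins = [10, 100, 3]     # per-axis minimum
maxs = [20, 200, 5]     # per-axis maximum
lines = [polyline(c, mins, maxs) for c in cases]
# First case becomes the polyline [(0, 0.0), (1, 1.0), (2, 0.0)].
```

The crossing of the two polylines between adjacent axes is exactly the kind of pattern (here, an inverse relationship) that the technique asks the viewer to recognise.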

Parallel coordinates can be seen as a method to model relations, or as a 2D pattern recognition problem. However, the method requires skill from the user to get a geometrical understanding and to query the picture properly. Also, because the lines are so thin, the colour encoding is unclear: you do not really see the colour. Having said that, case studies show that this visualisation, in the hands of a skilled user, supports complex visual thinking.

A different technique is that proposed by Mihalisin, Timlin and Schwegler, who developed a method to visualise a scalar dependent variable that is a function of many ``independent'' variables.

Each variable is plotted within the space delimited by the previous variable, for each discrete value that represents the variable's range. More specifically, you take your working space and divide it into multiple windows. Then you take each variable in turn, select for that variable a reasonable set of values, and subdivide the windows for those values. The process is repeated recursively until the whole function has been plotted, i.e. all the variables have been realised.

It is a hierarchical technique for visualising and visually analysing multivariate functions, data and distributions in various ways. The order in which you place the variables in the hierarchy affects the visualisation you get. It is like sampling the axes of the input variables at slow, medium and fast rates to create a new variable that is a function of the input ones. Some visualisations are comparatively easier to understand than others. The method gets quite complicated as the dimensionality of the variables increases and as the range of values of the variables increases. Another apparent problem with the method is that it treats the variables non-uniformly.
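The recursive subdivision above can be sketched as follows: each variable in turn splits every current window into one sub-window per sampled value, so the number of leaf windows is the product of the variables' sample counts. This is purely illustrative.

```python
# Sketch of the recursive window subdivision: each variable splits
# every existing window into one sub-window per sampled value.

def subdivide(windows, n_values):
    # Split every window into n_values sub-windows for one variable.
    return [w + (v,) for w in windows for v in range(n_values)]

windows = [()]                    # start with the whole working space
for n_values in (3, 2, 4):        # three variables, sampled coarsely
    windows = subdivide(windows, n_values)

# 3 * 2 * 4 = 24 leaf windows, one per combination of sampled values,
# which is why the display grows quickly with dimensionality and range.
```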


The technique mentioned above seems to be influenced by an earlier method. Beshers and Feiner described a technique called ``worlds within worlds'', a multidimensional visual structure based on overloading. The image shows an example of this method. They visualise high-dimensional functions by placing 3D coordinate systems inside other 3D coordinate systems, recursively, until all dimensions are included. Changing the position of the inner coordinate system changes the surface displayed, since three variables are changed. However, at any one time the surface displayed is constructed out of only three variables (the outer coordinate system) and the constant values of the remaining variables.

The main idea of the method (worlds within worlds) is to regain information lost in the process of reducing the complexity of the data so that it can be displayed in three dimensions.
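The slicing idea behind worlds within worlds can be sketched as partial application: the remaining variables are held constant at the inner world's position, leaving a three-variable surface for the outer coordinate system. The five-variable function below is hypothetical.

```python
# Sketch of "worlds within worlds" slicing: fix some variables at
# the inner world's position, leaving a 3-variable surface.
from functools import partial

def f(x1, x2, x3, x4, x5):
    # Hypothetical 5-variable scalar function.
    return x1 + 2 * x2 + 3 * x3 + 4 * x4 + 5 * x5

# Fix two variables (the inner world's position)...
sliced = partial(f, x4=1.0, x5=0.0)

# ...leaving a 3-variable surface to display in the outer world.
value = sliced(1.0, 1.0, 1.0)     # 1 + 2 + 3 + 4 + 0 = 10.0
```

Moving the inner coordinate system corresponds to calling `partial` with different constants, which is why the displayed surface changes as the inner world is dragged.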

The final technique to be presented for this kind of visualisation is VisDB, proposed by Keim and Kriegel. The image shows an example of this method. They propose a more radical method of presenting the multidimensional results of database queries. The central idea of VisDB is to use each pixel of a square to represent a data item resulting from a query. You start by issuing a query that specifies a target value in each of the different dimensions. Most of the time you get no exact match; this technique helps emphasise close, near matches. It does so using what they call a ``relevance function'': how close is an item to the query? In order to measure closeness, we must be able to quantify each data value numerically. We then sum up the distances of each dimension from the query item; the relevance is the inverse of this distance.

In order to visualise n-dimensional data, we divide the window into (n+1) squares. Colour encoding of the same pixel in each square then represents the relevance factor of the query - first the total relevance, and then the relevance for each of the n dimensions separately. Items close to the query are mapped close to the centre of the square. We can also produce an aggregate visualisation: in this case we have only one square, but instead of one pixel representing relevance, we now have (n+1) pixels representing relevance. Again we start from the centre and follow a spiral path to visualise items further away from the query. It is very hard, however, to read the aggregated solution. A possible problem of the method is that if we want to see the data in a certain order, we cannot: the display is always sorted by relevance.
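A relevance function of the kind described above can be sketched as follows. VisDB's exact formula differs; this is an illustrative variant (sum of per-dimension distances, then an inverse so exact matches score highest), with made-up data.

```python
# Sketch of a VisDB-style relevance function: sum each item's
# per-dimension distance from the query, then invert it so that
# exact matches get the highest relevance.

def relevance(item, query):
    distance = sum(abs(a - b) for a, b in zip(item, query))
    return 1.0 / (1.0 + distance)     # 1.0 for an exact match

query = (5, 5)
items = [(9, 1), (5, 5), (6, 5)]

# Sorting by relevance gives the centre-outward ordering used when
# laying pixels along the spiral: exact match first, then near misses.
ranked = sorted(items, key=lambda it: relevance(it, query), reverse=True)
# ranked == [(5, 5), (6, 5), (9, 1)]
```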

All of the techniques described in this section use abstract visual structures to visualise multivariate data sets, and the mapping from data to visual structure is an arbitrary one. It does not take into account the impact of the visual structure on the emotions of the observer. In fact, the vast majority of current attempts to visualise data of multiple dimensions lie in this category (abstract visual structures, arbitrary mapping). However, for a system or tool to serve as a general-purpose visualisation tool, the mapping from data to visual structure must be done automatically. Visualisation systems that attempt to solve this problem are the focus of the next section.

Abstract Visual Structures - Automatic mapping

The problem we are looking at here, as mentioned above, is that of automatically mapping data to visual structures in a meaningful way; in other words, the problem of automatic design. One of the first attempts at automatic design was Jock Mackinlay's APT (A Presentation Tool), based on a formalisation of Bertin's scheme and artificial intelligence techniques.

Mackinlay's goal was to create visualisations from data automatically, by generating and testing possible solutions that satisfy rules of expressiveness (the language's ability to represent the data correctly) and effectiveness (psychological performance: how good the visualisation is). The significance of this work is that it showed that a theoretical analysis of graphical presentations was an adequate basis for partial mechanisation. Data are composed into data tables that are mapped to visual structures which are, in turn, composed into complex presentations. For example, two variables from a data table might each be mapped to a 1-dimensional visual structure, and the two then composed to create a 2-dimensional visual structure.

The data model is based on a set of relational tuples (e.g. Price(Fiat, 2005); Mileage(Fiat, 35)) and associated structural properties (e.g. Price: Cars -> [1000,15000]; Mileage: Cars -> [10,40]; Cars = {Fiat, ...}). User directives are given in terms of the relations to be presented (some of which can be omitted) and a priority ordering of the relations. A particular graphical language can also be specified. Visual representations are based on a set of abstract graphical primitives, formally expressed as primitive graphical languages. Given a precise syntactic definition for the graphical languages, their semantics can also be specified using formal techniques. The expressiveness criterion is based on the semantic definition of the language: a graphical language can be used to represent some information if it encodes exactly the input information, where ``exactly'' means all the information and only the information.

The effectiveness criterion is used to rank the primitive languages according to the accuracy with which quantitative, ordinal and nominal characteristics of the data are perceived. Image shows a ranking of the perceptual tasks; it is an extension of Cleveland and McGill's ranking. The effectiveness and expressiveness criteria are used in the selection step of APT's matching procedure. The procedure is as follows:


  1. Partition the set of relations to be presented into subsets that match the expressiveness criterion of at least one of the primitive languages. The most important relations are given preference in being matched to effective graphical languages.
  2. Select, for each partition: (a) all candidate graphical languages according to the expressiveness criterion; (b) the most effective candidate language from (a) that has not yet been ruled out (for example, because it has already been used for another partition).
  3. Compose (if possible) the primitive graphical designs by applying three composition operators. The composition principle is that two visualisations can be composed by merging the parts that encode the same information.

The matching procedure uses backtracking when some of the choices made at the various stages do not allow for a feasible design.
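The partition-select-backtrack structure of steps 1 and 2 can be sketched as a small search (a sketch under stated assumptions: the `expressive` predicate and `effectiveness` ranking are placeholders for APT's formal criteria, and the composition step is omitted):

```python
def design(partitions, languages, expressive, effectiveness):
    """Assign each partition of relations the most effective graphical
    language that can express it, backtracking when a choice leaves no
    feasible language for a later partition."""
    def search(i, used):
        if i == len(partitions):
            return []  # every partition assigned: feasible design found
        # Candidate languages satisfying the expressiveness criterion,
        # tried in order of decreasing effectiveness.
        candidates = sorted(
            (l for l in languages if expressive(l, partitions[i]) and l not in used),
            key=effectiveness, reverse=True)
        for lang in candidates:
            rest = search(i + 1, used | {lang})
            if rest is not None:
                return [(partitions[i], lang)] + rest
        return None  # dead end: caller backtracks to its next candidate
    return search(0, frozenset())
```

For instance, with hypothetical partitions `["quant", "nominal"]` and languages `["position", "colour"]` where colour cannot express quantitative data, the search assigns position to the quantitative partition and colour to the nominal one.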

APT is a very good early attempt at automatic design, but it falls short of knowledge-based interactive data exploration. There should be a way for users to express their needs, even if only to indicate what is important in the data set and which aspects interest them; they could also have a say in how the data are visualised. A much more interactive system is SAGE, a knowledge-based presentation system that automatically designs graphics and also interprets a user's specifications of how to present the data. It is similar to APT but additionally allows the user to supply none, some or all of the specifications of the visualisation. It can visually integrate many kinds of information, including combinations of quantitative, relational, temporal, hierarchical, geographical and categorical data. The system addresses the automation of both ``visualisation'' and ``manipulation''.

SAGE is the engine that handles the automated creation of images.

IDES (Interactive Data Exploration System) is used for the data manipulation task. IDES tools include dynamic queries (DQ) using sliders, and an aggregate manipulator (AM) to aggregate and decompose groups; the latter makes disjunctions possible.

SageBrush assembles graphics from primitive objects such as bars, lines and axes. It is used to search a portfolio for relevant examples.

SageTools, the combined environment (Sage, SageBrush, SageBook), enhances user-directed design by providing automatic presentation capabilities with styles of interaction that support data-graphic design. A user communicates goals to the Sage engine through SageTools by means of ``directives'' (task, style, aesthetic, data).

The shortcomings of the system are that, despite the help of DQ, the complexity of the visualisations results in a long learning time; the AM is too complex to be used effectively; and the total flexibility in interaction allows the user to break the rules.

The visualisation systems described here allow automatic mapping from data to abstract visual structures based on certain criteria. Earlier attempts ignored support for user interaction, whereas more recent attempts tackle the problem of interactivity as well as improving the quality of the visualisation.

However, are we ready to eliminate graphic designers? We believe we are not even near that point, and that other possibilities remain to be explored in the process of automatic design. One such possibility, which is the choice of the visualisation tool proposed in this thesis, is the use of naturalistic representations; the background to this is described below.

Naturalistic Visual Structures - Arbitrary mapping

Naturalistic visualisations are those that use a ``naturalistic'' visual structure: a representation of something encountered in everyday life, which requires no special knowledge for interpretation by a typical human observer, preferably irrespective of nationality.

Some disadvantages of information visualisation systems arise from the fact that their visualisations are not natural or easy to understand, requiring learning time from the user, and in some cases the visualisation does not offer a complete view of the data set. Naturalistic visualisations effectively address these problems.

The first to use such a technique was Herman Chernoff, who recognised the potential of using a human face as a representation for data; the vast majority of the techniques in this category expand on his idea. Chernoff proposed mapping the variates in data onto features of faces as a method of visualising multi-dimensional data (an example can be found in the image). The method is most commonly known as Chernoff faces. It involves assigning to each column of the data a facial feature, such as the width of the eyes or the position of the mouth, and for each row of the data constructing the face associated with that assignment. The technique is believed to be able to represent a total of 20 different dimensions, as shown in Table 3. The idea capitalises on two important principles:

  1. Our familiarity with human faces and our ability to rapidly process even the smallest nuances and changes in a human face due to everyday interaction.
  2. That the human face often evokes an emotional response in us and can therefore affect the way in which we behave.

The hope is that this information can be used to group the data in interesting ways and to uncover interesting structure in the data. Since humans are, in some sense, optimised for face recognition, it was hoped that using faces (as opposed to more geometrical objects) would aid the grouping of the data and also illustrate ``trends'' in multi-dimensional data.

Dimension Facial Feature
1 Face width
2 Ear level
3 Half face height
4 Eccentricity of upper ellipse of face
5 Eccentricity of lower ellipse of face
6 Length of the nose
7 Position of centre of mouth
8 Curvature of mouth
9 Length of mouth
10 Height of centre of eyes
11 Separation of eyes
12 Slant of eyes
13 Eccentricity of eyes
14 Half length of the eye
15 Position of pupil
16 Height of eyebrow
17 Angle of brow
18 Length of brow 
19 Radius of ear
20 Nose width
Table 3: Description of facial features of Chernoff faces
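The core of the construction, mapping each data column onto the range of one facial feature, can be sketched as follows. This is a minimal sketch: the feature names and (min, max) parameter ranges are illustrative assumptions, not Chernoff's published parameters, and only three of the 20 features in Table 3 are shown.

```python
# Illustrative (min, max) parameter ranges for a few features (assumed values).
FEATURE_RANGES = {
    "face_width":  (0.6, 1.0),
    "mouth_curve": (-0.5, 0.5),   # grumble .. smile
    "eye_slant":   (-0.3, 0.3),
}

def to_face(row, col_ranges, feature_order):
    """Normalise each data value to [0,1] within its column's range,
    then rescale it into the corresponding facial-feature range."""
    face = {}
    for value, (lo, hi), feature in zip(row, col_ranges, feature_order):
        t = (value - lo) / (hi - lo) if hi > lo else 0.5
        f_lo, f_hi = FEATURE_RANGES[feature]
        face[feature] = f_lo + t * (f_hi - f_lo)
    return face
```

One face dictionary per data row would then drive the drawing routine. Note how the sketch already exhibits limitation 4 below: a value outside its column's assumed range produces a feature parameter outside the realistic range.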

Wilkinson has shown that, when considering similarity of data sets, faces prove to be a better representation than other forms. The use of faces as a means for communication has also been shown to lead to fewer mistakes being made and to users becoming more involved in what they are doing.

Walker has shown that the use of a face does not introduce an extra element that could distract users from the task they are trying to achieve, thereby potentially reducing their effectiveness. Instead, it actually increases the impact of the information, while providing a familiar medium for communication.

Another potential advantage that Chernoff does not discuss in great depth is the way in which human beings perceive faces. Instead of considering each individual variable, as we would have to if we were looking at a table of numbers, work by Homa has shown that humans process facial expressions as a whole, or holistically. This means that in addition to the multivariate data being encapsulated in one face, it is also processed as one object when we look at it. The face therefore provides an excellent abstraction of the data.

Many people have built upon Chernoff's initial idea. Shane and Moriarity used it to investigate whether schematic faces were a useful abstraction for communicating financial information. They concluded that people with limited knowledge of financial analysis, as well as practising accountants, were able to distinguish bankrupt from non-bankrupt firms more efficiently than by using financial statements or ratios. Smith and Taffler showed renewed interest in the area and pointed out that previous work such as Moriarity's failed to compare the performance of users of varying levels of sophistication. They also argued that, since previous studies gave no indication of the statistical nature of the information, the results might be partly due to superior information rather than a superior means of visualising it. Their study incorporated considerations from the psychology literature which previous work had ignored, including mapping data variables onto the features of the face which psychologists have shown to matter most when humans read facial expressions. Emphasis is placed on the mouth and eye areas, considered the most important features of the face for conveying information because of the high amount of movement seen in them relative to other parts of the face, such as the ears and nose.

Through their own experiments they found that users of all levels of sophistication gave faster, more accurate results than with the raw data, even though the most specialised group was reluctant to accept that its results with this technique were superior to standardised decision processes.

There are a number of limitations when using Chernoff faces:

  1. The variables in the representation are treated non-uniformly. It seems clear that identifying the important variables and placing them on features near the mouth and eyes would make the visualisation more effective.
  2. It is necessary to spend some time training test subjects on which features correspond to which variables.
  3. We do not see the actual values (quantities) of the variables being plotted.
  4. The technique loses its effectiveness with extreme values, since these can produce unrealistic faces.
  5. The visual structure is subjective: different people may interpret the same face differently.

The latest attempt to use naturalistic visualisations is by Alexa and Muller, in a technique called Visualisation by Examples. Based on the specification of only a few correspondences between data values and visual representations, complex visualisations are produced. The foundation of this approach is the introduction of a multidimensional space of visual representations, built on the ``morphing'' technique (for more about morphing, see Alexa and Muller). Morphing between two graphical objects yields a one-dimensional space; morphing between an element of such a space and a third graphical base object yields a two-dimensional space. By repeating this process, a space of any given dimension can be constructed.

This technique is best illustrated by an example. Suppose we want to visualise an overall (scalar) ranking of cities in the USA, perhaps to find a nice place to live. The data for this example contain values from nine different categories, so nine values must be projected onto one scalar value. To visualise the rankings, a Chernoff-like approach is used: morphing between a smile and a grumble produces a 1-dimensional visual scale, so that the degree of smiling represents the living quality determined by a combination of the nine attributes. The mapping is found by allowing the user to supply a ranking based on personal experience; a ranking of a subset of all the cities is sufficient. In this example Chicago was mapped to a smiling face, Miami to a grumble and Washington to a neutral face, and an image for every other city is produced by combining the given examples.
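The example-based mapping can be sketched as a simple interpolation. This is a sketch under strong simplifying assumptions: the face is reduced to a single smile parameter, the user-supplied examples are treated as points on a piecewise-linear scale, and the example scores are invented for illustration; the real system morphs full graphical objects.

```python
def smile_from_score(score, examples):
    """Interpolate a smile parameter in [-1, 1] from user-ranked
    example cities. `examples` maps a data score to a smile value
    (e.g. a smiling example -> 1.0, a grumbling one -> -1.0)."""
    pts = sorted(examples.items())
    if score <= pts[0][0]:
        return pts[0][1]   # clamp below the lowest-ranked example
    if score >= pts[-1][0]:
        return pts[-1][1]  # clamp above the highest-ranked example
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= score <= x1:
            t = (score - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical scores for three example cities: grumble, neutral, smile.
examples = {30.0: -1.0, 60.0: 0.0, 90.0: 1.0}
```

Every remaining city's score is then turned into an intermediate face between the given examples, which is the essence of the Visualisation by Examples idea.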

This method is nice and simple. However, since only one dimension is ultimately visualised, one must ask what the point of a naturalistic representation is; techniques such as bar charts are very effective at representing such data. If the approach is to be used, a more sophisticated technique is needed to map the data to the visual structure; the first principal component might be a good candidate in such a case.
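The suggested use of the first principal component can be sketched in pure Python via power iteration on the covariance matrix (a sketch for illustration; for real data one would use a library routine, and the convergence handling here is deliberately naive):

```python
def first_pc_scores(data, iters=200):
    """Project each row of `data` onto the first principal component,
    found by power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]  # centre
    # Covariance matrix of the centred data.
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
         for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Score of each item: its centred projection onto the component.
    return [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
```

The nine category values for each city would form the rows of `data`; the returned scalar scores could then drive the smile scale directly, giving a principled projection from nine dimensions to one.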

All the relevant work needed to put our visualisation tool in context has now been reviewed.


The techniques described in this literature review capture all kinds of attempts to visualise multi-dimensional data sets of quantitative variables. From these we can conclude that there are numerous techniques that use arbitrary mapping with abstract representations, a few that use arbitrary mapping with naturalistic representations, a few systems that use automatic mapping with abstract representations, and no techniques that combine automatic mapping with naturalistic visual structures. Image shows that information visualisation systems producing ``automatic'' mappings from multi-dimensional data to ``naturalistic'' visual structures have not been exploited. It is believed that such systems will realise the advantages of both automatic mapping and naturalistic representations mentioned above, producing visualisations that:

