How the ‘scan of scans’ can help predict the future.

When I worked on the Global Strategic Trends programme, one of the things that got the analysts most excited (aside from the prospect of free sandwiches in a meeting) was the ‘scan of scans’.  In many ways this is the holy grail of futures analysis, because it lets an analyst or decision maker capture the strategic high ground: understanding what everyone else is saying about the future gives you a far stronger starting point for making your own predictions about what may or may not happen.  That’s the theory in a nutshell, but bear with me while I unpack it a little, starting with a brief technical history.

The ‘scan of scans’ is, effectively, an aggregate forecast.  That means it’s derived from all the hard work and analysis that has gone into other assessments.  Such techniques were pioneered by analysts like Phil Tetlock, who from 1989 collected around 27,500 forecasts and used them to improve expert judgement (there is much more on his work in this excellent piece by Tim Hartnett in the Financial Times).  Aggregate forecasts are widely used in areas of quantifiable prediction, such as meteorology and earthquake forecasting, and, statistically, they generally produce more accurate results.
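To see why aggregation tends to help, here’s a minimal toy sketch (not Tetlock’s actual methodology, and the numbers are invented for illustration): averaging many noisy forecasts of the same quantity lets individual errors cancel out.

```python
# Toy illustration: averaging several noisy forecasts of the same quantity
# usually produces a smaller error than the typical individual forecast.
import random

random.seed(42)
true_value = 100.0  # the outcome we are trying to forecast (assumed known here)

# Simulate 50 experts, each forecasting with independent random error.
forecasts = [true_value + random.gauss(0, 15) for _ in range(50)]

aggregate = sum(forecasts) / len(forecasts)

mean_individual_error = sum(abs(f - true_value) for f in forecasts) / len(forecasts)
aggregate_error = abs(aggregate - true_value)

print(f"Average individual error: {mean_individual_error:.1f}")
print(f"Error of the aggregate:   {aggregate_error:.1f}")  # usually much smaller
```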


How are aggregate forecasts used in the futures industry?

The strange thing is that, probably because of the difficulty of making specific, accurate predictions about future trends that relate to human behaviour, we don’t at present tend to produce aggregate forecasts for long-term futures analysis or horizon scanning.  There are a few reasons for this.  Firstly, we tend to focus on scenario planning and ideation.  A lot of the tools and techniques used in the futures industry today are about generating ideas.  In a way these work really well – we have techniques that let us draw interesting insights from large groups of experts.  But, at best, they capture the crowd consensus of small groups of distinguished experts and peers.  That practice isn’t wrong, but it has its place – generally at the beginning of a project, when you want to sample expert opinion to build the research base for your forecast (i.e. when you want to fully understand the problem or question your analysis is trying to address).

Secondly, there is the practical aspect.  Once you’ve amassed a lot of data, what do you do with it? An archive of 27,500 forecasts, for example, will contain datasets describing expert judgements, predictions, scenarios and models.  How do you standardise such varied data?  How do you derive some kind of meaning from such rich and varied datasets?

These are questions that I, and many other forecasters, have grappled with, and the truth is that to address them you need to know your context.  What is the idea you’re testing, what is your hypothesis, what outcome or risk are you betting on? Once you know this, you can go to your dataset and start to extract what you need, deriving meaning from all the previous scans and assessments that have been made. Back in the 1990s (when Phil Tetlock first started to amass his archive) this would have been tricky.  Today it’s rather easier, thanks to the abundance of low-cost computing and vast data storage.  With data science, cheap storage and an ever-growing range of data visualisation tools, you can work through all of these sources and make assessments from them.

How do you produce a scan of scans?

At Simplexity Analysis we’ve developed a range of techniques to produce aggregate forecasts by combining machine processes with human analysis.  We essentially take forecasts, chop them up into strings and then look at the most frequently occurring words.  These are put into ‘knowledge maps’ which, through iterations of systems analysis (balancing levels of machine and human interpretation), are tidied to gradually give us a clearer picture of the knowledge represented in the forecasts.  Put simply, we produce a sample of what is being reported and use the metrics derived from this process as the basis for our predictions.  To each prediction we attach a specific probability, reflecting our belief in the likelihood of a particular trend or outcome occurring.  Then, using a host of data visualisation tools, we can display the data and the metrics behind our assessment in complex but explorable formats.
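A minimal sketch of the first, machine-driven pass might look like the snippet below.  It assumes the forecasts are plain-text files in a folder called forecasts/ (the folder name, stop-word list and function names are illustrative assumptions, not Simplexity’s actual tooling), and it simply counts the most frequent terms – the raw material that human analysts would then shape into a knowledge map.

```python
# Illustrative sketch only: split forecast documents into word "strings" and
# count the most frequently occurring terms across the whole corpus.
import re
from collections import Counter
from pathlib import Path

# Assumed, very small stop-word list; a real pipeline would use a fuller one.
STOP_WORDS = {"the", "and", "of", "to", "in", "a", "is", "that", "will", "be"}

def term_frequencies(corpus_dir: str) -> Counter:
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        counts.update(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts

if __name__ == "__main__":
    # The top terms become candidate nodes for a 'knowledge map'.
    freqs = term_frequencies("forecasts/")   # hypothetical corpus directory
    for term, n in freqs.most_common(20):
        print(f"{term:20s} {n}")
```

In practice the interesting work happens after this step, when analysts iterate over the counts, merge synonyms and discard noise – the “balancing levels of machine and human interpretation” described above.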


This form of sampling is a way of mining the rich and varied data contained in the abundance of published forecasts.  What’s really cool is that sampling this way tells us two things.  Firstly, what everyone else is saying – a useful place to start any assessment.  Secondly, it lets us pick up new ‘signals’ that are being reported: by selecting for the least-reported trends we can spot potential outliers, the low-frequency ideas that could become the big idea of tomorrow.  For example, how many futures reports from 2009 mentioned the term ‘big data’?
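A hedged sketch of that second use, selecting for the least-reported terms, is below.  The function name, the 5% threshold and the input format (one set of terms per document) are assumptions made for illustration.

```python
# Hypothetical sketch: flag 'weak signals' -- terms that appear in only a small
# fraction of the forecast corpus and so might be tomorrow's big idea.
from collections import Counter

def weak_signals(doc_term_sets: list[set[str]], max_share: float = 0.05) -> list[str]:
    """Return terms mentioned in at most `max_share` of the documents."""
    doc_counts = Counter()
    for terms in doc_term_sets:
        doc_counts.update(terms)            # count each term once per document
    n_docs = len(doc_term_sets)
    return sorted(t for t, c in doc_counts.items() if c / n_docs <= max_share)

# e.g. run over a 2009 corpus, a term like 'big data' might surface here
# long before it became a mainstream talking point.
```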

Producing aggregate forecasts offers a new way of accessing and predicting future trends.  For such forecasts to work most effectively, they need large datasets: the larger the dataset we sample from, the greater the confidence we can have in the insights we generate.  The great thing is that the range of data now available – online archives, social media, academic papers and open data – means ‘big data’ is getting bigger every day, increasing the scale of the ‘scan of scans’ and, hopefully, its accuracy.

Evolving ‘Futurology’

We’re not saying our processes are perfect.  They evolve and are tested as our predictions are shown to be accurate or otherwise.  But we believe that implementing clear, testable systems of prediction is an important step in the evolution of ‘futures’ as a discipline.  To move away from non-specific ideation techniques that don’t really produce clear, accountable predictions, we need processes that quantify qualitative data and give clear, testable insights.  Because if we as futurists/futurologists/futures analysts are not accountable for our predictions, then what’s the point of making them?

Chris Evett is a Director at Simplexity Analysis Ltd.  For a copy of Simplexity’s first ‘scan of scans’ on the ‘Future of Healthcare’ please email info@simplexity.org.uk.
