All organisations want to understand what has happened in the past and what will happen in the future. The use of statistics and statistical thinking is essential to becoming a better forecaster, but that doesn’t mean it is easy to do! At the same time, we are bombarded with forecasts in the media, which can make it difficult to decide which forecasts to pay attention to and which to ignore.
My course “Identifying Trends & Making Forecasts” is all about doing the basics right when it comes to analysing trends and making predictions. To support this course, this post makes available a variety of material in the public domain covering the following themes:-
- A – What makes a good forecaster?
- B – How to identify a suitable baseline forecast
- C – Some real-life identification of trends
- D – Some real-life forecasting models
- E – How to identify and measure forecasting skill
- F – Learning the lessons when your forecasts go wrong
- G – Publish your track record
- H – Some recommended books about forecasting
For more details, please read the relevant section below.
A. What makes a good forecaster?
If you are attending my “Identifying Trends & Making Forecasts” course, then you should read the 1st post as many of the themes explored there will be discussed in the course.
- How to identify a good forecaster
- This is a very useful webinar delivered by the ASA in 2018 – “Why are forecasts so wrong?”
B. How to identify a suitable baseline forecast
I am passionate about forecasters tracking their performance, measuring their skill and identifying ways to improve. The simplest way to track and measure forecasting skill is to compare your forecasts with a simple baseline (or dumb) forecast. A baseline forecast is one that requires no skill, such as “same as last time” or “equal to the long-term average”. Even then, it is possible to make mistakes in choosing a suitable baseline.
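To make the idea concrete, here is a minimal sketch in Python of the two baseline forecasts just mentioned. The figures and variable names are purely illustrative and are not taken from any of the posts linked below.

```python
# A minimal sketch of two "no skill" baseline forecasts for a monthly series.
# All figures are invented for illustration.

monthly_values = [102, 98, 110, 105, 99, 112, 108, 101, 107, 111, 104, 109]

# Baseline 1: "same as last time" (the naive forecast)
naive_forecast = monthly_values[-1]

# Baseline 2: "equal to the long-term average"
average_forecast = sum(monthly_values) / len(monthly_values)

print(f"Naive baseline:   {naive_forecast}")
print(f"Average baseline: {average_forecast:.1f}")
```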
If you are attending my “Identifying Trends & Making Forecasts” course, then you should attempt the survey linked in the 1st post about Donald Trump as there will be a discussion during the course on what this tells you about baselines.
- Will Donald Trump win a 2nd Term in 2020?
- Some baseline models are unstable, as I explain in “The Fat Tail of Kim Kardashian”
- Keir Starmer’s train to Downing Street. This is a variant of baselining where the size of the task needed to achieve a specific goal (Labour winning the next election in the UK) is placed in the context of what has happened in previous elections.
- Has mandatory gender pay gap reporting narrowed the UK gender pay gap? I answer by projecting the historical trend before 2017 (the year pay gap reporting became mandatory) through to today and using that as a baseline to evaluate the effectiveness of the policy.
C. Some real-life identification of trends
If you are attending my “Identifying Trends & Making Forecasts” course, then you will be asked to critique the BBC article in the 1st link since you will be analysing this data in one of the case studies!
- How has Britain’s climate changed over the last 20 years?
- Latest trends in COVID19-related cases in England. You may be asked to download the spreadsheet highlighted in section 3a as an example of more advanced identification & extrapolation of trends (a generic sketch of trend extrapolation follows this list).
- Latest trends in COVID19-related deaths in England
- Are Public Health England’s COVID19 death statistics misleading?
- Is the UK an outlier for COVID19 trends when compared with other countries? You may be asked to undertake a similar analysis if case study 5C is done.
- Is the gender pay gap narrowing in the UK & has mandatory reporting had an effect? This case study will be demonstrated during the course.
- Who will win Australia’s Voice Referendum? In June 2023, I stumbled across polling data for a referendum being held on 14th October to modify the Australian constitution. I was startled by the trends I saw and immediately started analysing the data and making forecasts.
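As a generic illustration of what identifying and extrapolating a trend can involve, here is a hedged Python sketch that fits a straight-line trend by least squares and projects it forward. It is not the method used in the posts or spreadsheet above; the data and the choice of a linear model are assumptions made purely for illustration.

```python
# Generic sketch: fit a straight-line trend and extrapolate it forward.
# The data below are invented; a real analysis would also check whether
# a linear trend is appropriate at all.
import numpy as np

years  = np.array([2015, 2016, 2017, 2018, 2019, 2020])
values = np.array([12.4, 12.1, 11.9, 11.6, 11.5, 11.2])

# Least-squares fit of a first-order polynomial (slope and intercept)
slope, intercept = np.polyfit(years, values, 1)

# Project the fitted trend to a future year
future_year = 2023
projection = slope * future_year + intercept

print(f"Estimated trend: {slope:.3f} per year")
print(f"Projected value for {future_year}: {projection:.2f}")
```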
D. Some real-life forecasting models
Here are some examples of prediction models I have built & published in the public domain. The 3rd link is an example of how to evaluate the success or failure of a forecasting model and will be discussed in my course “Identifying Trends & Making Forecasts“.
- My prediction of the final league table for the English Premier League in 2017/18 season
- Rugby World Cup 2019 – Who will win?
- An evaluation my 2019 Rugby World Cup forecasting model. We will be discussing this in my course.
- My 2019 UK General Election seat forecasting model
- My Weekly Excess Deaths model for England during the COVID19 pandemic. I updated this weekly, so it is a good example of how my thinking changed as new data came in. The full series of 8 posts in reverse order is here. I have yet to write an evaluation of this model, which I will do eventually!
- Did the gender pay gap close in 2019? This post describes the use of imputation when data are missing, which happened with the 2019 UK GPG statistics following the suspension of the reporting deadline in March 2020 when the coronavirus pandemic took hold. Most notably, the resulting model included an autocorrelation effect, which I cover in my training course.
- Who will win the West England Mayoral election? This illustrates a different forecasting approach whereby I identify what needs to happen for the incumbent party to retain the mayoralty and then evaluate how likely that scenario is to materialise.
E. How to identify & measure forecasting skill
Here are some posts on measuring forecasting skill, followed by a short sketch of the basic idea.
- Who has been the most accurate pollster in the last 10 years?
- Do election pollsters show forecasting skill?
- I was deemed the “most accurate” forecaster for the 2019 UK General Election!
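To show the basic mechanics of a skill measurement, here is a minimal Python sketch that compares a forecaster’s mean absolute error against a naive “same as last time” baseline and converts the comparison into a skill score. The numbers are invented and the particular score shown is an assumption for illustration, not the metric used in the posts above.

```python
# Illustrative sketch: measuring forecasting skill relative to a naive baseline.
# All numbers are invented; a real evaluation would use a published track record.

actuals   = [100, 104, 101, 99, 103, 106]
forecasts = [101, 103, 102, 100, 104, 105]   # the forecaster's predictions

def mean_absolute_error(predicted, observed):
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

# Naive baseline: each prediction is simply the previous actual value.
# Score from the second point onwards so the baseline always has a prior value.
baseline = actuals[:-1]
mae_forecaster = mean_absolute_error(forecasts[1:], actuals[1:])
mae_baseline   = mean_absolute_error(baseline, actuals[1:])

# Skill score: 1 is perfect, 0 is no better than the baseline, negative is worse.
skill = 1 - mae_forecaster / mae_baseline

print(f"Forecaster MAE: {mae_forecaster:.2f}")
print(f"Baseline MAE:   {mae_baseline:.2f}")
print(f"Skill score:    {skill:.2f}")
```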
F. Learning the lessons, especially when your forecasts are wrong
A well-known quote from George Box is that “all models are wrong, but some are useful.” The same can be said of forecasts: all forecasts are wrong, but some are more useful than others. To my mind, this statement will only be true if you undertake post-mortems of your forecasts and seek to learn lessons. Here are some examples of post-mortems.
- A post-mortem of my prediction for the 2017 UK General Election
- How accurate have opinion polls been since 1945 – updated with GE2019
- At the end of post D7 above, I added a postscript with the actual results and a link to this twitter thread where I evaluate what went right and wrong.
G. Publish your track record
Any forecaster worth their salt should be publishing their track record in a format that is:-
- traceable i.e. you can go back and verify that the forecast was indeed made at that specific point in time.
- transparent i.e. the basis of the forecast should be clear.
- trackable i.e. all forecasts and errors made should be in an easy-to-digest format that allows forecasting skill to be measured.
- public i.e. anyone can view the track record and it should be easy to find.
It is actually very hard to find forecasters who do this, but here are two examples I have found:
- Electoral Calculus who make predictions of general elections.
- FiveThirtyEight (a site run by Nate Silver) looked back at all their predictions since 2008 and concluded they were “reliable”. They go into some depth on how they came to that conclusion, and their review touches on several of the other themes of what makes a good track record.
At present my own track record largely meets the first two points (traceable & transparent) in that most predictions are here on my blog. However, I don’t yet have my track record collated in an easily digestible format and it can be hard to find. Don’t worry, I intend to practise what I preach and it will be available at some point!
H. Recommended books about forecasting
The following 4 books underpin a lot of what I teach in my forecasting course. What’s great about all of them is that they take different angles on the forecasting conundrum but together they make a great collection.
- The Signal & The Noise by Nate Silver – Published in 2012, Nate takes a look at how forecasts are made in a number of fields and the typical errors made. The central message is that whilst forecasting is hard, we are still making too many unnecessary basic errors that, if eliminated, could improve forecasts. Are you making any of these basic errors?
- Superforecasting by Philip Tetlock – Published in 2016, Tetlock explains the backdrop to the Good Judgment Project and how his team was so successful. It draws heavily on the ideas of baselining and track records and shows that an expert’s credentials are no guide whatsoever to their ability to make good forecasts.
- AntiFragile by Nassim Taleb – Published in 2012, Taleb’s central point about forecasts is that what matters most is not your ability to make accurate forecasts but your ability to survive and thrive when your forecasts are wrong. This is a theme that recurs in many of his books, but AntiFragile (a word he had to invent) is his best in my opinion and shows that risk & forecasting are simply two sides of the same coin.
- Reckoning with Risk by Gerd Gigerenzer – Published in 2002, this book sets out Gerd’s ideas on how to explain and present risk, and I think it is a great shame that they have not been taken up more widely. As I say, forecasting and risk are two sides of the same coin, but the human race can confuse itself about risk, especially when risks are presented as probabilities and percentages. Gerd’s central insight is that natural frequencies are much more likely to be understood, and he writes about numerous examples of how this can be applied.
If you would like to book a training course in Forecasting, then please contact me.
For more information about my other training courses in statistics, please visit my Statistical Training homepage.