At 2200 on Thursday 12th December 2019, the BBC/ITV/Sky Exit Poll was revealed to the nation and pointed to a large majority for the Conservatives. Unlike 2017, I was able to turn to my wife and say “it looks like I will be right this time!” By the end of the night, Gavin Freeguard from the Institute for Government was tweeting that not only was I the most accurate election forecaster of 2019, but that I was more accurate than the Exit Poll.
My Forecasts vs the Exit Poll
My official forecast can be found by clicking here. What you see there, though, is in fact the 3rd of 4 forecasts I made in the final few days before the election. All 4 forecasts were tweeted at the following times.
- Monday 9th December 2019 at 1845
- Wednesday 11th December 2019 at 2359 (alright a few minutes after midnight!)
- Thursday 12th December 2019 at 1600 (and is described in depth here)
- Friday 13th December 2019 at 0245 after 50+ declarations and just before I went to bed.
The BBC Exit Poll was updated by its director John Curtice at 0300 and again at 0530, and I have noted the revised figures alongside the originals. You can verify them by rewatching the BBC’s election coverage on iPlayer.
In the graphic here, I have highlighted my 3rd forecast and the 1st exit poll forecast since I regard these as the official figures. Note that I count the Speaker (Lindsay Hoyle, representing Chorley) as a Labour MP rather than Other, so I have included him in Labour for both my forecast and the exit poll; on the night, the BBC were putting the Speaker under Other. Also, I never analysed any Northern Ireland polls and did not make separate forecasts for the two Nationalist (NAT) and two Unionist (UNI) parties. The BBC never gave the exit poll prediction for Northern Ireland, so I have assumed their figures were the same as mine.
Who was more accurate?
It should be said straightaway that both of us were very accurate. The most important thing to get right in any election is the outcome, which in this case was that the Conservatives won a majority of 80 seats. That makes the number of Conservative seats the most important statistic: I was 4 seats under and the exit poll 3 seats over. Across all my forecasts I fluctuated between 4 seats under and 5 seats over, while the exit poll varied between 3 seats over and 8 seats under.
There were other parties though, and their performance contributes to the narrative of the election, so we need a way to measure overall accuracy. I am using Root Mean Squared Error (RMSE) here. This is calculated by squaring each party’s error shown in the graphic, taking the average (or mean) of those squared errors, and then taking the square root of that mean.
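To make the calculation concrete, here is a minimal sketch in Python. The function simply implements the definition above; the error values in the usage comment are hypothetical, as the real per-party errors are the ones shown in the graphic.

```python
from math import sqrt

def rmse(errors):
    """Root Mean Squared Error of per-party seat errors
    (forecast seats minus actual seats)."""
    return sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical usage -- the real per-party errors are those in the graphic,
# which give an RMSE of 3.4 seats for my official forecast and 4.4 seats
# for the exit poll:
# rmse([-4, 2, 1, 0, -1, 3])
```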
For our official forecasts, my RMSE was 3.4 seats and the exit poll’s was 4.4 seats, so I can claim to be more accurate. In fact, it looks like my official forecast was worse than my first forecast, which had an RMSE of 1.7! It’s worth reading the comments on that tweet as it was the only forecast that predicted Jo Swinson would lose her seat, which few people could believe, especially Lib Dem supporters.
If you would like to see each of my seat forecasts compared with the actual results, please download this spreadsheet: GE19 Seat Forecast v2.11 with Results
What is the Institute for Government’s verdict?
In his tweet once the results were known, Gavin Freeguard posted this chart comparing my forecast with the exit poll and 10 other forecasters. He simply stated that I “… was closer than the exit poll”.
What is my verdict?
I gave an immediate response to Gavin’s tweet in this thread.
When I reviewed my 2017 forecasting errors over 2 years ago, I finished with the following statement.
“As a statistician with over 25 years experience of making forecasts in a wide variety of industries, I have long since learned not to agonise too much over my errors or over-celebrate my successes. I have always viewed any forecast as an opportunity to learn about how to do things better in the future.”
What pleases me far more than getting it right is that I did learn the right lessons from 2017. Specifically, there were 4 things I feel I got right.
- My underlying model in 2017 was sound and I was able to build on that model, which became the NURS model in my official forecast.
- I did not follow the polls for the 9 English regions (except London), which had caused me so much grief in 2017. I had a chance to test my revised approach at the 2019 European Parliament elections in May this year, and I concluded that splitting England into London and England XL was sound.
- I correctly worked out which scenarios to consider to allow for potential polling error, and I followed through on what I said I would do in this post I wrote in 2017.
- My 2017 review also suggested that my tactical voting model was sound and I used a similar model this time around.
That doesn’t mean my model is perfect, and I will be undertaking an in-depth review to see what I got right and wrong. More than anything, I will not be assuming that the next election will follow the same dynamics as this one. It now looks like the UK will be leaving the EU shortly, and with the immediate political tension it caused discharged, I believe other factors are more likely to be the drivers of the next General Election.
— Want to learn how to be a better forecaster? —
I run a variety of statistical training courses aimed at non-statisticians. One of my popular courses is “Identifying Trends & Making Forecasts”, which covers the basics of time series analysis and the key concepts of forecasting. It is available both in-house and as a public course; the next public course is scheduled for February 2020 in Bath, UK, and you can book a place here.
— More on Election Forecasting —
To see other forecasts I have made, please click on these links.