Last year I developed a model for predicting the outcome of the semifinals. You can read what the projection was: there weren’t any wrong calls, or even any ranking errors. The Top 10 people as scored by the model were the actual finalists. This doesn’t mean that the model is “correct”, so to speak, but it does mean that it already does a decent job of projecting what will happen. As the saying goes, all models are wrong, but some are useful.
In a sense, though, by not missing anything, we also didn’t really learn anything. If the parameters of the model could be adjusted to improve the overall accuracy across all years, we might get a sense of whether any of these results was surprising. But by missing nothing, no improvement can be made by adjusting them. Indeed, the model’s overall accuracy stood at 91% with no adjustments at all.
This also implies that there were no real surprises last year for either the men or the women, which doesn’t exactly make for riveting television. Only once Lazaro Arbos got going in the Top 9 with a string of bad songs did things start to get a bit shocking.
As I’ve said before, perverse as it is, I actually want the model to get things wrong. I like surprising results, both in sports and in Idol. Who doesn’t? But also, with a few more misses I could get a better idea of the margin of error, which for now I only know is “around 3 percentage points”. That means I have to play it pretty safe and not declare people like Amber Holcomb safe when they are too close to the edge, and that’s boring. It’s boring to say “I can’t tell whether I think this person will make it”.
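The “too close to the edge” logic can be sketched in a few lines. Everything here is illustrative: the cutoff, the projected shares, and the `call` function are all hypothetical stand-ins, not the actual model; the only number taken from the post is the rough 3-point error band.

```python
# Hypothetical sketch: with a ~3-point margin of error, a projection has to
# clear the elimination cutoff by more than that margin before it's safe
# to call a contestant "safe". The model's real inputs aren't shown here.
MARGIN = 3.0  # percentage points, the rough error band mentioned above

def call(projected_share: float, cutoff: float, margin: float = MARGIN) -> str:
    """Classify a contestant's projection relative to the cutoff."""
    if projected_share - cutoff > margin:
        return "safe"
    if cutoff - projected_share > margin:
        return "out"
    return "too close to call"

# Illustrative numbers only (not real Idol vote shares):
print(call(12.0, 7.0))  # clears the cutoff by 5 points -> "safe"
print(call(8.5, 7.0))   # within the 3-point band -> "too close to call"
```

Shrinking the margin (which takes more misses to estimate well) is exactly what would let the model make the bolder calls instead of punting on the close ones.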