Top 11 projection (updated)

Please read below to see what this prediction is based on.

Updated 10:20 AM EST with new Votefair numbers. Alex gained significantly while Jessica and Dexter fell.

Name           | Song                  | WNTS | DI  | VF | Not-safe probability
Ben Briley     | Bennie and the Jets   | 22   | n/a | 0  | 0.562
Dexter Roberts | Sweet Home Alabama    | 48   | n/a | 2  | 0.468
M.K. Nobilette | Make You Feel My Love | 63   | n/a | 4  | 0.387
Malaya Watson  | I Am Changing         | 65   | n/a | 4  | 0.385
C.J. Harris    | Can’t You See         | 73   | n/a | 4  | 0.378
Jessica Meuse  | Sounds of Silence     | 49   | n/a | 11 | 0.201
Caleb Johnson  | Skyfall               | 82   | n/a | 11 | 0.182
Sam Woolf      | Come Together         | 28   | n/a | 15 | 0.135
Jena Irene     | Decode                | 86   | n/a | 15 | 0.112
Majesty Rose   | Let It Go             | 15   | n/a | 17 | 0.111
Alex Preston   | Falling Slowly        | 77   | n/a | 18 | 0.079

Important!

The methodology for the finals model is described here. The model’s ranking accuracy is 87%, with a margin of error of +/- 3%. Probabilities being what they are, somebody with a not-safe probability of just 0.25 will still land in the bottom 3 about one time in four. Please do not comment that the numbers are wrong; they are probabilities, not certainties or even claims. Do not gamble based on these numbers.
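
To make that caveat concrete, here is a tiny simulation. This is purely an illustration, not the finals model: it just shows that a contestant carrying a 0.25 not-safe probability ends up in the bottom 3 in roughly a quarter of simulated weeks.

```python
import random

# Illustration only: a coin-flip simulation of a contestant whose
# not-safe probability is 0.25. The number of weeks is arbitrary.
NOT_SAFE_PROBABILITY = 0.25
WEEKS = 100_000

bottom_three_weeks = sum(
    random.random() < NOT_SAFE_PROBABILITY for _ in range(WEEKS)
)
print(f"Bottom-3 rate: {bottom_three_weeks / WEEKS:.3f}")  # ~0.250
```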

Names in green are predicted safe. Names in red are considered at risk of being in the bottom 3. Names in yellow are undecided. Anyone not in green is not considered safe by the model. The most probable bottom 3 is Ben, Dexter, and C.J. However, it would not be shocking for anybody on this list to land in the bottom 3.

I’ve addressed the model’s apparent poor accuracy this year in a couple of posts, but let me reiterate here. The model uses history to calibrate its expectations. That works well when what is happening now resembles what happened before. But when the rules change a lot, as they have this year, history becomes a weaker predictor of the future. At this point I do not have enough information to tell how different this year really is. Previous weeks have seen events the model rated as improbable, but improbable is not impossible. Once a few more shows give us better statistics this year, I will revisit this topic in great detail.

Nobody this week has a crazy high or crazy low chance of being in the bottom 3. Majesty, whom most people (myself included) thought was awful, has great popularity numbers in the polls and only about a 10% chance of being in the bottom 3, if history is to be trusted. Ben’s numbers are really bad, but he has proven resilient earlier this year; he’s about 50/50 to be in the bottom 3.

The two WNTS approval ratings that stick out in this list are those of Majesty and Sam. So why are they so much less likely to be in the bottom 3? Because they appear to have a base of popularity: people who will vote for them no matter what they do on the show. That popularity is measured by Votefair. However, I’m dismayed by how few people are voting in that poll this year. Votefair already has a huge sampling bias, because respondents aren’t randomly selected to take the poll; they select themselves. If the total number of voters there drops, the sampling error stacked on top of that bias grows dramatically.
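
To put a rough number on that, here is a small sketch using made-up figures rather than actual Votefair data. It shows how the standard error of an estimated poll share grows as the respondent pool shrinks.

```python
import math

# Made-up illustration: the standard error of an estimated vote share p
# scales as sqrt(p * (1 - p) / n), so a shrinking respondent pool
# inflates the noise even before self-selection bias is considered.
p = 0.15  # hypothetical share of poll respondents backing one contestant

for n in (2000, 500, 100, 25):
    standard_error = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}  standard error ~ {standard_error:.3f}")
```

Dropping from 2,000 respondents to 25 multiplies the sampling error by roughly nine, and that sits on top of the self-selection bias that never goes away.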

Dialidol is once again out for the count. That service works by measuring busy signals on the voting phone lines to estimate whose lines are being called the most. Tonight, Dialidol did not register a busy signal on any contestant’s phone lines. That variable has been nulled in the projection above, meaning data that was available in last year’s predictions is not available now. Internet voting may have just killed Dialidol, but I’ll withhold judgment until a little later in the year. The Dialidol forums are mostly quiet about the issue, but it is addressed in one post.
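
I don’t know exactly how the model nulls the Dialidol term internally; the sketch below shows one common way to handle a missing predictor, dropping it and renormalizing the remaining weights. The weights and input values are invented purely for illustration, not taken from the model.

```python
# Hypothetical weights and inputs; not the actual finals model.
def blend_score(inputs, weights):
    """Weighted average of whatever inputs are present, ignoring None."""
    available = {name: value for name, value in inputs.items() if value is not None}
    total_weight = sum(weights[name] for name in available)
    return sum(weights[name] * value for name, value in available.items()) / total_weight

weights = {"wnts": 0.4, "votefair": 0.4, "dialidol": 0.2}
# With Dialidol missing, its 0.2 weight is redistributed to the other inputs.
print(blend_score({"wnts": 22.0, "votefair": 5.0, "dialidol": None}, weights))
```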
