Top 2 projection (updated)

| Name | Songs | WNTS | MJs | VF | Not-safe probability |
|------|-------|------|-----|----|----------------------|
| Caleb Johnson | Dream On / Maybe I’m Amazed / As Long As You Love Me | 57.667 | 61.7 | 32 | 0.585 |
| Jena Irene | Dog Days Are Over / Can’t Help Falling in Love / We Are One | 65.333 | 38.3 | 68 | 0.415 |


The methodology for the finals model is described here (though some modifications have been made to replace Dialidol with MJsBigBlog’s poll). The model is 87% accurate at ranking, within a margin of error of +/- 3%. Probabilities being what they are, somebody with a not-safe probability of just 0.25 will land in the bottom 3 one time in four. Please do not comment that the numbers are wrong. They are probabilities, not certainties or even claims. Do not gamble based on these numbers.
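The point about interpreting probabilities can be illustrated with a quick Monte Carlo sketch (the simulation below is my own illustration, not part of the projection model): a contestant assigned a 0.25 not-safe probability should land in the bottom group in roughly a quarter of simulated weeks.

```python
import random

def simulate_not_safe(p_not_safe, n_weeks, seed=42):
    """Simulate n_weeks independent results for a contestant whose
    probability of being not-safe each week is p_not_safe, and return
    the observed fraction of not-safe outcomes."""
    rng = random.Random(seed)
    not_safe_count = sum(1 for _ in range(n_weeks) if rng.random() < p_not_safe)
    return not_safe_count / n_weeks

# With many trials the observed rate converges on the assigned probability,
# which is all a well-calibrated 0.25 actually promises.
frac = simulate_not_safe(0.25, 100_000)
print(f"observed not-safe rate: {frac:.3f}")
```

A projection with a 0.25 not-safe probability being eliminated is therefore not evidence the model was "wrong" in that week; calibration can only be judged over many projections.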

Name in green is most likely to be safe. Name in red is considered most at risk for being eliminated. The most probable elimination is Caleb. However, no result would be shocking.

The contest is far from a runaway for Jena Irene, but it certainly doesn’t look bad for her.

No matter what happens tomorrow, we will see some Idol history made.

Dialidol (which the IdolAnalytics projection model no longer uses) registered no busy signals for Caleb, but did register some for Jena, so it points to Jena as the winner. If she loses, it will be the first time Dialidol failed to project the winner correctly. There is no particular reason to believe it, though, since Dialidol has been anti-correlated with being safe this year. A Caleb win would also set a record for Votefair, which has had the eventual winner ahead only half the time, but has never missed by more than 18.5 percentage points (Jena is currently up by 44 points).

On the other hand, Jena would be the first wild-card contestant to win, and the first to win after having been in the bottom group while her opponent never was (though this result may not be significant).

As such, I’m somewhat comfortable with the assigned odds, which, as I mentioned before, I felt were pretty even. I would not be shocked in the least if Caleb wins. The contest is a bit like a coin flip, though one side of the coin is just a bit heavier than the other.

I’ll update later in the day. Note that I’ve flipped MJs’ poll numbers, since she asked “Who will win?” instead of “Who will go home?”

Below are the running stats for the model performance this year. The assigned probabilities have been more accurate than I could have hoped for within one season. In every probability bin with more than 10 projections, the observed not-safe rate fell right in the middle of the bin’s range. In the 50–60% bin, 8 out of 10 were not-safe, which suggests the model may be slightly underconfident there, but with such a small sample size it’s impossible to tell. The one dim spot was at the very low end, 10–20%: only 2 of 26 (8%) were not-safe, below the expected ~15%, but that is still well within the confidence interval (binomial proportion estimate at 95%).
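The calibration check in that last sentence can be reproduced with a short sketch. The post doesn’t say which interval it uses, so this assumes the simple normal-approximation (Wald) binomial interval; the conclusion is the same with Wilson-style intervals.

```python
import math

def binom_ci_95(successes, n):
    """95% normal-approximation confidence interval for a binomial
    proportion: p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / n),
    clamped to [0, 1]."""
    p_hat = successes / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# The 10-20% bin: 2 not-safe out of 26 projections (about 8%).
low, high = binom_ci_95(2, 26)
print(f"observed 8%, 95% CI: ({low:.3f}, {high:.3f})")
# The bin's nominal ~15% rate falls inside the interval, so the
# shortfall is consistent with sampling noise.
```

With only 26 projections in the bin, the interval is wide enough that an observed 8% is statistically indistinguishable from the expected ~15%.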


  • Ava Zinn

    DialIdol did incorrectly predict the winner of The X Factor USA 3

    • Matthew Kitson

      But no one was watching that really