How much of that has to do with the fact that the top teams play each other thus helping their SOS?
Next to nothing. The algorithm doesn't know who the top teams are; it only looks at performance relative to your opponents. SOS is not a static input into the calculation; it's closer to an output. You don't get rewarded simply for playing good teams. You only get rewarded for playing well against good teams.
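To see how SOS can fall out as a by-product rather than an input, here's a toy sketch (this is not Sagarin's actual method, and the games and margins are made up): each team's rating is repeatedly set to the average of (opponent's rating plus margin) over its games. Once the ratings converge, "SOS" is just the average of your opponents' fitted ratings, an output of the process.

```python
# Toy fixed-point rating sketch (illustrative only, not Sagarin's algorithm).
# Hypothetical games: (winner, loser, margin of victory)
games = [("A", "B", 7), ("B", "C", 14), ("A", "C", 21), ("C", "D", 10)]
ratings = {t: 0.0 for g in games for t in g[:2]}

for _ in range(500):  # simple fixed-point iteration until convergence
    for team in ratings:
        results = []
        for w, l, margin in games:
            if team == w:
                results.append(ratings[l] + margin)  # beat l by margin
            elif team == l:
                results.append(ratings[w] - margin)  # lost to w by margin
        ratings[team] = sum(results) / len(results)
    # recenter so the ratings average to zero (only differences matter)
    mean = sum(ratings.values()) / len(ratings)
    ratings = {t: r - mean for t, r in ratings.items()}

def sos(team):
    """Strength of schedule: average rating of this team's opponents."""
    opps = [w if l == team else l for w, l, _ in games if team in (w, l)]
    return sum(ratings[o] for o in opps) / len(opps)
```

Note that `sos` is computed from the finished ratings; nowhere in the fitting loop does a schedule-strength number get fed in.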
Looks like we should be 29-point favorites according to the Sagarin ratings: 76.45 for #40 NDSU and 44.48 for #197 Weber State, assuming they get 3 points for home field.
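The quoted spread is just the rating difference adjusted for home field. A quick check of the arithmetic (assuming, as the post does, that the 3-point home edge goes to Weber State):

```python
# Predicted margin from Sagarin ratings: rating difference minus the
# home-field edge for the lower-rated home team.
ndsu, weber_state, home_edge = 76.45, 44.48, 3.0
predicted_margin = ndsu - (weber_state + home_edge)
print(round(predicted_margin))  # about 29
```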
29?? That's it? Good thing for BYU/Texas
Man, I'd be scared of the power team at 195. Just for fun, some power programs ranked higher than the community college:
Lafayette
Lehigh
South Dakota
Albany-NY
Indiana State
SE Missouri State
To audit and anyone else that cares to reply:
Any thoughts on which rating to use this year? I'm leaning toward just using the standard rating now that elo_chess is no longer part of it. Changing to the regular rating will help me a bit since I won't have to recalculate the conference scores, but that's not all that big a deal. Thoughts?
(I have everything but the top-25 finished this morning, but we've had meetings during my preps and I actually do have some real classwork to get done today for a change.)
I thought we mostly came to a general consensus after the changes were implemented last year that the standard rating makes the most sense. Though I'll have to go back and reread the thread, because I want to say that Sagarin ended up implementing one other change after we more or less agreed.
Gully - Sagarin now has two measures that are derived from different methodologies that are both supposed to be reasonable predictors of future outcomes. I'm pretty sure that the standard score no longer uses the measure that completely ignores scores of games.
I always thought the rating was better early in the season, then the predictor takes over later in the year.
The rating used to be an average of the predictor and the ELO_Chess (previously labeled something else), which ignores game scores and uses only W/L to determine outcomes. It was the measure that the BCS system used, so he had to use it (against his better judgment, I believe).
The predictor score is now averaged with a new measure (I forget the label off the top of my head), both of which are supposed to be fair predictors of future outcomes.
Am I missing anything, Hammer, BAudit, etc?