Below is a breakdown of our process for determining our In-Season (weekly) Accuracy results. Please note that we also run a separate analysis that evaluates the accuracy of the experts’ Draft (preseason) rankings.
Step 1: Collect the right data
Our analysis aims to determine who provides the most accurate weekly rankings using Half PPR scoring settings. We take a snapshot of every expert’s rankings at the start of the Thursday night game each week, and again at the beginning of the 1 p.m. ET games on Sunday. Players from the Thursday night game are locked at their rank spots, so experts cannot change their rankings after the game has begun. Once the week concludes on Monday night, we score the predictions and incorporate the results into our Year-to-Date leaderboard, which spans the full 17 weeks of the fantasy season. In 2024, we had more than 150 experts in the competition.
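To make the snapshot mechanics concrete, here is a minimal sketch of how a week’s rankings could be assembled, with Thursday-night players locked at their Thursday ranks. The data structures and function names here are hypothetical, not our production pipeline.

```python
# Hypothetical sketch of the weekly snapshot: Thursday-night players keep
# their Thursday ranks; everyone else takes the expert's Sunday ranks.
def build_week_snapshot(thursday_ranks, sunday_ranks, thursday_players):
    """Merge two {player: rank} snapshots, locking Thursday-night players."""
    snapshot = dict(sunday_ranks)
    for player in thursday_players:
        if player in thursday_ranks:
            # Locked: the Thursday rank overrides any later edits.
            snapshot[player] = thursday_ranks[player]
    return snapshot
```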
Step 2: Determine the player pool
For each position, we evaluate the relevant players as determined by our Expert Consensus Rankings (ECR) and the week’s actual fantasy leaders. This ensures that our player pool covers everyone who was fantasy-relevant in a given week, including the surprise studs and busts. In other words, if a player unexpectedly becomes a difference-maker, he will still be part of our player pool.
On the flip side, because we also use our consensus ranks to create the player pool, anyone who surprisingly disappoints (e.g. Travis Etienne) will also be evaluated. We currently grade experts based on the set of players below (a sketch of how the pool is built follows the list). For example, at Running Back, we look at the Top 40 RBs in ECR and the Top 40 RBs based on Actual Fantasy Points. An important consequence is that we evaluate each expert on MORE than 40 total RBs, since some of the Top 40 RBs by actual production would not have been in the Top 40 in ECR.
- Quarterbacks
  - Top 20 in ECR
  - Top 20 in Actual Points
- Running Backs
  - Top 40 in ECR
  - Top 40 in Actual Points
- Wide Receivers
  - Top 50 in ECR
  - Top 50 in Actual Points
- Tight Ends
  - Top 15 in ECR
  - Top 15 in Actual Points
- Kickers
  - Top 15 in ECR
  - Top 15 in Actual Points
- Defense & Special Teams
  - Top 15 in ECR
  - Top 15 in Actual Points
- Linebackers*
  - Top 40 in ECR
  - Top 40 in Actual Points
- Defensive Backs*
  - Top 40 in ECR
  - Top 40 in Actual Points
- Defensive Linemen*
  - Top 40 in ECR
  - Top 40 in Actual Points
* These positions are only scored as part of our IDP accuracy competition.
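Here is a minimal sketch of that union logic in Python, assuming simple dicts of weekly ECR ranks and actual fantasy points. The cutoff table mirrors the list above; the function and variable names are illustrative, not our production code.

```python
# Cutoffs from the list above; LB/DB/DL are scored only in the IDP competition.
POOL_CUTOFFS = {"QB": 20, "RB": 40, "WR": 50, "TE": 15, "K": 15, "DST": 15,
                "LB": 40, "DB": 40, "DL": 40}

def build_player_pool(position, ecr_ranks, actual_points):
    """Union of the top-N players by weekly ECR and by actual fantasy points.

    ecr_ranks: {player: weekly ECR rank} (1 = best)
    actual_points: {player: actual fantasy points scored}
    """
    n = POOL_CUTOFFS[position]
    top_by_ecr = {p for p, rank in ecr_ranks.items() if rank <= n}
    by_points = sorted(actual_points, key=actual_points.get, reverse=True)
    top_by_actual = set(by_points[:n])
    # The union is why an expert can be graded on more than N players.
    return top_by_ecr | top_by_actual
```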
Step 3: Score the experts’ predictions
We evaluate the experts’ rankings by assigning a projected point value to each player based on the rank slot the expert gave him. The projected point value is the average historical fantasy production for that particular rank slot, factoring in bye weeks. We then compare these projected point totals to each player’s actual point production to generate an “Accuracy Gap” for the expert’s prediction. The closer this value is to zero, the better for the expert, because it indicates the prediction was closer to the player’s actual point production. Another way to think of the “Accuracy Gap” is as the expert’s “Error” for each prediction. A perfect gap would be 0, indicating no difference between the points implied by the expert’s rank and the points the player actually scored.
As an example, if an expert ranks Chris Godwin at WR #28 in Week 1, we’d assign a projected point value (e.g. 8.1 pts) to this prediction based on the historical production of players at the WR #28 slot. This value represents the expected point production for a player at that rank slot. In other words, the expert is effectively predicting that Chris Godwin will score 8.1 points for the week. Now, say that Godwin outperformed expectations and finished as WR #11 for the week (14.3 pts). We take the absolute value of the difference between the predicted production (8.1 pts) and the actual production (14.3 pts) to assign the expert an Accuracy Gap of 6.2 pts for their Godwin ranking. We repeat this for every other WR in the player pool and sum the gaps to get a total WR Accuracy Gap for the expert. As noted above, a lower number is a better score.
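In code, the Godwin example looks something like the sketch below. The expected-points lookup table is hypothetical, seeded with just the two values from the example.

```python
# Expected points per rank slot, seeded with the two values from the
# example above (a real table would cover every slot).
EXPECTED_WR_POINTS = {11: 14.3, 28: 8.1}

def accuracy_gap(expert_rank, actual_points, expected_by_slot):
    """Absolute difference between the points implied by the expert's
    rank slot and the player's actual production. Lower is better."""
    return abs(expected_by_slot[expert_rank] - actual_points)

# Godwin ranked WR #28 (implies 8.1 pts) but scored 14.3 pts:
gap = accuracy_gap(28, 14.3, EXPECTED_WR_POINTS)  # -> 6.2
# Summing the gaps for every WR in the pool yields the expert's WR total.
```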
If an expert does not have a player in our pool ranked, we assign a rank in one of two ways based on how the player made it into our player pool (i.e. via Weekly ECR or the end-of-week Actual Rank).
- If the player made it into the pool via the ECR cutoff, we assign the player a rank one spot below the last player the expert ranked (last rank + 1). Therefore, if an expert ranked 70 running backs and failed to include Justice Hill, we would slot Hill as that expert’s RB #71.
- If the player made it into the player pool solely based on the Actual Rank cutoffs (i.e. the player exceeded expectations), we assign whichever rank is worse (numerically higher): the player’s ECR rank + 1 or the expert’s last ranked player + 1.
The reason for this distinction is that we do not want to unfairly punish experts who have a deep set of rankings. For example, Tank Bigsby could have a weekly ECR of RB #57 and finish the week as RB #33 based on fantasy points scored. He would qualify for the pool of evaluated players due to his actual production. For an expert who ranked 50 RBs and didn’t include Bigsby, it would be unreasonable to assume that Bigsby would have been his or her RB #51. Instead, we slot Bigsby as their RB #58, since that is a fair expectation of the expert’s valuation based on the industry consensus opinion.
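Both fallback rules fit in a few lines. Here is a hedged sketch; the function name and arguments are illustrative:

```python
def fallback_rank(entered_via_ecr, ecr_rank, expert_last_rank):
    """Rank assigned to a pool player the expert left unranked.

    entered_via_ecr: True if the player made the pool via the ECR cutoff
    ecr_rank: the player's weekly ECR rank
    expert_last_rank: the rank of the last player the expert did rank
    """
    if entered_via_ecr:
        # Consensus said he was relevant, so the expert should have
        # ranked him: one spot below their last ranked player.
        return expert_last_rank + 1
    # Entered the pool only on actual production: take the worse
    # (numerically higher) of ECR + 1 and the expert's last rank + 1.
    return max(ecr_rank + 1, expert_last_rank + 1)

# Bigsby: weekly ECR RB #57; the expert ranked 50 RBs and omitted him.
fallback_rank(False, 57, 50)  # -> 58, not 51
```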
The flip side of the example above occurs when an expert ranks a player within the rank range (e.g. Top 40 RB) who winds up NOT being in our player pool. In other words, the player was not a top 40 consensus RB for the week, and he did not finish among the top 40 RBs based on actual production. In this scenario, we assess a penalty equal to the amount by which the expert’s Accuracy Gap for the player exceeds the average expert’s Accuracy Gap for that player. The penalty is only applied when the expert’s prediction rates worse than the average expert’s prediction, which ensures that penalties are only assessed for notably poor predictions.
The scenario above would most commonly come into play if an expert failed to take an injured player out of his or her rankings. In that example, it is important that the expert is penalized for offering advice that could lead fantasy owners to make a poor decision.
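Under that reading, the penalty reduces to a simple clamp. A minimal sketch, assuming both gaps have already been computed:

```python
def out_of_pool_penalty(expert_gap, average_gap):
    """Penalty for ranking a player who missed both pool cutoffs.

    Zero when the expert's gap is no worse than the field average, so
    only notably poor predictions are penalized.
    """
    return max(0.0, expert_gap - average_gap)
```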
Step 4: Drop the Worst Week and Aggregate the Results
For each expert, we add together their weekly results across Weeks 1 to 17 (we exclude Week 18) for each position to get a season-total positional score. We drop each expert’s worst week to give some grace to experts who might miss a week for whatever reason, and to offer a mulligan when an expert has just one particularly bad week. Typical Accuracy Gap scores vary from week to week: in a week with a lot of unexpected results, the whole field of experts might post higher (worse) scores than in a typical week. Dropping that week for everyone wouldn’t work, as it would be unfair to the experts who did the best, relative to the field, in a difficult week.
To solve this problem and allow us to aggregate the weeks with equal weight, we convert the Accuracy Gap scores to z-scores. This reframes the raw Accuracy Gaps in terms of the number of standard deviations above or below the average among all experts. Now, when we drop the week with the worst (highest) z-score, it is guaranteed to be the week that would have the biggest negative impact on the expert’s score for the season. Note that the dropped week can be different for each position.
Now that we have the weekly scores converted to z-scores and have dropped the worst one for each expert, we add together the remaining 16 weeks to get a season-total score for each position. For the Overall assessment, we add up the scores for QB, RB, WR and TE. DST and K are excluded because (a) many experts do not produce rankings for these positions, (b) they represent the widest spectrum of fantasy scoring, which can impact the results, and (c) many fantasy owners believe that predicting performance for these two positions involves much more luck relative to the other positions. We still calculate season-total accuracy for DST and K so we can recognize the experts who were the best at those positions, even though they don’t impact the Overall accuracy competition.
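Putting Step 4 together for a single expert at a single position, here is a minimal sketch using only the standard library; the list shapes are assumptions, not our production schema.

```python
from statistics import mean, pstdev

def season_score(expert_weekly_gaps, field_weekly_gaps):
    """Season-total score for one expert at one position. Lower is better.

    expert_weekly_gaps: this expert's 17 weekly Accuracy Gap totals
    field_weekly_gaps: 17 lists, one per week, holding every expert's
        gap total for that week
    """
    z_scores = []
    for gap, field in zip(expert_weekly_gaps, field_weekly_gaps):
        mu, sigma = mean(field), pstdev(field)
        z_scores.append((gap - mu) / sigma if sigma else 0.0)
    z_scores.remove(max(z_scores))  # drop the worst (highest) week
    return sum(z_scores)            # sum of the remaining 16 weeks
```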
We hope this detailed overview was helpful. Thanks for taking the time to learn more about our accuracy system!