Fantasy Football – Draft Accuracy Methodology

We’ve invested a significant amount of time to make sure we offer an objective and accurate way of assessing fantasy expertise. Below is a breakdown of our process for determining our Draft Accuracy results. Please note that we also run a separate analysis that evaluates the accuracy of the experts’ In-Season (weekly) rankings.

Step 1: Collect the right data.
Our analysis aims to determine who provides the most accurate draft rankings using Half PPR scoring settings. We take a snapshot of each expert’s rankings just prior to the first game of the season to ensure we’re analyzing each pundit’s final set of predictions. In 2019, a total of 160 experts were evaluated for our study.

Step 2: Determine the player pool.
For each position, we evaluate relevant players as determined by our preseason Expert Consensus Rankings (ECR) and the season’s actual fantasy leaders. This ensures that our player pool covers everyone who was fantasy relevant, including the surprise studs and busts. In other words, if a player unexpectedly becomes a difference-maker (e.g. Austin Ekeler), he is part of our player pool because we include all key performers. On the flip side, because we also use preseason ranks to build the pool, anyone who surprisingly disappoints (e.g. David Johnson) is evaluated as well. For 2019, we graded the experts based on the set of players below. At Running Back, for example, we look at the Top 50 RBs in ECR and the Top 50 RBs based on actual fantasy points. An important thing to note: this means we evaluate each expert on MORE than 50 total RBs, since some of the Top 50 RBs based on actual production would not have been in the Top 50 in ECR.

Quarterbacks
Top 25 in ECR
Top 25 in Actual Points

Running Backs
Top 50 in ECR
Top 50 in Actual Points

Wide Receivers
Top 60 in ECR
Top 60 in Actual Points

Tight Ends
Top 20 in ECR
Top 20 in Actual Points

Kickers
Top 20 in ECR
Top 20 in Actual Points

Defense & Special Teams
Top 20 in ECR
Top 20 in Actual Points

Linebackers
Top 40 in ECR
Top 40 in Actual Points

Defensive Backs
Top 40 in ECR
Top 40 in Actual Points

Defensive Linemen
Top 40 in ECR
Top 40 in Actual Points
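The pool construction above can be sketched in a few lines. This is a minimal illustration with a hypothetical function name and data shapes; the site's actual pipeline is not public.

```python
def build_player_pool(ecr_ranks, actual_ranks, cutoff):
    """Union of the top-`cutoff` players by preseason ECR and by actual points.

    ecr_ranks / actual_ranks: dicts mapping player name -> rank (1 = best).
    Returns the set of players evaluated for the position.
    """
    top_ecr = {player for player, rank in ecr_ranks.items() if rank <= cutoff}
    top_actual = {player for player, rank in actual_ranks.items() if rank <= cutoff}
    # The union is usually LARGER than `cutoff`, because the two top lists
    # only partly overlap -- which is exactly why each expert is graded on
    # more than, say, 50 total RBs.
    return top_ecr | top_actual
```

Because surprise studs appear only in the actual-points list and surprise busts only in the ECR list, the union captures both groups automatically.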

Step 3: Score the experts’ predictions.
The experts’ rankings are evaluated by assigning a projected point value to each player based on the historical production (rolling 3-year average) of the rank slot the expert gave the player. We then compare these projected point totals to each player’s actual point production (again, using a 3-year average for his finishing rank slot) to generate an “Accuracy Gap” for the expert’s predictions. The closer this value is to zero for a player, the better for the expert, because it indicates the prediction was closer to the player’s actual point production. Another way to think of the “Accuracy Gap” is as the expert’s “Error” for each prediction. A perfect gap would be 0, indicating that there was no error between the expert’s predicted rank and the player’s actual rank. We use a 3-year average for these values to smooth out outliers (e.g. Christian McCaffrey going bonkers in 2019).
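The gap calculation reduces to an absolute difference of two baseline lookups. This is a minimal sketch with made-up baseline numbers (matching the worked example that follows); the real 3-year baseline table is maintained internally.

```python
# Hypothetical 3-year average points for the player finishing at each
# WR rank slot (only two slots shown for illustration).
BASELINE_PTS = {28: 143.1, 50: 105.0}

def accuracy_gap(expert_rank, actual_rank, baseline=BASELINE_PTS):
    """|points implied by the expert's rank slot - points at the actual finish slot|.

    Both values come from the same rolling 3-year baseline, so a gap of 0
    means the expert's rank slot and the actual finish slot imply the same
    point production.
    """
    projected = baseline[expert_rank]   # what the expert's rank predicts
    actual = baseline[actual_rank]      # what the player's finish was worth
    return abs(projected - actual)
```

An expert's positional score is then just the sum of these gaps over every player in the pool; lower is better.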

As an example, if an expert ranks Cole Beasley at WR #50, we’d assign a projected point value (e.g. 105 pts) to this prediction based on the average production of the players who actually finished as WR #50 over the past three years. This value represents the expected point production for a player at that rank slot. In other words, the expert is effectively predicting that Cole Beasley will score 105 points for the season. Now, say that Beasley outperformed expectations and finished as WR #28 for the season (143.1 pts on the same 3-year average basis). We take the absolute difference between the prediction (105 pts) and the actual average production (143.1 pts) to assign the expert an Accuracy Gap of 38.1 pts for their Beasley ranking. We repeat this for every other WR in the player pool and sum the values to get the expert’s total WR Accuracy Gap. As noted above, a lower number is a better score.

If an expert does not have a pool player ranked, we assign a rank in one of two ways, depending on how the player made it into our player pool (i.e. via preseason ECR or the end-of-season Actual Rank). 1) If the player qualified via the ECR cutoff, we assign a rank one spot below the expert’s last ranked player. For example, if an expert ranked 70 running backs and failed to include Frank Gore, we would slot Gore as that expert’s RB #71. 2) If the player qualified solely via the Actual Rank cutoff (i.e. the player exceeded preseason expectations, such as Raheem Mostert), we assign whichever rank is worse: one spot below the player’s ECR rank, or one spot below the expert’s last ranked player.

The reason for this distinction is that we do not want to unfairly punish experts who have a deep set of rankings. For example, in 2019, Raheem Mostert had a preseason ECR of RB #103 and finished the season as RB #27 based on fantasy points scored. He qualified for the pool of evaluated players due to his actual production. For an expert who ranked 60 RBs and didn’t include Mostert, it would be unreasonable to assume that Mostert would have been his or her RB #61. Instead, we slot Mostert as their RB #104 since that is a fair expectation of the expert’s valuation based upon the industry consensus opinion.
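The two fallback rules above can be captured in a single helper. This is a sketch under the stated rules; the function name and argument shapes are assumptions.

```python
def fallback_rank(in_pool_via_ecr, expert_last_rank, ecr_rank):
    """Rank assigned to a pool player the expert left unranked.

    Case 1: player qualified via the preseason ECR cutoff
            -> one spot below the expert's last ranked player.
    Case 2: player qualified only via his actual finish (a surprise stud)
            -> the WORSE (numerically higher) of ECR rank + 1 and the
               expert's last ranked player + 1, so deep rankers aren't
               punished for omitting a consensus afterthought.
    """
    if in_pool_via_ecr:
        return expert_last_rank + 1
    return max(ecr_rank + 1, expert_last_rank + 1)
```

With the 2019 Mostert example (preseason ECR RB #103, expert ranked 60 RBs), case 2 slots him at RB #104 rather than RB #61.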

The flip side of the example above occurs when an expert ranks a player within the rank range (e.g. Top 50 RB) that winds up NOT being in our player pool. In other words, the player was not a top 50 consensus RB for the preseason and he did not finish among the top 50 RBs based on actual production. In this scenario, we assess a penalty that is equal to the following: The absolute value of the expert’s Accuracy Gap for the player minus the average expert’s Accuracy Gap for the player. The penalty is only applied if the expert’s prediction rates worse than the average expert’s prediction. This ensures that penalties are only assessed in situations where notably poor predictions have been made.
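Interpreting the penalty as described (applied only when the expert's gap is worse than the field's average), it reduces to a clamped difference. The function name is a placeholder; whether the site computes the average before or after excluding the expert in question is not stated, so this is a sketch under that assumption.

```python
def out_of_pool_penalty(expert_gap, avg_expert_gap):
    """Penalty for ranking a player who finishes outside the player pool.

    Charged only when the expert's Accuracy Gap for the player is worse
    than the average expert's gap; otherwise no penalty is assessed.
    """
    return max(0.0, abs(expert_gap) - avg_expert_gap)
```

So an expert who ranked an injured player far higher than the consensus eats the excess error, while an expert whose miss was no worse than average pays nothing.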

The scenario above would most commonly come into play if an expert failed to take an injured player out of his or her rankings. In that example, it is important that the expert is penalized for offering advice that could lead fantasy owners to make a poor decision.

Step 4: Rank the experts.
After the results are calculated for the entire player pool across experts, we rank the experts by position from top to bottom based on their Accuracy Gap. As noted above, a lower gap is considered better because it indicates that an expert’s predictions were closer to the actual production of the players evaluated. For the Overall assessment, we add up the Accuracy Gap totals from the QB, RB, WR and TE positions. DST and K are excluded because (a) many experts do not produce rankings for these positions, (b) they represent the widest spectrum of fantasy scoring which can impact the results, and (c) many fantasy owners believe that predicting performance for these two positions involves much more luck relative to the other positions.

In addition to Overall Accuracy, we’re also able to determine which experts offered the most accurate predictions for each individual player. We simply rank the Accuracy Gaps from top to bottom across experts for each player. The closer a projection is to the player’s actual point total, the smaller the expert’s error is and the better their accuracy rank is for the player.
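The final ranking step is a straightforward sort on summed gaps. A minimal sketch, with hypothetical expert names and totals:

```python
def rank_experts(total_gaps):
    """Order experts from most to least accurate.

    total_gaps: dict mapping expert name -> summed Accuracy Gap
    (for one position, or QB+RB+WR+TE combined for the Overall award).
    Smallest total gap ranks first, since a lower gap means smaller error.
    """
    return sorted(total_gaps, key=total_gaps.get)
```

The same sort applied to per-player gaps across experts yields the most accurate expert for each individual player.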

We hope this detailed overview was helpful. Thanks for being interested enough to read through it!