Which Fantasy Experts Like to Gamble?

Posted by dave on December 29th, 2011

I’m not going to lie, I like gambling (legally, of course). I just returned from a birthday weekend in Vegas and I’ll be heading back to Sin City in a couple weeks for a fantasy sports conference (to give out our accuracy awards, and to give back the money I took from Caesars). For me, not much beats the thrill of putting money down on a 37-1 long-shot…and hitting.
 
The same holds true for fantasy football. I’ve been known to get a little cute with my start/sit decisions just to be able to say I started the right guy when everyone told me to start the other guy. I also usually keep a list of my preseason bold predictions so I can shove it in my league’s face if a few of them come true. Yep, I’m that guy.
 
Hitting and missing on bold predictions and sleeper starts got me thinking about how an expert’s accuracy score relates to his tendency to “gamble” with player rankings that don’t conform to the consensus opinion. To help analyze this, we’ve come up with a measurement called ARD (Average Rank Deviation). ARD measures how different an expert’s rankings are compared to our ECR (Expert Consensus Ranking). ARD’s calculation is described in more detail at the end of this post.
 
A number of you have asked us to share how accurate our ECR is. We’ve checked in on this a few times, most recently in this guest post at The New York Times. Bottom line is that our consensus rankings are very accurate. ECR has a way of minimizing the impact of individual rogue predictions that may be as brutal as they are bold.
 
Since ECR tends to be accurate, it’s logical to assume that experts whose rankings closely resemble ECR would have relatively higher accuracy as well. Based on the data below that compares ARD to PAY (Prediction Accuracy Yield) for each expert, there is indeed a strong correlation. The higher your ARD is, the lower your PAY tends to be. In other words, the more you gamble the less accurate you are.
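The inverse relationship described above can be checked with a standard Pearson correlation. Here is a minimal sketch in Python; the `ard` and `pay` lists are hypothetical illustrative values, not the real expert data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical inputs for illustration only (one entry per expert):
ard = [2.5, 3.0, 3.5, 4.0, 4.5]   # Average Rank Deviation
pay = [0.63, 0.62, 0.615, 0.605, 0.60]  # accuracy score

# A negative coefficient means higher deviation goes with lower accuracy.
print(pearson(ard, pay))
```

A value near -1 would indicate a strong inverse relationship, consistent with the "the more you gamble, the less accurate you are" pattern in the real data.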
 
[Scatter chart: ARD vs. PAY, one dot per expert]
 
Each dot in the scatter chart represents an expert. The cluster in the bottom right corner (high deviation; low accuracy) is made up primarily of computer-based projection sites. It’s nice to know the humans beat out the computers for a change!
 
While the data shows an overall inverse relationship between ARD and PAY, we should keep in mind that there’s very little difference in accuracy scores among the top experts. We’ll be reporting on the winner of our Most Accurate Expert competition very shortly; it’s amazing how razor-thin the differences are among a large number of experts. That raises the question: of the experts with higher-than-average ARD, which score the highest for accuracy?
 
Experts with 61% PAY or better through week 15 who have higher-than-average ARD:
Sigmund Bloom – FootballGuys | ARD: 4.38 | PAY: 61.9%
Alessandro Miglio – ProFootballFocus | ARD: 3.75 | PAY: 61.0%
Brad Evans – Yahoo! Sports | ARD: 3.59 | PAY: 61.1%
Staff Rankings – FFToolbox | ARD: 3.56 | PAY: 62.1%
 
Here are the ARDs for experts with at least 14 weeks of tracked rankings. The experts at the top of the list tend to agree with our ECR the most. The experts at the bottom of the list tend to disagree with our ECR the most. Each expert’s PAY through week 15 is included as a reference point. The average ARD from this group of experts is 3.5 and the median is 3.2.
 
Scroll down to see the experts that like to gamble the most.
 
[Table: ARD and PAY through week 15 for each tracked expert, sorted by ARD]
 
I should note that “gambling” may not be the best term to use, since an expert’s player rankings reflect his own opinion (perhaps not a gamble in his eyes). If the expert tends to disagree with our consensus, I’m classifying it as a gamble on his part, even if it’s not something he’s consciously trying to do.
 
ARD’s calculation is pretty straightforward. We compare where each expert ranks a player vs. where that player is ranked in our ECR. We take the absolute value of this rank difference and sum it across the minimum ranks we require at each position (e.g. top 40 RBs). We then divide the sum by the total rank spots (40 for RB), and average this number across the four main positions (QB, RB, WR & TE). The resulting number is the Average Rank Deviation. The larger an expert’s ARD, the greater the difference between his weekly player rankings and our ECR. An ARD of zero would mean that the expert’s rankings are identical to our ECR.
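The calculation above can be sketched in a few lines of Python. This is a hypothetical illustration, not our production code; the position depths and the handling of a player the expert left unranked (assigned one spot past the cutoff) are assumptions for the sake of the example:

```python
# Assumed position depths (e.g. top 40 RBs), for illustration only.
POSITION_DEPTHS = {"QB": 20, "RB": 40, "WR": 40, "TE": 20}

def position_deviation(expert_ranks, ecr_ranks, depth):
    """Average absolute rank difference over the top `depth` ECR players.

    Rankings are dicts mapping player name -> rank (1 = best).
    """
    total = 0
    for player, ecr_rank in ecr_ranks.items():
        if ecr_rank <= depth:
            # Assumption: an unranked player counts as one spot past the cutoff.
            expert_rank = expert_ranks.get(player, depth + 1)
            total += abs(expert_rank - ecr_rank)
    return total / depth

def average_rank_deviation(expert, ecr):
    """ARD: mean of per-position deviations across QB, RB, WR, and TE."""
    devs = [position_deviation(expert[pos], ecr[pos], depth)
            for pos, depth in POSITION_DEPTHS.items()]
    return sum(devs) / len(devs)
```

An expert whose rankings match ECR exactly would score an ARD of 0; an expert who slides every player one spot off consensus would score an ARD of 1.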
 
This was probably pretty interesting data to a few of you and not-so-interesting to a lot of you. What I’m excited about is that we’re just scratching the surface of what can be done with the mountain of data we’re collecting. Once we dive even deeper into rank and accuracy deviations for specific players, I’m confident we’ll find some very useful data on things like over- and under-valued players, experts who tend to have certain players pegged better than others, and an objective way to measure “sleeper accuracy” across experts.
 
It’s a good thing we have an off-season to crunch the numbers. Stay tuned…