

Obviously, since I went through all the work to develop these rankings, I probably have a reason. In part I, I explained how most websites (Yahoo, ESPN, and Basketball Monster, among others, I am sure) do their rankings - Standard Scoring. In part II, I explained the idea of Rarity Rankings. Finally, in part III, I explained in detail my ranking system, which I am calling Modified Rarity Scoring, or MRiS.

The concepts are the same - try to normalize production across all categories. In Standard Scoring, this is done using standard deviations from the average; in MRiS, it is based on the rarity of production above a minimum production threshold. Since both systems attempt to compare production across all categories, it is easy to compare them.

Standard Scoring assigns each player a negative starting point by subtracting the average production in each category from their production. This means that a player with a zero overall score is producing exactly the average; in a 100 player league, he should be ranked around 50th. However, since every single player gets the same "average production" subtracted, we can ignore it. As I explained earlier in the Manifesto, if both teams got 30 extra points to start a game, the winner is still determined in the 48 minutes of actual play.

The normalized scoring unit is the standard deviation. A single "Z score" is assigned for every standard deviation of production. In Modified Rarity Scoring, Equivalent Fantasy Points are the normalized ranking units. So, by setting EFP equal to a Standard Scoring "Z score", we can compare the two systems and see which one makes more sense.

Before we get to the numbers, I'll make a pitch on purely theoretical grounds. This is the important argument - if science only accepted the results it hoped to see, not much progress would occur. Obviously, this is not a rigorous scientific endeavor (as much as I try!), but the idea is the same. In my mind, it does not matter how the statistics are distributed amongst players, only how many stats you can accumulate overall as a team. I see no logical reason that standard deviations should be added together and used to compare players. Fantasy basketball works by accumulating statistics, not players. Modified Rarity Scoring is based on the simple principle of equality of categories.

And now, on to the numbers...

Category     Standard Scoring   MRiS     Percent Difference (MRiS/SS)
Points       1.57               1.57     0.0%
3pt Made     8.76               10.25    17.0%
Rebounds     2.9                2.98     2.8%
Assists      3.45               4.01     16.2%
Steals       15.92              16.56    4.0%
Blocks       12.79              18.56    45.1%
FGOP         12.99              12.3     -5.3%
FTOP         20.28              31.3     54.3%
Turnovers    -9.4               -10.13   7.8%

As you can see, I 'anchored' the scoring systems to points. It looks like most of the categories are more valuable in my system, but really this is demonstrating that Standard Scoring does not account for the high True Zero of points. There is actually a wider gap between players than Standard Scoring allows, since we expect even the worst player to score a significant number of points.

The big differences between the systems are 3PTM, Assists, Blocks, and FT%. These are all categories with high standard deviations - some players score a lot of these, other players score few. The Standard Scoring method would have you believe that blocks are less valuable because blocks are more scattered among players. Does it really matter if you win a category with 1 player scoring 15 blocks one week? 

Standard Scoring falls short because it does not account for the minimum expected production of the worst player worthy of being picked up, and it mistakenly assumes that the more tightly grouped players are in a category, the more valuable that category is. Most seasoned fantasy players know that it only takes a couple of good producers to win blocks for you every week, and a lot of times those same players will guarantee that you lose Free Throw Percentage as well! Both of these effects are ignored by the Standard Scoring method.

Stay tuned to StatDance.com for our rankings pages, soon to come! I will post the MRiS rankings in standard leagues with their EFP in each category, along with some Free Agent ideas and players to target if you are tanking certain categories. A lot to come!

In part I, I explained how fantasy players are usually ranked - with Standard Scoring. In part II, I introduced another way, Rarity Scoring. This, part III, puts the finishing touches on Rarity Scoring by introducing what I call "True Zero" - finally giving us a quality means of ranking fantasy basketball players. In part IV, I will discuss the differences between Standard Scoring and Rarity Scoring (and hopefully show how much better my system is than the standard one).

True Zero


True Zero (t0) is the baseline amount of a statistic you expect from any player good enough to be owned in your league. Pure rarity scoring assigns weights to each statistical category so that they are equal in value, since each category is just as valuable as another. Using True Zero values, we will be able to equally value production across all statistics above the minimum expected from owned players.

I know that this isn't a simple concept to grasp - at least with how well I described it - so here is an example to try to make it more obvious. Earlier in the Manifesto, to demonstrate pure Rarity Scoring, I grabbed the stats from the top 12 players and ranked a few of their stats. The weights ("Value" in the pictures) were for pure rarity scoring. Since points are by far the most common statistic, the other coefficients ("Values") are much higher - for example, points is 1.00 and blocks is 29.23.



This example, and these coefficients, don't apply to actual leagues, since it only covers twelve players and six categories, but the concepts are identical. In this league, the worst player scores almost twenty points. If these are the only players you can play, then any roster spot produces about twenty points - which makes a player who can score 30 points much more valuable!



The relative value of each category - "Coefficient" is the term I will usually use for these values that scale the categories so they are equally weighted - is shown in the blue "Value" row. Since the worst player in this league (like most leagues!) records 0 blocks, the True Zero (t0) is actually zero. The result is that blocks are 6 times (the blue "Value" region) as rare as points scored. In the future, I will refer to the resulting score (Coefficient*Production) as Fantasy Points Equivalent (FPE).

In actual leagues, determining True Zero is much more complicated. We have to figure out what the worst player in each category would produce but still be good enough to be owned. To be clear, a player who scored the t0 value in every category would be a terrible fantasy player. For example, t0 would be the number of blocks we expect a lazy point guard to have or the number of points a pure defensive specialist will score. Using a smooth-line approach gives us what we expect to find on the waiver wires in each category, as a minimum.

True Zero by Category


It turns out that production in fantasy basketball is best represented with exponential decay. Using this knowledge, I smooth the lines and find what the worst player in each category should produce in that category. For categories like Blocks and Threes, these values are basically zero, but for categories like Points and Rebounds, there is significant expected value for everyone in the league.

These plots are FPE for the best 156 players, ranked using all of the categories except turnovers. Unfortunately, the way my spreadsheet is set up, plotting FPE is much easier than plotting the actual stats, and I'd have to tinker with all of my numbers to make these screenshots show production instead of the equivalent FPE. The important thing to notice is how the smoothed lines match up with production (or don't!) and the general shape of the plots. OK, maybe it's not important, but I thought it was interesting to see the shapes of the stats.

To calculate the weight of each category, I add up all the production of the top players (based on league size - if you have 10 teams of 13 players, your population is 130), then subtract (number of players in the league)*(minimum expected production), or Population*t0. Then I assign coefficients for each category so they are weighted equally above true zero.
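The calculation above can be sketched in a few lines. This is a minimal sketch with made-up numbers: the population size, category totals, and t0 values below are all assumptions for illustration, not real stats.

```python
# Sketch of the True Zero coefficient calculation, using made-up totals
# for a hypothetical 130-player population (10 teams of 13 players).
population = 130

# Assumed total per-game production of the owned population, and assumed
# True Zero (t0, minimum expected per-player production) per category.
totals = {"PTS": 1500.0, "REB": 600.0, "BLK": 80.0}
t0 = {"PTS": 8.0, "REB": 2.0, "BLK": 0.0}

# Production above the minimum expected from any owned player:
# total production minus Population * t0.
above_t0 = {cat: totals[cat] - population * t0[cat] for cat in totals}

# Coefficients that weight each category equally above True Zero,
# anchored so points have a coefficient of 1.00.
coeff = {cat: above_t0["PTS"] / above_t0[cat] for cat in above_t0}
```

Note how the high t0 for points shrinks its above-t0 pool, which is exactly why the other categories end up with smaller coefficients relative to points than pure rarity scoring would give them.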

The t0 is only used to develop the coefficients for each category, not in ranking players. This means I don't subtract the t0 value from production individually when I rank players - it would have no effect. If you took that value away from everyone, it would be like giving every NBA team 30 points to start the game. It changes the total score at the end, but the winner is still the person who scored the most during the game. All players start at zero and get the same credit for each point, steal, block, FGOP, or assist as every other player.

I also do not find a t0 for the percentage categories like Field Goal Percentage and Free Throw Percentage: the true zeros of FGOP and FTOP are actually 0, which is what both an empty roster spot and an average shooter score.

Finishing the System


When developing these numbers, the coefficients for the traditional counting categories (points, rebounds, steals, assists, etc.) stay pretty constant no matter what time period you analyze, but the percentages vary wildly. After some panic, I realized this is because players commonly go on shooting streaks, so extreme values of FGOP and FTOP are common in short time periods compared to points scored. These values tend to flatten out over longer periods. This means high and low shooting percentages are rarer in season-long stats, and therefore FGOP and FTOP are "worth more" compared to points scored. For roto leagues that compare statistics over the course of the entire season, we should use the larger coefficients; for the more common weekly leagues, we should use the smaller coefficients.

It would simplify the numbers to use a flat 1.00 for points scored. I have decided, instead, to normalize the numbers so that the average player earns 10 FPE above the t0 value in every category. This makes total scores more consistent across different leagues, but is wholly unnecessary for the analysis - using 1.00 would work identically; to convert the numbers listed below, simply divide all the coefficients by the points coefficient. The end result is that the average score in each category, no matter your league size or settings, is 10 FPE above t0.

So, How Do I Use This?


Now, for the numbers! The numbers below are for a league size of 156 (12 teams of 13 players) and 8 categories - Points, 3PM, FG%, FT%, Rebounds, Assists, Steals, and Blocks. I then computed the value of a Turnover, but the players were not ranked using turnovers. The process for creating these is the same as I went over in Part II, but using the t0 values.

Category                 Stat    Coefficient   True Zero
Field Goal Percentage    FGOP    12.3          0
Free Throw Percentage    FTOP    31.3          0
Three Pointers Made      3PTM    10.25         0.10
Points Scored            PTS     1.57          8.17
Total Rebounds           REB     2.98          2.17
Assists                  ASTS    4.01          0.83
Steals                   STLS    16.56         0.44
Blocks                   BLKS    18.56         0.08
Turnovers                TO      -10.13        0.95
Some notes on these numbers:

1. The t0 values listed above are actually stats, not Fantasy Point Equivalents.

2. FGOP over the season has a coefficient of 23.1; FTOP has 51.0. The numbers I used here are from the past 7 days, which we are assuming are average values (they are a little low, but not far off).

3. An FTOP coefficient of over 31 does seem really high, but imagine how hard it is to make an entire free throw over the average percentage (78.8% so far in this example league) per game. You would have to average 5/5 from the line, or 9/10, for a full FTOP. It is just as helpful to your fantasy team to have a player score 31.3 FPE in FTOP or Points - which is going 9/10 from the line or scoring 20 points (31.3/1.57 = 20). That sounds about right to me.

4. Note that some plays help you in multiple categories - making shots (free throws or regular) helps you in points and percentages. Some leagues have FTA or OREB as categories, which help in multiple categories as well.

5. I use Yahoo's in-game average stats, so .245 blocks is treated the same as .155 blocks - both show up as .2. These errors will average out almost all of the time, so I haven't tweaked my spreadsheet to fix this.

6. Due to normalizing for 10 FPE over t0, the total amount of FPE in every category will equal (10+t0*coeff)*Players_Owned.

7. Blocks are much more rare than steals, but since they are so much more spread out, they are nearly equal in true rarity. It is approximately the same to average 1 block as it is to average 1.5 steals, for fantasy valuation purposes.
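Note 6 can be checked with a line of arithmetic, using the points row of the table above (coefficient 1.57, t0 of 8.17). Since the average owned player scores t0 plus the production worth 10 FPE, the total FPE in a category works out to (10 + coeff*t0) per owned player:

```python
players_owned = 156
coeff, t0 = 1.57, 8.17        # points row from the table above

# Average owned player's points per game: t0 plus the amount worth 10 FPE.
avg_points = t0 + 10 / coeff

# Total FPE in the category is the coefficient times total production.
total_fpe = coeff * avg_points * players_owned

# Matches note 6: (10 + t0 * coeff) * Players_Owned
assert abs(total_fpe - (10 + t0 * coeff) * players_owned) < 1e-9
```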

Using These Results


To close, I've created a WolframAlpha widget to calculate player values using these coefficients. A disclaimer: for different league sizes and different league settings, the numbers would have to change to be perfectly accurate. For leagues not using Turnovers, enter 0. For leagues using other categories not listed here, and different sizes - stay tuned; I will eventually be posting results for all the categories I've heard of!


Ranking fantasy basketball players, and determining how valuable players are in general, is not as easy as it seems, due to the number of categories head-to-head and rotisserie leagues compete in. As I discussed in part I, Standard Scoring is the mainstream method used by big players in fantasy sports such as ESPN and Yahoo. Rarity Scoring is a simpler method of ranking players, valuing stats based on how rare they are. As I will discuss in part III, it is a far better method than Standard Scoring.

Rarity Scoring Basics


The premise is simple - every category matters as much as any other. If there are 1,000 points scored and 500 rebounds in total, then each rebound is twice as valuable as each point. As an illustrative example, I'll look at Yahoo's current (January 25th) top 12 players, using this season's average stats.

The "population" of our league has 12 players. The most common statistic is points scored, so we will put all the other statistics in terms of points scored. Since no category is more important than another, we normalize each category so the total "fantasy points" are equal.


We just divide total points scored (286.5) by the total of each of the other categories, and we get a coefficient we can use to normalize all the categories. In this example, the coefficients are in the "value" column at the bottom. For blocks, the coefficient is roughly 30. This means it is just as valuable for your player to record 1 block as it is to score 30 points.
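As a sketch, the coefficient math looks like this. The point total (286.5) comes from the example above; the other totals are made up to give round coefficients, so treat them as illustrative only.

```python
# Pure Rarity Scoring: divide total points by each category's total to get
# coefficients that make every category worth the same overall.
# 286.5 total points is from the example; the other totals are assumed.
totals = {"PTS": 286.5, "REB": 114.6, "BLK": 9.55}

coeff = {cat: totals["PTS"] / totals[cat] for cat in totals}
# With these totals, coeff["BLK"] comes out to 30.0:
# one block is as valuable as 30 points.

def fantasy_points(stat_line):
    """Value a box-score line using the rarity coefficients."""
    return sum(coeff[cat] * amount for cat, amount in stat_line.items())
```

Multiplying the coefficients by a player's per-category production and summing gives the normalized total used to rank the players.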

Now we will use these coefficients to see how we rank and value the 12 players in our population. The coefficients and ranks won't work in any league with more than 12 players and 6 categories, but the concepts are the same.


Normalizing Percentages


Free throw and field goal percentages should be just as important as the other categories in fantasy basketball. In part I of this series, I went over FTOP and FGOP - my method of turning percentages into simple countable statistics. But even after making field goal percentage and free throw percentage countable, normalizing FGOP and FTOP is more difficult than the other categories, since half the scores are negative.

To account for how much more the worst shooters affect your team than the best shooters, I only normalize the positive shooting to points scored. The statistics FTOP and FGOP already ensure that there is equal value above and below the average shooter. The total effect of FGOP and FTOP coefficients is the same as the other categories, even though half of the effect is negative.

Ranking Players with Rarity Scoring


1. Determine the size of your league (your population).
2. Nominally rank all NBA players - we will have to re-rank players over and over again, so the way we rank them the first time doesn't matter. It helps to use a logical ranking so it takes fewer iterations though.
3. Assign nominal coefficients to all of the categories in your league. Again, the more logical your starting point the faster the process finds your solution.
4. Find the sums of all the countable statistics of the players that would be owned in your league (in a 13-player, 10-team league, you would sum the top 130 players - your "population"), then determine the coefficients to normalize those totals back to points.
5. Using the new coefficients you just calculated, find the total fantasy production of each player in the NBA, then re-rank the players.

Every time you re-rank your players, the totals of the players in your population change, so you have to re-calculate the coefficients and total values over and over again. Usually within 5 iterations, the ranks will have stabilized and you have results. Please note that this whole process is a lot of work in Excel, so before you try this make sure you have some time to invest. The last installment of this series will have a downloadable spreadsheet to calculate stats for your league if you want me to do the work for you!
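The five steps above can be sketched as a small loop. This is a toy example - the players, stats, and league size below are all invented - but it shows the iteration converging on stable ranks.

```python
# Iterative Rarity Scoring over a toy 4-player pool with a 3-player
# "owned" population. All stats here are made up for illustration.
stats = {
    "A": {"PTS": 25.0, "BLK": 0.5},
    "B": {"PTS": 18.0, "BLK": 2.0},
    "C": {"PTS": 15.0, "BLK": 1.0},
    "D": {"PTS": 10.0, "BLK": 0.2},
}
POPULATION = 3  # step 1: the size of your league

def rank_players(stats, population, iterations=5):
    cats = list(next(iter(stats.values())))
    ranked = sorted(stats)                        # step 2: any initial order
    coeff = {}
    for _ in range(iterations):                   # steps 3-5, repeated
        top = ranked[:population]                 # the owned population
        totals = {c: sum(stats[p][c] for p in top) for c in cats}
        coeff = {c: totals["PTS"] / totals[c] for c in cats}  # step 4
        value = lambda p: sum(coeff[c] * stats[p][c] for c in cats)
        ranked = sorted(stats, key=value, reverse=True)       # step 5
    return ranked, coeff

ranked, coeff = rank_players(stats, POPULATION)
```

Here the ranks settle after one pass because the top three players don't change; in a real league the re-ranking shuffles the bubble players for a few iterations before stabilizing.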

Using average statistics for generating ranks and coefficients gives much better results - the number of games played or missed will greatly affect totals. To be more accurate, calculate the averages yourself using total stats and total games played - a spreadsheet carries a lot more decimal places than websites designed to be easy to read.

How to Use Rarity Scoring Coefficients


I am not posting results for most of the categories here, because there is so much more of the Fantasy Basketball Manifesto I haven't covered yet, but here are the coefficients. FTOP and FGOP are harder to reduce to a single coefficient, for reasons we will discuss in part IV. Using these coefficients is easy - multiply a box score line by the coefficients below to see how valuable it is for fantasy.

Points: 1
Rebounds: 2.5
Assist: 4.5
3PM: 15
Steal: 10
Block: 20

These are rough estimates of the values using pure rarity scoring. A 30-point, 5-rebound game with no treys, blocks, steals, or assists doesn't help very much compared to a 12-point, 2-trey, 5-assist, 3-steal, 1-block line - even though the first one has a better chance of getting you on SportsCenter!

Armed with this I hope you have better luck in your leagues trying to pick up new players or ripping off your friends in blockbuster trades! Keep looking here for the rest of the StatDance.com Fantasy Basketball Manifesto!


Every fantasy basketball player tries to rank players on their own, and there are many ways to figure out which players give you a better chance of winning. The best way is to watch a lot of basketball, judge who is performing well, and anticipate who will get better, come back from injury, or get more playing time. Obviously, most fantasy basketball players aren't going to do this as a full-time job, so that isn't going to cut it. Reading a lot about the NBA also helps, gaining knowledge from the people who follow the league professionally (or at least semi-professionally). But the way most fantasy players add players and make trades is by looking at box scores and trying to compare them, combined with following the league a bit.

A great tool to use is the in-game player ranks – it’s not telling you how a player will perform in the future, but it is supposed to tell you how much players contributed to fantasy teams over the specified time period in the past.

How do they come up with these ranks? To many players, it is a magic black box that produces a number that doesn't always make sense. But often it does - Kevin Love is putting up his usual monster numbers, and sure enough, he's ranked in the top 5.

They aren't hiding how they do the ranks though - Yahoo, Basketball Monster, and ESPN all rank their players using “Standard Scoring” (also referred to as a Z-Score). Basketball Monster links to the same Wikipedia page I do and posts the individual Standard Scores for each player. ESPN at least lists the score of each category. If you use the same time periods, league size, and categories, Basketball Monster will give you very similar ranks to Yahoo and ESPN. 

There are slight differences, indicating added minor wrinkles, but the ranks are almost never more than a few ranks different. Yahoo appears to use a population of 130 (standard yahoo league size) and Basketball Monster as a default uses a population of 156 - 12 teams of 13 players, but is very flexible. ESPN's player rater seems to use different population sizes for different statistics. But, the results for all three websites are based on Standard Scoring.

How to Properly Weigh Percentages 


In order to compare percentages properly with the other categories, they have to become a countable stat, like blocks or rebounds. Shooting 100% is better than 90%, but obviously shooting 9 of 10 helps your team more than 2 of 2. Instead of the raw percentage, you should count the number of made shots above your opponent's field goal percentage.

If your opponent shoots 50%, we count how much each player helps us beat that percentage. If Durant shoots 9 of 20, then he is one field goal short of the goal percentage (50%). Conversely, if LeBron shoots 12 of 16 then he is four field goals above 50% for the night. 

Free throw percentage is calculated the same way. This creates new, countable stats that we can substitute in for the percentages. I call these stats "FGOP" and "FTOP" (pronounced "Ef-Gop" and "Ef-Top") - Field Goals Over Percentage and Free Throws Over Percentage. These are countable stats just like treys and blocks. In the above example, Durant scored -1 FGOP and LeBron had +4 FGOP. 
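The FGOP calculation is a one-liner. The 50% below is the hypothetical opponent percentage from the example; in practice the league average percentage is used instead.

```python
def fgop(made, attempted, avg_pct=0.50):
    """Field Goals Over Percentage: makes above what an average
    shooter would make on the same number of attempts."""
    return made - attempted * avg_pct

# Durant's 9-of-20 and LeBron's 12-of-16 from the example above:
fgop(9, 20)   # -> -1.0 (one make short of 50%)
fgop(12, 16)  # -> 4.0 (four makes above 50%)
```

FTOP works identically, with free throw makes and attempts and the average free throw percentage.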

Obviously, everyone's opponent has different shooting percentages, so we have to use average field goal percentage (or free throw percentage) - which is, on average, what everyone's opponent shoots. I have been using this method for years, but as I researched this article, I found that Basketball Monster and ESPN do as well. Given how similar the ranks are, I believe Yahoo does too. But most importantly, this method makes logical sense to me.

Your opponents, on average, will shoot the league average. Let's say this week your opponent shoots exactly the league average, and going into the last night of the matchup, your team has a cumulative -1 FGOP. If your team scores more than +1 FGOP on the last night, you will win the matchup, having made a positive number of baskets above your opponent's percentage.

The team that amasses the most FGOP or FTOP will almost always win the matchup. If there is a large difference in attempts, it becomes more likely that the team scoring fewer FGOP wins. On the extreme side, if your team made its only free throw, you would win that category despite having only about .25 FTOP.

FGOP and FTOP would always predict the winner of a matchup if you based the statistic on your opponent's percentages (a positive sum would mean you win, negative you lose) - but using average percentages is a great way to compare a player's overall contribution to field goal and free throw percentage across all teams.

The Ranking Process


Using the Standard Scoring method is simple in theory, but gets complicated to actually do. The general process has to be iterative in order to get the averages of only the top players.

1. Rank all the NBA players in descending value. How you rank the players is not really important.
2. Determine the population size – the size of your league. Our example will be a 13-player, 10-team league – a population of 130.
3. Find the average production in each category of the “own-able” players – in our example, the top 130.
4. Find the standard deviation of each category in our population.
5. Calculate the Standard Score of each player – all players in the NBA. Each player gets 1 point for each standard deviation he is from the average of our own-able population. If the average player in the top 130 scores 10 points per game and the standard deviation is 5 points, and you score 5 points per game, your score is -1. Conversely, if you score 15 points per game, your score is +1.
6. Re-rank all of the players by the resulting sums of Standard Scores of each category your league competes in.
7. Repeat steps 3-6 until the ranks no longer change.
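Steps 3-5 for a single category can be sketched as follows, using a made-up five-player "owned" population; the full method computes this per category, sums the scores, and re-ranks until stable.

```python
from statistics import mean, pstdev

# Made-up points-per-game for a tiny "owned" population.
points = [10.0, 5.0, 15.0, 12.0, 8.0]

avg = mean(points)   # step 3: average production of the population
sd = pstdev(points)  # step 4: standard deviation of the population

# Step 5: each player's Standard Score (z-score) for this category -
# 1 point per standard deviation above or below the population average.
z_scores = [(p - avg) / sd for p in points]
# A player exactly at the average gets 0; sums of these across
# categories drive the re-ranking in step 6.
```

By construction, the z-scores of the owned population sum to zero, which is why an average player ranks in the middle of the league.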

If computing standard scores isn't clear, and Wikipedia did not help you, I have a fun example that might help explain it.

Seven NBA stars have a sock contest at the All-Star game. They go to their luggage and count the pairs of socks of each color they have with them. The winner is the player with the highest Standard Score for their contest’s three categories – Red, Blue, and Black. I'm sure Cliff Paul would have won had he been invited.




(And yes, those are actually approximations of Points, Assists, and Steals)

As an aside, ESPN’s player rater and Yahoo’s in-game ranks do not change based on your league’s settings (Yahoo uses 9 categories including turnovers, ESPN uses 8 categories). Every custom league that has changed the categories and uses the in-game ranks to evaluate trades and waiver-wire pickups is using numbers that don’t actually apply to their league. To determine the Standard Scores for your league’s settings, I recommend using Basketball Monster’s tools.

This is the fourth installment of the StatDance.com NBA Draft Analysis series. In Part I, I established the basis of how the process works - how to value players and establish an Expected Value based on their draft position. In Part II, I ranked the last ten draft classes by strength. And Part III was ranking the best-drafting teams over the last ten drafts.

Now, we get to the decision-makers and analyze the General Managers themselves - originally, this was why I started the NBA Draft analysis project. I looked back at who was the GM for every team at every draft over the last ten years (the 2002-2011 drafts) and assigned each of them credit (or blame) for their team's draft that year. I do realize that every GM has help and/or orders on draft night - the team owner or another executive might force some picks, and the GM has no choice but to take responsibility for them.

For example, Michael Jordan has been widely ridiculed for running the Bobcats poorly, but he has never actually been the GM in Charlotte. Therefore, he is not eligible for the rankings, despite most likely having a lot of influence over who gets drafted. Another example is Pat Riley, who was hired as the head coach and team president of the Heat in 1995, but has only held the General Manager title since the 2009 draft. While I assume that Riley was calling the shots since he got there, if I make a judgment call for any team, I have to look into every team - and it still wouldn't be fair. Even Pat Riley has a boss, and it could have been the owner's decision to draft Wade in 2003. The point is, there is no fair way to look at this unless we use a black-and-white system of who held the GM title.

I highly recommend looking back at Part I and Part III for further information about how these numbers were generated if you are interested. Now, on to the rankings!

The Top 10 Drafting Tenures as GM (2002-2011)

The formula I came up with to rank the top GM tenures is overly complicated, but basically it rewards drafting well over a large sample size. To qualify, a GM must have made at least 5 draft picks. This is a ranking I plan to revisit in the future with a wider historical window, since ten years doesn't really do it justice, especially considering how many players can still change their evaluation.


The Worst 10 Drafting Tenures as GM (2002-2011)

A much simpler ranking - total raw value short of Expected Value. I did not include Jerry Krause, since Jay Williams' motorcycle accident is the only reason he would have made the list. This list favors failed GMs from early in the 2002-2011 window; the more recent GMs might argue their picks could still turn out, and I tend to agree.


Ranking the Current GMs from 1 to 30

The simplest rating so far - just ranking by the percentage of Expected Value they have drafted as GM from 2002-2011. Some of these executives have had multiple tenures during this period, but they have all been combined to give an overall drafting performance over the last decade.


Before Pat Riley emails me and complains that he doesn't deserve to be at the bottom of this list, let me just say that it's not really fair to say Riley has drafted worse than Kahn and Pritchard - they have a much larger raw deficit; this is just a simple percentage-based ranking.

Some other interesting notes:
  • Kevin Pritchard has a new job with the Pacers after showing up in the top 10 worst tenures of the last decade. He made the mistake in Portland that a lot of people would have made, drafting Oden over Durantula; other than that pick, he was not far below average.
  • Michael Jordan was GM for only one year in the window I looked at, with Washington (2002) - he got 70% of his expected value with four picks, two in each round.
  • 17% of the league's current GMs were not in charge of a team from 2002-2011.
  • Donnie Nelson looks like a poor drafter in this analysis, which might be true - but he has been in charge of the Mavericks for 10 years and has had only 1 pick in the top 24 in that time (he picked Devin Harris 5th, who has outperformed his EV). During that tenure, the Mavericks are second in the NBA in winning percentage. That's hard to do even if you find talent late in the draft - which he obviously can't do.
Thanks, as always, to Basketball-Reference.com for their data (both player stats and executive listings), this would be even more time-intensive if not for that resource.


This was Part IV of the StatDance.com NBA draft analysis.
Part I: Determining the expected value of a draft pick
Part II: Ranking the Strongest NBA Drafts
Part III: The best and worst drafting teams
Part V: Who did they miss? Looking at the undrafted free agents in the NBA - Coming Soon


This is part III of the StatDance.com NBA Draft Analysis series. In part I, I went over how to fairly evaluate a draft pick. Basically, the contribution of each pick is measured using Player Efficiency Rating and the number of minutes played every year. The first eight years of a career are weighted, and then measured against the Expected Value of that draft pick. The Expected Value is a smooth-line historical average, based on years after being drafted and the draft position. So, if a player performs better than the average player drafted at his spot, he gets a positive evaluation, and vice versa. In part II, I looked at which drafts over the last ten drafts (2002-2011) were the strongest and the weakest.

While it is true that there are other ways to build your team from year to year, the draft is the only organic way to acquire talent. Free agency is great - you get a known commodity (usually!) - but it is dangerous to count on, since you never know for sure which players you will be able to sign. In order to field a strong team, you need to acquire talent through the draft; even if you trade those players away to land other assets, it still took an astute evaluation of talent to get the players you need.

How are championship teams actually put together? Let's look at the last six NBA champions and see how they built their teams. We will look at the top 3 or 4 players in PER*MP for each team.

2012 - Miami Heat
LeBron James - 71408 - Free Agency
Chris Bosh - 37932 - Free Agency
Dwyane Wade - 42737 - Drafted

2011 - Dallas Mavericks
Dirk Nowitzki - 58594 - Drafted
Tyson Chandler - 37886 - Trade
Shawn Marion - 38301 - Trade
Jason Terry - 40768 - Trade

2010 Los Angeles Lakers
Andrew Bynum - 39935 - Drafted
Kobe Bryant - 62086 - Drafted
Pau Gasol - 55029 - Trade

2009 - Los Angeles Lakers
Kobe Bryant - 72224 - Drafted
Pau Gasol - 66578 - Trade
Lamar Odom - 38445 - Trade

2008 - Boston Celtics
Kevin Garnett - 58898 - Trade
Paul Pierce - 56330 - Drafted
Ray Allen - 43033 - Trade

2007 - San Antonio Spurs
Tim Duncan - 71149 - Drafted
Manu Ginobili - 49646 - Drafted
Tony Parker - 53479 - Drafted

Only 8 of the 17 players who were the major contributors to a title had been drafted by the team they took to a championship, but only 2 of the 17 were actually signed in free agency. This just shows that (at least over the last six years) it is vitally important to acquire assets in the draft so you can play them, or at least trade them for who you want.

In today's NBA, free agency greatly favors the stronger teams - great players want to win. And in order to trade for the players you need to complete your team, you need to have assets - that's where drafting wins you championships. The Heat don't land LeBron without drafting Wade. Every championship team is built by acquiring talent, and the biggest part of that happens on draft night.

I've analyzed the NBA draft for the last ten drafts - 2002-2011 (2012 would be useless to analyze, since those players haven't played yet). For each team, I compared the value they got out of the draft with the Expected Value of the picks they got "credit" for, and then set those results against the team's winning percentage over the last ten years. To get "credit" for a draft pick, the team must either draft a player with its own pick and keep him, or acquire a player's draft rights near draft time (usually on draft night, but occasionally afterwards).

Over the ten-year span, I have winning percentages for each team, ranging from .358 (Charlotte) to .706 (San Antonio). Then I have the draft results. Of the 30 teams, 15 have gotten over 100% of their Expected Value, and 15 have gotten less. The most successful drafting team (Boston) has gotten 149% of its Expected Value; the least successful, the Clippers, has gotten only 68.5%.
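The team statistic used for the rest of this article reduces to a single ratio. A minimal sketch, using invented numbers rather than the article's data:

```python
# Percent of Expected Value: realized draft production divided by the
# Expected Value of the picks a team got "credit" for.

def percent_of_ev(production, expected):
    return 100.0 * production / expected

# Invented example figures (production, expected value) for two teams:
teams = {
    "Boston": (149_000, 100_000),
    "Clippers": (68_500, 100_000),
}
ranked = sorted(teams, key=lambda t: percent_of_ev(*teams[t]), reverse=True)
```

Teams over 100% out-drafted their slots; teams under 100% got less than an average return from the picks they controlled.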

Can You Win Without Good Drafting?

While everyone knows that drafting better players correlates with winning basketball games, it's nice to know that the system works, so let's analyze the results. Only 4 teams have a winning record over the last 10 years without also getting over 100% of their Expected Value on draft night: Dallas, Denver, Phoenix, and Houston (Luis Scola was mistakenly credited to Houston and is now credited to the Spurs - once I switched him, the Rockets went below 100% and the Spurs went over). The Mavs (Dirk) and the Spurs (Duncan/Parker/Manu) drafted their huge stars before the first year of my analysis. If I expanded the analysis another five years, the Spurs would be one of the strongest drafting teams. The Mavericks are only at 81% - even after adding Nowitzki they might still be close to that mark. The rest of that team was assembled with pieces they got at a discount in trades.

Denver has only had two bad drafts in the last decade: 2002 (picking Nikoloz Tskitishvili 5th) and 2005 (picking Julius Hodge 20th). Outside of those two years, they have consistently had good drafts or traded away all their picks (you can't mess up a draft if you trade the picks for proven commodities). They also turned Carmelo Anthony into a lot of assets. Still, they are definitely an exception, with overall poor draft performance and a .551 winning percentage.

Phoenix is an even better example of poor drafting and still winning, having had only two good drafts in the last decade. The Nash acquisition propelled them a long way, with a .595 winning percentage and one great pick (Amar'e).

Houston has drafted just below 100% of their Expected Value and yet has a winning percentage of .562. They have been good enough since they drafted Yao Ming in 2002 to avoid any high draft picks. While they are at 98%, they are only 13966 points away from 100%, the closest of any team to breaking even.

So, to recap - only 4 teams posted winning records without good draft results, and one of them was very close to breaking even.

Can You Lose While Drafting Well?

There are ways to win in the NBA without drafting well - so of course there are other ways to lose, too. It only takes a few really horrible free agent signings to completely tank a franchise. Fortunately for my analysis, it seems the teams that draft well are more likely to run their team well - there are only 5 teams that have a positive draft record and have managed to post a losing record: the Wizards, Knicks, Hornets, Kings, and 76ers.

The Wizards are barely positive with the draft at 102% of Expected Value, but they suffered from Michael Jordan's mismanagement from 2000-2003 and then the Gilbert Arenas era (four productive years for the Wizards and two contracts signed worth a total of $170 Million).

The Knicks are the classic case of great drafting and horrible management. To quote Bill Simmons (pretending to quote Isiah) "If you look at what I've done over the years, I always drafted well: Stoudamire, T-Mac, Camby, Frye, Ariza … you want to stockpile as many assets as possible, only because it gives you more options to do something dumb." What more can I say?

The Hornets (winning percentage: .488) have gotten an impressive 131.8% of their Expected Value over the last ten years. Their winning percentage the last five years (after they moved back to New Orleans) is .530 - they had some bad years when they were in Oklahoma City.

Sacramento, despite a .456 winning percentage over the last ten years, drafted only in the second round or late in the first round during the first five years of our analysis (2002-2006). Since then, their first picks have been 10, 12, 4, 5, and 10. Their drafting record is stellar: only one year in the past ten came in under 100% of Expected Value (2006, when they picked Quincy Douby 19th with their only pick). Yet their winning percentage the last 5 years is an atrocious .320, even worse than the 10-year mark. Either the Kings are about to start winning titles, or they are one of the worst-managed teams in the history of the league.

Philadelphia is the best example of good drafting and a bad record - boasting a ridiculous 147.2% value from drafting while posting a .474 winning percentage. The 76ers are a study in mediocrity - always playing well enough to avoid drafting too high (Iguodala 9th in 2004 is their only top-10 selection), but never having the talent to really start winning.

Ranking The Best Drafting Teams

So, to summarize the last two sections: 4 teams drafted poorly yet posted winning records, and 5 teams drafted well yet had losing records. That means the other 21 teams either posted winning records with positive draft results, or posted losing records with negative draft results (under 100% of Expected Value).

So we are left with the results! Here are the teams that have gotten the best value for their picks from 2002-2011. Of course, these rankings could still change a lot, since the majority of the players are still playing, but this is how things stand today.

  1. Boston Celtics 148.9% 
  2. Philadelphia 76ers 147.2% 
  3. Sacramento Kings 138.0% 
  4. New Orleans Hornets 131.8% 
  5. Cleveland Cavaliers 131.5% 
  6. Miami Heat 128.1% 
  7. Los Angeles Lakers 117.8% 
  8. New York Knicks 116.2% 
  9. Indiana Pacers 113.6% 
  10. Utah Jazz 112.8% 
  11. Detroit Pistons 111.6% 
  12. Orlando Magic 105.5% 
  13. San Antonio Spurs 104.4%
  14. Washington Wizards 102.2% 
  15. Chicago Bulls 101.4% 
  16. Houston Rockets 98.3% 
  17. Milwaukee Bucks 97.4% 
  18. Atlanta Hawks 93.6% 
  19. Oklahoma City Thunder 91.5% 
  20. Memphis Grizzlies 90.7% 
  21. Phoenix Suns 89.3% 
  22. Charlotte Bobcats 88.2% 
  23. Denver Nuggets 82.9% 
  24. Toronto Raptors 82.8% 
  25. Brooklyn Nets 80.8% 
  26. Dallas Mavericks 80.6% 
  27. Portland Trail Blazers 79.8% 
  28. Golden State Warriors 77.5% 
  29. Minnesota Timberwolves 72.7% 
  30. Los Angeles Clippers 68.5% 

And here is each team, with every pick they get credit for over the last ten years. 

(note: credit for Luis Scola has been moved from Houston to San Antonio)

This was Part III of the StatDance.com NBA draft analysis.
Part I: Determining the expected value of a draft pick
Part II: Ranking the Strongest NBA Drafts
Part IV: We evaluate every NBA GM since 2002 - Coming Soon
Part V: Who did they miss? Looking at the undrafted free agents in the NBA - Coming Soon

In Part I of the NBA Draft Analysis series, I went through the methodology of determining a player's worth and listed some of the best value picks of all time. In this second installment (of five), I'll go through each of the last ten drafts (2002-2011), look at some of the best and worst picks from each draft, and then rank the drafts in order of overall strength.

To briefly recap the value system: for every pick in the last ten drafts, I combined each player's performance in an eight-year weighted average and compared it with the Expected Value of the pick. If you haven't read Part I yet, it has a detailed explanation of the system.

You can view a gallery of the drafts (from Part I) directly on imgur from here. The overall rankings are based on the total production of all players drafted divided by the total expected value.
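That overall rating can be written out explicitly. A minimal sketch, assuming each pick is represented as a (production, Expected Value) pair:

```python
# Overall draft-class strength: the total production of every player in
# the class divided by the total Expected Value of the slots they were
# drafted into, expressed as a percentage.

def draft_class_strength(picks):
    """picks: iterable of (production, expected_value) tuples."""
    total_prod = sum(p for p, _ in picks)
    total_ev = sum(ev for _, ev in picks)
    return 100.0 * total_prod / total_ev
```

A class that exactly meets the historical baseline scores 100%; the per-draft percentages below are this ratio computed over every pick in that year's class.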

2002
Overall: 83.16%

Best Picks: Yao Ming (1), Amar'e Stoudemire (9), Carlos Boozer (34)
Worst Picks: Nikoloz Tskitishvili (5), Dajuan Wagner (6)

Only 11 players performed over 125% of their expected value. This draft was pretty weak across the board with the top 10, 11-30, and 31-57 slices under-performing.

2003
Overall: 120.21%

Best Picks: LeBron (1), Carmelo Anthony (3), Chris Bosh (4), Dwyane Wade (5)
Worst Picks: Darko Milicic (2), Mike Sweetney (9)

There were lots of standout picks in this draft, but with the four greats I have listed, it doesn't seem fair to list the others. David West at 18 performed at the Expected Value of a #2 pick, Josh Howard (29) at a #3 pick, and Mo Williams (47) close to #4 value.

Interestingly, this draft was not exceptionally deep. The overall 120% value is mostly due to the players I've already mentioned. Over a third (23/58) of the picks gave less than 25% of their Expected Value, an average number over the last decade.

It should be noted that the Darko pick by the Pistons was made exponentially worse by the other members of the top 5. The Pistons won the NBA Championship the following season - imagine if they had added a Carmelo or a Wade to that team.

2004
Overall: 100.28%

Best Picks: Dwight Howard (1), Andre Iguodala (9), Josh Smith (17), Kevin Martin (26), Al Jefferson (15)
Worst Picks: Shaun Livingston (4), Rafael Araujo (8), Luke Jackson (10)

This draft is exceptional for a different reason: its talent was concentrated almost entirely in the first round. Three players went on to have production consistent with being drafted first overall (Howard, Iguodala, and Josh Smith), and four more produced number-two-overall value - Okafor (2), Ben Gordon (3), Luol Deng (7), and Al Jefferson (15). That's seven players who could have been drafted #1 or #2 and been a worthy selection. But unlike most drafts, almost no players drafted in the second round went on to have significant careers - Trevor Ariza (43) and Chris Duhon (38) were the only two exceptions.

2005
Overall: 114.04%

Best Picks: Chris Paul (4), Danny Granger (17), David Lee (30), Monta Ellis (40)
Worst Picks: Yaroslav Korolev (12), Julius Hodge (20)

The high overall rating of this draft is amazing considering the careers of the first and second picks (Andrew Bogut and Marvin Williams) - both have under-performed their draft position. The draft is bolstered by Deron Williams (3) and Chris Paul (4), and an exceptionally solid 17-40, with players like Monta Ellis and Louis Williams getting drafted in the second round.

2006
Overall: 77.91%

Best Picks: LaMarcus Aldridge (2), Rajon Rondo (21)
Worst Pick: Adam Morrison (3)

This draft was so weak it seems hard to call many of the picks bad - there just wasn't that much talent available. Only 4 players have given the Expected Value of a #4 pick, compared to a similarly weak 2002 draft when 7 players performed at that level. Only 7 players have given the Expected Value of a top 10 player.

2007
Overall: 95.75%

Best Picks: Kevin Durant (2), Marc Gasol (48)
Worst Pick: Greg Oden (1)

The only player to really stand out in this draft is Durant, who was half of the obvious #1/#2 pairing with Oden. Gasol was great value late in the draft precisely because he wasn't going to play the next year - a gamble that obviously paid off, as he has given the third-highest value of his draft class so far despite missing that season.

2008
Overall: 117.06%

Best Picks: Westbrook (4), Love (5), Brook Lopez (10)
Worst Picks: Joe Alexander (8), Alexis Ajinca (20)

An exceptionally deep draft, with only the two "worst" picks not producing well among the first 29 picks. A very high 117% overall performance without a top group like the 2003 draft's (LeBron/Bosh/Wade/Carmelo) makes this draft unique among the last ten. An amazing 34 drafted players performed at at least 75% of their Expected Value, the most in the ten years of this survey.
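The depth figure quoted above is just a threshold count. A sketch, again using hypothetical per-pick (production, Expected Value) pairs rather than the article's data:

```python
# Depth of a draft class: how many picks delivered at least a given
# fraction (here 75%) of their slot's Expected Value.

def depth(picks, threshold=0.75):
    """picks: iterable of (production, expected_value) tuples."""
    return sum(1 for production, ev in picks if production >= threshold * ev)
```

By this measure, 2008 (34 players at or above the threshold) and 2009 (32) are the two deepest classes in the survey.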

2009
Overall: 108.32%

Best Picks: Brandon Jennings (10), Jrue Holiday (17), Ty Lawson (18), Darren Collison (21), Marcus Thornton (43)
Worst Pick: Hasheem Thabeet (2)

With only three seasons to look at, many of these picks are still works in progress. Ricky Rubio has given almost nothing back to the Timberwolves, but the glimpse we saw of him last year shows he could still be a good investment at #5. This draft, much like 2008, appears to be very deep, with 32 players performing at 75% or better of their Expected Value - the second-most in the last ten drafts. However, no player has yet performed at the level of a #1 pick. Blake Griffin is closest, having been drafted in that spot, and he missed a season due to injury - so all expectations are that he will exceed his Expected Value soon.

2010
Overall: 76.73%

Best Picks: Greg Monroe (7), Landry Fields (39)
Worst Picks: Evan Turner (2), Cole Aldrich (11)

These players have only had two seasons to perform, so it's not very fair to be evaluating the draft already. But so far, it is remarkable that Landry Fields has contributed the Expected Value of a #4 pick from the 39th slot. Greg Monroe has put up exceptional value, producing more than John Wall - and both of them are over the EV of a number one overall pick.

2011
Overall: 96.39%

Best Picks: Kyrie Irving (1), Isaiah Thomas (60)
Worst Picks: N/A

While it is way too early to evaluate this draft using the metrics I have designed, it should be noted that what Thomas did as the 60th pick is pretty remarkable - he had the second-most productive rookie season of his draft class.

Here is a summary of the overall results:


This was Part II of the StatDance.com NBA draft analysis.
Part I: Determining the expected value of a draft pick
Part III: Team-by-team NBA draft performance - Coming Soon
Part IV: We evaluate every NBA GM since 2002 - Coming Soon
Part V: Who did they miss? Looking at the undrafted free agents in the NBA - Coming Soon