Every year I compile statistics from around the NBA world and use them to project win totals for every team, with a special focus on the Raptors. I try to improve the process each year, and have settled on a basic approach over the past few seasons, with the main difference being which statistics I look at. This year's version is the most stable the process has been from one year to the next.
For a detailed breakdown of the process and the results from last year, take a look at last year’s article (which you might find I’ve pulled a lot of text from for this article, and which has links to older versions). I’ll give a quick summary here as a refresher though, if you don’t want or need the full breakdown again. The approach is nearly identical to last year.
First off, the statistics. For every iteration of these projections, I’ve used Win Shares (WS, from Basketball Reference) as the proxy for production, which I personally differentiate from impact; they are two different ways to evaluate players. Production is the ability of a player to put up raw points, rebounds, and assists, with good efficiency and without committing turnovers. Win Shares are a catch-all box score aggregation statistic meant to tie those production values into a single measure representing each player’s contribution to generating wins.
For impact stats, I use Box Plus-Minus (BPM) from Basketball Reference and Real Plus-Minus (RPM) from ESPN. Both are box-score-correlated impact stats. They use adjusted plus-minus data over several seasons to evaluate the impact each player has had on winning their games. (In other words, what the team’s point differential is with each player on the floor, adjusted for quality of teammates and quality of opposition.) They then correlate box score values to that data to generate more stable numbers from single-season samples. They are middle-ground stats: as impact stats, they are still driven by box score metrics to some degree. Over the past couple of seasons I’ve also used Player Impact Plus-Minus (PIPM). It has more actual plus-minus data in it, relying less on box score priors (as far as I can tell, anyway).
I’ve played with how to weight the various stats over the years, but in the interest of keeping my preferences out of the model as much as possible, I’m just weighting all four equally this year.
The result of this method is not only a win projection but also a projected offensive and defensive rating for each team. The impact stats directly project plus-minus impact, which can be applied to the league-average ORTG (or DRTG), which was 110.3 last season, to get a predicted ORTG and DRTG for each team. From there, a simple Pythagorean win formula gives you the win total. The Win Shares calculation works backwards: I get total wins by adding up individual win shares, then back-calculate what each team's ORTG and DRTG would have to be to produce that win total, using the split of offensive and defensive win shares to determine how much of their success comes on each end.
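As a rough illustration, that backwards step can be sketched by inverting the Pythagorean formula. The exact split formula isn't spelled out here, so the `off_share` log-space split below is my own assumption, as is applying the exponent-14 Pythagorean variant at this stage:

```python
LEAGUE_AVG_RTG = 110.3  # league-average ORTG/DRTG from last season

def ratings_from_wins(wins, off_share, games=82, avg=LEAGUE_AVG_RTG, exp=14):
    """Back-calculate (ORTG, DRTG) from a win total.

    off_share is the offensive fraction of the team's win shares,
    OWS / (OWS + DWS). The Pythagorean formula pins down only the
    ORTG/DRTG ratio; off_share decides how to split that ratio
    around league average (a hypothetical choice, not the article's).
    """
    p = wins / games
    ratio = (p / (1 - p)) ** (1 / exp)   # implied ORTG / DRTG
    ortg = avg * ratio ** off_share
    drtg = avg * ratio ** (off_share - 1)
    return ortg, drtg
```

With `off_share = 0.5` the two ratings land symmetrically around the 110.3 league average, and a 41-win team comes back as exactly average on both ends.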
As with prior years, I will keep my paws out of the process as much as possible once it’s settled. That means I won’t be making rotation decisions for teams, nor deciding that a player had an off-year and using different data for them (with only extreme-case exceptions to that rule).
So, the mostly automated process is as follows. Each team’s roster is assembled based on contracts signed so far. Each player is assigned exactly the same minutes played as the year prior, the team’s total minutes are summed, and if they exceed or fall short of 19,680 minutes (48 minutes times five positions times 82 games), every player on the team is adjusted up or down proportionally to make the total match. The one exception is that every player is capped at 3,000 minutes (over 82 games), as any more than that is unreasonable on its face. All adjustments for a shorter season are done at the end.
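A minimal sketch of that minutes step, with hypothetical player names and totals. Capping one player frees up minutes that get redistributed to everyone else, so the loop repeats until no one exceeds the cap:

```python
TEAM_MINUTES = 19_680   # 48 minutes x 5 positions x 82 games
MINUTES_CAP = 3_000     # per-player ceiling

def scale_minutes(prior, target=TEAM_MINUTES, cap=MINUTES_CAP):
    """Scale last year's minutes proportionally to hit the team total,
    pinning anyone who would exceed the cap and redistributing the rest."""
    capped, free = {}, dict(prior)
    while free:
        # Scale the still-free players to fill whatever the capped
        # players haven't already claimed.
        factor = (target - sum(capped.values())) / sum(free.values())
        over = [p for p, m in free.items() if m * factor > cap]
        if not over:
            return {**capped, **{p: m * factor for p, m in free.items()}}
        for p in over:
            capped[p] = cap
            del free[p]
    return capped
```

For example, scaling `{"A": 3000, "B": 1000}` up to a 5,000-minute target pins player A at the 3,000-minute cap and hands the remaining 2,000 minutes to player B.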
One thing I did this year that I haven’t done in prior years is remove small-sample players (<200 minutes played) entirely. The effect of those players was generally limited, as their new minutes would be low as well, but cutting them reduced the noise in the data and likely stabilized the results some.
Once the new expected minutes played for each player are assigned, they also get their WS/48 (per-48-minute Win Share rate), BPM, RPM and PIPM from the previous season assigned to them. For each team, a total Win Share number is summed from the various players’ WS/48 and minutes. Each team also has the minutes-weighted average BPM, RPM and PIPM calculated.
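The aggregation step might look something like this sketch, with a hypothetical two-player roster (tuples of minutes, WS/48 and BPM standing in for the full stat set):

```python
def team_totals(roster):
    """roster: list of (minutes, ws48, bpm) tuples for one team.

    Returns the summed Win Share total and the minutes-weighted
    average impact number (shown here for BPM; RPM and PIPM would
    be averaged the same way).
    """
    total_min = sum(m for m, _, _ in roster)
    wins = sum(m / 48 * ws48 for m, ws48, _ in roster)           # WS total
    avg_bpm = sum(m * bpm for m, _, bpm in roster) / total_min   # weighted
    return wins, avg_bpm
```

Two players at 1,920 minutes each, one at 0.100 WS/48 with a +2.0 BPM and one at 0.050 WS/48 with a -2.0 BPM, would sum to 6.0 win shares and average out to a 0.0 team BPM.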
The Win Share sum for each team is the win projection. Easy enough. The impact numbers are more difficult, as they are represented as a point differential (margin of victory). Since there are five players on the court at any time, the average point differential determined by each statistic is multiplied by five, then a Pythagorean wins calculation (I use the exponent 14 variant) is used to translate that point differential to a win-loss record.
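That conversion can be sketched as follows. Crediting the whole differential to the offensive rating is a simplifying choice on my part (the Pythagorean result depends mainly on the ORTG/DRTG ratio), and the example values are hypothetical:

```python
LEAGUE_AVG_RTG = 110.3  # league-average rating from last season

def impact_record(avg_impact, games=82, exp=14):
    """Turn a team's minutes-weighted average impact stat into wins."""
    diff = 5 * avg_impact            # five players on the court at once
    ortg = LEAGUE_AVG_RTG + diff     # apply the full swing to offense
    drtg = LEAGUE_AVG_RTG            # (a simplification for this sketch)
    return games * ortg**exp / (ortg**exp + drtg**exp)
```

A perfectly average team (0.0 weighted impact) comes out to exactly 41 wins, and any positive average impact pushes the projection above that.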
One last adjustment is made, which is to make sure the league-average win total for each projection is 41 wins. That adjustment is made first on the predicted offensive and defensive ratings, ensuring that the predicted league average for both is 110.3. Then, for each win statistic, we subtract (or add) the excess (or shortfall) in the average win total from every team.
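The win-total half of that adjustment is a simple uniform shift, sketched here with hypothetical team projections:

```python
def normalize_wins(projections, target=41.0):
    """Shift every team by the same amount so the league averages 41 wins."""
    excess = sum(projections.values()) / len(projections) - target
    return {team: w - excess for team, w in projections.items()}
```

If a two-team league projected to 45 and 39 wins (a 42-win average), each team would be shifted down by one win.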
Predicting The Raptors
As we do every year, we’ll walk through the Raptors as an example. The following chart shows the input values — the players on the roster, the minutes totals from last year, and each player’s WS/48, BPM, RPM and PIPM.
In any case, here is that chart, with the 2019-20 minutes and production inputs, as well as the pro-rated 2020-21 expected minutes, and the contribution of each individual to the team-level Win Shares, BPM, RPM and PIPM numbers. You’ll notice the scales completely change between the graphs: in the team contribution chart, the WS numbers are much bigger, as they are total contributed wins instead of a per-minute rate, while the plus-minus numbers are much smaller, as they are pro-rated down to the fraction of team minutes each player will play.
With those inputs (you just sum up the values in each column), the team totals are calculated. Then the plus-minus numbers are added to the league-average ORTG to get the team’s ORTG and DRTG, the league-wide adjustments are made so the league-average ORTG, DRTG and win totals land on the right numbers, and you have the following results, with the win totals adjusted down for a 72-game season:
Ranking the League
After doing the same thing for every team in the league, we get the following standings. I’ve listed the win projections, showing projected playoff teams, as well as the ORTG and DRTG rankings league wide to give a sense of how strong each team might be offensively or defensively.
Those rankings include the following special-exception assumptions: I set some players who missed last season to their most recent season with a significant sample. In the case of Steph Curry and Kevin Durant, that was 2018-19, when they played full seasons, so those should be pretty optimistic assumptions for their impact. In the case of John Wall and DeMarcus Cousins, their last real sample seasons were also 2018-19, but they only played partial seasons, so they are assumed to play a partial season again at the reduced impact numbers they posted that year. Otherwise, I left players as calculated.
The projections are pretty flat this season, but the ranges for the offensive and defensive ratings are not too far off what we saw last year. The model might just struggle to predict those outlier best and worst teams that pace to 70 wins; it kind of requires everything to go right to get a result like that. Which, sometimes it does, I guess.
Sound off below about which of these results are way off, and let me know if you have specific questions (say, what happens in a particular James Harden trade, if one were to happen) and I can try to plug them in and see what numbers tumble out.