
Predicting the Raptors’ win total for the 2019-20 season

The Raptors lost a lot of talent over the summer. Let’s take a look at just how much that might hurt them statistically, and figure out where Toronto stands compared to the rest of the NBA.

Photo: NBA Finals, Golden State Warriors at Toronto Raptors | John E. Sokolowski-USA TODAY Sports

Every year I compile statistics from around the NBA world and use them to project win totals for every team, with a special focus on the Raptors. I try to improve the process every year, and have settled on a basic approach over the past few seasons, with the main difference being which statistics I look at.

For a detailed breakdown of the process and a look at the results from last year, take a look at last year’s article (which you might find I’ve pulled a lot of text from for this article), or even the year prior. I’ll give a quick summary here as a refresher though, if you don’t want or need the full breakdown again, and note what has changed from last year.

First off, the statistics. For every iteration of these projections, I’ve used Win Shares (WS, from Basketball Reference) as the proxy for production, which I personally differentiate from impact; the two are different ways to evaluate players. Production is the ability of a player to put up raw points, rebounds, and assists with good efficiency and without committing turnovers. Win Shares is a catch-all box score aggregation statistic meant to tie those production values into a single measure representing each player’s contribution to generating wins.

For impact stats, I started with just using Box Plus-Minus (BPM) from Basketball Reference, as it was easy to cull from the same site. Two years ago, I added Real Plus-Minus (RPM) from ESPN. Both are box-score correlated impact stats. They use adjusted plus-minus data over several seasons to evaluate the impact each player has had on winning their games. (In other words, what the team’s point differential is with each player on the floor, adjusted for quality of teammates and quality of opposition.) Then they correlate box score values to that data to be able to generate more stable numbers from single season samples. They are middle-ground stats, since they are impact stats driven by box score metrics to some degree.

Last year I added one more, a new one from Jacob Goldstein called Player Impact Plus Minus (PIPM). It’s got more actual plus-minus data in it, relying less on box score priors, as far as I can tell anyway. The values assigned to different players also align more with my personal opinions, so there’s also a little bit of confirmation bias in why I like the stat. I also use one year of data for the first two impact stats and WS, but use multi-year data for PIPM, so we should be getting a wide variety of inputs to mash all together.

I’ve played with how to weight the various stats over the years, but in the interest of keeping my preferences out of the model as much as possible, I’m just weighting all four equally this year.

The Method

In past years, the main goal of this approach has been total win projections, but this year I’m also pulling projected offensive and defensive ratings from the same data. The impact stats directly project plus-minus impact, which can be applied to the league average ORTG (or DRTG), which was 109.7 last season, to get a predicted ORTG and DRTG for each team. From there a simple Pythagorean win formula gives you the win total. The Win Shares calculation works backwards — I get total wins by adding up individual win shares, then back-calculate what the team’s ORTG and DRTG would have to be to result in that win total, using the split of offensive and defensive win shares to figure out how much of their success will come on each end.
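For the curious, the rating-to-wins step is a one-liner. Here’s a minimal Python sketch using the exponent-14 Pythagorean variant this piece uses (the function name is mine):

```python
def pythagorean_wins(ortg, drtg, games=82, exponent=14):
    """Convert offensive/defensive ratings to an expected win total
    via the Pythagorean expectation (exponent-14 variant)."""
    win_pct = ortg**exponent / (ortg**exponent + drtg**exponent)
    return games * win_pct
```

A team at last season’s league-average 109.7 on both ends comes out to exactly 41 wins, as you’d expect.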

As with prior years, I will attempt to keep my paws out of the process as much as possible once it’s settled on. Meaning that I won’t be making rotation decisions for teams, nor deciding that a player had an off year and using different data for them (with exceptions to that rule only in extreme cases).

So, the mostly automated process is as follows. Each team’s roster is assembled based on contracts signed so far. Each player is assigned exactly the same minutes played as the year prior, the team’s total minutes are summed, and if they exceed or fall short of 19,680 minutes (48 minutes times five positions times 82 games), every player on the team is adjusted up or down proportionally to make that total match. The exception being that every player is capped at 3,000 minutes maximum, as any more than that is unreasonable on the face of it.
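That minutes proration is simple enough to sketch in a few lines of Python. The function name, data layout, and the choice to apply the cap after scaling are my assumptions:

```python
TEAM_MINUTES = 48 * 5 * 82  # 19,680 available minutes in an 82-game season
MINUTES_CAP = 3000          # individual ceiling described above

def prorate_minutes(prior_minutes):
    # Scale every player's prior-season minutes proportionally so the
    # roster sums to the team total, then cap individuals at 3,000.
    scale = TEAM_MINUTES / sum(prior_minutes.values())
    return {p: min(m * scale, MINUTES_CAP) for p, m in prior_minutes.items()}
```

Note that when the cap actually bites, the team total comes in a touch under 19,680; the article doesn’t say those minutes get redistributed, so this sketch leaves them unassigned.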

Once the new expected minutes played for each player are assigned, they also get their WS/48 (per-48-minute Win Share rate), BPM, RPM and PIPM from the previous season assigned to them. In previous years the total values were applied — this season I pulled the offensive and defensive data separately. For each team, a total Win Share number is summed from the various players’ WS/48 and minutes. Each team also has the minutes-weighted average BPM, RPM and PIPM calculated.

The Win Share sum for each team is the win projection. Easy enough. The impact numbers are more difficult, as they are represented as a point differential (margin of victory). Since there are five players on the court at any time, the average point differential determined by each statistic is multiplied by five, then a Pythagorean wins calculation (I use the exponent 14 variant) is used to translate that point differential to a win-loss record.
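Putting those two paragraphs together, here’s a minimal roll-up for a single impact stat. The data layout, function name, and the simplification of crediting the full differential to the offensive side are mine:

```python
def project_from_impact(players, league_avg_rtg=109.7, games=82, exponent=14):
    """players: list of (minutes, plus_minus) pairs for one impact stat
    (BPM, RPM, or PIPM). Returns a projected win total."""
    total_min = sum(m for m, _ in players)
    # Minutes-weighted average impact, times five players on the floor
    team_diff = 5 * sum(m * pm for m, pm in players) / total_min
    # Pythagorean expectation against a league-average opponent
    ortg = league_avg_rtg + team_diff
    win_pct = ortg**exponent / (ortg**exponent + league_avg_rtg**exponent)
    return games * win_pct
```

A roster of exactly average players (plus-minus of zero across the board) lands right on 41 wins.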

One last adjustment is made, which is to make sure the league average win total for each projection is 41 wins. This year, that adjustment is made first on the predicted offensive and defensive ratings, ensuring that the predicted league average for both is 109.7. Then, for each win statistic, we just subtract (or add) the excess (or missing) average win total from every team. Since the ratings are fixed first, this is a much smaller adjustment than in previous years.
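The additive win correction amounts to shifting every team by the same amount; a quick sketch (function name mine):

```python
def center_on_41(projections):
    # Subtract (or add) the same excess from every team so the
    # league average win total is exactly 41.
    excess = sum(projections.values()) / len(projections) - 41
    return {team: wins - excess for team, wins in projections.items()}
```

One quirk of this approach: every team moves by the same number of wins, so the rankings are untouched, only the totals.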

Predicting The Raptors

We’ll walk through the Raptors as an example. The following chart shows the input values — the players on the roster, the minutes totals from last year, and each player’s WS/48, BPM, RPM and PIPM.

Unlike previous years, I haven’t done any work to modify results for players who missed a lot of time last year. The inputs are the inputs.

In any case, here is that chart, with the 2018-19 minutes and production inputs, as well as the pro-rated 2019-20 expected minutes, and the contribution of each individual to the team level Win Shares, BPM, RPM and PIPM numbers. You’ll notice the scales completely change between the graphs. In the team contribution chart, the WS numbers are much bigger, as they are total contributed wins instead of a per-minute rate, while the plus-minus numbers are much smaller, as they are pro-rated down to the fraction of team minutes each player will play.

With those inputs (you just sum up the values in each column), the team totals are calculated. Then the offensive and defensive plus-minus numbers are applied to the league average rating to get the team’s ORTG and DRTG, the league-wide adjustments are made so the league average ORTG, DRTG and win totals are the right numbers, and you have the following results:

Ranking the League

Do the same thing for every team in the league, and you get the following standings. Now, these do not jibe with my personal predictions for the season, but that’s kind of the point of this hands-off approach: to give you something besides my opinion on things. I’ve listed the win projections, showing projected playoff teams, as well as the ORTG and DRTG rankings league wide to give a sense of how strong each team might be offensively or defensively.

One thing that is likely worth doing, in terms of dealing with injuries, is to remove players who won’t be on the team at all this season. For example, Kevin Durant in Brooklyn. He’s included in these calculations because he played minutes last year, but he won’t play at all this year. There are only a few players legitimately reported as out for the year, so I’ll just list a few examples here, but if you have a specific example you want checked, let me know in the comments and I can estimate the impact of that for you.

Removing Durant moves the Nets from a 50-win team to a 43-win team, which seems a more likely projection for them this year. Jusuf Nurkic is projected to play about 2,000 minutes for the Blazers, but he is out until February, at least. If he were to miss the entire year, the projection for Portland goes from 47 wins to 39 wins. If he plays 1,000 minutes late in the season, 43 wins.
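These what-ifs amount to dropping a player from the roster and re-running the minutes proration, so teammates absorb the vacated minutes. A sketch of that step, with my assumption that the remaining roster is simply re-scaled up to the team total (re-capped at 3,000):

```python
def without_player(prior_minutes, injured):
    # Remove the injured player, then re-scale the rest of the roster
    # back up to the 19,680-minute team total, capped at 3,000 each.
    rest = {p: m for p, m in prior_minutes.items() if p != injured}
    scale = 19680 / sum(rest.values())
    return {p: min(m * scale, 3000) for p, m in rest.items()}
```

Feed the adjusted minutes back through the same WS and plus-minus roll-ups and you get the revised win totals quoted above.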

One weird example is Josh Jackson of Memphis. He’s awful, but plays a lot of minutes. Reports just came out that he’s not going to be in camp for the Grizzlies and is starting the season in the G League. If they were to keep him there all year (and therefore remove him from the calculation), Memphis’ win total rockets from 27 wins to 33.

There are loads of examples of tweaks and adjustments you could make to the projected minutes and rosters, but (a) that’s a lot of work and (b) as stated, the goal of this is to keep my assumptions out entirely, or as much as possible. Besides Nurkic and Durant, the other players I found with significant injuries don’t move their teams’ projections by more than a couple of wins.

Anyway, as mentioned, let me know if you want any specific examples tested. Otherwise, what do you think of these rankings? A few things surprised me, that’s for sure. The model doesn’t consider fit at all, so Houston, for example, where there are all sorts of questions about how Westbrook and Harden might fit together, sees a huge boost from the raw impact and talent each player represents on their own.

Similarly, Milwaukee projects low, mostly because Wes Matthews isn’t very good anymore and had a heavy minutes load last year. If he plays very little, their number jumps to 55 wins. I have no explanation for how OKC is in the playoffs. I guess the numbers think Chris Paul, Danilo Gallinari, and Steven Adams are enough to stay about .500 — but the model also assumes all three will play over 2,000 minutes, and that’s the bigger question mark. Also, wow, Cleveland. Just wow.

Now, one other thing to keep in mind: rookies didn’t play last year, so the model ignores them entirely. So if you think a team will benefit from their high profile rookie (say, the Pelicans with Zion Williamson), that impact won’t be captured here. But keep in mind, most rookies are bad, and really good rookies are usually still low impact players, with rare exceptions. Zion may be one of those, but tough to bank on that before seeing him play in the NBA.

For the Raptors, one thing that does stick out is that the lowest win predictor (and ORTG predictor) for them was the raw production stat, win shares. This is a team that will struggle to put points on the board — they lack primary scorers, and unless several players further step up their play in that area from last season, they might well be significantly below average offensively — resulting in that projected 22nd rank in ORTG.

But, ooh, that defense. Looking like it might be special. And with Durant not playing in Brooklyn, the model projects the Raptors as a strong enough team to be in a virtual tie for third in the East. If they can get that offense clicking at all, this might be a pretty fun victory lap year.

Source statistics from Basketball Reference, ESPN, and Jacob Goldstein’s PIPM data.