CSSI Re-Look: Team Defense vs. Goaltending

Chris Osgood's retirement on July 19th, and the talk of his Hall of Fame credentials that followed, got me thinking. No, I don't want to re-open that case, as everybody has had plenty of opportunity to weigh in on the subject, but the talk about Osgood's save percentage being a huge limiting factor made me take another look at Jimmy Howard's save percentage last season and bounce it off the ongoing discussion about the Red Wings' ability (or lack thereof) to keep the puck out of their own net.

The average save percentage of an NHL goalie who played in at least 30 games last season was .913. For comparison, Jimmy Howard's was .908. If Howard had provided an average save percentage, he would have given up 9 fewer goals last season. That would have jumped Detroit up from 23rd in the league to a tie for 17th in goals allowed. So why aren't we, as a collective of fans known for being very hard on our goalies, running Jimmy Howard out of town on the first bus available?
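For anyone who wants to check that "9 fewer goals" figure, here's a quick back-of-the-envelope sketch. The shot total is my assumption (Howard faced roughly 1,800 shots last season; the numbers above don't state it), so treat this as a sanity check rather than gospel:

```python
# Rough check of the "9 fewer goals" claim.
# shots is an ASSUMPTION (~1,800 shots faced by Howard); it isn't stated above.
shots = 1800
howard_svp = 0.908      # Howard's actual save percentage
league_avg_svp = 0.913  # average for goalies with 30+ games

goals_actual = round(shots * (1 - howard_svp))      # goals at Howard's rate
goals_at_avg = round(shots * (1 - league_avg_svp))  # goals at the league-average rate

print(goals_actual - goals_at_avg)  # prints 9
```

The gap scales with shots faced, which is why a .005 difference in save percentage sounds tiny but adds up over a full season's workload.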

Because the eyeball test tells us that the team defense in front of him played like crap.

Ok, really, it's an exaggeration to say that people aren't saying Howard has to play better. I think we can all agree he deserves some of the blame for last season's defensive woes. Still, I don't think it's a stretch to say people feel the defense in front of him was MORE at fault than he was. Ultimately, I think the CSSI stats back that up, too.

Keep reading to find out how.

When I wrote my goaltender adjustments wrap-up post and asked for input on how to improve the system, I stated that the one part of the tracking I did all season with which I was happy was in keeping count of bad goals allowed. I truly did want to break down which goals were "on the goalie" and which ones were the fault of the guys playing in front of him. Looking back and putting it into the context of the save percentage debate, I think we have a good basis here for looking at exactly how much fault we can assign to goaltending and how much we can give team defense. 

As a team, the Red Wings gave up 237 goals last season. Of those, 39 were considered "bad" goals (counting half-bad goals as part of the overall total). That's roughly 16.5% of all goals scored against Detroit, or about 1 in 6. Now, this isn't to say that the defense had nothing to do with any of those bad goals being scored, nor that the goalie wasn't perhaps a bit at fault on some of the ones that weren't considered "bad". Still, it's a fair starting point.
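Those two numbers are the whole calculation, so verifying the percentage is trivial:

```python
# Bad goals as a share of all goals against, straight from the CSSI counts.
total_goals_against = 237
bad_goals = 39

pct_bad = bad_goals / total_goals_against
print(round(pct_bad * 100, 1))               # prints 16.5
print(round(total_goals_against / bad_goals))  # prints 6, i.e. about 1 in 6
```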

The immediate trouble we run into is that we have zero basis for comparison. I have no foggy clue what the average, or even acceptable, percentage of bad goals given up should be. 16.5% of all goals being bad could be the worst in the league or it could be fifth-best for all I know. I can't even really say what percentage of the goals the Red Wings scored should have been considered "bad" by the opposition (except for that one that Drew Miller got against Halak... hoo boy).

What I can tell you is this, though: even with the Red Wings' goaltending corps playing as perfectly as a goalie could be expected to play, the Red Wings gave up 198 goals which were not considered soft. They still would have been only the 8th-best team in the league for goals against under those circumstances. Zero bad goals against the Red Wings was still 18 more goals against than the Canucks surrendered (on only 49 more shots allowed). It's pretty clear that team defense was a big part of this problem. What's more, zero bad goals given up by Jimmy Howard (instead of his actual 27) would have given him a .923 save percentage and a GAA of 2.33. That's roughly in line with Carey Price's numbers. Sticking with the Price comparison, the Red Wings squad with zero bad goals against would have been just ahead of Price's Canadiens for total goals against on the season (198 to 206).
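The "zero bad goals" counterfactual for Howard can be re-derived the same way. Howard's total goals against (168), shots faced (~1,826), and minutes played (~3,615) are my assumptions here, pulled in only to make the arithmetic work; the post gives just the 27 bad goals and the resulting .923 and 2.33:

```python
# Re-deriving Howard's counterfactual numbers with zero bad goals.
# goals_against, shots_faced, and minutes_played are ASSUMPTIONS
# (rough 2010-11 figures); only bad_goals comes from the CSSI tracking.
goals_against = 168
bad_goals = 27
shots_faced = 1826
minutes_played = 3615

adj_goals = goals_against - bad_goals       # 141 "good" goals against
adj_svp = 1 - adj_goals / shots_faced       # ~.923
adj_gaa = adj_goals * 60 / minutes_played   # ~2.34, close to the 2.33 above

print(round(adj_svp, 3), round(adj_gaa, 2))
```

The small GAA wobble (2.34 vs. 2.33) comes from my estimated minutes total, not from the method.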

What this tells us is that either Carey Price gave up zero bad goals or his defense did a much better job limiting the number of good goals against him. Either one of those extremes is possible, and we know the truth lies somewhere in between. My guess is that it's a lot closer to the idea that Montreal's team defense played that much better than Detroit's than that Carey Price outplayed Jimmy Howard that severely. This is a case where the eyeball test seems to confirm the numbers.

Again, it's completely possible that I'm dead wrong here, as it's nothing but a logical progression that I'm using to compare Howard to Price. I will say, however, that I have never seen a goalie I felt was so infallible as to never give up a softie. If Howard's numbers across the board were fairly average, then it's safe to assume that the average goalie in the league will give up pretty close to the same number of "bad goals". Even comparing the exceptional Tim Thomas' goals allowed (112) to Howard's (168), Thomas allowed only two-thirds as many goals. Two-thirds as many goals, without a great statistical difference in the share that are "bad", would also mean only two-thirds as many bad goals. That would still be 18 bad goals given up by the best goalie in the league last season. While I can't confirm that, the eyeball test tells me that even Tim Thomas is more likely to give up 18 bad goals over the course of a season than zero.

So what does this tell us about the stats? Well, it tells us what the eyeball test told us. We know that Jimmy Howard could have played better and that the players in front of him could have as well. The correct answer on how to allow fewer goals is always going to be "allow fewer shots." However, I think as the CSSI goes on, we should be able to get more data about how often "bad goals" get through and, as a corollary, how many quality shots a team can be expected to allow as a percentage of the total.

(As a giant asterisk on this entire thought experiment: to appease the stats guys out there who get very angry when you try to draw conclusions from things like this, I must remind everybody that in the overall scheme of things, this falls into that deadly category of "small sample size". These are very small and very specific numbers on which I'm basing some educated guesses. Feel free to dismiss them if you're not comfortable with them. It's no skin off my back.)