On the evening of June 21, 2019, Red Wings fans worldwide sat in front of their televisions to see which top prospect would don the winged wheel. The team would pick 6th, and after consensus top-two picks Jack Hughes and Kaapo Kakko came off the board, Detroit would have its pick of several exciting young forwards.
A certain writer, who will remain nameless, also sat down to watch. This writer had spent a great deal of time researching the players likely to be available and talking to writers who spend the year watching prospects.
Earlier that day, JD Burke had tweeted out that Detroit was reportedly really interested in some defenseman named Moritz Seider. Seider was thought to be a late first round pick by most, so our intrepid writer dismissed it out of hand.
Fast forward to the moments following Detroit’s surprising pick. They actually did select Seider at 6th overall. The writer was stunned. More than that, for some reason, he was angry. That doesn’t really make sense, right? He wrote an article blasting the pick and lashed out at commenters when they disagreed.
People who were readers of this site at that time know that, of course, I am the writer to whom I am referring. In the days that followed, I tried to figure out why anger was the emotion that rose to the surface, when it seems so out of place.
The short version is that I had spent so much time and energy trying to learn about the different prospects, and it felt like, in the end, none of it mattered. At the time, it all felt like a waste. That might not really make sense, but emotions aren’t always logical.
Additionally, I think I was trying to guard against blindly trusting new general manager Steve Yzerman. Combine these two things, and you get the bulk of my reaction. It was still wrong, obviously, and if I could do it all over again, I would in a heartbeat. I did say that night that I would be ecstatic if I was wrong, and it’s looking very much like Seider will be a very good, if not great, addition to Detroit’s blueline in the near future.
It’s always a good idea to reflect on things that happen to learn how to do better the next time. The rest of this article will be an attempt to lay out a more effective way to spend the time leading up to the 2020 draft.
I reached out to three people to get their input:
Prashanth Iyer, who in the last year has started to focus on prospects. He had a prospect model last year that he made public before the draft.
Dylan Galloway, the Head Scout for Eastern Canada for the draft site Future Considerations.
Will, who runs the site Scouching. Will watches video of prospects and combines that with statistical analysis. Will does not use his last name publicly.
Lesson 1 - No Matter How Much Work You Do, It’s Impossible To Have a Complete Picture
Even people writing about prospects who are able to talk to the people evaluating prospects for NHL teams don’t have all the information. If you read a quote that Team X is really looking hard at Player Y, that could be entirely accurate. It could also be misinformation that Team X is giving to a reporter to keep other teams from knowing what they are really thinking.
Basically, it’s always important to keep in mind that teams control what information gets reported, and that’s true even for nationally known reporters who have a plethora of contacts.
For everyone else, it’s even harder. Even if every major prospect writer had the exact same ranking for the top 10 picks, it’s never going to go in that order. Teams each have their own draft boards, and it’s not uncommon for a team to have a player ranked very highly on their board who is ranked far lower on another team’s board. Reporting following the 2019 draft indicated that Detroit was not the only team that had Seider much higher than the general consensus.
I asked Prashanth for his thoughts on the difficulty of trying to predict prospects based on publicly available data:
I think the main difficulty is the data available. Across leagues, the information available to the public is inconsistent and often far less than we need to paint an accurate picture of a player. Oftentimes, comparisons across leagues are based on league strength, age, player height, player weight, goals, assists, and points. That doesn’t mean it’s impossible to do great work with prospect data. There’s a laundry list of people such as Iain Fyffe, Rhys Jessop, Josh Weissbock, Cam Lawrence, Jeremy Davis, Garret Hohl, Namita Nandakumar, Hayden Speak, Evan Oppenheimer, and Emmanuel Perry who have done outstanding work with the available prospect data.
However, I believe that the accuracy and precision of prospect models are ultimately limited by the available data and are generally better utilized to identify potential over- or underperformers as opposed to generating a pure ranking list.
I’m going to come back to that last point later.
Additionally, when trying to predict where players will go, even early on in the draft, it’s important to remember that each team is different. Not every team has the same draft philosophy. As part of his job, Dylan Galloway participates in mock drafts, so I asked him what he takes into consideration when doing that:
From the outside looking in, a team’s draft history tells you not only a bit about things that they value in a player, but also which are their more strongly covered regions. For mock drafts, my personal approach is to take into account a team’s recent history, at least as much as I can from public information, as well as looking from the perspective of team needs and depth at each position. At the same time, mocks are an opportunity for me as a scout to add my own opinion on the order of the draft and take a player that I personally like, or feel is the best player available, higher than the consensus might have them.
From talking to Dylan previously for the Fer Sure podcast, I know that there can be some difficulty in making one list for the site that combines the opinions of different scouts from different regions. Imagine if we took 30 people, each equally qualified to evaluate NHL talent, and had each one watch a different NHL team for a period of time. If we then brought them all together and asked them to make a combined ranking of the players they watched, it’s easy to see why that would be difficult. It’s similarly difficult for the people at Future Considerations to do, and teams have to attempt to do the same thing. As Dylan says:
Comparing players from different regions is fairly difficult, and we generally defer to the people in that region. We will always weight in-rink views more heavily than video viewings of players. For comparing players across leagues on a skill basis, we take into account the context of their playing situation to help adjust where they fall among players of similar skill. What I mean by that is, Euro prospects could be playing fewer minutes, getting fewer points than players of similar skill in the OHL. Understanding which players are succeeding in “easier” vs “harder” leagues is a small piece that can be added to the puzzle to provide an overall picture of each player. International tournaments like the Hlinka, WJC, WJAC, 5 Nations, and U18s are an excellent opportunity for cross checking views, which can really help our scouts gauge the skill level of players in their own region.
If it’s hard for people who watch these players, there shouldn’t be an expectation that a writer can do this with any level of certainty.
This leads to another related point:
Lesson 2 - Attempting to Predict the Order of Players Outside of the Top One or Two Is an Exercise in Futility
Don’t get me wrong: mock drafts are fun. There is definitely some value to them as well. At the same time, for the reasons mentioned above, you have to know going in that they are attempting to do something that is pretty close to impossible.
Because of this, I have become a proponent of a tiering system when evaluating prospects. I think it’s a much better way of looking at these players heading into the draft.
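To make the tiering idea concrete, here is a minimal sketch of how a tier-based board differs from a strict 1-through-N ranking. The tiers, player names, and groupings below are invented placeholders purely for illustration, not a real ranking:

```python
# Hypothetical tier-based draft board: players are grouped into tiers
# rather than strictly ranked. Within a tier, any pick is defensible.
# All names and groupings here are made up for illustration.
tiers = {
    1: ["Player A", "Player B"],              # clear top of the class
    2: ["Player C", "Player D", "Player E"],  # high-end, order debatable
    3: ["Player F", "Player G", "Player H"],  # next group down
}

def best_available(tiers, already_drafted):
    """Return the highest tier that still has players left, plus the
    remaining players in it. A pick from within that tier should not
    be considered a 'reach' under this way of thinking."""
    for tier in sorted(tiers):
        remaining = [p for p in tiers[tier] if p not in already_drafted]
        if remaining:
            return tier, remaining
    return None, []

print(best_available(tiers, {"Player A", "Player B", "Player C"}))
```

Under this framing, taking any remaining tier-2 player once tier 1 is gone is a reasonable pick, which is a more honest reflection of the uncertainty involved than insisting on one "correct" order.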
Additionally, even though I am personally a proponent of taking the best player available, regardless of position, teams sometimes value position more. If a team is confident that they have a solid defensive corps for years to come, they may take a forward over a defenseman, even if that forward is considered not as good as the defenseman. They could also simply think that the forward IS the best player available.
Lesson 3 - Watching Players Live Is Important
While Will does see some prospects play live, much of his experience watching players is through video. With technology getting better each season, people increasingly have the ability to watch many prospects without leaving their house.
It seems pretty obvious that watching a player in person gives one insight that is impossible to get otherwise. It’s also pretty obvious that it’s cost-prohibitive for the vast majority of people to pay out of pocket to see a large number of these prospects live, especially outside of tournaments and showcases.
I asked Will what he thinks people miss out on by watching players on video as opposed to in person:
You get a real sense of pace watching a game live. I find that TV often slows down the pace of the game in my mind, and going in person helps you really discern who’s really moving out there and who isn’t. I also find that I miss a lot of details that might not happen on camera. Where are defenders positioned? How are they anticipating breakouts? How is their mobility patrolling bluelines? Watching live gives you a quick scan of who’s on the ice doing what at all times, which many broadcasts just can’t provide through no fault of their own.
Lesson 4 - The Best Results Will Come From Combining Data and Watching Players
As Prashanth said above, there are people doing good work with prospect statistics. A common fallacy I’ve seen when discussing the concept of using analytics to evaluate a player is that sometimes people treat it as an either/or. Statistics and watching players are both valuable, and each relies on the other to give one a full picture of a player.
Prashanth mentioned in the previous quote that rather than using statistics to create a ranked list, prospect models are “generally better utilized to identify potential over- or underperformers.”
I asked Will what he thought about this idea.
I’d argue that yes, it can be used to identify players later. Video and data give you the ability to identify players who are objectively playing the game well, i.e. driving scoring, suppressing goals against, involving themselves in offense, etc. From there, you can check out those players to track their data and apply your “eye test” to see HOW objectively positive results are generated.
There are plenty of guys who have high production who don’t work out, and in my experience, video is a great way to explore why that might be. The opposite is also true. There are players who have excellent data but their production isn’t great, so exploring why that might be can lead you to value them properly, be it positive or negative.
There is always talent that slips, especially overseas. People are still really sleeping on the Finnish, Swedish and even Russian junior leagues. They’re fast, skilled, high pace leagues and the players may be getting experience playing against men, which is highly valuable to me.
Most late round picks don’t work out anyway, and my argument is always that if normal teams land 1.5 full time players from a draft on average, they’ve done great. If I can land 2.5, I’m still missing on 5 picks a year, but results are almost twice as positive which has major advantages on the NHL roster.
Data and video give you easily accessible tools to get a good feel for the talent out there that people might be sleeping on if they don’t lean on the tech they can easily access.
Prashanth has similar thoughts about how best to use data to evaluate prospects:
I think the best recommendation I can give is to use the data to guide who to scout when considering players outside of the top-30 to top-40. There are thousands upon thousands of players eligible for each draft and it’s impossible for a scout to see them all.
I think scouting assessments are incredibly important, but I think it’s ultimately difficult to see all of the players necessary and form purely objective opinions based on the small samples they’ll have for certain players. They may only get to see a player 1 or 2 times and maybe he/she had an off game that night and didn’t perform as well as he/she could.
This is where I think adoption of prospect analytics is beneficial. If a player consistently stands out from a statistical standpoint, then they are a player that warrants further investigation. I think scouts that do this will ultimately get a more complete picture and will be able to form a more objective rank list.
More information is available on prospects than ever before, but the overall lesson I have learned from last year’s draft is this: as a fan, you may think your team should draft a certain player, but take a step back and remember that evaluating prospects is very tough, even for those best at it.
So, while a pick might not seem to make sense, the best option is to try to figure out why that selection was made. Of course, it could end up being a bad selection, but it’s definitely a good idea to not form any definite opinions right away.
Draft night is not a great time to make a strong judgment of your team’s selections. It’s a good thing that nobody around here would make that mistake (again).