
Communicating Player Performance: Subtle radars' differences in football analytics

How choices in visualization building influence interpretation of player performance

In this post, I’ll look at radars: a conventional and effective, yet often questioned, method of evaluating football players. I’ll start with a quick overview followed by a detailed breakdown, so you can explore the content at your own pace.

Why Radars?

Radars are a popular visualization tool, especially on social media, for communicating player performance. As a former PES enthusiast, I find them intuitive once you understand how to read them. My take on radars stems from the desire to create a simple yet effective player analysis tool that can eventually lead to performance ratings, something I’m really big on since I’d like to work in recruitment given the opportunity. Radars encapsulate the complexity of various metrics, making them easy to interpret and communicate, which in my opinion is critical given my background in the humanities.

Standard Radars and My Approach

StatsBomb's radars set the gold standard, and I’m a big fan of their work. Inspired by their approach, as in many other cases, I’ve created six position groups based on my position clustering project.

Each group uses a specific set of metrics to profile and evaluate players.

Goalkeepers:

Centerbacks:

Wingbacks:

Defensive and Center midfielders:

Attacking midfielders and wingers:

Strikers:

Comparative Analysis and Visualization

To ensure fair player comparisons, I consider players across the same positions and leagues over multiple seasons (up to five years of data). This slicing is entirely my decision: I don’t know what kind of slicing StatsBomb and other companies use, but I assume they compare within a single season only. By normalizing z-score values onto a 0–100 scale using the CDF, instead of min-max scaling or percentiles, and plotting those on the radar, we mitigate the influence of outliers and provide a clearer view of player performance.
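A minimal sketch of this CDF-based scaling, assuming z-scores computed over the comparison pool and the normal CDF from `scipy.stats` (the metric values below are illustrative, not real data):

```python
import numpy as np
from scipy.stats import norm

def cdf_scale(values):
    """Scale raw metric values to 0-100 via z-scores and the normal CDF.

    Unlike min-max scaling, a single outlier cannot compress everyone
    else into a narrow band: the CDF squashes extreme z-scores softly
    while keeping the ordering of players intact.
    """
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return norm.cdf(z) * 100

# Illustrative pool: tackles per 98 for five players, one an outlier
tackles = [2.1, 2.8, 3.0, 3.3, 9.5]
print(np.round(cdf_scale(tackles), 1))
```

With min-max scaling the outlier would pin everyone else below ~20; with the CDF the middle of the pool still lands in a readable 20–40 band.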

Here’s an example of how this looks in practice:

  • Z-Score Normalized Radar: Highlights a player’s strengths in key areas, making it easier to interpret their suitability for specific roles or teams.

  • Raw Values Radar: Offers context with actual performance values, complemented by z-score normalized values for deeper insight.

I think this case shows clearly why plotting normalized values instead of the underlying ones is key here. The z-score normalized radar shows Khusanov’s very strong tackling, his solidity in passing, and his interesting press resistance, aligning well with what Manchester City would look for in the market. That’s not what you’d grasp at first glance from the non-normalized one.

Key Adjustments in Metrics

There are some mistakes in the annotation under the title: the value in parentheses is the per-98-minutes value, while the plotted one is the normalized z-score. Also, where I’ve written VAEP, it is actually the atomic version of the VAEP model, which is different.

General Adjustments

  • Data Normalization: Metrics are adjusted to a per-98-minutes basis, as explained here.

  • Metric Selection: Metrics are chosen to reflect key performance indicators relevant to each position.

Position-Specific Adjustments

For all the position groups where they appeared – centerbacks, wingbacks and strikers – I’ve scrapped Aerials and Aerials W%. That’s mostly because my data format doesn’t support them, but also because I don’t think aerials have huge value unless they’re headed shots in specific situations or possession is kept after the duel. Plus, I plan on using second balls as a substitute sooner or later.

  • Goalkeepers: For long balls – passes or goal kicks of more than 20m – I’ve decided to use the difference between expected completion and actual completion. I’ve also scrapped positioning error because my data format doesn’t support it, while for Shot Stopping % – which is StatsBomb’s version of Goals - Post-Shot xG – I’ve created a proxy: Goals - xG while on the pitch.

  • Centerbacks: I’ve scrapped Blocks/Shot for data format reasons, then added progressive passes and carries per 98 because I think they’re important. I’ve also used my proxy for passes under pressure. To add context to fouls, I’ve added the % of them that are tactical – committed within a window of 5 seconds from a turnover. Finally, I’ve swapped Tackles/Dribbles Past % for True Tackle Win Rate, which is the same as Tom Worville’s version but doesn’t consider challenges, because my data format doesn’t have that event type.

  • Wingbacks: Added Tactical Fouls %, swapped Tackles/Dribbles Past for True Tackles, and scrapped Pressures in favor of Defensive Actions per 98 as a proxy.

  • Defensive and Center Midfielders: As with wingbacks, I took out Tackles/Dribbles Past and Pressures and substituted them with True Tackles and Defensive Actions. I’ve also scrapped Fouls Won for data format reasons.

  • Attacking Midfielders and Wingers: Scrapped Fouls Won and swapped Pressures for Defensive Actions.

  • Strikers: Substituted Pressures with Defensive Actions here too.
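The shot-stopping proxy described for goalkeepers above could be computed roughly like this. The flat `(xg, is_goal)` format is my assumption for illustration, not the post’s actual data structure:

```python
def goals_minus_xg(shots_faced):
    """Proxy for Shot Stopping %: goals conceded minus the total xG of
    the shots faced while the keeper was on the pitch.

    `shots_faced` is a list of (xg, is_goal) tuples. A negative value
    means the keeper conceded fewer goals than the xG of the shots
    faced would suggest -- i.e. likely overperformance.
    """
    goals = sum(1 for _, is_goal in shots_faced if is_goal)
    total_xg = sum(xg for xg, _ in shots_faced)
    return goals - total_xg

# One goal conceded from 1.0 total xG faced -> roughly break-even
print(goals_minus_xg([(0.3, False), (0.6, True), (0.1, False)]))
```

Note this uses plain shot xG rather than post-shot xG, which is exactly why it is a proxy and not StatsBomb’s metric.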
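The tactical-foul definition above (a foul within 5 seconds of a turnover) can be sketched as a simple scan over a time-sorted event stream. The dict-based event format is hypothetical, standing in for whatever the real data looks like:

```python
TACTICAL_WINDOW = 5.0  # seconds after a turnover, per the definition above

def tactical_foul_pct(events):
    """Share of fouls committed within 5s of the team losing
    possession, i.e. fouls that plausibly stop a counter-attack.

    `events` is a time-sorted list of dicts with 'type' ('turnover'
    or 'foul') and 'time' in seconds -- a hypothetical flat format.
    """
    fouls = tactical = 0
    last_turnover = None
    for ev in events:
        if ev["type"] == "turnover":
            last_turnover = ev["time"]
        elif ev["type"] == "foul":
            fouls += 1
            if last_turnover is not None and ev["time"] - last_turnover <= TACTICAL_WINDOW:
                tactical += 1
    return 100 * tactical / fouls if fouls else 0.0

events = [
    {"type": "turnover", "time": 100.0},
    {"type": "foul", "time": 103.5},   # within 5s -> tactical
    {"type": "foul", "time": 400.0},   # open play -> not tactical
]
print(tactical_foul_pct(events))  # -> 50.0
```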

I think this approach combines clarity with complexity, making radars an even more powerful tool for player evaluation in football analytics. It’s also solid ground to keep building on, with better metrics in general and more specific ones for tailored needs, while always keeping easy communication at its core.