Dixon and Coles Improvements: Global Team Ratings & Cross-League Player Comparisons
How optimization and context-aware metrics help with player scouting
I've returned to my Dixon and Coles model with significant improvements and optimizations. By adjusting calculations and refining the code's efficiency, I've reduced hardware demands on my PC, enabling simultaneous rating calculations for all teams in my dataset. This comprehensive approach computes ratings based on more extensive data, producing a complete ladder of all teams I've trained on.
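To make this concrete, here is a minimal sketch of how ratings for every team can be fitted in a single optimization pass. It uses the basic Dixon and Coles idea (independent Poisson goal counts driven by attack and defence strengths plus a home-advantage term) and omits the low-score correction and time-decay weighting; the column names, function name, and optimizer choice are illustrative assumptions, not my actual code.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize
from scipy.stats import poisson

def fit_global_ratings(matches: pd.DataFrame):
    """Fit attack/defence strengths for every team in `matches` at once.

    Expects columns: home, away, home_goals, away_goals.
    """
    teams = sorted(set(matches["home"]) | set(matches["away"]))
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    h = matches["home"].map(idx).to_numpy()
    a = matches["away"].map(idx).to_numpy()
    hg = matches["home_goals"].to_numpy()
    ag = matches["away_goals"].to_numpy()

    def neg_log_likelihood(params):
        attack, defence, home_adv = params[:n], params[n:2 * n], params[-1]
        # Independent Poisson means, as in the basic Dixon and Coles setup
        mu_home = np.exp(attack[h] + defence[a] + home_adv)
        mu_away = np.exp(attack[a] + defence[h])
        ll = poisson.logpmf(hg, mu_home) + poisson.logpmf(ag, mu_away)
        return -ll.sum()

    # Pin the mean attack strength to zero so the parameters are identifiable
    constraint = {"type": "eq", "fun": lambda p: p[:n].mean()}
    res = minimize(neg_log_likelihood, np.zeros(2 * n + 1),
                   method="SLSQP", constraints=constraint)
    ratings = pd.DataFrame({"team": teams,
                            "attack": res.x[:n],
                            "defence": res.x[n:2 * n]})
    return ratings, res.x[-1]  # per-team ratings and the home-advantage term
```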

The green dot is Juventus, for obvious reasons: if this is the first time you're reading me, I'm a Juventus supporter.
Some limitations remain. I couldn't load Turkey's Super Lig data, creating a gap in the rankings. Teams appearing exclusively in Champions League and Europa League competitions have less precise ratings. Similarly, teams from the Russian Premier League, Brazilian Serie A, and MLS require additional data – matches in which they actually play against teams from the other leagues – for accurate comparison. This underscores my need to incorporate this summer's Club World Cup games if possible.
With this enhanced model, I can once again simulate the most competitive title races in Europe, particularly Serie A and La Liga, based on the teams' performances so far.
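As a rough illustration of what "simulating a title race" means here, the sketch below Monte Carlo-simulates the remaining fixtures with the fitted ratings and counts how often each team finishes top. The `fixtures` and `current_points` inputs and the function name are assumptions for the example, not my exact pipeline.

```python
import numpy as np
import pandas as pd

def simulate_title_race(fixtures, current_points, ratings, home_adv,
                        n_sims=10_000, rng=None):
    """Monte Carlo the remaining `fixtures` and return P(title) per team."""
    if rng is None:
        rng = np.random.default_rng(42)
    r = ratings.set_index("team")
    titles = {}
    for _ in range(n_sims):
        # Every team appearing in the fixtures needs a points entry
        points = {t: 0 for t in set(fixtures["home"]) | set(fixtures["away"])}
        points.update(current_points)
        for home, away in fixtures[["home", "away"]].itertuples(index=False):
            # Expected goals from the fitted attack/defence strengths
            mu_h = np.exp(r.loc[home, "attack"] + r.loc[away, "defence"] + home_adv)
            mu_a = np.exp(r.loc[away, "attack"] + r.loc[home, "defence"])
            gh, ga = rng.poisson(mu_h), rng.poisson(mu_a)
            if gh > ga:
                points[home] += 3
            elif ga > gh:
                points[away] += 3
            else:
                points[home] += 1
                points[away] += 1
        winner = max(points, key=points.get)
        titles[winner] = titles.get(winner, 0) + 1
    return pd.Series(titles).sort_values(ascending=False) / n_sims
```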


The most significant advancement is the impact on my radar notebook analysis. Rather than isolating players within individual leagues, I can now aggregate players across all leagues within a given season for direct comparison. While not strictly conventional from a statistical standpoint, I can weight quality metrics (Atomic VAEP metrics, xG metrics, etc.) against the defensive or offensive ratings of the opposition they were produced against. This means that creating 0.5 xG against Liverpool carries a different weight than the same figure against Osasuna.
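A minimal illustration of that weighting, under the assumption that opposition quality comes from the fitted defence ratings above; the column names (`player`, `opponent`) and the exponential difficulty scaling are illustrative choices for the sketch, not my exact method.

```python
import numpy as np
import pandas as pd

def opposition_adjusted(player_matches: pd.DataFrame,
                        ratings: pd.DataFrame,
                        metric: str) -> pd.Series:
    """Average a per-match metric, scaled by the opponent's defensive quality."""
    defence = ratings.set_index("team")["defence"]
    # In the parameterisation above, a stronger defence has a lower (more
    # negative) defence value, so exp(-defence) grows with defensive quality.
    difficulty = np.exp(-player_matches["opponent"].map(defence))
    adjusted = player_matches[metric] * difficulty
    return adjusted.groupby(player_matches["player"]).mean().sort_values(ascending=False)
```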
This approach has allowed me to develop position-specific rating calculations with real-world applications. Now we can definitively identify the world's best striker as of Tuesday, March 4th. For strikers, the rating is a weighted average (not really weighted for now: I've given every metric the same weight because I don't want to steer the rating toward a specific profile) of these metrics, as sketched in the code after the list:
- non-penalty xG assisted, after normalization
- non-penalty xG per shot, after normalization
- Atomic VAEP from passes, after normalization
- Atomic VAEP from carries, after normalization
- Atomic VAEP from the opposition's actions in the player's zone of competence, after normalization
- shots per average game duration in the season, after normalization
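Here is a hedged sketch of that equal-weight rating. I'm assuming min-max normalization and illustrative column names; the actual feature names and normalization in my notebook may differ.

```python
import pandas as pd

# Illustrative names for the six metrics listed above
STRIKER_METRICS = [
    "np_xg_assisted",
    "np_xg_per_shot",
    "atomic_vaep_passes",
    "atomic_vaep_carries",
    "atomic_vaep_opposition_in_zone",
    "shots_per_avg_game_duration",
]

def striker_rating(strikers: pd.DataFrame) -> pd.Series:
    """Equal-weight average of the normalized striker metrics."""
    cols = strikers[STRIKER_METRICS]
    # Min-max normalization so every metric lives on the same [0, 1] scale
    normalized = (cols - cols.min()) / (cols.max() - cols.min())
    # Same weight for every metric: I don't want to chase a specific profile
    return normalized.mean(axis=1).sort_values(ascending=False)
```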
Here’s the list of the top 15 – as mentioned, the MLS numbers are skewed because the dataset contains no matches against teams from other leagues:

So let’s see a few radars: Mbappé’s, Isak’s and Promise David’s – coming from Saint Gilloise, you know he’s one to watch.



Conclusion
These improvements represent a significant step forward in my modeling capabilities, bridging the gap between team performance and individual player evaluation across diverse competitions. By accounting for opposition quality, we move beyond raw statistics toward contextual performance metrics that better reflect real-world value. This method of using Dixon and Coles-based team ratings to weight performance has also been discussed by Ian Graham in How to Win the Premier League, further confirming its importance in the football industry.
The next phase will focus on incorporating additional datasets to address current gaps and further refine the position-specific ratings. I'm particularly interested in exploring how these models might predict performance when players transfer between leagues of differing competitive intensity.