We’ve had three international Majors in the RLCS 2021-22 season: one for each of the Fall, Winter, and Spring splits. Teams from various regions competed in these Majors for points to qualify for the World Championships, coming up in a week’s time.
16 teams were invited to each of these Majors: 5 from North America, 5 from Europe, 2 from Oceania, 2 from South America, 1 from Middle East/North Africa, and 1 from Asia.
In these Majors, we saw a total of 79 players compete. 14 players made an appearance in all 3 splits, while Joyo, rise, and Vatira led the total games played count with 95 games, despite not playing in the Fall split.
Do we know who was the best? If you’ve been following the season, come up with a few guesses and let me know if they match what we’ll find out below.
Evaluating Rocket League Gameplay From First Principles
In this series of articles trying to evaluate player gameplay, we’ve introduced Situational Player Value (SPV) and found that it’s great at pointing out players that keep themselves in relevant positions where they can have a positive impact on the play. While doing so, SPV can punish players seemingly unfairly in some situations. To fix that, we can introduce a separate measure addressing the weaknesses of SPV.
Evaluating Rocket League Gameplay From First Principles
Rocket League gameplay is complicated. From the subtle reasons behind every action to the intricate details you have to consider when analysing movements, analysis is not easy. When tackling this problem before, I chose to look at a very narrow slice of gameplay: expected goals only considers shots and goals. While this made the results easier to interpret, they were not a reflection of full gameplay. Can we evaluate all aspects of gameplay? How?
The first article in this series can be found here. It goes through the general trends across seasons via the stages of motor learning.
While some of the gameplay differences between players’ performances (especially those from different seasons) are a result of the meta changing, some of the differences between players quantify their role within the team, individual tendencies, and areas of focus.
Here, I’ll look at the factors discussed previously and how player performances can be quantified. In particular, I’ve highlighted the outliers and tried to explain trends.
The Evolution of Competitive Rocket League Through the Lens of RLCS League Play
Rocket League gameplay has evolved greatly over the RLCS seasons: from its infancy in Season 1 when the air dribble was a new mechanic, to the current high-flying demo-heavy meta.
How can this progress be broken down?
The stages of motor learning¹ can quite cleanly be applied to describe the various seasons.
They was lookin’ for us, but they didn’t catch us.
Bim-bam teams (BBTs) can be characterised by their fast-paced gameplay that seeks to overwhelm the opponents with pressure. This can be seen in their count of hits per game, average speed, boost usage, and time spent close to the ball.
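As a rough sketch of what that looks like in practice, the snippet below computes a per-team pace profile from per-game data; the DataFrame and its column names are hypothetical stand-ins, not an actual stats export.

```python
import pandas as pd

# Hypothetical per-game, per-team table; column names are placeholders,
# not the real replay-parsing schema.
games = pd.DataFrame({
    "team": ["Dignitas", "Dignitas", "NRG", "NRG"],
    "hits": [58, 64, 49, 52],                     # total hits in the game
    "avg_speed": [1480, 1510, 1390, 1410],        # average car speed (uu/s)
    "boost_used": [3100, 3250, 2800, 2900],       # total boost consumed
    "time_close_to_ball": [180, 195, 150, 160],   # seconds spent near the ball
})

# A team's "pace profile" is just the per-game average of these stats.
pace_profile = games.groupby("team")[
    ["hits", "avg_speed", "boost_used", "time_close_to_ball"]
].mean()
print(pace_profile)
```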
Part I was the origin story for the expected goals (xG) model, while Part II covered the stats in detail.
Makings of a Winner
Here, we will take a look at what characterises a winning team and draw some conclusions about where Rocket League is headed. This article will make references to stats, which can be found here.
Looking at Dignitas
Having a team named Dignitas means you have a 66.6% chance of winning RLCS. Being Turbopolsa means you win 50% of all RLCSs¹.
What separates Dignitas from the pack is their great individual shooting and goalkeeping ability. Dignitas went on a 7-0 streak, dropping very few games. Their striking was on point: although led by Kaydop, both Turbopolsa and ViolentPanda can count themselves among the top shooters in the world. Not wanting for any offensive or defensive skill, Dignitas made do with a bang-average xGa per game.
However, simply going off (goals per xG − goals conceded per xGa) does not provide a good measure of a team’s abilities. Whether you use the raw numbers (which puts PSG, compLexity, NRG, and WDG at the bottom while Ghost is 3rd), or the normalised numbers where the standard deviations of the two stats are both set to 1 (which puts EG second-last and Rogue 6th, above NRG and G2), neither combination looks great.
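For concreteness, here is a minimal sketch of the two combinations described above, using made-up team numbers; only the arithmetic (a raw difference versus a difference of stats scaled to unit standard deviation) mirrors the text.

```python
import pandas as pd

# Made-up team numbers, purely to show the arithmetic.
teams = pd.DataFrame({
    "team": ["Dignitas", "NRG", "Ghost", "PSG"],
    "goals_per_xg": [1.7, 1.1, 1.3, 1.0],
    "goals_conceded_per_xga": [0.7, 1.0, 0.9, 1.3],
})

# Combination 1: raw difference of the two ratios.
teams["raw_rating"] = teams["goals_per_xg"] - teams["goals_conceded_per_xga"]

# Combination 2: scale each stat to unit standard deviation first,
# so neither side of the difference dominates.
z = lambda s: (s - s.mean()) / s.std()
teams["normalised_rating"] = z(teams["goals_per_xg"]) - z(teams["goals_conceded_per_xga"])

print(teams.sort_values("normalised_rating", ascending=False))
```

Scaling both stats to unit standard deviation simply stops whichever ratio happens to have the larger spread from dominating the combined rating.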
The first article of the series (basically the origin story for the model) can be found here. To summarise, I’ve trained an expected goals (xG) model and applied it to RLCS Season 6 League Play.
The previous article covered the generation of expected goals (xG). In this article, I will be going over highlights from the data, and what conclusions can be drawn from them. I’ll compare regions, teams, and individual players. Now, let’s have a look at the data!
Regions
Players’ and teams’ xG stats are roughly equal between regions, usually with a difference of <2 standard errors. There is a trend, however, of EU having fewer chances (xG per game and xGa per game) compared to NA. This reflects the way EU plays more defensively than NA, and with a less-direct approach.
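As an illustration of the "<2 standard errors" comparison, a sketch along these lines could be used, assuming a per-game table tagged with a region column (the numbers here are invented):

```python
import numpy as np
import pandas as pd

# Hypothetical per-team-per-game xG values, tagged by region.
df = pd.DataFrame({
    "region": ["EU"] * 5 + ["NA"] * 5,
    "xg_per_game": [2.1, 1.9, 2.3, 2.0, 2.2, 2.4, 2.6, 2.3, 2.5, 2.7],
})

summary = df.groupby("region")["xg_per_game"].agg(["mean", "sem", "count"])
print(summary)

# Difference between regions, expressed in combined standard errors.
diff = summary.loc["NA", "mean"] - summary.loc["EU", "mean"]
combined_se = np.sqrt(summary.loc["NA", "sem"] ** 2 + summary.loc["EU", "sem"] ** 2)
print(f"NA - EU = {diff:.2f} xG/game ({diff / combined_se:.1f} standard errors)")
```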
Teams
Chance conversion: goals per xG. Dignitas were the kings of converting their chances while preventing opponents from converting theirs, having both the highest goals per xG average (1.7) and the lowest goals conceded per xGa average (0.7). On the other side of the spectrum, compLexity struggled to score, averaging 0.9 goals per xG, while PSG faced problems preventing shots from entering their net, with an average of 1.3 goals conceded per xGa.
Expected goals (xG) is a stat (commonly used in football and hockey) that quantifies the quality of a scoring opportunity (a chance). For every shot in every game, the probability of that shot becoming a goal is predicted. In the simplest of models, this is usually based on the distance to the goal. Intuitively, the further you are from the goal, the less likely your shot will be a goal.
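As a minimal sketch of that "simplest of models" idea, here is a distance-only logistic model; the shot data are synthetic and the single feature is a placeholder, so this is not the model used in the series.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: one row per shot, with distance to the goal (in uu)
# as the only feature and whether the shot became a goal as the label.
rng = np.random.default_rng(0)
distance = rng.uniform(500, 10000, size=2000)
goal = (rng.random(2000) < np.clip(1.0 - distance / 10000, 0.02, 0.95)).astype(int)

# Distance-only logistic model: xG is the predicted probability of scoring.
model = LogisticRegression(max_iter=5000)
model.fit(distance.reshape(-1, 1), goal)

xg = model.predict_proba([[1500.0], [8000.0]])[:, 1]
print(f"xG from 1500 uu: {xg[0]:.2f}, xG from 8000 uu: {xg[1]:.2f}")
```

Real xG models use many more features than distance, but the output is read off the same way: the predicted probability of a shot becoming a goal.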
Quantifying the quality of scoring opportunities is a pathway to many analyses I consider to be crucial. Consider a situation where you and your solo standard teammates all take 5 shots, scoring no goals. You point out that your shots were flip-reset double touches and musty flicks where you just failed to beat the last defender, whereas theirs were open-goal misses. The end-game scoreboard stats sadly misrepresent the situation and fail to reflect that it was your teammates’ failed shooting that cost you the game¹. Here’s where expected goals (xG) stats come into play! You can point at your low xG, which reflects the quality of the chances you had – it was to be expected that you would not manage to convert them. Their xG, on the other hand, will show that they were on the receiving end of significantly more/better chances and should have scored more.
Over a large number of games, players’ goals per xG will represent their ability to convert chances, and their average xG per shot will indicate the difficulty of shots they’re taking. Also, players’ average xG per game represents the scoring chances they get (and when comparing between teammates, this number can show their respective usage percentages on the offensive end of the field).
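A short sketch of those three per-player summaries, assuming a shot-level table with one xG value per shot (player names and numbers are illustrative only):

```python
import pandas as pd

# Hypothetical shot-level data: one row per shot, with the model's xG value.
shots = pd.DataFrame({
    "player": ["Kaydop", "Kaydop", "Kaydop", "Turbopolsa", "Turbopolsa"],
    "game_id": [1, 1, 2, 1, 2],
    "xg": [0.45, 0.10, 0.30, 0.60, 0.25],
    "goal": [1, 0, 1, 1, 0],
})

per_player = shots.groupby("player").agg(
    goals=("goal", "sum"),
    total_xg=("xg", "sum"),
    shots=("xg", "size"),
    games=("game_id", "nunique"),
)
per_player["goals_per_xg"] = per_player["goals"] / per_player["total_xg"]   # chance conversion
per_player["xg_per_shot"] = per_player["total_xg"] / per_player["shots"]    # shot difficulty
per_player["xg_per_game"] = per_player["total_xg"] / per_player["games"]    # chances generated
print(per_player.round(2))
```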
This is a very interesting problem, since the skill level of professional play has clearly gone up. We’ve had similar ideas floating around the calculated.gg developer Discord for a while.
Then, recently, there was a reddit post on this topic on /r/RocketLeagueEsports:
The general consensus in the post was that Season 1 pros would be around Champ 1 or Champ 2 by today’s standards. Is this correct?
No one in Season 1 was hitting ceiling shots or flip resets, and it’s even quite easy to see that the speed of play has increased dramatically. Since we have data all the way from Bronze 1 to 2000 MMR Grand Champion, we should be able to figure out where on that scale a given Season 1 replay sits. That’s what we’re going to talk about today. But first, let’s step back and talk about how we would even quantify skill level in the first place.
Statistics
If you’ve used calculated.gg before, you know that we offer a range of stats relating to positioning, ball possession, boost, and team play for every game submitted. In the database, we have access to the skill level (rank/MMR) and the stats of each player in each replay. Therefore there’s an easy way to relate skill level to each stat. We can simply place rank on the X axis, and the mean stat for that rank on the Y axis. This gives us easy-to-read graphs such as this one:
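For illustration, a sketch along these lines produces that kind of graph; the DataFrame, the example stat, and the rank labels are stand-ins for whatever actually comes out of the replay database.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-player-per-replay table pulled from the database;
# 'rank' is the rank tier and 'avg_speed' is one example stat.
replays = pd.DataFrame({
    "rank": ["Gold", "Gold", "Platinum", "Platinum", "Diamond", "Diamond"],
    "avg_speed": [1200, 1250, 1320, 1340, 1430, 1460],
})

rank_order = ["Gold", "Platinum", "Diamond"]
means = replays.groupby("rank")["avg_speed"].mean().reindex(rank_order)

# Rank on the X axis, mean of the stat at that rank on the Y axis.
means.plot(marker="o")
plt.xlabel("Rank")
plt.ylabel("Mean average speed")
plt.title("Average speed vs. rank")
plt.show()
```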
We even tweeted about it back in October 2018, with a link to an album of all the stats at that time compared to ranks. At the time, we were lacking data in the Bronze-Gold ranks, and that is still somewhat true today. Luckily, that most likely won’t be an issue when ranking Season 1 games. Surprising, right?