The Vancouver Whitecaps' 2018 season was dull. Their biggest highlight wasn't Alphonso Davies's dazzling dribbles, nor his record-breaking sale to Bayern Munich. It certainly wasn't the airing of dirty laundry at a season-ending press conference, where the players lashed out at each other publicly, presumably having learned that management had decided to clean house. The decision to rebuild is painful, but it was also Vancouver's best accomplishment last year.
The Vancouver Whitecaps haven't exactly optimized their attack in recent years. They have been a cross-happy team, ranking in the top third of the league in crossing attempts every season going back to 2015. Yet they have rarely possessed the attacking-third talent to win those crosses with any regularity.
Octavio Rivero, Masato Kudo, Erik Hurtado, Giles Barnes, and Fredy Montero have not exactly been the dominant aerial ball winners the organization has needed; in fact, none of them has a career aerial-duel win percentage of even 50%. Put simply, they are not especially good at jumping up and heading the ball, either toward the goal or in a manner that keeps possession for their team. Crossing is nonetheless the primary tactic through which Carl Robinson, Vancouver's head coach, has chosen to create goals and win games.
Expected Narratives is our weekly look at what you can expect to read, write, and discuss about Major League Soccer this week. We take a look at each prospective narrative and rate it based on its strength and whether or not it has any actual merit.
Anybody who was hoping for a quiet weekend of MLS action last week will surely have been disappointed. If I'm being honest, I was probably the only person who kind of was, as outside commitments prevented me from indulging in my usual 20-something hours of soccer. If you like goals (and oh I do so like goals), this past weekend was an absolute treat. Heck, the three Canadian teams alone conceded 16 among themselves. Many MLS fans are feeling pretty high on the hog at the moment, but let's spare a thought for those who suffered the ignominy of nearly losing by a football (American) score.
Those of you who have been hanging around American Soccer Analysis for a while might recall a metric that measures a team's tactical proactivity. Despite efforts to come up with something catchier, it's been dubbed "PScore," and its goal is to offer a simple way to examine the aggressiveness of teams from both an offensive and a defensive point of view. In essence, it separates the bunker-and-counter teams from the Liverpool-esque, possession-oriented teams, and also calls out the teams with no discernible identity.
PScore has undergone many tweaks over the years, but it has now been scored consistently for MLS across the last three seasons. The following is a look at how the league is shifting tactically at a macro level, and at how specific teams have been evolving over the years.
The Whitecaps bucked most analytical trends last season, which turned out to be an unexpected success. Beyond a new starting striker, they haven't changed much for 2018, so they will hope their luck continues.
2017 in Review
My view going into last season was that Vancouver was not a playoff team. They surprised me and many others by finishing third in the Western Conference, only a point behind conference leaders Seattle and Portland. The Whitecaps went on to beat San Jose 5-0 in the first round of the playoffs before a disappointing conference-semifinal loss to Seattle: after a nil-nil draw at home, they went to Seattle, never looked like a threat, and lost by two.
My concern for Vancouver going into last season was that, after losing Pedro Morales, they didn't have anyone to maintain possession. Despite their final record, this concern proved accurate: Vancouver had only 39.2% possession, the lowest in MLS. They also took an incredible 131 fewer shots than their opponents, a disparity only Colorado and Minnesota exceeded. Furthermore, their expected goal differential was -5.95, 16th in the league, and their PDO was 1089 (third highest in MLS), suggesting they got lucky in their offensive productivity. In sum, this team defied nearly every analytic we use to tell whether a team is good or not.
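PDO, mentioned above, is conventionally the sum of a team's finishing rate and its save rate, scaled by 1,000; values far above 1,000 tend to regress toward the mean. A minimal sketch with made-up season totals (not Vancouver's actual numbers; definitions vary, and this version uses shots on target):

```python
# Hypothetical season totals -- illustrative only, not real 2017 data.
goals_for, shots_on_target_for = 50, 160
goals_against, shots_on_target_against = 49, 210

shooting_pct = goals_for / shots_on_target_for          # finishing rate
save_pct = 1 - goals_against / shots_on_target_against  # keeping rate

pdo = round(1000 * (shooting_pct + save_pct))
print(pdo)  # a value well above 1000 hints at unsustainable luck
```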
Let’s talk nerdy for a second and look at the atrocious and visually unappealing Seattle-Vancouver 0-0 tie from the standpoint of game theory and probabilities.
In the broadcast's pregame interview, Sounders head coach Brian Schmetzer stressed the importance of his team earning a clean sheet. That telegraphed to both viewers at home and his opponents that he planned to take a defense-first approach on the road. Hardly a surprising move.
Likewise, Carl Robinson, the Vancouver Whitecaps' head coach, made it clear through social media leading up to the match that he would be utilizing his depth, given health issues for Yordy Reyna and Cristian Techera, limiting an attack that ranked 15th in total expected goals.
As a Canadian, I probably take the struggle of the Vancouver Whitecaps more personally than someone who follows MLS from elsewhere. After Toronto FC became the joke of the league, mainly through ownership's mismanagement, the Whitecaps' expansion into MLS was a huge hope for Canadian soccer. Expectations were high because of the club's past success in the NASL and USL, and because the Vancouver region is known as a Canadian hotbed for soccer. Whitecaps president Bobby Lenarduzzi might also be the biggest name in Canadian soccer, given his success with a number of Vancouver teams and the Canadian national team.
On September 19th, 2015, the Vancouver Whitecaps led the race for the MLS Supporters' Shield. From there, the team fell victim to an almost-comical trend of league leaders performing like cellar dwellers, collecting five points from their last six games and backing into the playoffs (inasmuch as a second seed can back into anything). Vancouver bowed out of the playoffs on their own turf, losing 2-0 to Portland after a scoreless draw down south, and landing only 5 of 22 shots on target over the two-legged series. At their best, the Whitecaps are a dangerous counterattacking team that overwhelms opposing defenses with an athletic attacking midfield and aggressive passing (note the high total shot ratio of 0.532). At their worst, the team looks much the same… but wastes the ball with poor shot selection and lost possession (note the possession ratio of 0.469, third worst in the league).
2015 in Review
Drew’s 2015 ASA preview called attention to a young and promising attack but raised questions about Vancouver’s defensive strength with a new pair of centerbacks. Ultimately, the Whitecaps' defense improved significantly from 2014, ranking second in goals allowed and first in xGA, on the strength of Matias Laba, Kendall Waston, and an outstanding year from goalkeeper David Ousted. Waston and Laba together accounted for roughly 34-35% of the team’s defensive actions (excluding recoveries and fouls), reflecting the former’s physical dominance (particularly in the air) and the latter’s exceptional activity rate in the defensive midfield. No individual attacker stepped up as a consistent scoring threat across the full season, with streaky production from forward Octavio Rivero and midfielders Kekuta Manneh, Pedro Morales, and Cristian Techera.
More on the keepers and defense after the jump.
By Matthias Kullowatz (@mattyanselmo)
In preparation for the beginning of the MLS Playoffs on Wednesday, we're rolling out projections for each subsequent round. Throughout the playoffs, you can find them under the "Projections" tab in the upper right. First, let's take a look at what our simulation spit out, and then I'll explain what the simulation was thinking.
The simulation is designed to follow the new MLS Playoffs format. Two-legged series, which occur in the conference semifinals and finals, are modeled using simulated scores from a bivariate Poisson model. This allows us both to project outcomes precisely and to update the probabilities after game one of such a series. We run 50,000 iterations of the MLS Cup Playoffs and summarize the outcomes from those iterations to produce the projections you see above.
It should come as no surprise that the Red Bulls are far and away the most probable team to win the Cup. They have dominated our power rankings for weeks, and their 32.6% chance of winning the Cup lines up very closely with what we gave 2014's favorite, the LA Galaxy (33.4%), and 2013's favorite, Sporting KC (30.2%). New York led the league in both actual goals scored and expected goals scored, and the model has found that goal scoring is more predictive of future success than goal allowing. This is why they have topped our power rankings for so long.
It should also come as no surprise that D.C. United received our worst probability of winning the Cup. Despite home-field advantage, DCU is given only a 50.4% chance of beating New England in their play-in game. DCU's expected goal differential is bad, and their actual goal differential is surprisingly bad: they are the only playoff team with a negative xGD, and the only playoff team with a negative GD. In other words, even if you don't subscribe to how xGoals handles DCU, actual goals don't like them either.
I think seeing Columbus and Montreal with the next-best chances of winning the Cup is a bit confusing at first, but it actually makes perfect sense. If either of those teams has to face NYRB, they will do so in a two-legged series where home-field advantage is largely stripped away. On the other coast, whichever Western Conference team makes the final has a good chance (44.5%) of playing in New York in that one-game championship. Essentially, when and how you play New York largely determines your probability of winning the Cup.
Speaking of home-field advantage, we account for it with two processes. First, the model knows who's playing at home, and adjusts outputs accordingly. That has been true with our Playoff Push all season. Second, the two-legged series are set up such that if teams tie on goals, and on away goals, they will play two 15-minute overtime periods followed by penalty kicks if necessary. Additionally, that will only happen on the higher seed's turf. Our simulation determines if such an aggregate-tie occurs, and then indirectly gives the home team (also the higher-seeded team) a slight advantage in extra time. We regress the home team's 90-minute probability of winning, conditional on not-tying, halfway back toward 50%. This is an approximation to what FiveThirtyEight has done with extra time, where the better teams are still given advantages in what is not a 50-50 outcome.
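That halfway regression toward 50% is simple to express. A sketch, with illustrative inputs rather than real model outputs:

```python
def extra_time_win_prob(p_home_win_90, p_draw_90):
    """Home team's chance of advancing if the aggregate is tied.

    Take the home side's 90-minute win probability conditional on
    the game not being a draw, then regress it halfway toward 50%,
    mirroring the FiveThirtyEight-style extra-time adjustment."""
    p_cond = p_home_win_90 / (1 - p_draw_90)   # P(home wins | no draw)
    return 0.5 + (p_cond - 0.5) / 2

# e.g. a 45% win / 25% draw team at 90 minutes:
# conditional win prob = 0.60, regressed to 0.55 for extra time
print(extra_time_win_prob(0.45, 0.25))
```

The design keeps the better team favored in extra time without treating it as a 50-50 coin flip.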
Anyway, enjoy the playoffs! And check back for updated projections.
By Kevin Minkus (@kevinminkus)
While beginning a season 0-3-0 does not a happy fan base make, Sunday's win over Philadelphia has some Chicago Fire fans feeling at least a little better about the team's rebuilding process. Throughout the beginning of the season, coach Frank Yallop has frequently stressed that the team needs time to adjust to each other. After all, they brought in three new designated players during the off-season, and are returning players who accounted for only 63% of last year's minutes (the league average over the last four seasons is around 71%). It should take a while for all of those new pieces to mesh from the somewhat disjointed side we've seen into a coherent whole. But, given the Fire's level of roster turnover, how long should we expect the meshing process to take?
The term “meshing” is a slippery one, and can be defined in any number of ways. Is it when a team's roster turnover no longer informs its results? Is it when a team's results sufficiently indicate its performance for the rest of the season? Is it when a team reaches the level of performance it will remain at throughout the rest of the season (if, in fact, a team can ever be expected to do so)?
Each of these definitions could be argued as valid, and I'm sure there are many other possible definitions not considered here. As it stands, though, these are the three I will analyze, using MLS data since 2011, in hopes of arriving at an answer to the question of how long it takes a team to mesh.
Let's start with the first definition: meshing defined as the number of games in which roster turnover still directly informs a team's results.
This graph shows the correlation between points after x number of games and the percentage of a team's field minutes returned from the previous season.
A positive correlation suggests that as roster stability increases, so does points earned. Numbers below the red line are not considered statistically different from zero (at 90% confidence). Note that the correlations in general aren't huge, but they do exist. As you can see, the correlation between roster stability and points peaks at game three, and remains statistically significant until game five (after which it remains insignificant until close to the end of the season).
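The calculation behind that graph reduces to a Pearson correlation compared against a significance cutoff. A sketch with toy data; `r_threshold_90` is a hypothetical helper that approximates the two-sided 90% cutoff via the Fisher z-transform (the article doesn't specify its exact method):

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

def r_threshold_90(n):
    """Approximate |r| cutoff for significance at 90% confidence
    with n team-seasons, via the Fisher z-transform."""
    return math.tanh(1.645 / math.sqrt(n - 3))

# Toy example: points after x games vs. % of minutes returned
points = [10, 7, 12, 4, 9]
minutes_returned = [0.80, 0.62, 0.85, 0.55, 0.71]
r = pearson_r(minutes_returned, points)
print(r, r > r_threshold_90(len(points)))
```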
A similar pattern exists if we look at defensive stability, though the correlation doesn't become insignificant until after eight games:
These two graphs, then, suggest (though perhaps not convincingly) that it may take as few as three or four games for a team in general to mesh, while it may take as many as eight for a defensive unit to come together.
Now let's take a look at the second definition: meshing defined as the point at which a team's results through some number of games “sufficiently” indicate what its results will look like for the rest of the season.
To do this, I've split teams into two groups: those with “high” roster turnover (the top 50%) and those with “low” roster turnover (the bottom 50%). I then regressed each team's final points total on its points total after x games, for each of the two groups. The R-squared values for each of these regressions are graphed below, with the linear models from the set of all teams included as well. Essentially, what we are looking at is how well we can predict where a team will finish the season based on what it has done after a given number of games.
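Each of those per-group fits reduces to a one-predictor linear regression and its R-squared (which, for a single predictor, equals the squared Pearson correlation). A self-contained sketch with toy numbers:

```python
def r_squared(x, y):
    """R^2 of a simple linear regression of y on x (least squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx                      # fitted slope
    ss_res = sum((yi - (my + b * (xi - mx))) ** 2
                 for xi, yi in zip(x, y))
    return 1 - ss_res / syy

# Toy example: points after nine games vs. final points
points_after_9 = [14, 9, 17, 7, 12]
final_points = [50, 38, 57, 33, 45]
print(r_squared(points_after_9, final_points))
```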
Through six games, each game is about as predictive for each group, meaning that how well a team with high roster turnover does through six games is just as indicative of how that team will finish as how well a team with low roster turnover does through six games. That is to say, we don't gain any extra predictive power by knowing a team's level of roster turnover.
By game seven, though, high turnover teams begin to out-pace low turnover teams- by game seven we have a better idea of how high turnover teams will finish the season than low turnover teams.
By game nine, the R-squared value for high-turnover teams is at .546, which is pretty high. We would expect predictions made using this nine-game point total to be, on average, only about seven points off the final season total. That gets us pretty close for being barely a quarter of the way into the season.
Though it's a normative statement rather than a positive one, and you could really draw the line anywhere, I would suggest that nine games is as good a place as any to set the limit on meshing under our second definition. At the very least, we can say that after nine games we should have a decent idea of whether a rebuilding process will be successful in year one.
Finally, let's turn our attention to the third definition: meshing as the point at which a team reaches its consistent level of performance.
Let's investigate this phenomenon a little bit.
Here's a graph of the three-game rolling expected goal difference for Sporting Kansas City last season, a decently representative mid-table team (at x = 4, for example, the y-axis value is the xGD from games two, three, and four). Expected goal difference is a pretty reasonable statistic for gauging how good a team is.
It's pretty much all over the place.
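A rolling xGD series like the one plotted above is a few lines to compute. A sketch with made-up per-game xG values, not SKC's actual numbers:

```python
def rolling_xgd(xg_for, xg_against, window=3):
    """Rolling expected goal difference: entry for game i sums the
    per-game xGD over games i-window+1 through i."""
    diffs = [f - a for f, a in zip(xg_for, xg_against)]
    return [sum(diffs[i - window + 1:i + 1])
            for i in range(window - 1, len(diffs))]

# Illustrative four-game stretch
print(rolling_xgd([1.2, 2.1, 0.8, 1.5], [0.9, 1.0, 1.6, 1.1]))
```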
A three-game rolling points-per-game graph of another mid-table team from last year, the Vancouver Whitecaps, tells a similar story:
These graphs point to something that I think is important (though perhaps obvious): it's mostly unreasonable to expect game-by-game measures of a team's strength to converge over the course of a season. (Metrics like xGR (expected goal ratio), TSR (total shot ratio), and points per game will converge, but usually only when they're calculated in aggregate.) There are a lot of reasons for this. Injuries, international call-ups, strength of schedule, and mid-season transfers all affect a team's consistency of performance. Teams, save maybe the very dominant and the very bad ones, just go through peaks and valleys throughout the year. They have good games and bad games.
What does this mean for meshing, then?
Well, we've already seen that how a team performs at the start of the year can be predictive of where it finishes, particularly for teams with high turnover. The point above, though, suggests that how a team starts the year isn't necessarily indicative of how it will perform throughout the year.
For teams who haven't quite come together yet, then, there is certainly still hope of righting the ship. Given the above analysis, I would expect the effects of bringing new players into the system to begin to wear off by game four or five (though this may take a bit longer this season because of international call-ups). By game nine or ten, a team should have a decent idea of how well it has done in rebuilding its roster. If things remain bleak at that point, there is still the possibility of finding some success, but it may come only in limited doses.