Youth Soccer Rankings?

OK, I have clarification on how the Club ratings are calculated. It's the average of the ratings of the top teams, as if two clubs playing each other each brought only their single best team in each age group, then aggregated the scores to see who's on top. Any second or lower team in any age group should have no effect on a club's rating.

The FAQ has been updated to explain the club rankings.

Q: How are club rankings calculated?

A: We use the average of the ratings of the top teams. This is equivalent to two clubs playing their top teams against each other and then aggregating the scores to see who is best.

We include the core competitive age groups of U11 - U17. If a club doesn’t have a team in an age group, then it is given zero credit in the average.
 
The Club rankings algorithm has been tweaked a bit; I think it's been a good improvement. It takes the top teams from U11 - U17 (1 top team from each age group), and calculates the average rating. Clubs need at least 5 teams to be ranked. It no longer averages zeroes in if there is no team in a specific age group. It updates every day. Here's what it's showing today for Girls, Boys, and Combined (in California).

[Attached: Girls, Boys, and Combined club rankings screenshots]
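
For anyone curious about the mechanics, here's a minimal sketch of the club-average logic as described above (my own illustration with made-up numbers, not the app's actual code):

```python
# Sketch of the club rating rules described above (illustrative only, not the
# app's actual code): best team per age group U11-U17, at least 5 teams to be
# ranked, and missing age groups skipped rather than averaged in as zero.
AGE_GROUPS = ["U11", "U12", "U13", "U14", "U15", "U16", "U17"]

def club_rating(teams):
    """teams: list of (age_group, rating) tuples, one per team in the club."""
    if len(teams) < 5:
        return None  # clubs need at least 5 teams to be ranked
    best = {}
    for age, rating in teams:
        if age in AGE_GROUPS:
            best[age] = max(rating, best.get(age, float("-inf")))
    if not best:
        return None
    # Only the club's single best team per age group counts, and only the
    # age groups it actually fields are averaged.
    return sum(best.values()) / len(best)

# A second U12 team has no effect; only the stronger one (41.5) counts.
print(club_rating([("U11", 38.0), ("U12", 41.5), ("U12", 35.0),
                   ("U13", 40.0), ("U14", 43.0), ("U15", 39.5)]))  # 40.4
```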
 
Interesting, there are clubs from ECNL, GA, Next, etc., plus NorCal and SoCal. Also several clubs that consistently deliver quality but aren't always recognized for it.

I wonder if this will drive clubs deeper into their closed leagues, or less so.

It will definitely make tournaments like Surf Cup, where clubs that normally don't play each other do, more interesting.
 
Where YSR's algorithm failed was in over-rewarding blowouts... meaning teams in the SoCal league who were winning 6-0 every weekend were ranked ahead of ECNL teams who were winning 2-1 or tying the majority of their games... I am assuming that is still the case... YSR was still the best, but this was a definite flaw at the older ages, U-13 and up.
 
I think any significant inaccuracies there may be more of a symptom of lack of inter-play than a decision to overweight (or underweight) blowouts. If, in a hypothetical higher league, games are often one-goal affairs, while in a lower league there are often blowouts - within each of the leagues themselves, relative ranking should remain accurate. If a high-scoring team in the lower league is 6 goals better than the lower-scoring teams in that same league, they should have a rating that is 6 goals better. And if a high-scoring team in the higher league is only 3 goals better than the lower-scoring teams in that same league, they should have a rating that is 3 goals better.

Now if none of the higher league ever play any of the lower league in any recorded games or tournaments, it becomes a definition problem. Do you set the top league's starting point 3 points higher? 6 points higher? 20 points higher? Establishing the first reference point is probably a bit of a finger held up in the wind. After only 5 or 6 games within that league, there is plenty of info to rate the individual teams against each other, even with zero prior history for any of them. But if there is no history of those teams playing any of the teams in the other league, everything can be stuck at whatever initial rating was assumed. And in this hypothetical construct, a very high-scoring team in the lower league might show a rating higher than an average team in the higher league.

So it becomes a bit of a self-fulfilling prophecy: leagues that are completely separated from other leagues can't objectively and mathematically be ranked against each other, simply because there aren't enough events where teams from those leagues are compared (via actual games) with each other. However, every time even a single team within a closed league goes outside to play a tournament or similar against outside teams, that result is applicable back to the source league, helping balance and normalize the ratings of even the teams that never go outside for opponents. One good example of this appeared to be the teams in Alaska earlier this year. The ratings of some of them were uncomfortably high. They tend to only play each other (it's Alaska!), so there is very little inter-play outside. But looking through them all recently, it seems there were enough tournaments that pulled just enough of them out to play outside that the ratings of the entire state now appear much more in line with what one might expect.

With all of this, it becomes a bit intuitive that national rating numbers between a team in Dubuque, a team in Boston, and a team in Seattle may be a bit off, if none of them ever play each other, they don't play anyone else who plays them, they don't play anyone who plays anyone who plays them, and so on. It's like trying to compare AAA baseball here in the US to the second-level baseball league in Japan. You can try, but without direct play it's going to be mostly conjecture no matter how you do it. But the relative ratings in this app are necessarily much more relevant when you're using them to compare against teams in your area or conference who you actually do play, will play, or have played.
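
To put the "who plays whom" point another way: two teams can only be objectively compared if some chain of recorded games connects them. A quick sketch of that idea (hypothetical team names and games, not the app's implementation):

```python
# Two teams are mathematically comparable only if a chain of recorded games
# connects them. Hypothetical data; not the app's implementation.
from collections import defaultdict, deque

def connected(games, a, b):
    """games: list of (team1, team2) pairings; True if a chain links a to b."""
    graph = defaultdict(set)
    for t1, t2 in games:
        graph[t1].add(t2)
        graph[t2].add(t1)
    seen, queue = {a}, deque([a])
    while queue:
        team = queue.popleft()
        if team == b:
            return True
        for nxt in graph[team] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

games = [("Dubuque A", "Dubuque B"), ("Boston A", "Boston B")]
print(connected(games, "Dubuque A", "Boston A"))  # False: two isolated ponds
games.append(("Dubuque B", "Boston B"))           # one crossover tournament game
print(connected(games, "Dubuque A", "Boston A"))  # True: ratings can normalize
```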
 
Just to throw more gas on the fire: teams often play up a year in tournaments. How does this track back to rankings?

If teams are able to compete a year or two up from their natural age, how does this translate to age-appropriate rankings?
 
This one is well accounted for because everything is based only on team-to-team results. If a team plays up, they will be playing against teams with higher starting scores which will be reflected in their results.
 
Yep, that's exactly right. The ratings are independent of age. A team of a certain age gets a rating. If that team plays up a year, they still have the same rating - and they are still the same team. If that team plays up 2 years, they still have the same rating. It's expected that the teams they will be playing in the higher ages will, on average, have higher ratings themselves - so if the team playing up does well against them, their own rating will improve.

The only way this works, though, is if the team names used for a single team remain consistent. In most cases they do, for league play and for major competitions. If the "2011B Firecrackers" play up in the 2010 bracket, they are still named the "2011B Firecrackers" while playing in that bracket. In the app, they remain in the 2011 age group, because the team is defined as a 2011-age team. If it turns out the team is really playing 2010 brackets exclusively, you can just change the team profile in the app to 2010 if you choose, so you can see how that team is ranked against other teams in the 2010 age group. Or it can stay a 2011 team if it really is made up of 2011 kids and just plays up occasionally. Changing the age has zero effect on the team's rating - ratings come purely from the opponents the team has played and its relative performance against them.

The issue that can come up, though, is if the coach/manager submits the team to a tournament and calls the team "2010B Firecrackers Tournament version (2011)" or anything else substantially different from the original team name of "2011B Firecrackers". If the tournament system doesn't have a field for the GotSoccer/GotSport team ID, and the team name is a one-off, it gets pulled in as a new team - and now there is an unranked team in the standings tied to those specific tournament results. The coach, manager, or anyone else with a Pro account can find those results once they are pulled in and assign them to the main "2011B Firecrackers" team if they are pretty sure the results are actually from that same team.
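
For what it's worth, the kind of name mismatch described above can even be flagged automatically with simple string similarity. A rough sketch (purely illustrative; the app's actual workflow relies on GotSoccer/GotSport IDs and manual assignment, as described above):

```python
# Rough sketch of flagging a one-off tournament name as a possible duplicate
# of an existing team. Illustrative only; the real workflow uses team IDs and
# manual assignment by a Pro account holder.
from difflib import SequenceMatcher

def name_similarity(existing, candidate):
    return SequenceMatcher(None, existing.lower(), candidate.lower()).ratio()

score = name_similarity("2011B Firecrackers",
                        "2010B Firecrackers Tournament version (2011)")
print(round(score, 2))  # a high-ish score suggests a human should review the match
```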
 
This one is well accounted for because everything is based only on team-to-team results. If a team plays up, they will be playing against teams with higher starting scores which will be reflected in their results.

I’m confused. Aren’t the rankings and ratings only meant for within that age bracket?
 
So I've noticed the email contacts are no longer working. I wrote in for a correction of a Showcase event that had been double-reported, and it got pinged back to me. Anyone know anything about this?
 
I’m confused. Aren’t the rankings and ratings only meant for within that age bracket?

That's not correct. The ratings are on a single scale across all age brackets. A 2010 boys team rated 45 and a 2011 team rated 45 would be expected to be dead even if they played each other. And a 2010 boys team rated 45 and a 2011 girls team rated 45 would also be expected to be dead even. A 2010 team rated 45, playing a 2011 team rated 40, would be expected to win a typical game by 5 goals.

Now, since these ratings are adjusted continuously by teams playing each other, the relative differences in rating within an age group are almost certainly more accurate than the relative differences between age groups, if teams rarely (if ever) actually play each other across the particular age difference. Saying that a 2006 team rated 52 would beat a 2013 team rated 31 by 21 goals is a bit silly for any number of reasons. It might be by 10 goals, it might be by 150 goals if the older team was going for the world record - it's a silly hypothetical that wouldn't actually happen in the real world.
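
Said plainly, the single scale works like a goal-difference handicap: the expected margin in a typical game is just the difference in ratings, whatever the age groups. A trivial illustration using the numbers above (assuming that's the intended reading of the scale):

```python
# The single-scale reading described above: expected typical-game margin is
# simply the rating difference, regardless of age group or gender.
def expected_margin(rating_a, rating_b):
    return rating_a - rating_b

print(expected_margin(45, 45))  # 0  -> dead even (2010 boys vs 2011 team)
print(expected_margin(45, 40))  # 5  -> the 45-rated team by ~5 goals
print(expected_margin(52, 31))  # 21 -> the silly cross-age hypothetical
```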
 
Blowouts and lack of cross-play seem to be a feature of closed leagues. Other than Southwest, the top team in each division is in their own little silo.

On the girls side, the top ranked team in most of the age groups is winning their league games by an average of 6 goals or so.

It’s like watching Bayern in Bundesliga. The league games don’t tell you anything. The real information comes when they leave their small pond. (Nationals, Surf cup, Jefferson Cup, etc.)
 
Youth soccer rankings are basically a trivial pursuit for adults; kids just want to play the game.

My players have generally always been on "top ranked" teams by the end of the season(s), but the rankings don't really matter to them.

It really comes down to coaching and the environment. The league you play in, whatever it was/is (DA, ECNL, MLS Next), doesn't matter as much as the coaching and environment. Pseudo-titles and rankings don't play the games.

As long as you have good influences, good coaches, and a mentor you believe in, and you buy into their philosophy, methodology, and the environment they are creating, a player can get better, improve, and benefit from that environment.

Beyond youth, coaches' polls would seem to be predictable, but they're always a moving target; they are either done infrequently or the committees are small, which sometimes produces a bias toward the more established or well-known programs.

Being in the top 10 matters in college (NCAA) for postseason berths, and the competition is pretty fierce; of course it also helps a bunch with recruiting, since players are attracted to "winning" programs.

Have fun with it, spot trends or whatever, but rankings are trivial; many teams play the same limited set several times, and it's very geographically bound. If you want to worry about rankings, don't sweat the youth ones. There's plenty of time for that later in college or after.

Good youth coaches generally don't need rankings to tell them or quantify how a team or its players are doing; they should know, and make adjustments accordingly.
 
On the girls side, the top ranked team in most of the age groups is winning their league games by an average of 6 goals or so.

Where are you seeing this? I just looked at all the age groups in Girls (California), and while that is mostly the case for some of the youngers (2011 and younger), by the time they get to 2010 it's just not the case at all. The very top girls team in the state (2010) looks like this over its past 21 games:

1-0, 1-0, 3-2, 5-0, 1-0, 3-1, 9-0, 6-0, 7-0, 4-1, 3-0, 2-0, 8-1, 1-0, 6-0, 9-1, 2-2, 6-1, 0-2, 5-1, 2-0

Going older to 2009, 2008, and beyond, they all look similar. If at one point in time it looked like blowouts were both happening often and being over-rewarded by this algorithm, that's certainly not what the ratings are showing now.

It’s like watching Bayern in Bundesliga. The league games don’t tell you anything. The real information comes when they leave their small pond. (Nationals, Surf cup, Jefferson Cup, etc.)

Maybe, but this shouldn't stay that way for long, or forever. If a team is blowing away other, lesser teams in their own pond, they should have a higher rating that corresponds to that. The ratings are driven by results - there is no magic to them. If it turns out they are a "fake" 48, and when they go to a large tournament against a "real" 48 they get shellacked, their rating will be affected significantly, which in turn affects all the ratings back at the pond over time. The more cross-play over time, the less the drift; the less cross-play, the higher the chance of a closed pond having ratings that are not well calibrated with other ponds.

Youth soccer rankings are basically a trivial pursuit for adults; kids just want to play the game.

Yes, and yes. But it's adults who are choosing where their kids should play, adults choosing what leagues and tournaments their teams pursue, adults choosing/accepting how to be bracketed in those tournaments, and adults dealing with other adults when managing a team long-term for the success of its players, the team, and the club as a whole.

Effective ratings give those adults useful information about all of those decisions, and they can highlight when those decisions are being made poorly. It's certainly not the only useful information - it's just data. A good coach vs. a bad coach, and ultimately choosing which coach/org is best for your child throughout their youth soccer career, involves many, many factors that all need to be understood and evaluated over time.
 
What I'm seeing with this app + club rankings is that tournaments that host teams from different clubs (Surf Cup, etc.) are the highest risk/reward for leagues/clubs.

Considering the way clubs set up closed leagues to guarantee results, logically it would make sense that they do the same thing with tournaments, meaning GA clubs would only do GA tournaments, ECNL clubs would only do ECNL tournaments, etc.

It's just a matter of time before GA/ECNL make a rule that member clubs only play in approved tournaments with teams from their associated league.
 
You may very well be right, and that would be a shame. Then comparing teams in different leagues turns into an argument at the bar where you're convinced your hockey team is better than their badminton team. :)

On a separate but related note, the club rankings are likely going to change significantly soon as well. It turns out that allowing 5 out of the 7 age groups means the rankings filter for clubs that only have teams in the top 5 (older) age groups. The top national clubs, and even some of the top state clubs in large states like California, become the ones that simply don't have any 2011 and 2012 teams. Having those younger teams lowers a club's overall average enough to make a large difference in the rankings, even if its top 5 teams are the same as comparable clubs', or even better. On the other hand, averaging in zeros for clubs that don't have teams at an age level hurts the ranking so much that the club drops so far below clubs with all teams that the ranking is unusable. He's working on something to adjust between these two extremes, so we'll see what he comes up with next.
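
The gap between the two extremes is easy to see with made-up numbers. Take a club whose best teams in five of the seven age groups are each rated 40, with no 2011 or 2012 teams at all:

```python
# Hypothetical club: best team in 5 of the 7 age groups rated 40 each,
# no U11/U12 teams. The two averaging policies land far apart.
ratings = [40, 40, 40, 40, 40]               # top teams, U13-U17 only

skip_missing = sum(ratings) / len(ratings)   # 40.0 -> ties clubs fielding all 7
with_zeros   = sum(ratings) / 7              # ~28.6 -> drops far below them

print(skip_missing, round(with_zeros, 1))
```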
 
Where YSR's algorithm failed was in over-rewarding blowouts... meaning teams in the SoCal league who were winning 6-0 every weekend were ranked ahead of ECNL teams who were winning 2-1 or tying the majority of their games... I am assuming that is still the case... YSR was still the best, but this was a definite flaw at the older ages, U-13 and up.
Agree. Also think shutouts should count for more. I'll take a 2-0 win over a 4-2 win. Still lots of fun to follow.
 
Agree. Also think shutouts should count for more. I'll take a 2-0 win over a 4-2 win. Still lots of fun to follow.

Check out the separate offensive and defensive ratings that are given to teams now in the app - they help take into account the differences you've laid out. A team that consistently wins 4-2 isn't the same team that consistently wins 2-0, which I think is your point - and it's an accurate one. This app was made by the same guy who developed YSR, and it's based on the same mechanisms and principles, but it's been changed and improved in a number of ways. Even if some of the criticism of YSR in past years was 100% valid, that doesn't all translate into problems with the current app. It might have some of the same ones, and it might even have new ones.
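
For illustration, here's one plausible way separate offensive and defensive ratings can capture that difference - this is my own assumed model, not the app's published formula:

```python
# One plausible (assumed) model, not the app's published formula: expected
# goals for each side depend on that side's offense vs the opponent's defense.
def expected_score(off_a, def_a, off_b, def_b, baseline=1.5):
    goals_a = max(0.0, baseline + off_a - def_b)  # def_b suppresses A's scoring
    goals_b = max(0.0, baseline + off_b - def_a)
    return goals_a, goals_b

# Same overall margin (+2), very different profiles:
print(expected_score(off_a=2.5, def_a=0.0, off_b=0.5, def_b=0.0))  # (4.0, 2.0)
print(expected_score(off_a=0.5, def_a=2.0, off_b=0.5, def_b=0.0))  # (2.0, 0.0)
```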
 