Youth Soccer Rankings?

Where are you seeing this? I just looked at all the age groups in Girls (California), ...


Maybe, but this shouldn't stay that way for long, let alone forever. If a team is blowing away other lesser teams in their own pond, they should have a higher rating that corresponds to that. The ratings are driven by results - there is no magic to them. If it turns out they are a "fake" 48, and when they go to a large tournament against a "real" 48 they get shellacked - their rating will be affected significantly, which in turn affects all the ratings over time back at the pond. The more cross-play over time - the less the drift; the less cross-play - the higher the chance of a closed pond having ratings that aren't well calibrated with other ponds.
Look outside of Southwest. Other regions get more blowouts, and it shows up in the rankings.

It doesn't quite wash out. The average pond rating will be about right. But the distribution of that rating within the pond will be off. The big fish has a fake 48 (real 46), but the little fish have fake 38s (real 40).

A small amount of cross-play doesn't quite fix this. The overrated teams lose a bit when they underperform at Jefferson Cup. Underrated teams gain a bit when they overperform at some lesser tournament. The fake 48 becomes a 47. The fake 38 becomes a 39. Then you have another 9-10 league games, and the fake 48 and fake 38 return.
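
To make that cycle concrete, here's a toy sketch in Python. The real rating math isn't public, so this assumes a bare-bones Elo-style update where one rating point equals one goal of expected margin - the K constant and every score in it are invented for illustration, not taken from the actual system.

```python
# Toy model of the pond-drift cycle: an assumed Elo-style update where one
# rating point corresponds to one goal of expected margin. K and all numbers
# are illustrative guesses, not the real system's values.
K = 0.5

def update(r_a, r_b, actual_margin):
    """Nudge both ratings toward the observed goal margin."""
    err = actual_margin - (r_a - r_b)   # surprise relative to the ratings
    return r_a + K * err, r_b - K * err

big, little = 48.0, 38.0   # the "fake" pond ratings (really a 46 and a 40)

# Two league games, one cross-play weekend, then 10 more league games.
schedule = ["league"] * 2 + ["cross"] + ["league"] * 10

for game in schedule:
    if game == "cross":
        big, _ = update(big, 48.0, -2.0)       # meets a genuine 48: 48 -> 47
        little, _ = update(little, 39.0, 1.0)  # beats a genuine 39: 38 -> 39
    else:
        # In-pond blowout: the margin keeps matching the inflated 10-point gap.
        big, little = update(big, little, 10.0)

print(f"end of season: big {big:.1f}, little {little:.1f}")
# -> end of season: big 48.0, little 38.0
# The one cross-play correction gets washed right back out by league play.
```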

Still a great system. But take it with a grain of salt whenever you see a string of 6-0 and 12-0 games against lower ranked teams.
 
The top 2010G RL team is dropping in the rankings despite being undefeated, because the algorithm is predicting they have to win their games 7-0 and 5-0 on a consistent basis. Scores are going to be low when opponents are stacking 10/11 in the box the whole game.
 
It doesn't quite wash out. The average pond rating will be about right. But the distribution of that rating within the pond will be off. The big fish has a fake 48 (real 46), but the little fish have fake 38s (real 40).

I see what you're saying, but I think I'm looking at the exact same data you describe and drawing a different conclusion. Within that pond, in that context, nothing is fake. If there are blowouts from big fish to little fish - that difference is representative of actual games between those fish (sorry, little fishies). The average pond rating doesn't really matter. If the teams in that pond continue playing each other for a month or two - there is nothing fake about the relative ratings within the pond.

Now if you take that numerical rating and bring said big fish to a separate tournament (say, a great lake?), and it is made clear that the pond rating was way off, there will be an effect. I take your point that the effect isn't significant enough - that if the vast majority of the games remain back in the home pond and there just isn't enough cross-play to move the needle, the pond ratings will correspond accurately to performance at the home pond - but are less meaningful as applied outwards. For it to be significantly incorrect, though - it turns out that the average pond rating wouldn't be about right. It was set too high / too low, or it drifted too high / too low. Pond-based big fish aren't getting a "fake" rating due to some ponds being blowout rich - they are getting a "fake" rating due to not enough fish-on-fish competition with other ponds to sort things out over time.

But regardless of the actual math differences - it does illustrate the larger point. These particular ratings are more accurate if they are fed a bunch of relevant data, and less accurate if they have too little relevant data. Use them to size up an upcoming tournament or league game that you're actually entering, and it may surprise you how accurately they can predict good games and bad games, strong competition and weak competition. Use them to compare ratings across teams and leagues that have nothing at all to do with each other, and there is a bunch more wiggle room.
 
The top 2010G RL team is dropping in the rankings despite being undefeated, because the algorithm is predicting they have to win their games 7-0 and 5-0 on a consistent basis. Scores are going to be low when opponents are stacking 10/11 in the box the whole game.

I'm assuming a lot here, but looking at the data of what I think you're referring to - I don't see the issue. Here are the standings for the RL G10 Mojave group. I've added a column to that spreadsheet, with the current rating as of today:

RL Standings.jpg

The rating tracks the standing in the conference almost exactly. There are small nuances between #4 & #5, and #8 seems a little low, but in general - it maps pretty closely. Which makes sense; the ratings come from the performance in games. But more importantly - if you look at the history for the #1 team, and look back on their last 20 games, the app clearly shows that they performed exactly as expected 18 times, and underperformed twice. That's a ringing endorsement that the rating they have corresponds to the results they are seeing because, again, it's circular. They get the rating that they perform at, and they perform at the rating they get.

Now look forward, to see what's likely to happen in their next two games. They should beat HB Koge by about 2 goals, and they should beat Pateadores by about 4. Will they do that in either? Who knows. But the app gives them a 62% chance of winning against HB Koge, a 16% chance of tying, and a 22% chance of losing. Against Pateadores, they have a 78% chance of winning, a 9% chance of tying, and a 13% chance of losing.

So the question is, is that 41.00 low compared to what it could be if they played outside the conference with more challenging teams? Who knows. #1 team in the state is showing a 45.47 in ECNL, and Slammers RL is showing a 41.00 as the #25 team in the state. Considering two months ago they lost 4-1 to Beach ECNL (43.44), and 2-1 to Slammers ECNL (42.39), it doesn't seem terribly off at all to me.
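
If you're curious how a rating gap could turn into those win/tie/loss odds, here's a rough guess at the shape of it. I'm assuming each team's goals are Poisson-distributed, with a made-up baseline of 1.5 goals for the weaker side - so it won't reproduce the app's exact 62/16/22, but it shows how a ~2-goal and a ~4-goal expected margin map to probabilities:

```python
import numpy as np

# Guessed model: goals are Poisson, with the favorite's mean raised by the
# expected margin. The 1.5-goal baseline is an assumption, not the app's.
def wtl_probs(expected_margin, base_goals=1.5, trials=200_000, seed=0):
    """Monte Carlo win/tie/loss odds for a given expected goal margin."""
    rng = np.random.default_rng(seed)
    goals_for = rng.poisson(base_goals + expected_margin, trials)
    goals_against = rng.poisson(base_goals, trials)
    diff = goals_for - goals_against
    return (diff > 0).mean(), (diff == 0).mean(), (diff < 0).mean()

for margin in (2, 4):   # the ~2-goal and ~4-goal favorites above
    w, t, l = wtl_probs(margin)
    print(f"expected margin {margin}: win {w:.0%} / tie {t:.0%} / loss {l:.0%}")
```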
 
I am not sure how a team rating would help parents decide whether they should find another club for their child or not. The coach and club may care for marketing and perhaps even coach evaluation, but it's not very useful for parents.
If the team is rated very high but your kids are playing 20% of the time, how does it matter?
Whether the team has a high or low rating, I think any child should move to a different team after 1 year of playing less than 50%.
 
I am not sure how a team rating would help parents decide whether they should find another club for their child or not. The coach and club may care for marketing and perhaps even coach evaluation, but it's not very useful for parents.

I'm not sure that a rating alone is the strongest signal, or even a strong signal, about whether an individual player should leave the team they are on.

If the team is rated very high but your kids are playing 20% of the time, how does it matter?

Whether the team has a high or low rating, I think any child should move to a different team after 1 year of playing less than 50%.

I agree completely. If you're not getting enough playing time for an extended period - it doesn't matter how good the team is; the player isn't getting any benefit from it, and isn't improving at a sufficient rate to change the situation. It's likely time to find another team.

On the other end of the bench, if the player is the lone star or one of the few clear stars on a team - with more than sufficient playing time - yet the team isn't improving, and there's nowhere appropriate to move up within the club, it may be time for that player to find another team as well.

When either of those players is looking for a new place - the quality of the team/club/coach/cost/logistics are all factors, and knowing how all of those compare to their peers seems useful.
 
I am not sure how a team rating would help parents decide whether they should find another club for their child or not. The coach and club may care for marketing and perhaps even coach evaluation, but it's not very useful for parents.
If the team is rated very high but your kids are playing 20% of the time, how does it matter?
Whether the team has a high or low rating, I think any child should move to a different team after 1 year of playing less than 50%.
You don’t use ratings to find a team to play for. You use ratings to find a team to play against.

If your team is a 37 and the other team is a 37, it’s probably a decent game. Worth setting up the scrimmage.
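
In code terms, that use case is about as simple as it gets - a hypothetical sketch with invented team names and a ±1.5-point window pulled out of thin air:

```python
# Hypothetical scrimmage finder: keep opponents whose rating is close to ours.
# The team names and the 1.5-point window are invented for illustration.
candidates = {"Team A": 37.2, "Team B": 41.0, "Team C": 36.5, "Team D": 33.8}

def good_scrimmages(our_rating, pool, window=1.5):
    return [team for team, rating in pool.items()
            if abs(rating - our_rating) <= window]

print(good_scrimmages(37.0, candidates))   # -> ['Team A', 'Team C']
```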
 
I'm assuming a lot here, but looking at the data of what I think you're referring to - I don't see the issue. Here are the standings for the RL G10 Mojave group. I've added a column to that spreadsheet, with the current rating as of today:

RL Standings.jpg

The rating tracks the standing in the conference almost exactly. There are small nuances between #4 & #5, and #8 seems a little low, but in general - it maps pretty closely. Which makes sense; the ratings come from the performance in games. But more importantly - if you look at the history for the #1 team, and look back on their last 20 games, the app clearly shows that they performed exactly as expected 18 times, and underperformed twice. That's a ringing endorsement that the rating they have corresponds to the results they are seeing because, again, it's circular. They get the rating that they perform at, and they perform at the rating they get.

Now look forward, to see what's likely to happen in their next two games. They should beat HB Koge by about 2 goals, and they should beat Pateadores by about 4. Will they do that in either? Who knows. But the app gives them a 62% chance of winning against HB Koge, a 16% chance of tying, and a 22% chance of losing. Against Pateadores, they have a 78% chance of winning, a 9% chance of tying, and a 13% chance of losing.

So the question is, is that 41.00 low compared to what it could be if they played outside the conference with more challenging teams? Who knows. #1 team in the state is showing a 45.47 in ECNL, and Slammers RL is showing a 41.00 as the #25 team in the state. Considering two months ago they lost 4-1 to Beach ECNL (43.44), and 2-1 to Slammers ECNL (42.39), it doesn't seem terribly off at all to me.
I guess for me, there should be some type of cap on margin of victory before it drags your rating down. A 3-0 win shouldn't count against your team. All you're asking for is for coaches to run up the scores now.
 
I guess for me, there should be some type of cap on margin of victory before it drags your rating down. A 3-0 win shouldn't count against your team. All you're asking for is for coaches to run up the scores now.

For all any of us know, there are mechanisms to minimize that effect. You are assuming quite a bit about the algorithm, and looking at this specific data for this specific team - there is no clear requirement for them to run up the score in all but one game. If they are playing a particularly unfortunate team, yes, the expected score is going to be high. There is a team in that bracket with 2 goals for and 42 goals against so far this year. If the best team in the league beats them 3-0, it's just not a good showing, and it provides some data that the unfortunate team's opponent may not have performed at their best either - compared to its competitors who win with a significantly higher margin.

In this case - it looks like the Slammers rating went down from 41.00 to 40.98 after this weekend's game against such a team. It might feel unfair, but it's also so minor that it isn't a big deal. Conversely, do you believe that their rating should go up for beating the last-place team 3-0? So here's what this particular team has to do in every remaining league game to maintain their rating (assuming the ratings of each stay relatively consistent with today's):

Slammers FC HB Koge RL 2-0
Pateadores RL 3-0
LAFC So Cal RL 2-0
Eagles SC RL 2-0
LA Breakers FC RL 4-0
Eagles SC RL 2-0
Phoenix Rising RL 3-0
San Diego Surf RL 3-0
Heat FC RL 7-0
Sporting California USA RL 3-0
So Cal Blues ECNL RL 1-0
LAFC So Cal RL 2-0
Sporting California USA RL 3-0
Utah Royals FC-AZ RL 4-0
Beach FC RL 1-0
Legends FC RL 1-0

Blowouts aren't required in all but 1 case. And - achieving all of these, some of these, or none of these isn't an objective success or failure for the team. But what it does mean is that their performance rating will go up compared to their peers if these expectations are exceeded, and if they aren't met, their performance rating will go down compared to their peers. Their rating might also go up a bit throughout the season even by just matching these expectations, as the average rating for that particular league rises with any external play. It might also go down - but that's less likely; typically ratings continue to go up both throughout the year and from year to year until U17 or higher.
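
And if it helps to see why that 41.00 -> 40.98 drop is so tiny, here's the arithmetic as I'd guess at it. The K constant below is reverse-engineered to fit this one example; the real system's constant - and whether it adds margin caps or diminishing returns on blowouts - is unknown to me.

```python
# Guessed per-game nudge: the rating moves by K times the gap between the
# actual and expected goal margins. K = 0.005 happens to reproduce the
# ~0.02 drop discussed above; it is not the real system's constant.
K = 0.005

def rating_after(rating, expected_margin, actual_margin):
    return rating + K * (actual_margin - expected_margin)

# Expected to win by ~7, actually won by 3: a 4-goal shortfall.
print(f"{rating_after(41.00, 7, 3):.2f}")    # -> 40.98
# Beat the expectation by a goal instead, and it nudges up just as gently.
print(f"{rating_after(41.00, 7, 8):.3f}")    # -> 41.005
```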
 
I see what you're saying, but I think I'm looking at the exact same data you describe and drawing a different conclusion. Within that pond, in that context, nothing is fake. If there are blowouts from big fish to little fish - that difference is representative of actual games between those fish (sorry, little fishies). The average pond rating doesn't really matter. If the teams in that pond continue playing each other for a month or two - there is nothing fake about the relative ratings within the pond.

Now if you take that numerical rating and bring said big fish to a separate tournament (say, a great lake?), and it is made clear that the pond rating was way off, there will be an effect. I take your point that the effect isn't significant enough - that if the vast majority of the games remain back in the home pond and there just isn't enough cross-play to move the needle, the pond ratings will correspond accurately to performance at the home pond - but are less meaningful as applied outwards. For it to be significantly incorrect, though - it turns out that the average pond rating wouldn't be about right. It was set too high / too low, or it drifted too high / too low. Pond-based big fish aren't getting a "fake" rating due to some ponds being blowout rich - they are getting a "fake" rating due to not enough fish-on-fish competition with other ponds to sort things out over time.

But regardless of the actual math differences - it does illustrate the larger point. These particular ratings are more accurate if they are fed a bunch of relevant data, and less accurate if they have too little relevant data. Use them to size up an upcoming tournament or league game that you're actually entering, and it may surprise you how accurately they can predict good games and bad games, strong competition and weak competition. Use them to compare ratings across teams and leagues that have nothing at all to do with each other, and there is a bunch more wiggle room.
Within the pond is the wrong question. You don’t need YSR to compare teams in the same league. You have league standings.

YSR is more useful for comparing teams from different leagues. It helps with questions like “how should I flight this tournament?” and “who should I call for a scrimmage?”

For that, the important question is the accuracy of ratings between ponds. If this strong GA team plays that weak ECNL team, is it a reasonable match? A system which over-values goal differential will give you the wrong answer.

Totally agree that it would be more accurate if we didn’t divide up all the fish into their own little ponds. You get weird things like Solar 2010 ECRL being ranked 7 points higher than Sting Austin ECNL. Either the ratings system is off, or those two teams really ought to switch leagues. (or both.)
 
Within the pond is the wrong question. You don’t need YSR to compare teams in the same league. You have league standings.

Yes, this is a fair point. Those two sets of data should correlate so closely in most cases that you don't need both. It may give a better guess on goal differential for an in-league game than a guess not based on game results - but in the big picture it doesn't matter much.

YSR is more useful for comparing teams from different leagues. It helps with questions like “how should I flight this tournament?” and “who should I call for a scrimmage?”

For that, the important question is the accuracy of ratings between ponds. If this strong GA team plays that weak ECNL team, is it a reasonable match? A system which over-values goal differential will give you the wrong answer.

Agreed, I also believe that is the main benefit. But assuming the team comparisons across different ponds are already wrong benefits those who don't want to admit that there may be some truth to those rating differences. Whether goal differentials are emphasized too much or too little isn't a matter of opinion; it's a math and data question about what actually happens when teams meet and validate/invalidate those ratings. When your team goes and plays a team they haven't seen before - are the results as expected, or are they way off? Look down at the last 30-50 games in the history, and identify which games were so far off of expectation that the results show up as Red (significantly lower than expected) or Green (significantly higher than expected). Do the same for a handful of other teams in your league, in other leagues, or even in leagues whose higher ratings you're suspicious of. If the number of unexpected results is pretty small, it's a decent indicator that the ratings are reasonably close to results, and are reasonably predictive.
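
If you want to run that check yourself from a team's game history, the logic is easy to sketch. The 2-goal surprise threshold and the one-rating-point-per-goal mapping here are my assumptions, not the app's actual cutoffs:

```python
# Flag games whose result was far from what the ratings predicted.
# The 2-goal threshold and margin mapping are assumptions for illustration.
SURPRISE_GOALS = 2

def flag_games(games):
    """games: list of (our_rating, opp_rating, goals_for, goals_against)."""
    flags = []
    for r_us, r_opp, gf, ga in games:
        surprise = (gf - ga) - (r_us - r_opp)
        if surprise <= -SURPRISE_GOALS:
            flags.append("RED")      # performed well below expectation
        elif surprise >= SURPRISE_GOALS:
            flags.append("GREEN")    # performed well above expectation
        else:
            flags.append("as expected")
    return flags

# Hypothetical history: one game on script, one upset each way.
history = [(41.0, 39.0, 3, 1), (41.0, 39.0, 0, 2), (41.0, 40.0, 5, 0)]
print(flag_games(history))   # -> ['as expected', 'RED', 'GREEN']
```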

Totally agree that it would be more accurate if we didn’t divide up all the fish into their own little ponds. You get weird things like Solar 2010 ECRL being ranked 7 points higher than Sting Austin ECNL. Either the ratings system is off, or those two teams really ought to switch leagues. (or both.)

That's a good example to hash out. I see that Solar SC RL has a rating of 42.82. In addition to the ECNL RL league, they played the Premier Cup, the National Championships, a separate (unknown) league, the Texas State Cup, the Dallas International Girls Cup, the Frontier Conference, the DTSC Fall Festival, and the Girls Classic League. There are over 100 games of history that tie into their current rating. Up until July this year, they generally either matched or exceeded their rating in most games, which would have continued to elevate it. From August onwards, they more often underperformed against their rating - only exceeding it twice - which would have lowered it. Just looking at the various team names from all over the country that they've played over this time period, I don't think the claim that blowouts in their own league are inflating their rating is fair or accurate. There are just too many games for that to be significant at this point, and the recognition of that is clear in their history (and the red scores).

EDIT: was looking at Sting ECRL - give me a sec
 
continued.....

I see that Sting Austin ECNL is sitting at a 36.26. I see a history going back 35+ games. In addition to ECNL, they have done some local tournaments in Austin, the Directors Cup, WDDOA, the Bat City Cup, and 1 or 2 more. I also see in that record that they have won one (1) game this year. And across all of these results, it turns out they overperformed 3 times and underperformed 16 times. In every other game - they performed as expected. And the expectation was not good.

This isn't a ratings problem - you stated the actual solution in order to satirize it (swap teams/leagues), but it actually turns out to be true. Sting Austin ECNL can expect to lose every game they play in league, and the rating reflects that. Sadly, their current rating suggests they'd likely have the same results in ECRL if they started playing there tomorrow.
 
continued.....

I see that Sting Austin ECNL is sitting at a 36.26. I see a history going back 35+ games. In addition to ECNL, they have done some local tournaments in Austin, the Directors Cup, WDDOA, the Bat City Cup, and 1 or 2 more. I also see in that record that they have won one (1) game this year. And across all of these results, it turns out they overperformed 3 times and underperformed 16 times. In every other game - they performed as expected. And the expectation was not good.

This isn't a ratings problem - you stated the actual solution in order to satirize it (swap teams/leagues), but it actually turns out to be true. Sting Austin ECNL can expect to lose every game they play in league, and the rating reflects that. Sadly, their current rating suggests they'd likely have the same results in ECRL if they started playing there tomorrow.
Satire? I was serious. Top RL teams should move up, and bottom NL teams should move down.

Otherwise, you get games where the losing side parks the bus for 70-90 minutes. No one learns anything from the game, and the kids on the losing end of it feel awful.
 
OK, then it actually sounds like we're mostly in agreement here. There's reasonable information to show that that particular Sting team really is that, uh, not strong. And that particular Solar team really is that strong.

You get weird things like Solar 2010 ECRL being ranked 7 points higher than Sting Austin ECNL. Either the ratings system is off, or those two teams really ought to switch leagues. (or both.)

Solar ECRL could likely do quite well in ECNL - but it turns out that their existing ECNL team would best them by ~2 goals. The weird thing isn't due to any ratings, or standings, or anything else. It's just a reality that the top teams in the lower league would likely do quite well against the bottom teams in the upper league. The leagues aren't so far apart that there is this uncrossable gap between them without overlap.
 
OK, then it actually sounds like we're mostly in agreement here. There's reasonable information to show that that particular Sting team really is that, uh, not strong. And that particular Solar team really is that strong.

Solar ECRL could likely do quite well in ECNL - but it turns out that their existing ECNL team would best them by ~2 goals. The weird thing isn't due to any ratings, or standings, or anything else. It's just a reality that the top teams in the lower league would likely do quite well against the bottom teams in the upper league. The leagues aren't so far apart that there is this uncrossable gap between them without overlap.
If clubs are able to field a top-level ECRL team and a top-level ECNL team, ECNL should just give them 2 ECNL teams.

ECNL isn't doing anyone any favors propping terrible teams up or holding talent back.

If both teams don't win at the ECNL level, the lower of the 2 goes back to ECRL.
 
I thought there was more of a difference between NL and RL than just team quality. Don't the NL teams need to travel quite a bit more than RL, with more of a commitment from the players/parents/etc.? I'd think it's possible that there are kids (and families) who are comfortable with the RL commitment but not the NL commitment, even though they may be at similar skill levels. That could be one reason there are some phenomenal teams sitting at the top of RL without moving up (as players or as teams).
 
For all any of us know, there are mechanisms to minimize that effect. You are assuming quite a bit about the algorithm, and looking at this specific data for this specific team - there is no clear requirement for them to run up the score in all but one game. If they are playing a particularly unfortunate team, yes, the expected score is going to be high. There is a team in that bracket with 2 goals for and 42 goals against so far this year. If the best team in the league beats them 3-0, it's just not a good showing, and it provides some data that the unfortunate team's opponent may not have performed at their best either - compared to its competitors who win with a significantly higher margin.

In this case - it looks like the Slammers rating went down from 41.00 to 40.98 after this weekend's game against such a team. It might feel unfair, but it's also so minor that it isn't a big deal. Conversely, do you believe that their rating should go up for beating the last-place team 3-0? So here's what this particular team has to do in every remaining league game to maintain their rating (assuming the ratings of each stay relatively consistent with today's):

Slammers FC HB Koge RL 2-0
Pateadores RL 3-0
LAFC So Cal RL 2-0
Eagles SC RL 2-0
LA Breakers FC RL 4-0
Eagles SC RL 2-0
Phoenix Rising RL 3-0
San Diego Surf RL 3-0
Heat FC RL 7-0
Sporting California USA RL 3-0
So Cal Blues ECNL RL 1-0
LAFC So Cal RL 2-0
Sporting California USA RL 3-0
Utah Royals FC-AZ RL 4-0
Beach FC RL 1-0
Legends FC RL 1-0

Blowouts aren't required in all but 1 case. And - achieving all of these, some of these, or none of these isn't an objective success or failure for the team. But what it does mean is that their performance rating will go up compared to their peers if these expectations are exceeded, and if they aren't met, their performance rating will go down compared to their peers. Their rating might also go up a bit throughout the season even by just matching these expectations, as the average rating for that particular league rises with any external play. It might also go down - but that's less likely; typically ratings continue to go up both throughout the year and from year to year until U17 or higher.

These ratings have already changed. Slammers RL was at 41.48 and was being asked to blow every team out, and that's just not going to happen at this level. The 3-0 this weekend wasn't a bad showing; the ball just didn't go in, over 50 opportunities. Hey, it happens. But it's not just the 3-0 - the rating started to go down when we beat a team 4-1 instead of 4-0, and so on.
 
I thought there was more of a difference between NL and RL than just team quality. Don't the NL teams need to travel quite a bit more than RL, with more of a commitment from the players/parents/etc.? I'd think it's possible that there are kids (and families) who are comfortable with the RL commitment but not the NL commitment, even though they may be at similar skill levels. That could be one reason there are some phenomenal teams sitting at the top of RL without moving up (as players or as teams).
Maybe this is the case. However, it would be better for the club to make that decision (using feedback from the parents) than for the league to continue allowing deserving players to languish.
 