Youth Soccer Rankings?

I'm assuming a lot here, but looking at the data of what I think you're referring to - I don't see the issue. Here are the standings for the RL G10 Mojave group. I've added a column to that spreadsheet, with the current rating as of today:

View attachment 14959

The rating tracks the standings in the conference almost exactly. There are small nuances between #4 & #5, and #8 seems a little low, but in general - it maps pretty closely. Which makes sense: the ratings come from the performance in games. But more importantly - if you look at the history for the #1 team, and look back on their last 20 games, the app clearly shows that they performed exactly as expected 18 times, and underperformed twice. That's a ringing endorsement that their rating corresponds to the results they are seeing because, again, it's circular. They get the rating that they perform at, and they perform at the rating they get.

Now look forward, to see what's likely to happen in their next two games. They should beat HB Koge by about 2 goals, and they should beat Pateadores by about 4. Will they do that in either? Who knows. But the app gives them a 62% chance of winning against HB Koge, a 16% chance of tying, and a 22% chance of losing. Against Pateadores, they have a 78% chance of winning, a 9% chance of tying, and a 13% chance of losing.
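The app doesn't publish its math, but to make the idea concrete, here's a toy sketch of how a rating gap could be turned into an expected margin and win/tie/loss odds. Every function name and constant below is invented for illustration; none of it is pulled from the app:

```python
import math

def predict(rating_a: float, rating_b: float) -> dict:
    """Toy model: treat the rating gap as the expected goal margin,
    then spread win/tie/loss probability around it with a logistic
    curve. All constants are illustrative, not the app's values."""
    margin = rating_a - rating_b                 # e.g. +2 -> favored by ~2 goals
    p_win = 1 / (1 + math.exp(-1.2 * margin))    # logistic on the margin
    p_tie = 0.25 * math.exp(-0.5 * abs(margin))  # tie band shrinks as mismatch grows
    p_win *= (1 - p_tie)                         # renormalize around the tie band
    return {"margin": margin, "win": p_win, "tie": p_tie,
            "loss": 1 - p_win - p_tie}
```

Any model in this family gives numbers in the same spirit as the 62/16/22 split above: the bigger the rating gap, the more lopsided the odds, and the probabilities always sum to 1.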

So the question is, is that 41.00 low compared to what it could be if they played outside the conference with more challenging teams? Who knows. #1 team in the state is showing a 45.47 in ECNL, and Slammers RL is showing a 41.00 as the #25 team in the state. Considering two months ago they lost 4-1 to Beach ECNL (43.44), and 2-1 to Slammers ECNL (42.39), it doesn't seem terribly off at all to me.
I guess for me, there should be some type of cap for a margin of victory before it affects your score going down. A 3-0 win shouldn’t count against your team. All you’re asking for is for coaches to run up the scores now.
 
I guess for me, there should be some type of cap for a margin of victory before it affects your score going down. A 3-0 win shouldn’t count against your team. All you’re asking for is for coaches to run up the scores now.

For all any of us know, there are mechanisms to minimize that effect. You are assuming quite a bit about the algorithm, and looking at this specific data for this specific team - there is no clear requirement for them to run up the score in all but one game. If they are playing a particularly unfortunate team, yes, the expected score is going to be high. There is a team in that bracket that has 2 goals for, 42 goals against so far this year. If the best team in the league beats them 3-0, it's just not a good showing, and provides some data that the unfortunate team's opponent may not have performed at their best either - compared to its competitors who win with a significantly higher margin. In this case - it looks like the Slammers rating went down from 41.00 to 40.98 after this weekend's game against such a team. It might feel unfair, but it's also so minor that it isn't a big deal. Conversely, do you believe that their rating should go up by beating the last place team 3-0? So here's the info for what this particular team has to do for every remaining league game to maintain their rating (assuming ratings of each stay relatively consistent with today):

Slammers FC HB Koge RL 2-0
Pateadores RL 3-0
LAFC So Cal RL 2-0
Eagles SC RL 2-0
LA Breakers FC RL 4-0
Eagles SC RL 2-0
Phoenix Rising RL 3-0
San Diego Surf RL 3-0
Heat FC RL 7-0
Sporting California USA RL 3-0
So Cal Blues ECNL RL 1-0
LAFC So Cal RL 2-0
Sporting California USA RL 3-0
Utah Royals FC-AZ RL 4-0
Beach FC RL 1-0
Legends FC RL 1-0

Blowouts aren't required in all but 1 case. And achieving all, some, or none of these isn't an objective success or failure of the team. What it does mean is that their performance rating will go up compared to their peers if these are exceeded, and go down compared to their peers if they aren't met. Their rating might also go up a bit throughout the season even by just matching these expectations, as the average rating for that particular league goes up from any external play. It might also go down - but that's less likely; ratings typically continue to go up both throughout the year and from year to year until U17 or higher.
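To illustrate the mechanic (not the app's actual formula - this is a guessed, Elo-style sketch where the rating gap doubles as the expected margin, and the k factor is made up):

```python
def update_rating(rating: float, opp_rating: float,
                  actual_diff: float, k: float = 0.05) -> float:
    """Nudge a rating by the gap between the actual goal differential
    and the one the rating gap predicted. The k factor and the
    margin-equals-gap assumption are guesses, not the real algorithm."""
    expected_diff = rating - opp_rating   # e.g. 41.00 - 39.00 -> win by 2
    surprise = actual_diff - expected_diff
    return rating + k * surprise
```

Meet the prediction exactly and the rating holds. Beat a last-place team 3-0 when a bigger margin was expected and it ticks down slightly - the same shape as the 41.00 to 40.98 move described above, though the real numbers surely come from a more elaborate rule.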
 
I see what you're saying, but I think I'm looking at the exact same data you describe and drawing a different conclusion. Within that pond, in that context, nothing is fake. If there are blowouts from big fish to little fish - that difference is representative of actual games between those fish (sorry, little fishies). The average pond rating doesn't really matter. If that pond continues to play among each other for a month or two - there is nothing fake about the relative ratings within the pond.

Now if you take that numerical rating and bring said big fish to a separate tournament (say, a great lake?), and it is made clear that the pond rating was way off, there will be an effect. I take your point that the effect isn't significant enough - that if the vast majority of the games remain back in the home pond and there just isn't enough cross-play to move the needle, the pond ratings will correspond accurately to performance at the home pond but are less meaningful applied outwards. For it to be significantly incorrect, though, the average pond rating would have to be off: set too high or too low, or drifted too high or too low. Pond-based big fish aren't getting a "fake" rating because some ponds are blowout-rich - they are getting a "fake" rating because there isn't enough fish-on-fish competition with other ponds to sort things out over time.

But regardless of the actual math differences - it does illustrate the larger point. These ratings are more accurate when they are fed a bunch of relevant data, and less accurate when they have too little. Use them to size up an upcoming tournament or league game that you're actually entering, and it may surprise you how accurately they can predict good games and bad games, strong competition and weak competition. Use them to compare ratings across teams and leagues that have nothing at all to do with each other, and there is a bunch more wiggle room.
Within the pond is the wrong question. You don’t need YSR to compare teams in the same league. You have league standings.

YSR is more useful for comparing teams from different leagues. It helps with questions like “how should I flight this tournament?” and “who should I call for a scrimmage?”

For that, the important question is the accuracy of ratings between ponds. If this strong GA team plays that weak ECNL team, is it a reasonable match? A system which over-values goal differential will give you the wrong answer.

Totally agree that it would be more accurate if we didn’t divide up all the fish into their own little ponds. You get weird things like Solar 2010 ECRL being ranked 7 points higher than Sting Austin ECNL. Either the ratings system is off, or those two teams really ought to switch leagues. (or both.)
 
Within the pond is the wrong question. You don’t need YSR to compare teams in the same league. You have league standings.

Yes, this is a fair point. Those two sets of data should correlate so closely in most cases that you don't need both. Yes - it may give a better guess on goal differential for an in-league game than a guess not based on game results - but in the big picture it doesn't matter much.

YSR is more useful for comparing teams from different leagues. It helps with questions like “how should I flight this tournament?” and “who should I call for a scrimmage?”

For that, the important question is the accuracy of ratings between ponds. If this strong GA team plays that weak ECNL team, is it a reasonable match? A system which over-values goal differential will give you the wrong answer.

Agreed, I also believe that is the main benefit. But assuming that team comparisons across different ponds are already wrong benefits those who don't want to admit there may be some truth to those rating differences. Whether goal differentials are emphasized too much or too little isn't a matter of opinion; it's a math and data question about the actual results when teams meet and validate or invalidate those ratings. When your team plays a team they haven't seen before - are the results as expected, or are they way off? Look at the last 30-50 games in the history, and identify which games were so far off expectation that the results show up as red (significantly lower than expected) or green (significantly higher than expected). Do the same for a handful of other teams in your league, in other leagues, or even in leagues whose high ratings you're suspicious of. If the number of unexpected results is pretty small, it's a decent indicator that the ratings are reasonably close to results, and reasonably predictive.
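That red/green tally is easy to formalize. A sketch, assuming you've copied predicted and actual goal differentials into a list of pairs; the 1.5-goal threshold is my guess at what the app would flag as "significant", not a known value:

```python
def calibration(results, threshold: float = 1.5):
    """Tally games where the actual margin missed the predicted margin
    by more than `threshold` goals in either direction.
    `results` is a list of (predicted_diff, actual_diff) pairs."""
    red = sum(1 for exp, act in results if act - exp < -threshold)    # underperformed
    green = sum(1 for exp, act in results if act - exp > threshold)   # overperformed
    as_expected = len(results) - red - green
    return red, green, as_expected
```

If red + green stays small relative to the total, the ratings have been tracking results, which is exactly the eyeball check described above.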

Totally agree that it would be more accurate if we didn’t divide up all the fish into their own little ponds. You get weird things like Solar 2010 ECRL being ranked 7 points higher than Sting Austin ECNL. Either the ratings system is off, or those two teams really ought to switch leagues. (or both.)

That's a good example to hash out. I see that Solar SC RL has a rating of 42.82. In addition to the ECNL RL league, they played the Premier Cup, the National Championships, a separate (unknown) league, the Texas State Cup, the Dallas International Girls Cup, the Frontier Conference, the DTSC Fall Festival, and the Girls Classic League. There are over 100 games of history that tie into their current rating. Up until July this year, they generally either matched or exceeded their rating in most games, which would have continued to elevate it. From August onwards, they more often underperformed against their rating - only exceeding it twice - which would have lowered it. Just looking at the various team names from all over the country that they've played over this time period, I don't think the claim that blowouts in their own league are inflating their rating is fair or accurate. There are just too many games for that to be significant at this point, and the recognition of that is clear in their history (and the red scores).

EDIT: was looking at Sting ECRL - give me a sec
 
continued.....

I see that Sting Austin ECNL is sitting at a 36.26. I see a history going back 35+ games. In addition to ECNL, they have done some local tournaments in Austin, the Directors Cup, WDDOA, the Bat City Cup, and 1 or 2 more. I also see in that record that they have won one (1) game this year. And in all of these results, it turns out they overperformed 3 times and underperformed 16 times. For every other game, they performed as expected. And the expectation was not good.

This isn't a ratings problem - you stated the actual solution as satire (swap teams/leagues), but it actually turns out to be true. Sting Austin ECNL can expect to lose every game they play in league, and the rating reflects that. Sadly, their current rating shows they'd likely have the same results in ECRL if they started playing there tomorrow.
 
continued.....

I see that Sting Austin ECNL is sitting at a 36.26. I see a history going back 35+ games. In addition to ECNL, they have done some local tournaments in Austin, the Directors Cup, WDDOA, the Bat City Cup, and 1 or 2 more. I also see in that record that they have won one (1) game this year. And in all of these results, it turns out they overperformed 3 times and underperformed 16 times. For every other game, they performed as expected. And the expectation was not good.

This isn't a ratings problem - you stated the actual solution as satire (swap teams/leagues), but it actually turns out to be true. Sting Austin ECNL can expect to lose every game they play in league, and the rating reflects that. Sadly, their current rating shows they'd likely have the same results in ECRL if they started playing there tomorrow.
Satire? I was serious. Top RL teams should move up, and bottom NL teams should move down.

Otherwise, you get games where the losing side parks the bus for 70-90 minutes. No one learns anything from the game, and the kids on the losing end of it feel awful.
 
OK, then it actually sounds like we're mostly in agreement here. There's reasonable information to show that that particular Sting team really is that, uh, not strong. And that particular Solar team really is that strong.

You get weird things like Solar 2010 ECRL being ranked 7 points higher than Sting Austin ECNL. Either the ratings system is off, or those two teams really ought to switch leagues. (or both.)

Solar ECRL could likely do quite well in ECNL - but it turns out that their existing ECNL team would best them by ~ 2 goals. The weird thing isn't due to any ratings, or standings, or anything else. It's just a reality that the top teams in the lower league would likely do quite well against the bottom teams in the upper league. The leagues aren't so far apart that there is this uncrossable gap between them without overlap.
 
OK, then it actually sounds like we're mostly in agreement here. There's reasonable information to show that that particular Sting team really is that, uh, not strong. And that particular Solar team really is that strong.



Solar ECRL could likely do quite well in ECNL - but it turns out that their existing ECNL team would best them by ~ 2 goals. The weird thing isn't due to any ratings, or standings, or anything else. It's just a reality that the top teams in the lower league would likely do quite well against the bottom teams in the upper league. The leagues aren't so far apart that there is this uncrossable gap between them without overlap.
If clubs are able to field a top-level ECRL team and a top-level ECNL team, ECNL should just give them 2 ECNL teams.

ECNL isn't doing anyone any favors propping terrible teams up or holding talent back.

If both teams don't win at the ECNL level, the lower of the 2 goes back to ECRL.
 
I thought there was more of a difference between NL and RL than just team quality. Don't the NL teams need to travel quite a bit more than RL, with more of a commitment from the players/parents/etc.? I'd think it's possible that there are kids (and families) that are comfortable with the RL commitment but not the NL commitment, that may be at similar skill levels. That could be one reason there are some phenomenal teams sitting at the top of RL without moving up (as players or as teams).
 
For all any of us know, there are mechanisms to minimize that effect. You are assuming quite a bit about the algorithm, and looking at this specific data for this specific team - there is no clear requirement for them to run up the score in all but one game. If they are playing a particularly unfortunate team, yes, the expected score is going to be high. There is a team in that bracket that has 2 goals for, 42 goals against so far this year. If the best team in the league beats them 3-0, it's just not a good showing, and provides some data that the unfortunate team's opponent may not have performed at their best either - compared to its competitors who win with a significantly higher margin. In this case - it looks like the Slammers rating went down from 41.00 to 40.98 after this weekend's game against such a team. It might feel unfair, but it's also so minor that it isn't a big deal. Conversely, do you believe that their rating should go up by beating the last place team 3-0? So here's the info for what this particular team has to do for every remaining league game to maintain their rating (assuming ratings of each stay relatively consistent with today):

Slammers FC HB Koge RL 2-0
Pateadores RL 3-0
LAFC So Cal RL 2-0
Eagles SC RL 2-0
LA Breakers FC RL 4-0
Eagles SC RL 2-0
Phoenix Rising RL 3-0
San Diego Surf RL 3-0
Heat FC RL 7-0
Sporting California USA RL 3-0
So Cal Blues ECNL RL 1-0
LAFC So Cal RL 2-0
Sporting California USA RL 3-0
Utah Royals FC-AZ RL 4-0
Beach FC RL 1-0
Legends FC RL 1-0

Blowouts aren't required in all but 1 case. And achieving all, some, or none of these isn't an objective success or failure of the team. What it does mean is that their performance rating will go up compared to their peers if these are exceeded, and go down compared to their peers if they aren't met. Their rating might also go up a bit throughout the season even by just matching these expectations, as the average rating for that particular league goes up from any external play. It might also go down - but that's less likely; ratings typically continue to go up both throughout the year and from year to year until U17 or higher.

These ratings have already changed. Slammers RL was at 41.48 and was being asked to blow every team out, and that’s just not going to happen at this level. The 3-0 this weekend wasn’t a bad showing; the ball just didn’t go in, over 50 opportunities. Hey, it happens. But it’s not just the 3-0 - the rating started to go down when we beat a team 4-1 instead of 4-0, and so on.

C6E68447-F7EB-493B-8195-7964265C43B1.png
 
I thought there was more of a difference between NL and RL than just team quality. Don't the NL teams need to travel quite a bit more than RL, with more of a commitment from the players/parents/etc.? I'd think it's possible that there are kids (and families) that are comfortable with the RL commitment but not the NL commitment, that may be at similar skill levels. That could be one reason there are some phenomenal teams sitting at the top of RL without moving up (as players or as teams).
Maybe this is the case. However, it would be better for the club to make that decision (using feedback from the parents) than for the league to continue letting deserving players languish.
 
What I find odd is that Solar RL remains pretty much unaffected by their 4 losses this league season and by being towards the bottom of their league. I guess 41 goals in 13 matches outweighs the 4 losses and 5th place? In contrast, Slammers has 8 wins out of their 8 matches, 30 goals for the season, and only 3 goals conceded.
 

Attachments

  • 82C5431A-5217-47E7-91E5-A14542472447.png
  • 64822976-D31B-4A63-B58F-BDA69A20A5B0.png
I'm happy that we're looking at the same data, but I'm not sure why we are seeing it so differently. Those predictions are what is necessary for a team to maintain their current rating relative to their peers. Go above them, the rating moves up. Go below them, the rating moves down. All the ratings of all teams that play each other are interrelated. Over time, the average rating of an age group as a whole moves up in tandem (compare U11 to U12 to U13, etc.).

You're wildly exaggerating the requirements for your team to maintain the rating. For the rest of the league games all year, there is one game where it is expected to be a blowout (7-0 against Heat). Only two games require a 4-goal difference. Everything else is 3 or lower. I just reloaded each team, and here are those predictions again:

Slammers FC HB Koge RL 2-0
Pateadores RL 3-0
LAFC So Cal RL 2-0
Eagles SC RL 2-0
LA Breakers FC RL 4-0
Eagles SC RL 2-0
Phoenix Rising RL 3-0
San Diego Surf RL 3-0
Heat FC RL 7-0
Sporting California USA RL 3-0
So Cal Blues ECNL RL 1-0
LAFC So Cal RL 2-0
Sporting California USA RL 3-0
Utah Royals FC-AZ RL 4-0
Beach FC RL 1-0
Legends FC RL 1-0

If you go out of league to play other teams, and you are wildly better than them - the expectation is that you will beat them soundly. If you go out of league to play other teams, and they are in fact rated higher than you - if/when you beat them your own rating will go up and theirs will go down.

If you think a special "if team name = Solar, then do something hinky with the ratings" rule is baked into the app, I'm not sure what to tell you. There are hundreds of thousands of teams keeping track of literally millions of games, all fed into the same program - and the data being shown is what it comes up with.
 
That standings page of ECRL Texas is a bit broken as well. It looks like someone decided to sort it by PPG instead of PTS. If you sort the teams by points instead, it looks like below: (I added the current ratings for each team as well)

texas rl.jpg

The position in the league closely tracks the rating, as one would expect. Solar shows a bit higher, and so does FC Dallas RL. There are any number of reasons for this, but the only one that matters is that for the opponents they played, they achieved a certain goal differential (positive or negative), and that factors into their score every time. One indicator is how many more goals those teams have scored (41 & 39). And their goal differentials are much better than anyone else's in the league (+24 and +28).
 
As of today, it looks like the average for the Socal RL division is roughly 1 goal lower than the Texas RL division. Everything else being equal, it will be more challenging for a Socal team to match a rating with a higher rated Texas team, if they play equivalently (whatever equivalently means), and if that's an expectation or goal.

Socal RL2.jpg

Texas rl2.jpg
 
What I find odd is that Solar RL remains pretty much unaffected by their 4 losses this league season and by being towards the bottom of their league. I guess 41 goals in 13 matches outweighs the 4 losses and 5th place? In contrast, Slammers has 8 wins out of their 8 matches, 30 goals for the season, and only 3 goals conceded.
An algorithm based on averaging goal differential will do exactly that. A 3-4 loss and a 10-0 win earn you a higher ranking than a pair of 3-0 wins.
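The arithmetic behind that claim, assuming a purely naive average of per-game goal differentials (whether the real algorithm is this naive is unknown):

```python
# Naive goal-differential averaging: a 3-4 loss plus a 10-0 win
# outranks two tidy 3-0 wins.
blowout_pair = [3 - 4, 10 - 0]   # differentials: -1 and +10
steady_pair = [3 - 0, 3 - 0]     # differentials: +3 and +3

avg_blowout = sum(blowout_pair) / len(blowout_pair)  # 4.5
avg_steady = sum(steady_pair) / len(steady_pair)     # 3.0
```

The blowout-plus-loss schedule averages +4.5 per game against +3.0 for the pair of clean wins, so under this assumption the losing team really does come out ahead.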
 
As of today, it looks like the average for the Socal RL division is roughly 1 goal lower than the Texas RL division. Everything else being equal, it will be more challenging for a Socal team to match a rating with a higher rated Texas team, if they play equivalently (whatever equivalently means), and if that's an expectation or goal.

View attachment 14976

View attachment 14977

You also have to include the Sonoran division, as Mojave also plays each team cross-bracket during the season. That being said, the two highest-rated teams, Slammers RL and SoCal Blues, are both around the same point total.

I guess in the end the stakes are higher for these two teams when they play lower teams, should they not beat their predictions. I think those goal differentials are more achievable in 9v9, but not so much in 11v11 once these teams get their bearings on playing a big field.

I think it’s also kind of always been my gripe with the website, and now the program, that teams can play in lower leagues and hide in bottom-bracket tournaments, and it inflates their rankings because of goal differential.

I’m very familiar with how the ratings work: one team has a predictive value, the other team has theirs, and the difference between them is the expected goal differential. But a 2010 RL team rated 35 is predicted to lose to the top 2014 team. I can’t imagine a group of 8-year-olds beating 12-year-olds. I think there should be some type of cap on the rating (wins, losses, goals scored, goals against, age, and difficulty of bracket).
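A margin cap like the one suggested earlier in the thread takes only a few lines to sketch. The cap of 3 is arbitrary here; published club-rating systems tend to damp big margins smoothly rather than hard-clip them, but the effect is similar:

```python
def capped_diff(goals_for: int, goals_against: int, cap: int = 3) -> int:
    """Clamp the goal differential fed into a rating update, so a 9-0
    counts the same as a 3-0. The cap value is purely illustrative."""
    diff = goals_for - goals_against
    return max(-cap, min(cap, diff))
```

Anything beyond the cap neither helps nor hurts the rating, which removes the incentive to run up the score past it.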
 
An algorithm based on averaging goal differential will do exactly that. A 3-4 loss and a 10-0 win earn you a higher ranking than a pair of 3-0 wins.

Almost - there needs to be a caveat that the opponents are identical in this setup. If the games are against separate teams, it's entirely feasible that a 3-4 loss against a stellar opponent will help a rating more than a 3-0 win against a cellar-dwelling team.
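Under a differential-versus-expectation model, that caveat falls straight out of the update rule. A hypothetical example (the k factor and the margin-equals-gap model are invented for illustration, not the app's real math):

```python
def rating_change(rating: float, opp_rating: float,
                  actual_diff: float, k: float = 0.05) -> float:
    """Change in rating = k * (actual margin - expected margin).
    Illustrative only."""
    return k * (actual_diff - (rating - opp_rating))

# A 3-4 loss to a team rated 3 points higher beats expectation:
# expected to lose by 3, only lost by 1.
close_loss = rating_change(40.0, 43.0, -1)

# A 3-0 win over a team rated 5 points lower falls short of it:
# expected to win by 5, only won by 3.
easy_win = rating_change(40.0, 35.0, 3)
```

In this sketch the narrow loss to the stellar opponent nudges the rating up while the comfortable win over the cellar-dweller nudges it down, which is exactly the caveat above.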
 