A 7-0 score only gives a team a bump if they performed better than expected, in which case the score is shown in green. If the score was predicted to be 7-0 and the result was 7-0, or within 2 GD of that, it would not really move the team much, and the score is shown in black. If the score was predicted to be 4-0 and ended 7-0, the team gets a bump, and they should, because they outperformed the prediction.
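To put that in concrete terms, here is a rough sketch of how that green/black/red coding could work, assuming the 2-GD tolerance described above. The app's actual formula isn't published, so the function name and threshold here are just illustrative.

```python
# Hypothetical sketch of color-coding a result against the prediction,
# assuming a 2-goal-differential tolerance. Purely illustrative; this is
# not the app's actual algorithm.

def color_code(predicted_gd: int, actual_gd: int, tolerance: int = 2) -> str:
    """Return 'green', 'black', or 'red' for a single game result."""
    diff = actual_gd - predicted_gd
    if diff > tolerance:
        return "green"   # outperformed the prediction -> rating bumps up
    if diff < -tolerance:
        return "red"     # underperformed the prediction -> rating moves down
    return "black"       # roughly as predicted -> little or no movement

# Predicted 4-0 (GD +4), actual 7-0 (GD +7) -> outperformed by 3 goals
print(color_code(predicted_gd=4, actual_gd=7))   # green
# Predicted 7-0 (GD +7), actual 7-0 (GD +7) -> as expected
print(color_code(predicted_gd=7, actual_gd=7))   # black
```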
There are plenty of examples of teams winning 5-1 and still having their SR move down (score in red). That is what makes SR accurate: you can't really affect the ratings much by sandbagging or by avoiding certain teams; what causes a change is the team's actual game results compared to the predicted results (team ratings). It is also easy to see when a team's rating/rank is off: they will have many results in green and/or red in a short amount of time. If most of the results are in black, the rating is very close to accurate.
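The same goes for that eyeball test: count how many of a team's recent results were black versus green/red. Again, this is just a hypothetical sketch; the app doesn't expose anything like this, and the 70% cutoff is made up for illustration.

```python
# Hypothetical sketch of the "eyeball test" described above: if a team's
# recent results are mostly green and/or red, the current rating is likely
# off; mostly black suggests it is close. Illustrative only.

from collections import Counter

def rating_looks_accurate(recent_colors: list[str], black_share: float = 0.7) -> bool:
    """True if at least `black_share` of recent results were 'black'."""
    if not recent_colors:
        return True  # no evidence either way
    counts = Counter(recent_colors)
    return counts["black"] / len(recent_colors) >= black_share

# A run of surprises (green/red) in a short window suggests the rating lags reality.
print(rating_looks_accurate(["green", "green", "red", "black", "green"]))    # False
print(rating_looks_accurate(["black", "black", "green", "black", "black"]))  # True
```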
This is only the second summer of cross-play data being added to the team algorithm since the new App and data sets were started just over 1.5 years ago. When the App started, you could see that team ratings were initially set by the team's year group and league association. As soon as cross-play started last summer, movement in the rankings was constant and significant. Things stabilized throughout the league season, then a lot of movement started again this summer, though not nearly as much as last summer. I think the within-league ratings of teams are very strong, due to all of the league data over the past year. The cross-league comparison was probably a bit off coming out of league this year, but with playoffs, finals, and summer tournaments, I suspect the rankings' accuracy is going to be even better now. We will probably see a lot fewer red and green results over the next year.