Apologies, just caught this. IMO, this completely misrepresents the data. For 2012G:

- #1 playing #50 is expected to be a 5-0 game, with #1 winning 85% of the time, tying 6%, and losing only 9%.
- #1 playing #20 is expected to be a 4-1 game, with #1 winning 79%, tying 8%, and losing 13%.
- #1 playing #10 is expected to be a 4-1 game, with #1 winning 79%, tying 9%, and losing 12%.
- #1 playing #5 is expected to be a 3-0 game, with #1 winning 68%, tying 14%, and losing 18%.
- Even all the way down to #1 playing #2, it's still expected to be a 2-0 win for #1, who wins 55%, ties 16%, and loses only 29% of the time.
None of these is anywhere close to a coin flip, so framing them that way misrepresents the probability of winning or losing each game. Of course the percentages tighten as the ratings get closer, and as the ratings diverge the outcome becomes easier to predict. But even within the top 50, and even within the top 10, the ratings are far enough apart that each game can be predicted with meaningful accuracy ahead of time.
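For anyone curious how a rating gap turns into win/tie/loss percentages like those above, here is a rough sketch in Python. To be clear, this is not the actual formula behind these rankings; it uses a Davidson-style tie extension of an Elo-type expectation, and the scale and tie parameter are made-up values chosen only to illustrate the point that even a modest rating gap is nowhere near 50/50.

```python
# Hypothetical sketch, NOT the ranking site's real model: a Davidson-style
# split of a rating gap into win / tie / loss probabilities.
# The scale (400) and tie parameter (nu) are illustrative assumptions.
import math

def win_tie_loss(rating_a, rating_b, scale=400.0, nu=0.6):
    """Return (P(A wins), P(tie), P(A loses)) for a given rating gap."""
    a = 10 ** (rating_a / scale)
    b = 10 ** (rating_b / scale)
    tie = nu * math.sqrt(a * b)   # ties are most likely when ratings are close
    total = a + b + tie
    return a / total, tie / total, b / total

# Example: the percentages tighten as the gap shrinks, but none of these
# matchups is a coin flip.
for gap in (200, 100, 50):
    w, t, l = win_tie_loss(1500 + gap, 1500)
    print(f"gap {gap:>3}: win {w:.0%}, tie {t:.0%}, loss {l:.0%}")
```

Run with any numbers you like; the shape of the curve is the point, not the exact figures.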