Ranking System


If you'd like to receive email notification when these ratings are updated, email me and I'll add you to the distribution list.


About the ratings
The rating system used on this site is a perpetual rating system. Before playing their first match of test or ODI cricket, all teams start on the same rating level of 500 points.

Teams gain or lose points as they play matches, with the number of points won or lost depending on the strength of the opposition. A higher-rated team gains fewer points for beating a lower-rated side (and the lower-rated loser gives up correspondingly fewer points), while a lower-rated team gains more points for an upset win. The following table shows the graduated scale that is used.

Points gained by winner, lost by loser

Difference in rating | Won by higher-rated team | Won by lower-rated team | Draw: lost by higher-rated team | Draw: gained by lower-rated team
0-19 points | 10 | 10 | 0 | 0
20-39       |  9 | 11 | 1 | 1
40-69       |  8 | 12 | 2 | 2
70-99       |  7 | 13 | 3 | 3
100-139     |  6 | 14 | 4 | 4
140-179     |  5 | 15 | 5 | 5
180-229     |  4 | 16 | 6 | 6
230-299     |  3 | 17 | 7 | 7
300 plus    |  2 | 18 | 8 | 8
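
To make the mechanics concrete, here is a minimal sketch of the update rule in Python, assuming the table above is the complete rule and that the draw columns simply transfer points from the higher-rated to the lower-rated team. The function names are my own and are not taken from this site.

```python
# A sketch of the points exchange described in the table above. The
# function names and structure are illustrative only; just the point
# values come from the graduated scale on this page.

# Each row: (minimum rating gap, points if the higher-rated team wins,
# points if the lower-rated team wins, points transferred on a draw).
SCALE = [
    (300, 2, 18, 8),
    (230, 3, 17, 7),
    (180, 4, 16, 6),
    (140, 5, 15, 5),
    (100, 6, 14, 4),
    (70, 7, 13, 3),
    (40, 8, 12, 2),
    (20, 9, 11, 1),
    (0, 10, 10, 0),
]


def points_for(gap):
    """Return (higher_wins, lower_wins, draw_transfer) for a rating gap."""
    for threshold, higher_wins, lower_wins, draw_transfer in SCALE:
        if gap >= threshold:
            return higher_wins, lower_wins, draw_transfer


def update(rating_a, rating_b, result):
    """result is 'A', 'B' or 'draw'; returns the new (rating_a, rating_b)."""
    higher, lower = ("A", "B") if rating_a >= rating_b else ("B", "A")
    higher_wins, lower_wins, draw_transfer = points_for(abs(rating_a - rating_b))
    if result == "draw":
        delta = {higher: -draw_transfer, lower: +draw_transfer}
    elif result == higher:
        delta = {higher: +higher_wins, lower: -higher_wins}
    else:
        delta = {higher: -lower_wins, lower: +lower_wins}
    return rating_a + delta["A"], rating_b + delta["B"]


print(update(500, 500, "A"))      # evenly rated: (510, 490)
print(update(650, 480, "B"))      # 170-point gap, upset win: (635, 495)
print(update(650, 480, "draw"))   # 170-point gap, draw: (645, 485)
```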

The perpetual rating system is quickly self-adjusting. Once teams have played around ten games, the system provides a useful and accurate guide to the relative strengths of teams. Beyond this point, the number of games a team has played has little bearing on its rating - the proportion of wins and losses, and the strength of the opposition met, determine the rating.
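
For example, using the update sketch above, a newcomer who keeps losing to a stronger side concedes fewer and fewer points per game as the gap widens:

```python
# Relies on update() from the sketch above; the ratings are hypothetical.
newcomer, opponent = 500, 600
for game in range(1, 11):
    opponent, newcomer = update(opponent, newcomer, "A")  # the stronger side wins
    print(game, newcomer)  # losses cost 6, then 5, then 4 points as the gap grows
```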
   
 How do the ratings differ from those published by the ICC? 
The ICC system works on essentially the same basis: points are earned depending on the result of a match or series, as well as on the rating of the opponent. However, the points systems differ in the following ways:
  • The ICC test ratings award extra points for a series victory.
  • Abandoned ODI matches are ignored and do not change the ICC ratings.
  • Only ODI matches between full ODI members (i.e. all test teams plus Kenya) are rated by the ICC.

Perhaps the biggest difference between the two systems is that the ICC's rating system is not truly perpetual. The ICC's test system only takes matches into account going back a maximum of four years, and the ICC's ODI system only takes matches into account going back a maximum of three years. Every August, the ICC drops the oldest year's results from its rating calculation, and reweights other years as appropriate. This leads to the anomalous situation whereby the ratings change overnight without any cricket having taken place.
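
A small sketch can show why such a windowed system jumps overnight. The four-year window and the 1 August reset below come from the description above; the simple averaging is a placeholder for the ICC's actual points calculation, not a reproduction of it.

```python
# Illustrative only: a rating built from a fixed window of results changes
# when the window moves, even with no cricket in between.
from datetime import date


def window_start(today, window_years=4):
    # The oldest year drops out each 1 August rather than on a rolling basis.
    season_year = today.year if (today.month, today.day) >= (8, 1) else today.year - 1
    return date(season_year - window_years, 8, 1)


def windowed_rating(results, today):
    """Average the points of matches played on or after the window start."""
    recent = [points for played, points in results if played >= window_start(today)]
    return sum(recent) / len(recent) if recent else 0.0


results = [
    (date(2004, 7, 1), 120),  # oldest result, about to leave the window
    (date(2006, 6, 1), 80),
    (date(2007, 5, 1), 90),
]

print(windowed_rating(results, date(2008, 7, 31)))  # about 96.7
print(windowed_rating(results, date(2008, 8, 1)))   # 85.0 - the 2004 result has dropped out
```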

For a full explanation of the ICC's system, see ICC Test Championship or ICC - ODI Championship.

Interestingly, both rating systems give similar results - even though this system takes account of every test match dating back to 1876/77.

   
 Why are all draws treated the same? 
A draw in test cricket can be caused by a number of factors: poor weather, two evenly-matched teams, a good batting pitch, etc. Trying to decide why a match was drawn, and what the result might have been if more time was available, quickly gets very subjective. It's easiest, and avoids any controversy, if all draws are treated as though both teams were competing strongly in the match - even when one team was clearly saved by rain.

The same logic is applied to ODIs. If a low-ranked team can earn points from a rain-affected draw in a test match, then it seems fair that teams can do the same in ODIs.

   
 Why is no account taken of venue? 
Trying to take account of the venue of a match would be arithmetically difficult. It would also potentially change the average rating over time and shift it away from 500. Furthermore, ODI matches played by India against New Zealand in Pakistan, for example, would arguably favour India more in terms of conditions - but would be treated as neutral matches for New Zealand, rather than away games.
   
 Can ratings be compared over time? 
Ratings can only be compared over time if the number of test or ODI teams is the same in the two periods. For example, back in the 1870s, when only England and Australia played test cricket, it was difficult for either team to open up a large rating gap, because any change in Australia's rating was matched by an equal and opposite change in England's rating.

There is an additional problem with ODI cricket. A number of countries have played a small number of ODIs without much success: East Africa (1975), UAE (1994-2008), Hong Kong (2004-2008), USA (2004). The ratings of all these teams are below 500, which means the average rating of the main teams is above 500. The more matches these lesser teams play, the easier it becomes for the recognised cricket-playing nations to achieve high ratings.

To compare ratings between different time periods, these problems can be at least partly overcome by normalisation of the ratings. These normalised ratings are the ones shown in the best and worst rating tables.
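
The site does not publish the normalisation formula, so the following is only one plausible sketch: measure each team's deviation from 500 against the spread of ratings in its own era and map it onto a common scale. Both the formula and the reference spread of 50 points are assumptions for illustration, not the method actually used here.

```python
# An assumed normalisation, not the site's published method: rescale each
# era's deviations from 500 by that era's spread so different eras share
# a common scale.
from statistics import pstdev


def normalise(ratings, reference_spread=50):
    """Map ratings to 500 plus a spread-adjusted deviation."""
    spread = pstdev(ratings.values()) or 1.0
    return {team: round(500 + (r - 500) * reference_spread / spread, 1)
            for team, r in ratings.items()}


# A crowded era with weak newcomers and a two-team era can then be
# compared on a similar footing. The ratings below are made up.
print(normalise({"Australia": 640, "England": 560, "India": 540,
                 "East Africa": 430, "USA": 420}))
print(normalise({"Australia": 520, "England": 480}))
```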

   
 How many matches before a team's rating is accurate? 
In part, this depends on the strength of the team. With new entrants to test or ODI cricket generally being weaker than the existing teams, it will take some time before a team's rating falls to a realistic level. For example, Bangladesh's test rating took more than four years before it started to stabilise.

In general, ten test matches is a reasonable number of matches before a rating has any credence. The necessary number of ODIs for an accurate rating is probably closer to 30.