The ICC cricket ratings use a version of the ELO system, originally devised by the physicist Arpad Elo to rank chess players. It is a method for calculating the relative performance of players in zero-sum games. Cricket is a zero-sum game: each team's gain or loss is balanced by the corresponding loss or gain of its opponent. The ICC's method considers matches in the four most recent years, with results in the two most recent years counting for twice as much as those in the two older years. Further, bonuses are awarded for series wins in some cases.
The ELO system consists of two complementary measurements. The first is the rating for each contestant, and the second is the probability of a specific outcome (usually a win) for one contestant against the other, given the ratings for each at the start of the game. This probability is then also factored in when updating the rating for each contestant depending on the outcome of the game.
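The two measurements can be made concrete with the standard chess formulation of ELO. The sketch below uses the conventional chess constants (a scale of 400 and K = 32); these are illustrative defaults, not the ICC's parameters.

```python
# A minimal sketch of the standard (chess) ELO formulation: the expected
# score for one contestant against another, and the post-match update.
# The scale of 400 and K = 32 are the conventional chess values, used
# here purely for illustration; the ICC's constants differ.

def expected_score(rating_a, rating_b):
    """Expected score (win probability, roughly) for A against B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_rating(rating, expected, actual, k=32):
    """Move the rating by K times the surprise (actual minus expected)."""
    return rating + k * (actual - expected)

# Equal ratings give an even expected score of 0.5.
e = expected_score(1500, 1500)
# A win (actual = 1) by one of the equally rated sides moves it up
# by K * (1 - 0.5) = 16 points.
new_rating = update_rating(1500, e, 1.0)
```

Note how the probability feeds back into the update: an expected win moves the rating very little, while an upset moves it a lot.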
The ICC's rating system needs improvement in three areas.
First, it should be able to take into account the margin of a result.
Second, it should be able to take into account which players are in action and which ones are not. An Australia Test team in, say, 2003, with Glenn McGrath and Shane Warne in it is a very different proposition from an Australian team in the same year with one or both missing. The rating of a team at the start of a match should be able to account for this.
Third, the rating should not be subject to arbitrary cutoffs. Teams often play home and away cycles, and the peculiarities of scheduling in international cricket mean that a team's position in the ICC's ranking system is especially prone to being influenced by whether it has played primarily at home or away during the two most recent years. For example, India's touring schedule has tended to involve a period of away tours followed by a period of home series, unlike England, whose schedule provides for home series during the English summer and away tours during the English winter.
The approach described in this article addresses the three issues above. It also arguably mitigates the effects of the touring schedule by eliminating the arbitrary cutoff and relying instead on the changes in personnel that regularly take place in international squads. The home-away effect is especially acute in the Test match format, and there may yet be a case for building separate home and away ELO ratings for Test cricket specifically. Currently, I am inclined to argue against such a separation because the consequences of away tours and home series are, to a large extent, already baked into selection decisions.
The three problems can be resolved as follows. First, the margin of victory is incorporated using a method I have previously described in these pages. It assigns a share of a point to each team, depending on the result and the margin by which it was achieved. This is assigned to each player playing in the team in that match.
The rating of a team at the start of each match is the average rating of its playing XI. A player on debut gets a rating of 0.5. Defeats move this rating downwards, wins move it upwards. How much a player's rating moves either way depends on the rating of the opposition. Thus each team's rating is determined by the ratings of the players in its XI. The fact that selectors use retirements and other strategies to put out the best possible XI for each game enables the rating to remain current without having to restrict consideration to results from the most recent couple of years or some such.
The only assumption in this approach is that a team is picked with the goal of maximising the chances of winning.
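The player-based mechanics described above can be sketched as follows. The averaging of the XI and the debut rating of 0.5 come from the text; the expectation formula, the step size `k`, and the way the margin-weighted point share enters the update are my illustrative assumptions, since the article does not give its exact formulas.

```python
# A hedged sketch of the player-based rating described in the article.
# Known from the text: a team's pre-match rating is the average of its
# playing XI, and a debutant starts at 0.5. Assumed for illustration:
# the expectation formula, the step size k, and the use of a
# margin-weighted point share in [0, 1] as the match outcome.

DEBUT_RATING = 0.5  # every debutant starts here, per the article

def team_rating(xi_ratings):
    """A team's pre-match rating is the average rating of its playing XI."""
    return sum(xi_ratings) / len(xi_ratings)

def update_players(xi_ratings, opp_rating, point_share, k=0.02):
    """Nudge every player in the XI by the same amount: K times the gap
    between the margin-weighted point share actually earned (1 = big win,
    0 = heavy defeat) and the share expected given the opposition."""
    own = team_rating(xi_ratings)
    expected = own / (own + opp_rating)  # illustrative expectation
    delta = k * (point_share - expected)
    return [r + delta for r in xi_ratings]

# Two evenly matched sides (all players at 0.5): the winner's players
# each move up by k * (1 - 0.5) = 0.01, to 0.51.
winners = update_players([DEBUT_RATING] * 11, 0.5, 1.0)
```

Because every member of the XI carries the adjustment forward, a changed line-up automatically changes the team's pre-match rating, which is the mechanism that replaces the ICC's date cutoff.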
The charts below provide a summary of how the ELO rating of a team calculated in this way compares with the actual win percentage (wins per match) in ODI and Test cricket. The bars signify the number of teams (at the start of a match) at a specified ELO rating. Note that ratings are rounded to two decimal places in order to group teams. For example, the teams under ELO 0.52 had a rating between 0.515 and 0.524.
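The grouping convention just described (0.515-0.524 mapping to the 0.52 bar) is half-up rounding. A small sketch, since Python's built-in `round` uses round-half-to-even and would not reproduce this grouping at the boundaries:

```python
import math

def rating_bin(rating):
    """Group a rating into a two-decimal bin using half-up rounding,
    matching the convention described in the text (0.515-0.524 -> 0.52).
    Python's built-in round() uses banker's rounding, hence the explicit
    floor-based version here."""
    return math.floor(rating * 100 + 0.5) / 100

# Ratings of 0.518 and 0.524 both fall in the 0.52 bar; 0.508 falls in 0.51.
bins = [rating_bin(r) for r in (0.518, 0.524, 0.508)]
```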
This gives us a system for estimating the relative strength of teams that applies the iterative approach of the ELO system while accounting for the peculiarities of international cricket. It factors in the players involved in a match; it avoids having to establish arbitrary cutoffs and acknowledges that retirements, selection decisions, injuries, and other factors are involved in the selection of any given team; and it accounts for margin of victory.
The charts below show the ELO ratings history for ODI cricket. A pair of teams is shown in each chart. The system can be demonstrated just as well using the Test ratings history, but I will restrict myself to one format here. This survey will also help formulate a working definition of a minnow or weak team in ODI cricket.
We begin with Australia and England, who contested the inaugural ODI nearly 50 years ago. The first two decades of ODI cricket were a period of parity between these two sides. It was in the 1990s that the Australians first established some daylight between themselves and England. As the Mark Taylor era came to an end in 1998, there was a brief moment where it looked like there might be a new era of parity, with England picking specialist ODI sides and Australia struggling for bowling to support Warne and McGrath. It was not to be. The Steve Waugh-John Buchanan era was about to begin. Australia would dominate world cricket in the 2000s.
Since 2015, England have been in the ascendant. They have taken away Australia's World Cup title, and as the chart shows, are in their most successful ODI era. Whether this translates into a decade of dominance in the 2020s remains to be seen. Neither side has ever experienced a period where they might have been considered minnows.
Bangladesh and Zimbabwe are an interesting pair. Zimbabwe, especially in the late 1990s, when Andy Flower was their best player, acquired a reputation as a competitive side. Keep an eye on the 0.45 ELO rating line in these charts. The ELO summary chart shows that teams with an ELO rating of up to 0.45 win about 20-30% of their games. (These are not just games against better teams but against all teams.) Neither Australia nor England has fielded a single team rated below 0.45 in their history, but 491 out of 529 Zimbabwe sides have been below that mark. In the period from 1996 to 2003, 170 out of 197 Zimbabwe XIs had ELO ratings below 0.45; and the 2010s have not been kinder to them.
Bangladesh have had an ELO rating under 0.45 for 229 of their 376 ODIs. It is only since 2015 that they have escaped the sub-0.45 band in a sustained fashion. As a rule of thumb, an ELO rating below 0.45 could be reasonably said to represent a minnow. Broadly, with the exception of a brief period in 2007 and 2008, Bangladesh fielded minnow XIs for most of their ODI history until 2015. They are no longer minnows; the same cannot be said about Zimbabwe.
This brings us to West Indies and Sri Lanka. The first of those teams has seen both ends of the ODI ladder. West Indies were dominant in the 1980s and extremely competitive well into the mid-1990s. By around 2017 they found themselves flirting with the 0.45 line, and were being classed as a minnow by some. In this, they have, at least to some extent, been prisoners of their past glories. West Indies have never quite been minnows, and today the troubles of three-odd years ago seem behind them. They have hovered steadily in the 0.45-0.50 band, winning about 40% of their matches. The top teams of the day usually beat them, but the mid-table sides find them a handful.
A similar story could be told about Sri Lanka. After making it to two consecutive World Cup finals in 2007 and 2011, they struggled to replace their golden generation. Lasith Malinga was one of the young tyros of that generation, and he is now coming to the end of his career. Sri Lanka too are in that 0.45-0.50 band, along with Bangladesh and West Indies. These three teams now constitute the heart of the bottom half of the ODI table. In other periods in ODI history other teams (like India and England) have been there.
For most of my lifetime, Pakistan were better than India in ODI cricket. Until about 2005, Pakistan were consistently in the 0.55-0.60 ELO band, while India, in good times, would eke out up to 0.55. In 2005, this changed. Greg Chappell and Rahul Dravid took India into a painful but ultimately successful cultural transition, and India became an ODI side that lived in the 0.55-0.60 band. Pakistan, for reasons on and off the field, have struggled to stay with them. In the last few years the divergence has been magnified. India have advanced even further and now reside in the 0.60-0.65 range. Though Pakistan beat them in the 2017 Champions Trophy final, India have achieved even greater heights since.
Finally, we come to perhaps the two most interesting ODI sides of our day. For the first few years in ODI cricket after their readmission in 1991, South Africa promised little more than mid-table respectability. Then Bob Woolmer and Hansie Cronje turned them into a formidable winning machine. The result has been about 25 years of excellence. And though they are currently in a transition, which is made more uncertain by the recent exodus of talent under the Kolpak system, South Africa just split an ODI series with the world champions and then hammered Australia 3-0.
New Zealand, after Richard Hadlee and Martin Crowe left the scene, spent more than a decade hovering in the 0.45-0.50 ELO band with England and India, much as Sri Lanka, Bangladesh and West Indies have done in recent years. Then, under Stephen Fleming and a generation of accomplished all-round talent, they rose to a higher plane. Once that generation retired, it looked like they would go back to being solid occupants of the bottom half of the table. However, with the current group of players, led by Trent Boult and Kane Williamson, and guided by the experience of Ross Taylor and Tim Southee, New Zealand's ODI side has enjoyed arguably their most successful period. They have reached the last two World Cup finals and would have been world champions had it not been for the way last year's final ended.
Nevertheless, under the ELO system described in this article, they are currently the world's top-ranked ODI side.
Overall, Australia, England, Pakistan and South Africa have never fielded an ODI side that can be classed as a minnow. India have fielded seven ODI XIs (out of 987) that merit the tag; the last of these was in September 1982. New Zealand have fielded ten such sides (out of 772), West Indies 28 (out of 820), and Sri Lanka 126 (out of 820). The Sri Lanka XI that faced Zimbabwe on January 21, 2018 was their first and only minnow XI in an ODI since September 1994. They won that game. Bangladesh have not fielded a minnow XI in an ODI since November 2014.
Currently Bangladesh, West Indies and Sri Lanka remain in the bottom half of the table, above the three teams that remain in the minnow band - Ireland, Afghanistan and Zimbabwe. The first two of those three sides are relatively new entrants in the game, and Test status should help them advance. West Indies briefly flirted with minnow status but have kept their heads above water. There is little to choose between the top six teams. England and India, for different reasons, appear to be stronger than the other four, but as New Zealand and South Africa have already shown in 2020, this superiority is tenuous.
This ELO system represents an advance on the system currently used by the ICC. Given that the ICC's system is used to determine seedings and qualification for tournaments, it is time it was updated with a system which does not change a team's rating overnight just because April turns into May.
A similar narrative remains to be presented for Test cricket, but that is for another day. For now and forever, all of New Zealand should know that as the world held its collective breath, hidden under a protective mask to save itself from the ravages of Covid-19, they were the top-ranked ODI side in the world.