A team’s manager can be one of the most difficult influences to pin down when determining the factors that go into a team’s success, or lack thereof. With so much of their work behind the scenes, how can you ever hope to determine how much they contributed to their team’s performance? The nature of the work can also differ hugely depending on the team they’re involved with: a manager who’s good at organisation and tactics may do very well at the lower tiers, getting the most out of limited players, whereas at the higher levels more of the job may be about managing egos and trying to get the chemistry right.
Many attempts to weigh a manager’s impact have been made in other sports, and the results have often been inconclusive. For example, it’s often been noted that a team on a bad streak improves after replacing their manager, but the numbers frequently make it impossible to credit this improvement to the change, rather than a simple reversion to the mean. Another issue is that, the job being such a thankless one, teams often have a high turnover of managers, reducing the sample size available to study.
Nevertheless, in this article, we’re going to try our best to put some solid numbers on some of these difficult-to-quantify factors. Which managers stand out as exceptional? Should a team be quick to change things up during a lull, or should they value a strong and stable leader? And does switching the manager during a crisis help?
For this article, the 2018 season was excluded, as it is still unfinished. The last three managers, working backwards from 2017, of the ten teams currently playing in Munster and Leinster were investigated. To compensate for the differing lengths of managers’ tenures, and to acknowledge that not all teams faced the same quality of opposition, the metric used was the team’s tournament rating.
The tournament rating, like the Elo rating it’s based on, originates in chess. Rather than basing a team’s performance on simple wins, draws and losses, it also accounts for the strength of its opponents. By looking at the total number of wins across a given number of games, as well as each opponent’s rating at the time they played, a ‘tournament rating’ is generated. This rating represents the initial rating you would expect a team to have in order to end up with the same number of wins, draws and losses against the same opponents over the same matches.
For example: if a team wins five games against five opponents, whose average rating is 1500, then their tournament rating will be 1900, as a team with a 1900 rating would be expected to beat those at a 1500 rating over 90% of the time. Similarly, a team which lost all of these games would have a tournament rating of 1100, and a team with 2 wins, a draw and 2 losses would have a rating of 1500.
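Since the tournament rating drives everything that follows, it may help to see it in code. This is a minimal sketch, not the site’s actual implementation: it assumes the standard Elo expected-score curve, inverts it numerically to find the rating that would produce the observed total score, and falls back on the common linear approximation (average opponent rating plus 400 × (wins − losses) / games) for unbeaten or winless runs, where the exact inversion diverges:

```python
def expected_score(rating, opp_rating):
    """Elo expected score (win probability, counting draws as half a win)
    of a team rated `rating` against one rated `opp_rating`."""
    return 1.0 / (1.0 + 10 ** ((opp_rating - rating) / 400.0))

def performance_rating(opp_ratings, total_score):
    """The rating whose expected total score against these opponents
    equals the actual total score (wins + 0.5 * draws), via bisection."""
    lo, hi = 0.0, 4000.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if sum(expected_score(mid, r) for r in opp_ratings) < total_score:
            lo = mid  # mid is rated too low to explain this score
        else:
            hi = mid
    return (lo + hi) / 2.0

def linear_tournament_rating(opp_ratings, wins, draws, losses):
    """Linear approximation: average opponent rating + 400 * (W - L) / N.
    Needed for perfect (or winless) runs, where exact inversion diverges."""
    games = wins + draws + losses
    avg_opp = sum(opp_ratings) / len(opp_ratings)
    return avg_opp + 400.0 * (wins - losses) / games
```

With five opponents all rated 1500, `linear_tournament_rating` gives 1900 for five wins, 1100 for five losses and 1500 for two wins, a draw and two losses, matching the worked example above; `performance_rating` with a total score of 2.5 also recovers 1500, and `expected_score(1900, 1500)` comes out at roughly 0.91, the “over 90%” figure quoted.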
Though the tournament rating benefits from a larger sample size, which gives teams time to revert towards the mean, it still allows us to compare teams (or managers) who have played a different number of matches overall. Also, by comparing their tournament rating with their initial rating, we can see how much a team exceeded expectation. The ratings used for this article were the raw BAINISTEOIR ratings on the site; only the overall rating was taken into account, so other factors, such as home/away advantage or the length of time since the last match, were ignored.
The table of the 30 managers investigated is given below:
Now, it’s important to remember a few things here. For one, correlation does not equal causation; some teams simply have better raw talent than others, and there are many other issues and factors that can contribute to a team’s performance which the manager has no control over. There are some names here with a poor rating who are better than the numbers imply, and vice versa. Similarly, managers with shorter tenures have had less time to revert to the mean. Davy Fitzgerald, for example, performed extremely well in his first year in Wexford, with the highest performance over expectation, but his Waterford and Clare tenures have had more years to adjust; while he’s clearly a good manager based on his tournament ratings, his 2017 season definitely looks like an outlier.
With all these ifs and buts out of the way, though, the numbers back up what we all already know: Brian Cody is absolutely exceptional.
He has had an incredibly long stint in charge, and is possibly the only manager with enough matches for us to be confident that his rating has reverted to its correct mean; he finished with a total tournament rating of 2128 for his 19-year run from 1999-2017. Even with some less than stellar years thrown in, such as 2013 or 2017, he is still firmly in first place. At time of writing, no team has a BAINISTEOIR rating this high, meaning the average Brian Cody team, from anywhere within his 19-year run, would be favourites to beat any current team. Though it’s tempting to dismiss his success as simply having better players, by virtue of managing Kilkenny, it’s important to remember that when he started, Kilkenny’s BAINISTEOIR rating was an unspectacular 1869, their lowest point since the mid-fifties. As a result, he has the third highest performance over expectation, despite having to maintain this high standard for more than three times as long as anybody else in this study. Even with the talent at his disposal, Brian Cody stands out as the best manager studied, and would likely appear top of the all-time list.
There are a couple of other names that stand out in this list. Davy Fitzgerald is worth looking at on his own, as he appears three different times with three different teams. He is another manager who, given his larger than average sample size, appears to be confirmed as particularly talented. His first year with Wexford was the highest performance over expectation of any manager. Though this will probably come down over time, it’s almost certain that, by the time he’s done, he’ll have left the team in a better position than he found it. His performance as Clare manager puts him in the top ten for total performance, and fourth for performance over expectation, and though his Waterford stint is not quite as impressive, his total rating there is still at an elite level of 1982. Looking at his performance across his three different teams, he has a total tournament rating of 2001 and, on average, his teams have performed 174 rating points above what was expected of them.
Looking at the performance above expectation, and leaving aside Davy’s outlier performance with Wexford, we see his former teammate, Anthony Daly, in second place for his term with Dublin. Managing them for six seasons, his total tournament rating of 1951 is about average for a top-tier team, but when he began their rating was 1674, about what you’d expect from a team in the upper half of Division 2A of the league. This is one of the rare examples of a hurling team breaking out of mediocrity into being genuinely competitive. Though some solid groundwork was laid by his predecessor, Tommy Naughton, who took the team from a rating of 1567 at the start of his tenure up to 1674 by the end, Anthony Daly’s performance has been exceptional.
It is also interesting to see where the ratings don’t line up with what we expect. The tournament rating system doesn’t weigh any one match above another in importance, so high ratings don’t always align with championship wins. Here we can see some of the managers who, despite coming very close, couldn’t quite get their teams to an All-Ireland win, like Derek McGrath with Waterford, or Jimmy Barry-Murphy in his 2011-2015 run as Cork manager. Nevertheless, they are rated higher than others who collected more silverware.
The value of stability
So, looking at the chart, how do you know when it’s time for a change? Do teams benefit from a changeup, or from keeping the same person at the helm, and ensuring that the players know who the authority is? The following charts look at how quickly each team got through their last three managers, and compare this to their performance:
The results definitely appear to favour stability. Even taking Brian Cody, with his unusually long tenure, out of the chart, the trendline still points the same way. In other words, it’s maybe not a great sign that both Limerick and Offaly have appointed yet another manager since last year.
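The trendline in a chart like this is just an ordinary least-squares fit, so checking whether it “still points the same way” after dropping an outlier amounts to refitting and looking at the sign of the slope. A minimal sketch, using entirely hypothetical data (the manager counts and ratings below are made up for illustration):

```python
def ols_fit(xs, ys):
    """Ordinary least-squares line y = a + b*x through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: number of managers a team has used recently
# versus that team's tournament rating.
managers = [1, 2, 3, 3, 4]
ratings = [2128, 1982, 1900, 1850, 1750]
intercept, slope = ols_fit(managers, ratings)
# A negative slope means more manager changes track with lower ratings.
```

Dropping one extreme point (a Cody-like outlier) and refitting is then a one-line change; if the slope stays negative, the stability finding survives the outlier.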
However, there’s another correlation-versus-causation question here: are these teams struggling because they lacked stability and a clear, guiding strategy? Or did these teams lack stability because they were unsuccessful, the manager changes being simply a reaction to their inability to get going? How many second chances should a manager get before they’re given the boot?
A fair go
In order to get a more general idea of how long it takes a manager to settle into their role, we looked at how each manager’s year-by-year tournament rating compared to that of their entire term as manager. From this, we attempted to gain an insight into how managers appear in their first year, versus their second, versus their third, and so on. Of course, the sample size decreases with each year added; though there are 30 managers available for the first year value, this drops to 2 by year 6! Since the number of managers lasting more than a few years drops off so sharply, only the first 4 years were included in the chart:
The above chart shows how much the average manager performed above or below their overall rating in each year; each box contains the first and third quartile values for each year, and the lines extend to show the minimum and maximum values reached.
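The box-and-whisker values in a chart like this reduce to a five-number summary per year: minimum, first quartile, median, third quartile and maximum. A minimal sketch (the year-one deviations below are hypothetical, purely to show the shape of the data):

```python
from statistics import quantiles

def five_number_summary(values):
    """Min, Q1, median, Q3 and max, as drawn by a box-and-whisker plot."""
    q1, median, q3 = quantiles(values, n=4)  # default 'exclusive' method
    return min(values), q1, median, q3, max(values)

# Hypothetical year-1 deviations of each manager's tournament rating
# from their overall term rating (most first years underperform):
year_one = [-120, -80, -60, -40, -10, 5, 30]
box = five_number_summary(year_one)
```

Note that `statistics.quantiles` also accepts `method='inclusive'`, which some plotting libraries use instead; the quartiles can differ slightly between the two for small samples like these.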
Of the managers studied, and excluding the six who are still in charge at time of writing (incumbent managers were excluded both because we don’t yet know the full length of their term, and because Cody is such an extreme outlier in terms of longevity), the average length of a manager’s term was roughly 2.8 years, with the median at 3 years. Funnily enough, this actually appears to be about right: most managers need an adjustment period, and underperform in their first year. In many cases this may be because they inherited a team in crisis (it’s less common, though not unheard of, for the previous manager to leave in the middle of a good period), while in others they simply need time for their changes to take effect.

The second and third years are where things take a turn for the better; the average performance in year 2 is the highest, though year 3 contains the highest maximum performance. By year four, we begin to see a decline; possibly other teams have become more prepared for them by this stage, or their core of players has begun to age, or the manager has simply lost their enthusiasm for the job. Whatever the reason, most managers don’t get past this slump in year four.

Though it’s not shown in this chart, due to the small sample size, the average performance begins to increase again for those few who survive more than four years; possibly because if you last that long, you must be doing something right. Even then, though, the performances don’t get much better than years 2 and 3. It seems that trial and error has found the answer: if a manager hasn’t gotten things working after three years, the team is probably not going to get much better. However, if it’s year 4 and they’re still going strong, try to hold on to them!
A change in perspective
To wrap up on the topic of whether or not changing the manager makes a real positive impact, we looked at how tournament ratings changed year on year. By comparing the usual year on year change with those years where there’s a new face in charge, we can see whether or not hurling has followed the trend of sports like soccer or basketball, where similar studies were inconclusive, or if a change of manager can really give a team the boost they need…
The chart compares the year-to-year tournament rating change experienced by the studied teams. The lines represent the minimum to maximum values, and the boxes the first to third quartiles. It does appear that hurling is in line with the other sports mentioned: there is no significant difference.

There are signs supporting the previous section’s finding that a manager should get at least two to three years to prove themselves: though there was slightly less year-to-year change overall, the highest peaks were achieved under incumbent managers, and the average year-to-year improvement was slightly larger under incumbents than under new managers. At the same time, the standard deviation was a little lower, and the first and third quartiles closer together, for incumbent managers. In other words: you know what you’re getting with the incumbent. If things are going terribly, it might be worth the risk of trying out somebody new.

Whatever the final action, though, the main takeaway should be that the blame and the praise allocated to managers is rarely proportional to their actual impact. By and large, good teams will stay good, and bad teams will stay bad. There are exceptions, like Brian Cody, who can clearly make a difference, but at the end of the day, it’s the players who matter.