Chapter 02

The Good Governance Gamble
by Professor Matt Andrews

Professor Matt Andrews suggests that common approaches to assessing governance are often misleading and proposes a new way of thinking about the problem.

Professor Matt Andrews is an Associate Professor at the Harvard Kennedy School. His work focuses on the practicalities of improving political institutions in emerging economies, moving beyond current bureaucracy-based indicators to new methods of assessing whether a country’s political system is able to sustain growth and reform. Professor Andrews’ specific case studies help demonstrate what can be learned across other emerging markets.

How do we know if governance is improving?

A few months ago I was working in an African country that had just announced new natural gas discoveries. These discoveries drew constant streams of international investors to the country, all interested in the potential opportunities ahead. I met one group to discuss my impressions of the country’s governance concerns, of governance risk, and of ways of assessing that risk. They were really impressed with the government’s recent record of reform and with improved indicator scores issued by international organisations, such as the World Bank and International Finance Corporation (IFC). “The government has deregulated and committed to various budgeting and civil service reforms,” they said, “and it has improved its Worldwide Governance Indicator (WGI) and Doing Business scores. We think it is doing all the right things; and its new indicator scores reduce our concerns about governance risk”.

I have heard this line of argument many times over the last few years. Governance indicators and reform commitments have become important signals for external investors looking for the next growth miracle, or at least some good investment bets, in emerging markets where information is scarce. Given my recent research on this topic, I have the same response to this thinking: you are gambling whenever you choose to buy in to reform commitments or to rely on governance indicators, and it is often a bad gamble.

My argument is simple, and based on empirical evidence which shows that many governments use reforms as signals only – making commitments and maybe even changing the odd law or procedure, but seldom enforcing the new laws or implementing the new procedures. The reforms lead to short-term bumps in indicator scores, but the reforms commonly fail to improve the functionality of government or reduce governance risk in the country. Rather than focusing on these signals and indicators, I argue that investors should examine how well the job of governance is actually being done and whether the government is actually becoming more functional and reliable. These data points will better inform any investment bets being considered.

Governance gaps and reforms as (only) signals

Developing country governments have ramped up their governance reforms in the past two decades. A country like Honduras has introduced over 50 World Bank-sponsored reforms since the mid-1990s, and countries like Uganda and Indonesia have had even more engagement. My research, however, shows that many of these governments do not see the expected improvements after reforms are completed. There are better laws and procedures – but they are not implemented or used. The reforms create a gap between what governments look like and how they actually function.

I see the gap between form and function in many countries and areas of governance. For instance, my research shows that public financial management reforms lead to better-looking budget preparation processes in most African governments, but the budget execution processes remain weak. This means that governments produce good-looking budgets but actual spending results differ substantially from these budgets. The same pattern can be seen in recent reforms aimed at enhancing budgetary transparency, with many governments becoming more open about their spending promises but remaining opaque (or even recalcitrant) about their actual spending behaviour and results. For instance, the South African government is one of the most transparent in the world, given its highly accessible budget documents, citizen budgets, and such; but it is not possible to obtain data on how much is spent on policing, on levels of crime in the country, or on the nation’s education and health delivery problems.

Another example comes from anti-corruption reforms, which are common in developing countries. In Uganda, reforms have resulted in the country having the best-rated anti-corruption laws in the world, scoring 99 out of 100 points on the Global Integrity Index. However, Uganda scores only 48 for implementation of these laws, resulting in an implementation gap of 51. In contrast with Uganda, Germany scored 81 for the quality of its laws in 2011; lower than countries like Ethiopia, Malawi, Liberia and Kenya. Yet Germany scored 76 out of 100 on implementation of these laws, which is over 20 points higher than the average implementation score for the African countries listed. Germany does not look as good on paper as Uganda or Malawi, but “what you see is what you get” in Germany. In contrast, the African examples merely look the part: they are states with impressive laws that they do not actually implement, and with dysfunction that undermines growth and development. This evidence is shown in the figure below, which illustrates that the biggest governance differences between developed and developing countries are now in the gaps between form and function.

[Figure 1: The gap between the quality of laws and their implementation in developed and developing countries, including Uganda]

In reviewing this evidence, it appears that many developing country governments are pursuing “reforms as signals” to ensure continued external support rather than to provide real solutions to problems faced. They adopt the best practice methods proposed by international organisations and endorsed by the international business community, but find these best practices fit poorly with political realities and capacity constraints. As a result, the reforms are not implemented or are diluted and do not give the results that might be seen in more developed economies, where the solutions fit better. Indeed, Uganda continues to experience deep corruption in spite of its impressive laws, while Mozambique’s government still struggles to fund basic services, even with its world class public financial management system and fine-looking multi-year budget.

In Eastern Europe, Albania recently had to dig itself out of a financial hole because of its high levels of arrears, in spite of having a multimillion-dollar financial management information system that was meant to embed commitment controls and make arrears impossible.

When gaps like these emerge, and it becomes apparent that past reforms did not work, governments typically start signalling again – often reproducing the same good-looking reforms that did not work in the first place. For example, Uganda is currently considering revising its anti-corruption laws in response to a corruption scandal in 2012, which is what it did in 2009 after a previous scandal. The repeat-reforms serve as new signals to make governments look the part and assuage concerns of outsiders in the short-term, but have little chance of improving functionality in the medium- to long-term.

Stories of governance gaps and “reforms as signals” should be worrying to potential investors who rely on reform commitments and governance indicators when assessing governance risk. Widely used governance indicators often yield unreliable short-term bumps in response to reforms and investors are commonly taken in by these kinds of signals. Consider the example of Argentina (shown in Figure 2).


Argentina had a legacy of instability prior to the early 1990s, when Carlos Menem’s administration promoted solutions like privatisation, liberalisation, deregulation, and public sector modernisation. There were no fewer than 32 World Bank projects with this content in the early 1990s, generally implemented satisfactorily. Results were immediate; foreign direct investment poured in and economic growth ensued. The government was perceived as effective, reflected in a 1996 WGI government effectiveness score of 0.32, second only to Chile in the Latin American region.

However, instability returned following international financial crashes and domestic political infighting. Recession came quickly, but perceptions of government effectiveness fell even faster. Indicator scores dropped to 0.18 in 1998 and –0.39 in 2002. Observers linked such decline to weak implementation of the 1990s reforms: government had halted pension privatisation and labour deregulation, cancelled other privatisation efforts, and violated new fiscal rules. The IMF’s Anne Krueger described Argentina as the model of a country that “tried little, failed much”.

Such comments suggest that the 1990s reforms took the form of ambitious signals that gained short-term external support but could not be implemented in the medium term. Interestingly, the signals started again under the Kirchner administration in 2003. A number of reforms emphasised transparency and renewed attention to creating institutions as well as formalised, rule-based public management. A 2004 law, for example, introduced new rules to discipline the state. Efforts also focused on re-creating regulatory frameworks to benefit foreign firms whose 1990s privatisation deals had been cancelled in 2002. Anne Krueger took the bait and responded positively to these new signals, noting that “government has committed itself to implementing wide-ranging structural reforms”. Such endorsements helped government effectiveness scores recover to –0.13 in 2004 and –0.06 in 2006. The sad story of 1990s reform was forgotten, replaced by new signals of better practice.

By 2008, the story had shifted once again. Reforms had obviously not been implemented and indicator scores dropped. Unfortunately, some private investors who engaged in the economy, because of improved indicators and reform commitments in 2004, actually lost money (as in the 1990s). Governance gaps and “reforms as signals” generated a false perception of governance risk.

Some countries are going beyond reforms as signals

Beyond these disappointing stories, my research shows that some governments – or parts of governments – are becoming more functional over time. Countries with such governance are where investors should be looking to engage; in my view, constituting “good bets” and maybe even showing signs of being future growth miracles. Interestingly, many of these countries do not produce very good governance indicators and the international community does not recognise the reforms as “impressive” signals. Many investors would look past these countries, but I believe they should not.

One of these countries is Rwanda, where reforms such as decentralisation are helping to improve service delivery (with mayors and other representatives responsible for block funds received from the central government and accountable to the president for the results they produce). However, Rwanda’s decentralisation reforms have emerged gradually, since 1998, which is too slow for some observers. The country’s homegrown judicial reforms have also been considered effective in fostering some basic rule of law (especially after the genocide in the 1990s).

Yet many external agencies will not accept the reforms, considering them “hybrids” that fail to comply with the systems and processes considered “good” by the governance indicators. The reforms also reflect the top-down (some may say autocratic) leadership style of President Paul Kagame’s administration, which is not what many observers would call appropriate “decentralisation”. Numerous observers also view the rule of law reforms sceptically, because they do not include formal courts and justice processes (which is what good “rule of law” should look like).

These local interventions (and others in Rwanda and beyond) materialise from processes that are quite different to the “reforms as signals” tendency.

The processes by which they emerge reflect those seen in South Korea and Singapore in the 1960s, and are crucial to real governance reform in developing countries. These processes are problem driven – which means they do not start with someone identifying a “best practice” reform that looks good on paper. Rather, they start by identifying problems in specific contexts that bother specific communities – frequently those trying to start or run businesses. They emerge through a step-by-step process of experimentation and learning, where groups try different solutions, learn what works, build capacity and encourage political will to do more.

Figure 2: Sentiment about Argentina’s governance reforms, 1990-2008

Sentiment 1990-1995
“Argentina’s determined adjustment and reform efforts were rewarded with strong capital inflows, a sharp recovery…and average real economic growth of over 7 1/2 percent per year.”
“widely hailed as a case of successful market reform.”

Sentiment 1996-2002
Argentina “Tried little, failed much” in the 1990s, exhibiting “a reluctance to follow-through, to confront the structural changes.”
“Consistent, coherent reforms are not discernable. Tentative initiatives never got off the starting blocks. [One] doubts whether reforms of the 1990s increased efficiency at all.”
“For all the changes enacted during the 1990s [the system is still] one of rampant cronyism...”

Sentiment 2002-2004
“The Argentina government has committed itself to implementing wide-ranging structural reforms.”
“The government is committed to democracy and a market economy…with some Keynesian accents.”

Sentiment 2005-2008
“Many reforms…were not carried through. [President] Cristina Kirchner neither upheld her promise to strengthen political institutions nor did she provide for a sound economic framework.”
Poor reform implementation “erode confidence further in the government’s respect for stable rules of the game.”

Source: World Bank

The Bottom Line: The same reform was signalled three times in 15 years; governance indicators improved when reforms were announced and stayed high for a honeymoon period, but then dropped; the reform was never really implemented; governance gaps persist; and governance indicators misled investors about the quality of government and about progress in reforms.

These processes generate hybrid solutions that may not look perfect, but are politically accepted and implementable (and solve the problems). Finally, the interventions are the product of broad engagement by a variety of players, not just presidents and ministers, ensuring that solutions actually work.

These kinds of processes – that generate real change and improved functionality – require hard work and a practical approach to development and governance reform. They demand that governments, international organisations and business communities engaging in developing countries address real problems; find patience to experiment with new ideas; allow and facilitate learning (often through past failures); and stop accepting cheap “reforms as signals”. I call this approach “Problem Driven Iterative Adaptation” (PDIA) and propose that international organisations and the international business community embrace PDIA when engaging with governments or thinking about governance. I believe it is the antidote to “reforms as signals” and is the only way that governments will graduate from adopting reforms that only make them look like states, to actually finding and fitting solutions that make them more functional over time.

Within regulatory reform, I find that PDIA-type reforms are proving better than the more conventional governance reforms with which private investors are often concerned. Doing Business indicators, produced by the IFC, reward countries with low regulatory and tax burdens (among other factors) as if these are solutions to the problems facing business everywhere. I recently worked in a country that had decreased its regulations and tax burdens but did not see any improved business activity. We worked with a team in government to try and address this issue. They started by convening a group of private business people and asking a simple but seldom-asked question: “What are your problems?” The group was taken aback by the question (because they had not encountered a government interested in solving real problems), but they offered a list of 42 problems that they needed government help in addressing.

Interestingly, only a handful related to factors included in the Doing Business indicator set. Most of the business people were more concerned about the consistency of implementation, rather than how much regulation, or even taxation, they faced. They wanted to know which government agency was actually responsible for enforcing regulations, how they could ensure that tax collectors would deal with them honestly and if government would help initiate conversations across industries.

The government team has now started addressing the problems iteratively; trying new ideas out and going back to business to ask if the problems have been solved. In some cases the initial solutions will do the trick, but in most cases the government teams will have to try new ideas a number of times before solving the problem. In the process they will be establishing capabilities that are useful to the business community and creating governance systems that will facilitate new business opportunities.

[Figure 3: The Problem Driven Iterative Adaptation (PDIA) process]

A better way of assessing governance and governance risk

The approach I describe is shown in simple form in Figure 3, above: start with problems, act on different alternatives to see what works, take stock and learn, build authority to do more, and reiterate until the problem is solved. Many seasoned managers say this is common sense and the kind of process all organisations go through to find their structure and institutional shape. It is also the way the human body builds immune systems to respond to problems and ensure that the response becomes built in.

I believe that the human immune system is a good metaphor for what we are looking for in governance regimes. We want a governance (and immune) system that protects the country (body) from known threats, can ward off future threats, and helps to ensure the functionality of the entire system (fostering a context that is conducive to doing business, raising a family, travelling around or going to school). The risk is that the governance (or immune) system actually fights with the body, or fails to identify or respond to threats or to institutionalise successful methods of addressing threats.

The biggest problem with today’s governance agenda is that current indicators commonly fail to reward governments that build governance systems resembling functional immune systems. The context-specific hybrid structures that will solve peculiar business problems in a particular country will often not yield improved Doing Business scores.

Indeed, Rwanda’s decentralisation reforms do not give the country better governance indicator scores. In fact, countries that take the practical path of finding and fitting real solutions to their real problems might see decreased indicator scores (because they are not “signalling” by adopting best practices).

One of the ways to overcome this bias is to pay less attention to the form-based indicators that currently dominate thinking about governance. These can be replaced with indicators that provide a more serious focus on the results of governance. I emphasise 35 key ends in my recent book, in areas where governance matters a great deal (like “defence, public safety, law and order”, “public infrastructure”, and “economic progress and adaptation”).

[Dashboard: South Africa’s governance performance relative to the other BRIC countries]

I use these to construct a picture of how well a government is using its authority to do the things citizens need.

The dashboard shown is for my own country, South Africa. It shows how South Africa’s performance compares with that in other BRIC countries (Brazil, Russia, India and China). I categorise this comparative performance by calculating (with continuous data)[1] the number of standard deviations a score is away from its comparator group mean.[2] For example, when this is done for road safety and the BRIC group, South Africa ends up scoring 1.73 standard deviations above the mean of fatalities per 100,000. I use colour codes to reflect this relative performance, with different colours suggesting performance that is “comparatively weak”, “below average”, “average”, “above average” and “comparatively strong”.[3] I also work with a clear colour, which reflects instances where data is not available to allow an assessment. In the road safety example, for instance, South Africa is categorised as “comparatively weak” (and the block is red).
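To make the banding rule concrete, here is a minimal sketch in Python of the calculation described above and in footnote [3]. It is illustrative only: the function name and the road-safety figures are hypothetical, and the thresholds simply follow the footnote (within 0.5 standard deviations of the comparator mean is “average”, between 0.5 and 1.5 is “below” or “above average”, and beyond 1.5 is “comparatively weak” or “comparatively strong”).

from statistics import mean, stdev

def categorise(country_score, comparator_scores, higher_is_better=True):
    # Standard deviations away from the comparator group mean.
    z = (country_score - mean(comparator_scores)) / stdev(comparator_scores)
    # Flip the sign where a high raw score is bad (e.g. road fatalities),
    # so that a positive z always means better-than-average performance.
    if not higher_is_better:
        z = -z
    if abs(z) <= 0.5:
        return "average"
    if z > 1.5:
        return "comparatively strong"
    if z > 0.5:
        return "above average"
    if z < -1.5:
        return "comparatively weak"
    return "below average"

# Hypothetical road-safety data (fatalities per 100,000; lower is better).
bric_fatality_rates = [22.5, 18.6, 18.9, 20.5]
south_africa_rate = 23.3
print(categorise(south_africa_rate, bric_fatality_rates, higher_is_better=False))
# Roughly 1.8 standard deviations above the comparator mean on a "bad" variable,
# so the block would be coded red: "comparatively weak".

For ordinal indicators, footnote [1] describes a similar banding based on distance from the comparator group median rather than the mean.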

However, the South African story is not all red; governance performance is mixed. This suggests that investors will face specific risks in some areas – which they will need to factor into any investment decision – but not in others. If choosing between an investment opportunity in South Africa and other BRIC countries, for instance, an investor would need to recognise that any engagement in South Africa will require managing serious comparative governance deficiencies in the areas of defence, public safety, law and order, and economic progress and adaptation. This helps investors know which gaps they would need to cover in the firms or projects they support. In this case they would need to pay for security, have a strategy to address trade issues, and more.

The dashboard provides a view of the governance performance in a given country, much like a picture of the effectiveness of an immune system in a body – identifying where it is effective and where it is not effective. It allows potential investors insight into how “good” governance appears on average (which may be assessed by considering the number of orange and red blocks compared with the number of green blocks).


It also allows investors to see where governance risks exist (every country has some red and orange areas) and where investors should be looking for signs of real reform, not simply “signals”. This picture is completely different from the one obtained by governance indicators that capture (predominantly) whether countries have best practice forms in place, and reflect only average performance. This point is clear when noting that South Africa performs better than other BRIC countries on most of the commonly used governance indicators. For instance, South Africa scores 0.33 on the WGI measure of government effectiveness (in 2012); China scores 0.01; India scores -0.18. South Africa ranks 41 on Doing Business scores, with Russia at 92, China at 96, Brazil at 116, and India at 134 (in 2013). These data points leave an impression of South Africa that contrasts dramatically with the detailed dashboard above, partly because the data points capture form over function (and South Africa’s governance “looks” more legitimate than the others) and partly because the data points average a highly varied performance.

I find the governance dashboard an empowering and useful way of thinking about and assessing governance in developing countries. It gets past the problem of “reforms as signals” and makes the gaps in governance clear and apparent, allowing investors to see how risky their bets are and how they can make the governance gamble less risky.

Professor Matt Andrews is an Associate Professor of Public Policy for the Center for International Development at the Harvard Kennedy School and the author of “The Limits of Institutional Reform in Development” (Cambridge University Press, 2013).

[1] I use a similar approach for ordinal data, calculating the median for the reference group and then determining how far the country’s score is from this median. The analysis depends a lot on the scale used in the ordinal analysis (whether the scores run from 1 to 4 or from 1 to 10, for instance).

[2] Andrews, M., Hay, R., & Myers, J. (2010). Can Governance Indicators Make Sense? Towards a New Approach to Sector-Specific Measures of Governance. Oxford Development Studies, 38(4), 391-410.

[3] I categorise scores within 0.5 standard deviations of the mean as “average”; scores between 0.5 and 1.5 standard deviations below or above the mean as “below average” or “above average”; and scores more than 1.5 standard deviations below or above the mean as “comparatively weak” or “comparatively strong” (depending on whether a high score on the variable reflects good or bad performance).
