The Lancet Global Health Commission on High Quality Health Systems in the SDG Era (HQSS Commission) was launched two weeks ago, drawing attention to the fact that access to healthcare, in and of itself, is not sufficient to meet the (health part of the) Sustainable Development Goals (SDGs). Rather, it is access to high quality healthcare that will make the difference – or in the Commission’s own words, care that “improves health outcomes and provides value to people”. While this focus is unquestionably worthy, I wanted to reflect on several of the aims of this much-lauded enterprise.
The HQSS Commission seeks to respond to what it calls the lack of an “agreed upon single definition of a high-quality health system” and the attendant lack of consensus metrics with which to measure it. One of the Commission’s four specific aims is thus to propose tractable measures of quality. But phrases like ‘single definition’ and ‘measurable indicators’, in the context of an exercise seeking to strengthen quality in (highly variable) LMIC health systems, should raise red flags.
Several years back, the World Health Organization convened a Task Force on Developing Health Systems Guidance, to support evidence-informed decision-making about health system interventions. A series of processes and tools were proposed (see here and here and here), many adapted from guidelines used in clinical evidence-based medicine, such as the GRADE criteria. But a subsequent and insightful commentary noted that the Task Force’s focus on an evidence-based list of ‘what works’ seemed to overshadow more pressing questions that policy makers at the country level needed answering, such as: “what can work in our (non-research) environment?”, “how can we make an intervention work well?” and “how can we overcome obstacles to implementation in our situation?” The critical need for any guidance to acknowledge contextual – including deeply embedded socio-political and cultural – differences at the national and sub-national level was central to this critique.
Reflecting on that exercise made me wonder whether a ‘single definition’ of health system quality, and its associated metrics, will assist LMIC policy makers in answering the similar questions they may have about how to improve quality, or how to overcome known implementation obstacles to quality-improvement programmes. Two specific issues come to mind: how (and by implication what) we measure; and how these measures are subsequently used.
First, regarding how (and what) we measure. Empirical research (not to mention expert opinion) increasingly draws attention to the way quality in health systems is shaped by relational mechanisms and social experiences, including accountability, trust and, importantly, perceptions of responsiveness and respect (see for example: here, here and here). Indeed, Margaret Kruk, chair of the HQSS Commission, noted some of these issues at the Commission’s launch. Measurement or capture of these relational aspects of health system quality thus becomes indispensable to the project of understanding health system quality sufficiently well to improve it (as argued in this excellent article). At least at first blush, however, such an approach does not marry with the Commission’s stated goal of needing a single definition. Moreover, since many of these relational mechanisms and social experiences are contextually contingent, it is questionable whether a universal definition or a generalizable set of metrics can adequately inform ‘actionable solutions’ across multiple settings.
A second question arising from the Commission’s goals is to what use these metrics should, or could, be put. According to the Commission, there is a need to “galvanize research and action” on quality of care in LMIC health systems to produce “science-led, multidisciplinary, actionable work with wide-reaching goals and measurable indicators”. But in seeking to produce such a set of highly visible (and likely highly respected) metrics with the aim of supporting ‘actionable work’, the Commission has an obligation to consider the ways in which such indicators may be put to use. I am thinking, in particular, about the broken promises of New Public Management (NPM) and its indicator-dependent performance targets (read: quality metrics in another setting). A now self-evident truth of NPM is that, when introduced into organizations with traditional bureaucratic cultures (LMIC health systems anyone?), it often fails to make a difference. Why? Because without addressing root causes in the work culture in which individuals are operating (including the governance structure, work norms and power dynamics), the targets become at best meaningless and at worst perverse incentives to game the system.
The Commission’s focus on quality health systems is much needed; more work is obviously required to clarify the pathways by which LMICs’ health system quality can be improved. But the way in which the HQSS Commission defines ‘measurement’, how its efforts to produce quantifiable indicators take account of health system complexities, and the uses to which these indicators are subsequently put, should continue to be scrutinised. For this observer, at least, absent a broader effort to contextualise such measures, there are distinct risks in the current framing of the HQSS Commission’s aims.