Evaluation

Learning Health Systems are expensive and impact the health of large populations, so it is important to understand how effective and cost-effective they are. Although failure is an important source of learning and can help others decide whether and how to join a Learning Health System, organisations are generally less enthusiastic about publicising failures than successes [179]. In writing this report, the authors found it difficult to identify published examples of Learning Health Systems that had failed, and a recent review found no published rigorous evaluation of a Learning Health System [19]. Another systematic review found 43 articles relating to 23 Learning Health Systems (local, regional and national) that reported outcomes; only six articles were judged to provide high or medium quality evidence [6].

However, the authors have personal experience of several relevant projects that did not meet their anticipated objectives. Health IT research also reports a high failure rate [179], so it is likely that important learning is being missed.

The sociotechnical nature of Learning Health Systems makes them difficult to evaluate. Although the Medical Research Council (MRC) guidance on Developing and Evaluating Complex Interventions [180] remains a gold-standard guide, the field has developed significantly since it was published in 2008, and Learning Health Systems present novel challenges. The MRC and the National Institute for Health Research (NIHR) have commissioned an update to the guidance [180].

Evaluation generally seeks to compare one intervention with another, or to no intervention at all. Because of their heterogeneity, it is not possible to be prescriptive about the methodology that should be used to evaluate a Learning Health System. An evaluation can examine structural or functional aspects of a Learning Health System against a maturity model (see below), but ultimately, success depends on the extent to which each learning cycle improves patient outcomes.

The MRC guidance states that randomised trials are the most robust evaluation method. They can prevent selection bias: systematic differences between those who do and do not receive the intervention. Such methods might randomise individuals or groups of participants, such as an entire hospital. Stepped Wedge Designs can be appropriate when a new intervention is being rolled out to a population in phases, but such methods are often impractical within a Learning Health System.
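
To make the structure of a stepped wedge design concrete, the sketch below is a minimal Python illustration, using hypothetical cluster names and an arbitrary number of periods rather than any real Learning Health System, of a rollout schedule in which every cluster begins under usual care and crosses over to the intervention in a randomly assigned period.

```python
# Minimal sketch of a stepped wedge rollout schedule.
# Cluster names and the number of periods are hypothetical, for illustration only.
import random

clusters = ["Hospital A", "Hospital B", "Hospital C", "Hospital D"]
n_periods = 5  # period 0 is baseline; one cluster crosses over in each later period

random.seed(42)  # fixed seed so the example is reproducible
crossover_order = random.sample(clusters, k=len(clusters))

# 0 = usual care, 1 = intervention live; the cluster at step i switches at period i + 1,
# so every cluster starts as a control and ends having received the intervention.
schedule = {
    cluster: [1 if period > step else 0 for period in range(n_periods)]
    for step, cluster in enumerate(crossover_order)
}

for cluster, exposure in schedule.items():
    print(f"{cluster}: {exposure}")
```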

Traditional randomised controlled trials are often not feasible in Learning Health Systems, where the intervention changes rapidly and may not be easily isolated. The MRC guidance cautions that non-randomised designs are most useful where the effects of the intervention are large or rapidly follow exposure, and where the effects of selection, allocation and other biases are relatively small.

The MRC guide offers a range of approaches for dealing with such biases, including conventional covariate adjustment using a regression model, and extensions such as instrumental variable and propensity score methods. It concludes, however, that the interpretation of small effects from non-randomised studies requires particular care and should draw on supporting evidence where possible.
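
As an illustration of one of these approaches, the sketch below simulates a non-randomised comparison on invented data and shows how inverse-probability weighting based on an estimated propensity score can reduce selection bias. The variables, effect sizes and choice of library are assumptions made purely for the example, not part of the MRC guidance.

```python
# Minimal sketch of propensity-score (inverse-probability) weighting on synthetic data.
# All quantities below are simulated; nothing is drawn from a real evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Simulated confounders (e.g. age, baseline risk) that influence both
# who receives the intervention and the outcome.
confounders = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(0.8 * confounders[:, 0] - 0.5 * confounders[:, 1])))
treated = rng.binomial(1, p_treat)

# Outcome with a true intervention effect of +1.0, plus confounding.
outcome = 1.0 * treated + 1.5 * confounders[:, 0] + rng.normal(size=n)

# A naive comparison is biased because treated patients differ systematically.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Estimate propensity scores and apply inverse-probability weights.
ps = LogisticRegression().fit(confounders, treated).predict_proba(confounders)[:, 1]
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
weighted_treated = np.average(outcome[treated == 1], weights=weights[treated == 1])
weighted_control = np.average(outcome[treated == 0], weights=weights[treated == 0])

print(f"Naive estimate: {naive:.2f}")                                  # inflated by selection
print(f"IPW estimate:   {weighted_treated - weighted_control:.2f}")    # closer to the true +1.0
```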

Traditional controlled evaluation methods might not be appropriate in many complex environments. Constant change may make it impossible to generalise the findings from one evaluation to another environment. Plan, Do, Study, Act (PDSA) cycles or Lean approaches – which evolve a solution suited to the environment – might be more appropriate. These were traditionally used at the local level with rapid cycles but can work at larger scales. They originated in manufacturing industries, and measurement is built into the process.

Qualitative evaluation methods can add a further dimension. They generally explore how people make sense of the world and experience events. Qualitative data can be collected through direct or participant observation, interviews, focus groups and documentary analysis. For example, a qualitative study might explore how it feels to be a patient with a long-term condition, aiming to gain insight into how people make sense of and manage these situations.

In the context of Learning Health Systems, a qualitative evaluation could explore user acceptability and the barriers to and facilitators of implementation. A qualitative evaluation of usability and engagement could also inform the design and development of Learning Health Systems, as well as helping to understand whether people are managing their conditions better and what impact this has on their quality of life.

A decision maker may be interested in a wide range of measures to determine whether an intervention was effective: for example, whether it caused harm, introduced inequality, or had effects beyond health (on social care, education, justice, employment and so on). They may want to differentiate between individual and population measures. It may also be important to understand how the intervention works.

True outcomes may be long term and it could be necessary to measure proxies that are impacted more rapidly: for example, high blood pressure as a proxy for later heart disease. This requires evidence of the downstream impact of the proxy on the real outcome of interest. To model this, it may be necessary for the investigator to narrow their question down to a very specific active ingredient. This is ultimately reductionist and could lead to a focus on very specific measures, missing the broader interactions. It is important to be aware of these limitations.

It is necessary to identify the active ingredients and actors in a complex intervention and to include them in the evaluation. It is also important to think about whether these active ingredients have interactions that the evaluation might fail to capture. This requires talking to the people delivering the interventions, who are likely to know where the interactions occur.

Quantitative economic methods can be used to compare two or more interventions in relation to their costs and consequences. Commonly used methods include cost-effectiveness, cost-utility and cost-benefit analysis. There are methods to compare and quantify the value of different outcomes, but they involve trade-offs.
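
In a cost-utility analysis, the headline comparison is commonly expressed as an incremental cost-effectiveness ratio (ICER): the additional cost per additional quality-adjusted life year (QALY) gained. The short worked example below uses invented figures purely to show the calculation and a comparison against an assumed willingness-to-pay threshold.

```python
# Worked ICER example with invented figures (not taken from the report).
cost_new, cost_usual = 12_000.0, 9_000.0   # mean cost per patient (GBP)
qaly_new, qaly_usual = 6.2, 5.9            # mean QALYs per patient

# Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
icer = (cost_new - cost_usual) / (qaly_new - qaly_usual)
print(f"ICER: £{icer:,.0f} per QALY gained")   # £10,000 per QALY in this example

# Compared against a willingness-to-pay threshold (NICE commonly cites a range
# of roughly £20,000-£30,000 per QALY); at or below the threshold is cost-effective.
threshold = 20_000
print("Cost-effective at this threshold" if icer <= threshold else "Not cost-effective")
```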

When evaluating Learning Health Systems that change over time, it should be noted that the most appropriate outcome measures may also evolve, as may the available data. It could therefore be appropriate to alter or update the evaluation methodology.

An evaluation can also help explain how and why an intervention succeeded or failed. A logic model can be a useful tool for planning the evaluation strategy, representing a hypothesis or “theory of change” on how an intervention works. This can help prioritise and structure data collection and analysis to explore the main aspects of an intervention, as well as the relationships between these aspects. It is important to include a wide range of factors. The NASSS Framework outlined above can help develop a comprehensive analysis of factors across all seven of its domains.

Given the challenges we have outlined, an organisation may well need external support from a university, consultancy, government or other organisation to complete an evaluation. Common reasons to bring in an external organisation include:

  • To provide extra capacity
  • To provide expertise that the organisation does not possess in-house

  • To provide an independent perspective and reduce conflicts of interest