
Comparative Effectiveness Research

Large Randomised Controlled Trials (RCTs) and meta-analyses of RCTs are currently considered the gold standard form of evidence underpinning Evidence Based Practice. The reasons for this are well rehearsed, primarily that they can determine causal relationships while reducing confounding and bias (Sibbald and Roland 1998).

The limitations of RCTs are also well understood. They are expensive and time-consuming, and it can be unethical to expose patients to treatments believed to be ineffective or even harmful (Sibbald and Roland 1998). In addition, RCTs are not well adapted to the complexity of health conditions and the heterogeneity of patient characteristics. They generally have strict inclusion and exclusion criteria, meaning that their results may not apply to the diverse patients, suffering from multiple co-morbidities, who are most often encountered in the real world (Wallace 2015, Black 1996, Rothwell 2005, Johnston, Rootenberg et al. 2006, Sanson-Fisher, Bonevski et al. 2007).

It would be infeasible to conduct RCTs to address all of the potential treatments for all variations of all conditions in all of the different patient groups, with their genetic and environmental predispositions and their multiple comorbidities. The result is that there remain huge gaps in the evidence base. Often, clinicians do not have the evidence to recommend one treatment or another. Ultimately, decisions are made on the basis of limited personal experience. Effectively, these decisions represent millions of “n of 1” experiments, taking place globally, every day (Wallace 2015).

Currently, learning from these experiments is limited to the clinicians involved. The advent of the EHR offers the potential for this practice-based evidence to be recorded and used in a more systematic way.

A Green Button

It has been proposed that a function could be developed, within the EHR, to allow clinicians to leverage aggregate patient data for decision making at the point of care (Longhurst, Harrington et al. 2014).

In this system, if the clinician did not have evidence on the relative effectiveness of different treatments for a particular patient and had no guideline to follow, they would click a “Green ‘patients like mine’ Button”. The EHR would identify all patients with similar characteristics (genetic, comorbidities, age, etc.), who have previously had that particular condition. It would identify the treatments that they have received and the outcomes achieved. It would then suggest the optimal treatment for that particular patient, taking account of their preferences.  This approach has already been implemented on a small scale, by manually extracting and aggregating EHR data (Frankovich, Longhurst et al. 2011).
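As a rough illustration of the kind of query a "Green Button" might run behind the scenes, the sketch below matches an index patient to similar past patients and summarises their outcomes by treatment. It assumes a flat, de-identified extract of EHR records; the field names, matching rules and outcome flag are invented for illustration and are not drawn from the cited proposal.

```python
import pandas as pd

def patients_like_mine(records: pd.DataFrame, index_patient: dict,
                       condition: str, max_age_gap: int = 5) -> pd.DataFrame:
    """Summarise outcomes per treatment for past patients who resemble the
    index patient. The matching criteria (same condition, age band, shared
    comorbidities) are illustrative placeholders."""
    similar = records[
        (records["condition"] == condition)
        & ((records["age"] - index_patient["age"]).abs() <= max_age_gap)
        & records["comorbidities"].apply(
            lambda c: set(index_patient["comorbidities"]).issubset(set(c)))
    ]
    # Aggregate observed outcomes by treatment; 'good_outcome' is assumed to be
    # a pre-recorded 0/1 outcome indicator in the extract.
    return (similar.groupby("treatment")["good_outcome"]
                   .agg(patients="count", success_rate="mean")
                   .sort_values("success_rate", ascending=False))
```

In practice the similarity measure, the outcome definitions and the handling of patient preferences would all need far more care than this sketch suggests.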

The IBM data-driven analytics team have supported the EuResist project (www.euresist.org), which examines data relating to patients in Europe who have HIV and are treated with anti-retrovirals. Data are gathered into a database containing information on patient phenotype, genotype of the viral strain, treatment given and outcomes achieved. This information can be used, in a semi-automated way, to assess which treatment might work best for a new patient. In the case of EuResist, clinicians were around 66% effective in choosing the best treatment on the first attempt. By comparison, the system was reported to be around 78% effective (Foley and Fairmichael 2015).
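The EuResist prediction engines themselves are not described in this report; purely as an illustration of the general idea, the sketch below trains a model on historical treatment episodes and then scores each candidate regimen for a new patient, recommending the one with the highest predicted probability of response. All feature and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def train_response_model(episodes: pd.DataFrame):
    """episodes: one row per historical treatment episode, with hypothetical
    viral and patient features, the regimen given, and a 0/1 success flag."""
    X = pd.get_dummies(episodes[["viral_genotype", "baseline_viral_load",
                                 "cd4_count", "regimen"]])
    model = GradientBoostingClassifier().fit(X, episodes["virological_success"])
    return model, X.columns

def recommend_regimen(model, columns, patient: dict, candidate_regimens: list):
    """Score every candidate regimen for this patient and return the best."""
    rows = [dict(patient, regimen=r) for r in candidate_regimens]
    X = pd.get_dummies(pd.DataFrame(rows)).reindex(columns=columns, fill_value=0)
    scores = model.predict_proba(X)[:, 1]
    return max(zip(candidate_regimens, scores), key=lambda pair: pair[1])
```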

If these approaches could be automated and expanded to a broad range of conditions, they would be a key use case of the Learning Healthcare System (Friedman 2015). This capability is cited at the outer range of the 10-Year Vision to Achieve an Interoperable Health IT Infrastructure, published by the US Office of the National Coordinator for Health Information Technology (ONC 2014). Many observers foresee major obstacles to such a system:

  • No other patient is truly like me (Friedman 2015)
  • More robust outcomes measurement would be required (Bates 2015)
  • The answers would not be clear-cut. Clinicians would have to interpret the results carefully to avoid getting the wrong answer (Brown 2015)
  • There are ethical issues related to this use of EHR data (Longhurst, Harrington et al. 2014)

Observational Research

Observational research, including cohort, cross-sectional and case-control studies, has long been used in situations where RCTs are too expensive or unethical, or when sufficient participants cannot be recruited (Mann 2003). These studies are observational because participants are not randomised or pre-assigned to an exposure; the choice of treatments is up to patients and their physicians (Berger, Dreyer et al. 2012). Increasingly, such studies are being viewed as complementary to, rather than inferior to, RCTs. Under the correct conditions, they can even provide evidence of causal relationships (Greenfield and Platt 2012).

It is now possible to conduct observational research, using routinely collected patient data, that would not previously have been possible (Platt 2015). Currently in the US, it is possible to examine certain outcomes with high confidence, as they are captured uniformly across multiple systems. Acute myocardial infarction and hip fracture requiring surgical repair are examples. There is good evidence that these events are captured and that the data are highly sensitive and specific. It is then possible to associate these outcomes with various types of exposures or treatments (Platt 2015).
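A minimal sketch of the kind of association such data permit is given below, assuming a de-identified extract with uniformly coded 0/1 exposure and outcome indicators. The crude risk ratio shown ignores confounding, which real analyses must address.

```python
import pandas as pd

def crude_risk_ratio(df: pd.DataFrame, exposure: str, outcome: str) -> float:
    """Risk of the outcome (e.g. acute myocardial infarction) among exposed
    versus unexposed patients; both columns are assumed to be 0/1 indicators."""
    risk_exposed = df.loc[df[exposure] == 1, outcome].mean()
    risk_unexposed = df.loc[df[exposure] == 0, outcome].mean()
    return risk_exposed / risk_unexposed
```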

In the UK, this type of research has been undertaken using secondary care Hospital Episode Statistics (HES) and Secondary Uses Service (SUS) data, which are sometimes augmented by additional coding at the provider (Morrow 2015). In primary care, large databases such as QResearch provide similar functionality.

According to Dr Wallace at Optum Labs, which is already conducting research on a database containing 150 million patient records, research has reached an inflection point. He notes that in 20th century medicine, a great deal of the cost of clinical trials was associated with data collection. Observational studies are so much cheaper that hundreds can be conducted for the price of one RCT (Wallace 2015). Dr Wallace places this in historical terms:

“Research is changing from a hunter/gatherer mode, where huge amounts of effort is invested to associate data with rare events, to a harvest mode in which huge amounts of data are used more efficiently to give insight.” (Wallace 2015)

The content and quality of the underlying data is currently a limiting factor in the usefulness of comparative effectiveness research using routine data. Rigorous recording of outcomes could allow a step change in this kind of research (Dunbar-Rees 2015). For example, the ICHOM Low Back Pain Standard Set (See Outcome Measurement) would provide an effective set of outcome and case-mix indicators to study the comparative effectiveness of instrumented versus non-instrumented fusion for spondylolisthesis. This would also require the recording of important contextual information. In this example, it would be necessary to clarify what is meant by instrumented and non-instrumented (Stowell 2015).
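To make the point concrete, a record for such a comparison might need to combine case-mix, contextual and outcome fields along the lines of the hypothetical, heavily simplified structure below. This is not the ICHOM specification itself.

```python
from dataclasses import dataclass

@dataclass
class FusionEpisode:
    # Case-mix and contextual fields needed to interpret the comparison
    diagnosis: str                  # e.g. "spondylolisthesis"
    instrumented: bool              # the definition must be agreed in advance
    instrumentation_detail: str     # e.g. "pedicle screws"; empty if none
    age: int
    smoker: bool
    # Outcome fields in the spirit of the ICHOM Low Back Pain Standard Set
    oswestry_disability_index: float
    pain_score: float
    reoperation_within_1_year: bool
```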

There are also significant methodological concerns. With observational CER, it is possible to control for some confounding factors but not for others. Often a hybrid approach is required, in which sophisticated automated analysis of thousands or millions of electronic records is paired with a manual review of several hundred to confirm accuracy. This technique was used successfully in a study looking at the link between rotavirus vaccine and intussusception, and it could also be extended to patient reported outcomes (Platt 2015).
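Very roughly, the hybrid approach can be sketched as two steps: automated adjustment for the confounders that are actually recorded (here via a propensity score), plus a random sample of outcome-positive records set aside for manual chart review. The function and column names below are illustrative and are not taken from the rotavirus study.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_scores(df: pd.DataFrame, exposure: str, confounders: list) -> pd.Series:
    """Probability of exposure given the confounders that are actually recorded;
    anything not captured in the data cannot be adjusted for this way."""
    model = LogisticRegression(max_iter=1000).fit(df[confounders], df[exposure])
    return pd.Series(model.predict_proba(df[confounders])[:, 1], index=df.index)

def sample_for_chart_review(df: pd.DataFrame, outcome: str, n: int = 200) -> pd.DataFrame:
    """Random sample of outcome-positive records for manual confirmation of coding."""
    positives = df[df[outcome] == 1]
    return positives.sample(n=min(n, len(positives)), random_state=0)
```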

Pragmatic Randomised Controlled Trials

In Pragmatic Randomised Controlled Trials, the design mimics routine clinical practice (Torgerson). This means relaxing exclusion criteria, not using placebos, accepting non-concordance with treatment and delivering care as it is delivered in the real world. It offers a measure of effectiveness that is generalisable (Helms 2002).

Participants pointed out that such studies lend themselves to being conducted within the Learning Healthcare System. For example, the EHR could be configured to randomise patients.

“Suppose that you are in clinic, about to start an SSRI, but you don’t know which one to go for. Why not allow the system to randomise the patient… the patient wouldn’t need to be contacted again [by the researchers] – all of the outcomes would be collected in routine data so it massively decreases the cost of doing an RCT.” 
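A minimal sketch of what point-of-care randomisation inside an EHR could look like is shown below; the trial arms, identifiers and allocation scheme are purely illustrative, and any real implementation would sit behind eligibility checks and consent.

```python
import hashlib

TRIAL_ARMS = ["sertraline", "citalopram"]   # hypothetical SSRI comparison

def allocate_arm(patient_id: str, trial_id: str, arms=TRIAL_ARMS) -> str:
    """Deterministic, auditable 1:1 allocation: hashing the trial and patient
    identifiers means the same patient always receives the same arm."""
    digest = hashlib.sha256(f"{trial_id}:{patient_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Follow-up would rely on routinely collected outcome data, as the quote suggests.
```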

This sort of study, which brings together research and clinical practice, would raise the kinds of ethical questions around consent that have been discussed in previous sections.

Clinical Trial Recruitment

There will still be a need for traditional RCTs in certain circumstances (Wallace 2015). Recruitment of sufficient numbers of participants is a challenge for researchers, and patients often miss out on clinical trials from which they could benefit. EHR data can be used to identify patients who are suitable for particular RCTs. The IBM Watson team have demonstrated this ability in collaboration with the Mayo Clinic (IBM 2015). In the UK, the Clinical Record Interactive Search (CRIS) system, developed by South London and Maudsley NHS Foundation Trust, has been used to deliver similar functionality (Callard, Broadbent et al. 2014).
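A simplified sketch of the underlying idea, screening structured EHR fields against a hypothetical trial's eligibility criteria, is shown below. Systems such as CRIS and Watson also search free text; the criteria and column names here are invented for illustration.

```python
import pandas as pd

def screen_candidates(patients: pd.DataFrame) -> pd.DataFrame:
    """Flag patients who appear to meet simple structured eligibility criteria
    for a hypothetical depression trial; real systems add free-text search."""
    eligible = (
        patients["diagnosis_code"].isin(["F32", "F33"])   # ICD-10 depressive episodes
        & patients["age"].between(18, 65)
        & ~patients["on_anticoagulants"]                   # example exclusion criterion
    )
    return patients[eligible]
```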

Conclusion

No participants claimed that the RCT is dead, but rather that other methodologies will be required if we are to bridge the evidence gap experienced by modern medicine. Observational studies can deliver useful results quickly, at relatively low cost, and without putting patients at risk through experimental exposure. The development of EHRs and rigorous outcomes measurement offers the potential to accelerate the use of observational research. This may require the development of a new ethics framework.

Even when RCTs are still required, Learning Health Systems can help with recruitment, randomisation and data collection.

Many of these potential developments pose major training and workforce implications that will be discussed in the Implications section of this report.

Evidence

NHS Data Collections as a platform for a Learning Health System

By Dr Tom Foley, Dr Neil Lawrence. We created a guide to the work that NHS Digital does with data ...

What role for learning health systems in quality improvement within healthcare providers?

By Foley, Vale. Abstract Introduction Recent decades have seen a focus on quality in healthcare. Quality has been viewed across ...

Cambridge University Hospitals NHS Foundation Trust (CUH)

By Tom Foley. Dr Afzal Chaudhry, Consultant Nephrologist, Chief Clinical Information Officer and Associate Lecturer, Cambridge University Hospitals In 2014, ...

Children and Young People’s Health Partnership (CYPHP)

By Tom Foley. Dr Ingrid Wolfe, Consultant in children's public health medicine and Programme Director of Children and Young People’s ...

School of Health and Related Research (ScHARR), University of Sheffield

By Tom Foley. Dr Clare Relton, Senior Research Fellow, University of Sheffield How do you define a Learning Health System? ...

Dr Gerry Morrow Interview

By Dr Tom Foley. Background Dr Gerry Morrow is Medical Director of Clarity Informatics, who aim to improve patient care ...

Dr Rupert Dunbar-Rees Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background Dr Rupert Dunbar-Rees is a GP by background, and Founder of Outcomes ...

Dr Caleb Stowell Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background Caleb Stowell is Vice President, Research and Development, at the International Consortium ...

IBM Watson Site Visit

By Dr Tom Foley, Dr Fergus Fairmichael. Background Dr Eric Brown Dr Eric Brown is Director of Watson Technologies at ...

Dr Jeff Brown Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background Dr Brown is an Associate Professor in the Department of Population Medicine ...

Professor Richard Platt Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background Professor Platt is Chair of the Harvard Medical School Department of Population ...

Professor Charles Friedman Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background Professor Charles Friedman is Chair of the Department of Learning Health Sciences ...

Dr David W Bates Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background David W. Bates, MD, MSc, is Senior Vice President and Chief Innovation ...

Dr Paul Wallace Interview

By Dr Tom Foley, Dr Fergus Fairmichael. Background Paul Wallace, MD, is Chief Medical Officer and Senior Vice President for ...

A “Green Button” For Using Aggregate Patient Data At The Point Of Care

By Christopher A. Longhurst, Robert A. Harrington and Nigam H. Shah. Abstract Randomized controlled trials have traditionally been the gold ...

Informed consent, comparative effectiveness, and learning health care

By Faden, Ruth R., Tom L. Beauchamp, and Nancy E. Kass. Abstract   Interest in learning health care systems and ...

Electronic health records based phenotyping in next-generation clinical trials: a perspective from the NIH Health Care Systems Collaboratory

By Richesson RL, Hammond WE, Nahm M, et al. Abstract   Widespread sharing of data from electronic health records and ...

Ethics and Informed Consent for Comparative Effectiveness Research With Prospective Electronic Clinical Data

By Faden, Ruth, et al. Abstract Background:Electronic clinical data (ECD) will increasingly serve as an important source of information for ...

Prior event rate ratio adjustment: numerical studies of a statistical method to address unrecognized confounding in observational studies

By Yu, M., Xie, D., Wang, X., Weiner, M. G. and Tannen, R. L. ABSTRACTPurposeThe purpose of this study was ...

Can Observational Studies Approximate RCTs?

By Greenfield, S. and R. Platt Abstract“It is the position of this Task Force that rigorous well designed and well ...

Prospective Observational Studies to Assess Comparative Effectiveness: The ISPOR Good Research Practices Task Force Report.

By Berger, M. L., et al. AbstractOBJECTIVE: In both the United States and Europe there has been an increased interest ...

Evidence-Based Medicine in the EMR Era

By Jennifer Frankovich, M.D., Christopher A. Longhurst, M.D., and Scott M. Sutherland, M.D. Abstract   Pediatricians facing critical clinical decisions often lack ...

Use of primary care electronic medical record database in drug efficacy research on cardiovascular outcomes: comparison of database and RCT findings

By Tannen Richard L, Weiner Mark G, Xie Dawei. Abstract Objectives To determine whether observational studies that use an electronic ...

Limitations of the randomized controlled trial in evaluating population-based health interventions.

By Sanson-Fisher, R. W., et al. AbstractPopulation- and systems-based interventions need evaluation, but the randomized controlled trial (RCT) research design ...

Effect of a US National Institutes of Health programme of clinical trials on public health and costs.

By Johnston, S. C., et al. AbstractBACKGROUND: Few attempts have been made to estimate the public return on investment in ...

External validity of randomised controlled trials: “to whom do the results of this trial apply?”.

By Rothwell, P. M. AbstractIn making treatment decisions, doctors and patients must take into account relevant randomised controlled trials (RCTs) ...

Observational research methods. Research design II: cohort, cross sectional, and case-control studies.

By Mann, C. J. AbstractCohort, cross sectional, and case-control studies are collectively referred to as observational studies. Often these studies ...

‘Real World’ pragmatic clinical trials: What are they and what do they tell us?

By Helms, P. J. AbstractAlthough the explanatory clinical therapeutic trial remains the foundation for assessing drug efficacy and is required ...

Understanding controlled trials: Why are randomised controlled trials important?

By Sibbald, B. and M. Roland. Abstract Randomised controlled trials are the most rigorous way of determining whether a cause-effect ...

Why we need observational studies to evaluate the effectiveness of health care.

By Black, N. AbstractThe view is widely held that experimental methods (randomised controlled trials) are the "gold standard" for evaluation ...