By Dr Tom Foley, Dr Fergus Fairmichael.
Background
Lynn Etheredge heads the Rapid Learning Project, a Washington DC area policy research center. His career started at the White House Office of Management and Budget (OMB), where he was OMB's principal analyst for Medicare and Medicaid and led its staff work on national health insurance proposals. Lynn headed OMB's professional health staff in the Carter and Reagan administrations. His contributions have ranged broadly across Medicare, Medicaid, health insurance coverage, retirement and pension policies, budget policy, and information technology. Lynn Etheredge proposed the concept of the “rapid-learning health system” in a special issue of Health Affairs in 2007, and is collaborating widely in developing this approach. Rapid learning initiatives are now generating comparative effectiveness research, a national system of learning networks and research registries, national biobanks with linked electronic health record and genomic data, a new Medicare and Medicaid Innovation Center (with $10 billion of funds), and rapid-learning systems for cancer care and pediatrics. He serves on the editorial board of Health Affairs and is the author of more than 85 publications. He is a graduate of Swarthmore College.
Interview Synopsis
Beginnings for the Learning Health System
The Robert Wood Johnson Foundation gave a small grant in 2005 to our project (which was located at George Washington University) to explore how to learn from electronic medical records. This led to a special edition of Health Affairs (2007) that launched the discussion about the learning health system as something the U.S. should invest in. This concept was picked up by the Institute of Medicine, which helped to develop it further. Initially progress was slow, as we needed to explain the potential for rapid learning from “big data”. Today, a learning health system has become a major national objective with key support in the public and private sectors.
Legislation
Government policy and legislation helped develop this field with billions of dollars of investments. A lot of the focus by government organisations has so far been on their particular piece more than on how the national health system will develop overall. However, we have seen important progress. For example, few health policy experts thought the National Institutes of Health (NIH) would be interested in supporting large new research databases with patient data and assumed it would continue to focus on clinical trials. But NIH is now leading President Obama’s recently announced “precision medicine” initiative for the learning health system and is championing the vision of bio-medicine as a “digital science”.
It will take a lot of national leadership to put together a national computerized database system, with tens of millions of patients, and learn from it. For this to happen, issues around data definitions, how data is collected, how it is standardised, and how it is used will have to be resolved.
Legislation and incentives have been very useful to encourage development of a learning health system, but some government offices lack drive. For example, the meaningful use standards for interoperability from the Office of the National Coordinator for Health IT (ONC) are inadequate and may need to be bypassed. In contrast, the National Cancer Institute has led international agreements on genomics data-sharing standards and common terminology and shown that rapid progress is possible.
Currently, organisations often treat their government-funded research as their own proprietary data. For rapid system-wide learning, this will need to change. To accelerate collaboration, the National Center for Advancing Translational Sciences (NCATS) aims to explore new technologies and approaches for research and its uses. Another example is the Global Alliance for Genomics and Health (GA4GH) initiative (http://genomicsandhealth.org/), which involves researchers from 41 countries who have come together to agree on a framework for data sharing and an API for genomics. The NIH’s “Big Data to Knowledge” (BD2K) initiatives, with investments in a national system of 11 high-performance computing centers, are starting to generate momentum.
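To make the genomics data-sharing idea concrete, here is a minimal sketch of the kind of allele-presence (“Beacon”-style) query the GA4GH community has worked to standardise. It is illustrative only: the endpoint URL is a placeholder, and the exact parameter and response fields should be checked against the current GA4GH specifications rather than taken from this example.

    # Illustrative sketch of a Beacon-style "is this variant present?" query,
    # the kind of simple data-sharing API GA4GH has promoted.
    # The host below is a hypothetical placeholder, not a real service.
    import json
    import urllib.parse
    import urllib.request

    BEACON_URL = "https://beacon.example.org/query"  # placeholder endpoint

    def allele_present(chrom, position, ref, alt, assembly="GRCh38"):
        """Ask a beacon whether any dataset it fronts contains the variant."""
        params = urllib.parse.urlencode({
            "referenceName": chrom,
            "start": position,
            "referenceBases": ref,
            "alternateBases": alt,
            "assemblyId": assembly,
        })
        with urllib.request.urlopen(BEACON_URL + "?" + params) as resp:
            return json.load(resp).get("exists", False)

    # Example: does any participating dataset hold a C>T variant at this locus?
    # print(allele_present("17", 41244936, "C", "T"))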
Another big advance in recent years is the development of government-supported research networks. The FDA Sentinel initiative has access to around 50 million current patient records and around 300 million patient-years of data in historical records, which it uses to track drug safety. The Patient-Centered Outcomes Research Institute (PCORI) has developed PCORnet, which aims to create a nationwide system of several dozen collaborating networks for conducting clinical outcomes research.
The development of FDA, PCORI, and NIH networks has led to agreements on common data models, data elements and protocols. This means a lot of data from different locations can now be accessed by a single computer program. Studies that used to take years can now take weeks instead. It is crazy that, with all of this computing and electronic medical record technology available, hand extraction from paper records is still used for research!
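As an illustration of why a common data model matters, the sketch below runs one query text against several simulated “sites” that share the same table layout and pools only aggregate counts. The schema, field names and data are invented for the example and do not reflect any particular network’s model.

    # Illustrative sketch: a shared data model lets a single program run the
    # same query at many sites. The schema and data here are invented.
    import sqlite3

    COMMON_SCHEMA = """
    CREATE TABLE patient_record (
        patient_id TEXT,
        diagnosis  TEXT,
        treatment  TEXT,
        outcome    TEXT
    );
    """

    def make_site(records):
        """Create an in-memory database standing in for one network site."""
        db = sqlite3.connect(":memory:")
        db.execute(COMMON_SCHEMA)
        db.executemany("INSERT INTO patient_record VALUES (?, ?, ?, ?)", records)
        return db

    # Because every site shares the model, one query text works everywhere.
    QUERY = """
    SELECT treatment, outcome, COUNT(*) FROM patient_record
    WHERE diagnosis = ? GROUP BY treatment, outcome
    """

    def federated_counts(sites, diagnosis):
        """Run the same query at each site and pool only aggregate counts."""
        totals = {}
        for db in sites:
            for treatment, outcome, n in db.execute(QUERY, (diagnosis,)):
                totals[(treatment, outcome)] = totals.get((treatment, outcome), 0) + n
        return totals

    site_a = make_site([("a1", "breast cancer", "drug_x", "remission"),
                        ("a2", "breast cancer", "drug_y", "progression")])
    site_b = make_site([("b1", "breast cancer", "drug_x", "remission")])
    print(federated_counts([site_a, site_b], "breast cancer"))
    # e.g. {('drug_x', 'remission'): 2, ('drug_y', 'progression'): 1}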
The use of electronic health data standards and protocols will enable new predictive models and treatment guidance. For example, if genetic and clinical data are collected for all cancer patients, there is less need to worry about randomisation in studies. Research can move towards an “N of 1” trial idea, where each individual patient is a trial, and researchers can begin to see how many different factors interact. That would be a real game changer for medical research. Big data and computers are beginning to change the whole system of research, with a different trial system and different registry and data systems. These initiatives are beginning to gain momentum, led by a national learning system for cancer.
In our system, politicians normally stay out of science research. The good news is that now the political leaders seem to be agreeing, on a bipartisan basis, with biomedical science on a precision medicine and learning health system approach. This could lead to greater acceleration in the field. We could see elements of the learning health system come together a lot quicker.
Predictive Modelling
The growing evidence base may be too much for the clinician to effectively leverage without new tools. Therefore, how knowledge is fed back into the system is very important. As increasing amounts of data become available, including huge amounts of genetic data, it becomes clear that computerised decision support will be key. These types of tools have not been well developed so far. There have been some successful examples, such as the Archimedes model originally developed by Kaiser Permanente. This area could be key for diseases such as cancer, where genetic information is very important. The American Society for Clinical Oncology (ASCO) picked up on the idea of a national learning cancer system. The field of paediatric oncology has already had huge success, having developed a system that collects data on close to 100% of paediatric oncology cases. There was a recognition early on that they were dealing with genetic differences that required personalised treatments. It is imperative for community oncologists to have access to new predictive models that use massive amounts of data, to be able to contribute their records into a system, and to get back state-of-the-art assistance. The FDA is also working on predictive models for its own use. It faces a whole new range of challenges about how to review and approve targeted medications in a national, genetically-informed system for personalized medicine.
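As a rough illustration of what such computerised decision support could look like, the sketch below trains a simple model on pooled clinical and genomic features and returns, for a new patient, an estimated probability of response under each candidate treatment. It assumes scikit-learn is available; the features, treatments and figures are invented for illustration and this is not a clinical model.

    # Minimal sketch of decision support from pooled records: train on past
    # cases, then score treatment options for a new patient. Invented data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: [age (scaled), tumour stage, has_mutation_x, treatment_is_drug_a]
    X_train = np.array([
        [0.6, 2, 1, 1],
        [0.7, 3, 1, 1],
        [0.5, 2, 0, 1],
        [0.6, 2, 1, 0],
        [0.8, 3, 0, 0],
        [0.5, 1, 0, 0],
    ])
    y_train = np.array([1, 1, 0, 0, 0, 1])  # 1 = patient responded

    model = LogisticRegression().fit(X_train, y_train)

    def advise(patient_features):
        """Score each candidate treatment for one patient and rank them."""
        options = {"drug_a": 1, "drug_b": 0}  # encoded in the last feature column
        scores = {}
        for name, flag in options.items():
            row = np.array([patient_features + [flag]])
            scores[name] = model.predict_proba(row)[0, 1]  # P(response)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # New patient: age 0.65 (scaled), stage 2, carries mutation x
    print(advise([0.65, 2, 1]))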
Currently a huge problem for our highly pluralistic (and chaotic) national health system is that nobody is on top of the overall design for a learning health system, and nobody is sure how all the pieces will fit together in the future.
ASCO has led the push for a learning cancer system, with the aim of rolling out decision support tools for breast cancer this year. There are existing models such as Adjuvant! Online (https://www.adjuvantonline.com/), which provides decision support tools for treatment options for those with breast cancer. To have this sort of resource used as a standard tool across multiple cancers and other conditions would be transformational. This will gain more acceptance as people begin to understand genetically-informed medicine and see its utility in practice. The key to developing predictive models for personalized care is to show that it can be done, with effective use cases.
There is also recognition that traditional Randomised Controlled Trial (RCT) methods need complementary ways to do research efficiently with large registries and data networks. In many instances, large, pre-designed, pre-populated “patients like me” cohorts, which compare treatments and outcomes for similar patients, may prove to be the best option to inform better decisions. If something has already occurred 10,000 times, then the best bet may be to use this base rate for predictions. Statisticians are also beginning to get on board with new ways to separate “signals” from “noise” in clinical data. This work needs to move in tandem with an improvement in the quality of the data, so there is less noise produced by poor diagnostic codes and the like.
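A minimal sketch of the “patients like me” base-rate idea follows, assuming invented fields and a deliberately crude matching rule; a real implementation would need careful cohort definition, larger numbers and risk adjustment.

    # Minimal sketch, not a validated method: estimate an outcome "base rate"
    # from previously observed patients who resemble a new patient.
    # Field names and data are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Patient:
        age_band: str      # e.g. "60-69"
        stage: str         # e.g. "II"
        treatment: str     # e.g. "drug_x"
        responded: bool    # observed outcome

    def similar(p, q):
        """Crude matching rule: same age band and disease stage."""
        return p.age_band == q.age_band and p.stage == q.stage

    def base_rate(history, new_patient, treatment):
        """Share of matched historical patients on `treatment` who responded."""
        cohort = [p for p in history
                  if similar(p, new_patient) and p.treatment == treatment]
        if not cohort:
            return None  # no comparable patients; fall back to other evidence
        return sum(p.responded for p in cohort) / len(cohort)

    history = [
        Patient("60-69", "II", "drug_x", True),
        Patient("60-69", "II", "drug_x", True),
        Patient("60-69", "II", "drug_x", False),
        Patient("60-69", "II", "drug_y", False),
    ]
    me = Patient("60-69", "II", treatment="", responded=False)
    print(base_rate(history, me, "drug_x"))  # 2/3 ≈ 0.67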
Regulation
Patients want better, personalized medical care. But even with better predictive models, there is still a role for doctors. We saw with 23andMe (www.23andme.com), a commercial personalised DNA service, that warnings were issued about the interpretation of results and advice when it appeared that computers and software algorithms might be practicing medicine. This area of decision support tools may need to be regulated, and it is currently unclear what should fall under the category of medical devices that the FDA would need to regulate. Certainly there will be investment from venture capitalists in advancing better information and advice for cancer care. It will be interesting to see how this all fits together.
Consent
Talking about the need for a new bio-medical and clinical research system: if the public and private sectors are to have tens of millions of patients contributing data, there will need to be some national guarantees on appropriate use and privacy, perhaps common consent forms, standards for who is a trusted agency to handle data, and a role for the legal system. Today’s Health Insurance Portability and Accountability Act (HIPAA) has separate regimes for research and care and inconsistent rules that don’t make much sense as clinical and research enterprises make greater use of the same patient data.
Economic evaluation
Healthcare is a huge industry in the US. Saving money is not necessarily what is driving innovation; instead, it may often be the potential for profit and for doing good. The healthcare system and pharmaceutical industry will have to figure out how to make money in a rapidly changing world of big data, genetics, Fitbits, Apple Watches, smart phones, computers and precision medicine. These are exciting days for the rapid advance of science and a health system that can deliver better, personalized medicine for many millions of patients. But many parts of the health system may need to change.