
IBM Watson Site Visit

By Dr Tom Foley, Dr Fergus Fairmichael.

Background

Dr Eric Brown

Dr Eric Brown is Director of Watson Technologies at the IBM T.J. Watson Research Center, NY. Eric is currently working on the DeepQA project, advancing the state of the art in automatic, open-domain question answering technology. The DeepQA team applied this technology to build Watson, the computer system that famously beat the best human players of Jeopardy! Eric has worked in the broader area of information retrieval since 1992 and has explored a variety of issues, including scalability, parallel and distributed information retrieval, automatic text categorization, question answering, text analysis in the biomedical domain, and applications of speech recognition in knowledge management.

Dr Joseph Jasinski

Dr. Joseph Jasinski is an IBM Distinguished Engineer and the global industry executive for Smarter Healthcare and Life Sciences at the Thomas J. Watson Research Center, NY. In this role, he is responsible for developing strategies and coordinating research efforts across IBM’s Research Division in areas ranging from the use of information technology in payer/provider healthcare to computational studies in molecular biology.

Dr Shahram Ebadollahi

Dr. Shahram Ebadollahi is the Program Director of Health Informatics at IBM Research. In this capacity, he is responsible for defining and driving the research agenda in the broad area of Healthcare Informatics for IBM Research. He and his colleagues have pioneered technologies in the area of data-driven healthcare, which is the application of data mining, machine learning, and advanced visual analytics to large patient population data for deriving insights and evidence for decision support in healthcare.

Interview Synopsis

From a research perspective, IBM is focused on analytics: using “big data” to improve outcomes, reduce costs and increase efficiency. The analytics usually take place on top of the electronic record, while many EMRs also run on IBM hardware.

IBM’s work in health usually falls into two categories that are not mutually exclusive:
• Data driven analytics
• Knowledge driven analytics

Data driven

Within medicine, there is a huge number of transactional interactions that generate structured and unstructured data. Pattern recognition can be used to infer what might happen in the future, for example which treatment might be effective given a particular set of circumstances. This data-driven predictive modelling does not aim to establish causality in the way a traditional randomised controlled trial (RCT) does.

An example of this is the EuResist project (www.euresist.org), which examines data relating to patients with HIV who are treated with anti-retrovirals in Europe. Data is gathered into a database containing information on the phenotype, the genotype of the viral strain, the treatment given and the outcomes achieved. This information can be used to produce a vector representing each patient, and machine learning techniques can then be used to cluster these vectors. Each vector may contain unknown variables; however, if sufficient numbers are involved, clustering can still be performed. Understanding how to use sparse data like this is part of the mathematical challenge. Clustering is performed using either semi-automatic or automatic methods to bring together patients who have similar attributes, and this can be used to assess which treatment might work best for a new patient. In the case of EuResist, clinicians chose an effective treatment on the first attempt around 66% of the time; by comparison, the system was around 78% effective.
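To make the vectors-and-clustering idea concrete, here is a minimal sketch in Python using scikit-learn. The feature names, imputation strategy and cluster count are illustrative assumptions, not the actual EuResist methodology.

```python
# Minimal sketch: represent each patient as a vector, tolerate missing
# values, then cluster patients with similar attributes.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# One row per patient: [viral_load_log, cd4_count, resistance_score, prior_regimens].
# np.nan marks the "unknown variables" mentioned above.
patients = np.array([
    [5.1, 210.0, 0.8, 2],
    [4.2, np.nan, 0.3, 1],   # CD4 count missing for this patient
    [np.nan, 540.0, 0.1, 0],
    [5.6, 150.0, 0.9, 3],
])

# Fill in missing entries so clustering can still proceed on sparse data.
X = SimpleImputer(strategy="mean").fit_transform(patients)
X = StandardScaler().fit_transform(X)

# Group similar patients; the outcomes observed within a cluster can then
# inform treatment choice for a new patient assigned to that cluster.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)
```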

This technique has been extended to other use cases, including predictive modelling of other diseases. The advantage is that if those at high risk of developing complications can be identified, they can be managed more aggressively to aid prevention. This fits very well with the philosophy of proactively managing healthcare, which is particularly attractive to health systems that combine both payer and provider.
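The risk-stratification step could be sketched as follows: train a classifier on historical records and flag new patients whose predicted complication risk is high. The features, training data and threshold below are invented for illustration.

```python
# Hedged sketch of risk prediction for proactive management.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical patients: [age, HbA1c, systolic BP]; label 1 = complication occurred.
X = np.array([[54, 8.2, 150], [61, 9.1, 160], [45, 6.0, 125], [38, 5.6, 118]])
y = np.array([1, 1, 0, 0])

clf = LogisticRegression().fit(X, y)

# Flag a new patient for aggressive management if the predicted risk is high.
risk = clf.predict_proba([[58, 8.9, 155]])[0, 1]
if risk > 0.7:  # the threshold is an assumption, set per programme
    print(f"High risk ({risk:.0%}): consider proactive management")
```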

Knowledge driven

The original purpose of knowledge driven technologies was to leverage unstructured information using Natural Language Processing (NLP) techniques. This has several obvious applications within medicine:
1. The growing body of medical literature:
• It is not possible for clinicians to keep up to date with all publications, so a way to more effectively leverage the information contained in this literature would be very useful. The next step is to move beyond simple search capabilities and perform deeper analytics on the text, understanding the entities and relationships it describes in order to extract further insight.
2. Unstructured information in the patient history:
• A lot of what is captured in electronic notes is unstructured, and it can contain significant information such as the justification for a particular treatment choice or the connections between symptoms, diagnoses and lab tests. Large longitudinal clinical records can run to over 1,000 pages, so enabling the clinician to quickly leverage this information through accurate and relevant summaries would be hugely beneficial (a minimal extraction sketch follows this list).
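As a minimal illustration of turning free-text notes into structured facts (not Watson’s actual pipeline; the note and the patterns are invented):

```python
# Sketch: extract a medication/dose pair and a test-diagnosis relationship
# from an unstructured note. Real clinical NLP uses far richer models, but
# the principle of text -> structured fields is the same.
import re

note = ("Patient reports chest pain on exertion. Started metoprolol 25 mg "
        "twice daily given history of hypertension. Troponin ordered to "
        "rule out MI.")

med = re.search(r"(?P<drug>[a-z]+)\s+(?P<dose>\d+\s*mg)", note)
workup = re.search(r"(?P<test>\w+) ordered to rule out (?P<dx>\w+)", note)

if med:
    print("medication:", med.group("drug"), "| dose:", med.group("dose"))
if workup:
    print("test:", workup.group("test"), "| suspected:", workup.group("dx"))
```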

Knowledge driven analytics are often combined with data driven analytics, under the Watson brand, in order to solve particular problems.

Collaborations

The collaboration with Memorial Sloan Kettering Cancer Center (MSKCC) is primarily knowledge driven. In this case there is a deep analysis of patient records to produce appropriate treatment recommendations, leveraging the literature to justify them. The aim is to complement the clinician’s abilities by providing easy access to the relevant patient attributes. MSKCC is training the system to summarise patient information by pulling out the important details and then applying its internal best practices and guidelines to give treatment recommendations. The system then provides the relevant guidelines or published authoritative sources to support a particular recommendation.

Another project, with the MD Anderson Cancer Center in Texas, focuses on leukaemia: analysing the patient history and monitoring key attributes to produce summaries of a patient record. Another element of this project is connecting suitable patients with available clinical trials. This is important in cancer care, where it has often been difficult to find suitable matches for trials. Here the technology is used to analyse the description of the clinical trial, which may contain many Boolean-style rules, and to align these with the attributes of the patient. As with the MSKCC example, summarising the patient case and extracting the appropriate attributes is key, and this also allows the clinician to more quickly understand what is going on with the patient.
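The trial-matching step might be sketched as below, assuming the patient attributes and the Boolean-style eligibility rules have already been extracted from free text into structured form; the encoding of criteria as predicates is an illustrative assumption.

```python
# Sketch: align structured patient attributes with Boolean trial criteria.
patient = {"age": 62, "diagnosis": "AML", "ecog": 1, "prior_lines": 1}

trials = {
    "TRIAL-A": lambda p: p["diagnosis"] == "AML" and 18 <= p["age"] <= 70
                         and p["ecog"] <= 2,
    "TRIAL-B": lambda p: p["diagnosis"] == "CLL" and p["prior_lines"] >= 2,
}

matches = [name for name, rule in trials.items() if rule(patient)]
print(matches)  # -> ['TRIAL-A']
```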

Another project focuses on problem lists. Problem lists are usually manually maintained in EMRs and can therefore become out of date or inaccurate. Machine learning models can address this by analysing longitudinal records, identifying problems and their associated features, and scoring these to generate the problem list. This involves different layers of analytics applied to unstructured information.
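One way to picture the scoring layer: accumulate evidence for each candidate problem across the longitudinal record and keep those above a threshold. The evidence weights and threshold here are illustrative assumptions, not the project’s actual model.

```python
# Sketch: generate a problem list by scoring evidence from notes, labs
# and medications rather than relying on manual maintenance.
from collections import defaultdict

# (candidate problem, evidence weight) pairs extracted from the record.
mentions = [
    ("diabetes", 0.6),      # elevated HbA1c in labs
    ("diabetes", 0.5),      # metformin on the medication list
    ("pneumonia", 0.7),     # single mention in a discharge note two years ago
    ("hypertension", 0.4),  # raised BP reading
    ("hypertension", 0.5),  # antihypertensive prescribed
]

scores = defaultdict(float)
for problem, weight in mentions:
    scores[problem] += weight

THRESHOLD = 0.8  # tuned in practice; arbitrary here
problem_list = sorted(p for p, s in scores.items() if s >= THRESHOLD)
print(problem_list)  # -> ['diabetes', 'hypertension']
```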

What is common to all of these projects is the application of NLP and analytics to detect important entities and relationships within a domain. Many of the important medical entities are described in a comprehensive resource from the National Library of Medicine called the Unified Medical Language System (UMLS), a metathesaurus that combines medical taxonomies into a comprehensive set of concepts that can be identified in text.
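In the spirit of UMLS concept identification, here is a toy dictionary lookup using spaCy’s PhraseMatcher (spaCy v3 assumed). The two-concept lexicon stands in for the full metathesaurus; mapping matched spans to concept identifiers is the essential step.

```python
# Sketch: spot known medical concepts in text via a term dictionary.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # tokenizer only; no trained model needed
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")

# Tiny stand-in lexicon: concept ID -> surface forms.
lexicon = {
    "C0011849": ["diabetes mellitus", "diabetes"],
    "C0020538": ["hypertension", "high blood pressure"],
}
for concept_id, terms in lexicon.items():
    matcher.add(concept_id, [nlp.make_doc(t) for t in terms])

doc = nlp("Patient has a history of high blood pressure and diabetes.")
for match_id, start, end in matcher(doc):
    print(nlp.vocab.strings[match_id], "->", doc[start:end].text)
```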

Challenges

The more complete a dataset is, the more likely it is to yield good analytical results. The best datasets have well-defined sets of questions and data collected for every single patient. The reason for using routinely collected data is often that it is not feasible, or not ethical, to perform a clinical trial; the disadvantage is that such datasets are often incomplete. Therefore, anything the health community can do to improve data collection is helpful.

Institutions with electronic records do not necessarily have good data; typically, datasets have a lot of inconsistencies. There is a need for a lot of “plumbing” technologies that check and clean data and convert unstructured information into structured form. Often there are multiple separate systems within individual providers, divided by speciality and function such as pharmacy, radiology and laboratory. This creates problems in linking the same person’s data together and leads to duplication of data. In the US this challenge is compounded by the absence of a universal healthcare identifier, so a lot of work is involved in generating a longitudinal record for an individual. This can be automated, but each EMR system requires a different approach. Some institutions, such as Vanderbilt University, have a process that automatically populates a de-identified research dataset from data coming into their EMR system.
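The linkage problem can be pictured with a toy matcher: decide whether records from two departmental systems refer to the same person. The fields and threshold are assumptions; production systems use probabilistic or learned matching over many more attributes.

```python
# Sketch: link records across systems in the absence of a universal ID.
from difflib import SequenceMatcher

pharmacy_rec = {"name": "Jon Smith", "dob": "1970-03-12", "zip": "10598"}
radiology_rec = {"name": "John Smith", "dob": "1970-03-12", "zip": "10598"}

def same_patient(a, b, threshold=0.85):
    # Require an exact date-of-birth match; tolerate small name typos.
    if a["dob"] != b["dob"]:
        return False
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return name_sim >= threshold and a["zip"] == b["zip"]

print(same_patient(pharmacy_rec, radiology_rec))  # -> True
```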

There are also non-technical challenges that can delay the adoption of technology, including safety, regulatory, legal and ethical concerns. At present, in the US, it is still unclear whether such systems are medical devices that would need to be regulated by the FDA. There is potential for enormous societal benefit in sharing anonymised data; however, current attitudes, regulations and cultural issues need to be addressed before this can be achieved.

This also has to make financial sense for all those involved. The healthcare system is highly competitive, things that help overall may not benefit individual players, and the cost of protecting data from unauthorised access and hacking can be substantial.

Outcome Measurement

There are concerns that additional recording, such as for outcome measurements, would be an extra burden on clinicians and on a healthcare system that is already stretched. One potential solution may be to use automatic systems that leverage the unstructured information already recorded to derive structured outcome measurements. This would make the implementation of outcome measurement more acceptable to clinicians.
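A small sketch of the idea, with an invented note and pattern set: derive structured outcome fields from text the clinician has already written, so no extra data entry is required.

```python
# Sketch: turn a free-text follow-up note into a structured outcome record.
import re

note = "Six weeks post-op. Pain score 2/10, walking unaided, back at work."

pain = re.search(r"pain score (\d+)/10", note, re.I)
outcome = {
    "pain_score": int(pain.group(1)) if pain else None,
    "mobility_independent": bool(re.search(r"walking unaided", note, re.I)),
    "returned_to_work": bool(re.search(r"back at work", note, re.I)),
}
print(outcome)  # {'pain_score': 2, 'mobility_independent': True, 'returned_to_work': True}
```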

Ecosystems

There is a big effort to encourage a developer ecosystem for Watson technologies. The concept of an ecosystem is also core to the Learning Health System (LHS), as it will need to cater to a diverse range of services and patient needs. This would be difficult for any individual organisation to achieve, and insights may need to be generated from large populations. Technologies can help to enable this, but it will also require a large and diverse developer ecosystem to cater for the varying aspects of the LHS.

Future

The field of learning health systems may develop in a number of ways. One option is for large organisations or governments to adopt these kinds of technologies and demonstrate value from their adoption and use; demonstrated value would create the pull for other organisations to join in. In certain fields, such as pharmaceuticals, consortia have formed to look at different aspects of the LHS.

IBM is creating an array of technology across both the knowledge and data fields: using evidence from the literature, but also mining and learning from observational data and bringing the results back to practice. In the US this concept is beginning to gain traction, with the NIH and government putting real money behind Big Data to Knowledge (BD2K).

One of the big opportunities over the next five years is to insert technology into healthcare in a way that appeals to clinicians, for example by identifying tedious manual tasks that could be automated, such as summarising patient cases. This would save clinicians time and allow them to focus on what they do best: treating the patient. Over time, summaries will become more intelligent, with the ability to link information together, reason about it and pull in relevant information from the literature. At present, the latest research results take too long to get from bench to bedside.

There would be immediate resistance from clinicians if standardised coding of outcomes were forced upon them. Finding the right balance for data collection has been a challenge. Over time, technology may help to resolve this by becoming better at recognising and leveraging the natural representation of information.

Different kinds of human-computer interfaces will become more common. The iPad has shown that certain types of interface are widely accepted. Most EMRs are cumbersome to use, and a lot of work could be done to improve this.

The LHS needs to be market driven. Systems can be built to perform tasks, but their builders need to be mindful of what clinicians and patients need and what would actually prove useful to them.

New wearables with mobile sensors and the emergence of health kits, such as Apple HealthKit and Google Fit, are changing how consumers interact with their own data. Mobile apps allow patients to be much more connected to their information, and some healthcare organisations now ask patients to complete digital questionnaires before visits. Consumers have grown up with technology; they expect their information to be tracked and expect physicians to leverage it. It is surprising that there are not more complaints from consumers about having to provide the same information over and over again when much of it is already recorded somewhere.
