Algorithms or Clinicians?

In predicting patient discharge, are computer programs more accurate than clinicians? What if algorithms could rank patients by their likelihood of being discharged so that clinicians can prioritize their work? These are the questions a group of researchers set out to answer. For hospitals, it is always a challenge to maintain high resource utilization while providing patients with timely, quality care, and predicting patient discharge is key to achieving both. In recent years, clinicians have been using "Real-time demand capacity management" (RTDC), and it has shown initial effectiveness. However, current RTDC practice has several limitations. First, it is performed by clinicians and requires a daily time commitment. Second, the predictions can be subjective and thus susceptible to high variability. Lastly, most successful implementations of RTDC have occurred in surgical units, and the approach might not generalize to patients in other medical units.

To address these limitations, researchers Barnes, Hamrock, Toerper, Siddiqui, and Levin (2016) aimed to automate the RTDC process using a series of supervised machine learning methods. Supervised machine learning, in the simplest terms, means that a model learns from historical data in which the outcomes are already known. The patient flow outcome they examined was whether a patient would be discharged by 2 p.m. or by midnight of each day, predicted from data available as of 7 a.m. that same morning. Effectively, they were making automated, real-time predictions of patient discharge based on data that every hospital already has. They suggested that if the results were comparable to clinicians' predictions, their automated approach could save physicians tremendous time.
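To make that idea concrete, here is a minimal sketch (not the authors' code) of what such a supervised classifier could look like in Python, assuming a hypothetical table of 7 a.m. patient-day snapshots; the file and column names are illustrative only:

```python
# Minimal sketch of supervised discharge prediction. The file and column
# names are hypothetical, not taken from the study.
import pandas as pd
from sklearn.linear_model import LogisticRegression

snapshots = pd.read_csv("patient_days.csv", parse_dates=["date"])

# Predictors known at 7 a.m.; the label is the outcome observed later that
# day. Training on examples with known outcomes is what "supervised" means.
features = ["length_of_stay_days", "in_observation_before_admit",
            "is_sunday", "chest_pain_admission"]
X = snapshots[features]
y = snapshots["discharged_by_2pm"]  # 1 if discharged by 2 p.m. that day

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a new morning's snapshot, score each patient's discharge likelihood.
probabilities = model.predict_proba(X)[:, 1]
```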

The research setting was a single, 36-bed medical unit in a large, mid-Atlantic academic medical center serving an urban population. The data used for prediction are readily available at the hospital: patient flow data (i.e., admission and discharge times), demographics, and basic admission diagnosis data. The analysis covered 8,852 patient visits and 20,243 individual patient days. The researchers trained their models on data collected over 26.6 months, from the start of the study (January 1, 2011) until the date when the clinician predictions began (March 18, 2013). Model predictions were then generated for the following 9.4 months (March 19, 2013 to December 31, 2013), purposefully overlapping with the period when clinician predictions were collected so that the automated approach could be compared with the clinicians' predictions.
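In other words, the split between training and evaluation data was chronological rather than random. A rough sketch of that kind of time-based split, reusing the hypothetical snapshot table from the sketch above:

```python
# Chronological split: train on everything up to the cutoff date, then
# evaluate on the months when clinician predictions were also collected.
cutoff = pd.Timestamp("2013-03-18")
train = snapshots[snapshots["date"] <= cutoff]
test = snapshots[snapshots["date"] > cutoff]

model = LogisticRegression(max_iter=1000).fit(
    train[features], train["discharged_by_2pm"])
```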

Their study reveals the following findings. Among the candidate predictors of discharge by 2 p.m. or midnight, they found that a patient is more likely to be discharged on the same day the longer s/he has already stayed in the hospital, if s/he was placed in observation (an intermittent care status) before being admitted, if s/he is evaluated on a Sunday, or if s/he was admitted for chest pain. Predictors such as gender, ethnicity, weekday (Monday through Thursday), and less common reasons for visit (e.g., abdominal pain, COPD, congestive heart failure) had little to no predictive power for this patient population.
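For a model like the logistic regression in the sketch above, one simple way to see which predictors carry weight is to inspect the fitted coefficients. This is illustrative only; the study evaluated several model types, each with its own notion of variable importance:

```python
# Continuing the sketch: coefficients near zero contribute little to the
# prediction, while large ones (e.g., length of stay) dominate it.
weights = pd.Series(model.coef_[0], index=features)
print(weights.sort_values())
```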

Comparing their machine learning methods with the clinicians' approach, the researchers found that the automated approach predicted a higher proportion of discharges, although at the cost of producing more false positives. In addition to same-day discharge, they also calculated these metrics for near-future outcomes (i.e., outcomes for the next time period). For example, how many patients predicted to be discharged by 2 p.m. were discharged by the end of the day? Similarly, how many patients predicted to be discharged by the end of the day were discharged by the end of the next day? One of their models performed significantly better than the clinicians' predictions for the near-future 2 p.m. outcome. Their last finding, which is also the most likely to be applied in a real hospital setting, is that the researchers used their prediction models to rank patients daily by their likelihood of being discharged, and this ranking was moderately correlated with the actual discharge order. With rankings like this, patients who are more likely to be discharged can get higher priority for their remaining tasks. Imagine these rankings in a web application like PLATO™: every morning clinicians log in and immediately see a list of patients in discharge order, along with the remaining tasks for each patient. Their work efficiency could be greatly improved.
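As a rough sketch of how such a prioritized work list might be produced and checked, again reusing the hypothetical model above (the study's exact rank-correlation statistic may differ from the Spearman correlation used here):

```python
# Rank one morning's patients by predicted discharge probability, then
# compare that ordering with the order patients actually left.
from scipy.stats import spearmanr

today = test[test["date"] == pd.Timestamp("2013-06-03")].copy()  # hypothetical day
today["p_discharge"] = model.predict_proba(today[features])[:, 1]
work_list = today.sort_values("p_discharge", ascending=False)

# 'actual_discharge_order' is a hypothetical column (1 = first patient out).
rho, _ = spearmanr(-today["p_discharge"], today["actual_discharge_order"])
print(work_list[["patient_id", "p_discharge"]])
print(f"rank correlation with actual discharge order: {rho:.2f}")
```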

Using data that are readily available in most hospitals, the researchers' automated approach produces results comparable to clinicians' on same-day and near-future patient discharge. While it's tempting to say algorithms are getting close to human prediction, there is still a lot the algorithms don't tell us. For example, among the most important predictors, length of stay and observation status are more or less expected. But why are Sundays, or admissions for chest pain, more predictive than other factors? The study also did not elaborate on the rationale behind clinicians' daily discharge predictions. While the purpose of the automated modeling is not to explain the why but to increase prediction accuracy, combining clinicians' expertise with algorithms could make the predictions not only useful but also meaningful.

Citation:

Barnes, S., Hamrock, E., Toerper, M., Siddiqui, S., & Levin, S. (2016). Real-time prediction of inpatient length of stay for discharge prioritization. Journal of the American Medical Informatics Association, 23(e1), e2–e10. https://doi.org/10.1093/jamia/ocv106

About the Author

Yixin Qiu is a Project Manager with Qlarant and is involved in everything related to our Predictive Modeling Solution, PLATO™. Yixin works with developers, modelers, SMEs, and outreach and support staff to deliver a high-quality software platform with end users in mind.
