Fri, 12/14/2018

Your team is deep into a project to develop a repository of healthcare data for use by the business to perform analytics, trend analysis, complex metrics, and ad hoc reporting. Given the wide range of potential use cases, you want to make sure that the data is ‘good’ before you make it available. You checked the source data. You checked your integration work. You checked the master data for patient and provider. Your basic test reports all seem to be fine. But you still aren’t sleeping. And there is a good reason for that: there is a very high probability that the data is far from perfect. In fact, it might very well be outright wrong. But that is a healthy thing, and here is why.

We have blogged often about the challenges of healthcare data management. The varied nature and use of source data, data harmonization, applying governance standards, compliance challenges, and the complexity in how data is organized and presented for healthcare consumption all contribute to a challenging data environment. As you integrate, you make many decisions that prepare the data in a format accepted across the organization. But sometimes those decisions are wrong (for all the right reasons), and that can and will impact the data and its perceived reliability. The key is finding these scenarios quickly and taking corrective action.

One of the most effective ways to do that is through reporting. It seems counterintuitive, but real-life scenarios are often the only way we will detect gaps in our data. And often the more complex the report, the better a candidate it is for diagnosing data challenges. For example, say I am looking at a patient cohort (diabetes) and I want to know when these patients last saw their primary care physician, and whether in the last 12 months they have had their eyes checked, their feet checked, and been asked about their smoking status. For this report to be accurate, you need a good view of your patients in general (mastering is critical here), of any chronic illness diagnoses, and of properly coded or “near” coded indications of the orders and observations you want to analyze. If any of these things are out of sync in your reporting database, it indicates a gap to explore.
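To make this concrete, here is a minimal sketch in Python (pandas) of what such a care-gap check might look like. The file names, column names, the E11 diagnosis prefix, and the observation codes are all illustrative assumptions about your schema, not a reference to any particular EMR or coding standard.

```python
# Minimal sketch (Python/pandas) of a diabetes care-gap report.
# File names, column names, the E11 diagnosis prefix, and the observation
# codes are illustrative assumptions, not a specific EMR schema.
import pandas as pd

patients = pd.read_csv("patients.csv")          # patient_id, pcp_id
encounters = pd.read_csv("encounters.csv")      # patient_id, provider_id, encounter_date, diagnosis_code
observations = pd.read_csv("observations.csv")  # patient_id, obs_code, obs_date

encounters["encounter_date"] = pd.to_datetime(encounters["encounter_date"])
observations["obs_date"] = pd.to_datetime(observations["obs_date"])
cutoff = pd.Timestamp.today() - pd.DateOffset(months=12)

# Cohort: patients with a diabetes diagnosis (assumed ICD-10 E11.x convention).
cohort = encounters.loc[
    encounters["diagnosis_code"].str.startswith("E11", na=False), "patient_id"
].drop_duplicates()

# Most recent visit where the encounter provider is the patient's mastered PCP.
pcp_visits = encounters.merge(patients, on="patient_id")
pcp_visits = pcp_visits[pcp_visits["provider_id"] == pcp_visits["pcp_id"]]
last_pcp_visit = (
    pcp_visits.groupby("patient_id", as_index=False)["encounter_date"]
    .max()
    .rename(columns={"encounter_date": "last_pcp_visit"})
)

# Screenings recorded in the last 12 months (assumed observation codes).
recent = observations[observations["obs_date"] >= cutoff]
flags = (
    recent[recent["obs_code"].isin(["EYE_EXAM", "FOOT_EXAM", "SMOKING_STATUS"])]
    .pivot_table(index="patient_id", columns="obs_code", values="obs_date", aggfunc="count")
    .gt(0)
    .reset_index()
)

report = (
    cohort.to_frame()
    .merge(last_pcp_visit, on="patient_id", how="left")
    .merge(flags, on="patient_id", how="left")
)

# Patients with no PCP visit or no screening records point at mastering,
# coding, or integration gaps worth exploring before trusting the report.
print(report[report["last_pcp_visit"].isna()])
```

If the cohort size, the number of patients with no PCP visit, or the screening rates look implausible, that is usually a signal to revisit the mastering and coding decisions behind the data rather than the report itself.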

Another useful technique for healthcare data is using known counts and trusted reports to get a sense of the “reasonableness” of a new and complex report. Census data from a primary EMR can suggest an order of magnitude you can compare against to make sure you aren’t missing or overcounting key data elements. Sometimes commonly used management reports (often hand prepared) are a comparative gauge for the accuracy of new reports. The only caveat is that the “gold standard reports we always used in the past” often turn out to be less than perfect; detailed analysis shows that the new report isn’t always the one that is incorrect. Start with broad levels of comparative confidence and work toward the detail as you evaluate those new reports.
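As an illustration, a reasonableness check can be as simple as comparing headline counts from the new report against a trusted baseline and flagging anything outside a tolerance. The metric names, figures, and 10% threshold below are purely hypothetical.

```python
# Minimal sketch of a "reasonableness" check: compare headline counts from a
# new report against a trusted baseline (an EMR census or a hand-prepared
# management report). The metric names, counts, and 10% tolerance are
# illustrative assumptions, not recommendations.
baseline_counts = {"active_patients": 48_200, "inpatient_admits": 1_150, "diabetes_cohort": 6_400}
new_report_counts = {"active_patients": 47_950, "inpatient_admits": 1_480, "diabetes_cohort": 6_350}

TOLERANCE = 0.10  # flag anything more than 10% off the baseline

for metric, baseline in baseline_counts.items():
    new = new_report_counts.get(metric)
    if new is None:
        print(f"{metric}: missing from the new report -- investigate")
        continue
    deviation = abs(new - baseline) / baseline
    status = "OK" if deviation <= TOLERANCE else "INVESTIGATE"
    print(f"{metric}: baseline={baseline:,} new={new:,} deviation={deviation:.1%} -> {status}")

# A flagged metric does not mean the new report is the wrong one; as noted
# above, the "gold standard" is sometimes the report that is off.
```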

Accelerating the use of your data analytics and reporting tools will allow you to identify issues, quickly correct them, and build confidence in the data with your user community. Instituting a continuous improvement strategy, where you engage the end consumer of the report early and use their feedback and business knowledge to strengthen the testing process, is a win for the user (they get more interactive participation in report production) and a win for the report developer and testing team. Being more aggressive in your user adoption approach will also allow you to crowdsource the improvement of your data and increase its value to your organization.

Find out more about how we can help you leverage data for value by contacting us or calling 800-969-INFO.