Wed, 02/27/2019

A few months ago, I had the honor of speaking at Optum’s MinneAnalytics conference and enjoyed the opportunity to engage with a broad spectrum of professionals and innovators. In talking with one young data scientist about the definition and types of AI, he commented that “yesterday’s AI is often today’s common technology.” His point was that the definition of AI is constantly evolving – as demonstrated by the pervasive autocorrect with which we all have a love/hate relationship.

When word processing was first developed, the idea that a computer could correct spelling and suggest grammar changes in real time would have certainly been classified as AI, if not altogether impossible. Today, this is not only commonplace, but expected.

Regardless of definition, it is clear that AI utilization is expanding in many industries, and substantially so in healthcare. From faster and more accurate tumor detection through imaging analysis, to fraud mitigation, to early prediction of Alzheimer’s, our industry has completed some impressive work. Unfortunately, alongside years of time and effort, hundreds of millions of dollars have also been invested in failed projects.

Everyone I’ve discussed the matter with has emphasized the importance of creating a single source of governed truth at an enterprise level in order for AI to truly be beneficial. A recent Forbes article puts it this way: “For good results, data sets must be accurate, complete and large. It’s easy to understand why: A computer algorithm can’t tell if data it receives is wrong. If it continues to learn and evolve based on inaccurate or incomplete information, the downstream results can be deeply flawed.”

Flawed results can mean adverse events, clinician frustration, poor patient experience, and financial decline. So how do we properly prepare for AI in a safe and effective way? In the most recent Hype Cycle for Healthcare Providers, Gartner emphasizes the necessity of investing in a health data curation and enrichment hub. This platform should be able to quickly integrate data from many clinical, operational, and financial sources, then systematically apply data quality algorithms to manage missing values. In addition, it should have capabilities to match, merge, cleanse, and enrich that data with geocodes, clinical ontologies, and reference value sets. Mastering records across healthcare domains such as patient, provider, workforce, and others is a key component. Finally, the governed, published data must be readily accessible in data marts for analysts and downstream technologies to use.
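To make the match/merge/cleanse pattern above concrete, here is a minimal sketch in Python. It is illustrative only: the field names, the deterministic match key (name plus date of birth), and the merge rule are hypothetical simplifications, and a real curation hub does far more (probabilistic matching, ontology lookups, survivorship rules).

```python
def normalize(record):
    """Cleanse: trim strings, lowercase names, standardize blanks to None."""
    out = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip() or None
        out[key] = value
    if out.get("name"):
        out["name"] = out["name"].lower()
    return out

def match_key(record):
    """Match: a toy deterministic key on name + date of birth."""
    return (record.get("name"), record.get("dob"))

def merge(records):
    """Merge: collapse matched records, filling missing values from any source."""
    merged = {}
    for record in records:
        for key, value in record.items():
            if merged.get(key) is None and value is not None:
                merged[key] = value
    return merged

def curate(sources):
    """Group normalized records by match key, then merge each group."""
    groups = {}
    for record in map(normalize, sources):
        groups.setdefault(match_key(record), []).append(record)
    return [merge(group) for group in groups.values()]

# Two feeds describing the same patient, each incomplete in its own way.
clinical = {"name": "Jane Doe ", "dob": "1980-01-02", "zip": None}
billing  = {"name": "jane doe",  "dob": "1980-01-02", "zip": "55401"}

golden = curate([clinical, billing])  # one mastered "golden record"
```

The point of the sketch is the shape of the pipeline, not the specific rules: once records from every source pass through the same cleanse, match, and merge steps, downstream analytics and AI see one governed record instead of several conflicting ones.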

Gartner places the maturity of this technology hub at the 5–10 year mark. But nearly 5 years ago, Information Builders developed the Omni-HealthData platform. Built at the request of our healthcare clients, it is now seen by them as one of the most complete and unified data management and analytics platforms available.

Our booth at HIMSS19 was flooded with healthcare providers and payers who have come to the realization that it is time to deal with their data using an enterprise approach. Many consulting partners in the AI space stopped by and validated these thoughts; one said, “Most of my clients for AI projects are shocked when I tell them that 80% of our work for AI is data prep!” When you have invested in an organizational, single source of governed data, you have a platform that will successfully support a broad array of innovation in healthcare, including AI. Then you will be ready to make today’s AI dreams into tomorrow’s common technology.

Missed us at HIMSS? Sign up for our webinar and hear the highlights!