Is it ethical for a medical practitioner to use AI for clinical decision-making?
AI in medicine is promising, but not ready for guideline-based clinical decisions. How do we bridge the gap?
I fear the court case where the practitioner says "But AI said it was okay!"
With the surge of medical conference keynotes headlining AI topics and social media influencers encouraging AI in day-to-day clinical practice, the medical community needs to make equally clear which use cases are safe enough for practitioners, hospital systems, and patients to trust black-box, auto-generated content.
One area that is not yet safe is guideline-based clinical decision support for practitioners.
Anyone in or around the medical research community knows that there is a massive, decade-long backlog of clinical research that has yet to be assimilated into day-to-day care pathways.
Disseminating medical research into practical use is a slow and challenging process: on average, a medical research publication takes 7 years to reach common use in a clinical setting.
Will AI shorten dissemination to near-instant practical use? Not possible... yet.
It's true that AI bots are able to aggregate and summarize information much better than a search engine, but the content generated is only as good as its sources.
If the sources are narrative guidelines, intentionally written loosely so that the medical practitioner can apply their best educated clinical judgement, then AI output will be limited to the same loose level of specificity.
At present, there are very few practical translations of clinical guidelines that AI can use for training. Not nearly enough to meaningfully influence AI models.
Without practical examples to train on, AI cannot be expected to produce clinical advice that is useful for a specific patient.
For AI to give more precise answers, the medical community needs to augment the public domain with an ongoing source of practical, precise "IF-THEN-ELSE" translations of clinical guidelines that AI can consume.
Presently there is a massive gap: practical, precise translations of clinical guidelines that can be applied within a patient context are largely missing.
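To make the idea concrete, here is a minimal sketch of what such an "IF-THEN-ELSE" translation could look like. The condition, thresholds, and recommendations are hypothetical placeholders invented for illustration, not real clinical guidance:

```python
# A minimal sketch of an "IF-THEN-ELSE" guideline translation.
# All thresholds and recommendations are hypothetical placeholders,
# NOT real clinical guidance.
from dataclasses import dataclass

@dataclass
class PatientContext:
    age: int
    egfr: float  # estimated glomerular filtration rate, mL/min/1.73 m^2

def example_dosing_rule(patient: PatientContext) -> str:
    """Hypothetical IF-THEN-ELSE translation of a narrative guideline."""
    if patient.egfr < 30:                         # IF renal function is poor...
        return "contraindicated; specialist review (placeholder)"
    elif patient.egfr < 60 or patient.age >= 75:  # ...ELSE IF borderline...
        return "reduced dose (placeholder)"
    else:                                         # ...ELSE default path.
        return "standard dose (placeholder)"

print(example_dosing_rule(PatientContext(age=68, egfr=52.0)))
# -> reduced dose (placeholder)
```

The point of the precise form is that the same patient context always yields the same, auditable answer, which is exactly what a narrative guideline cannot promise.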
A number of projects around the globe are building out precise translations of clinical guidelines, but so far the products take the form of computer code that only IT teams can use. Adoption has been slow due to the time, cost, and shortage of programming resources needed to translate the massive backlog of clinical research.
My work at EVAL Health is to make it easy for medical researchers and clinicians to create, use, and share open-source clinical apps without needing technical skills. EVAL Health apps are designed to be consumed by AI to increase the accuracy of clinical guidance.
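Building on the earlier sketch, the general pattern (illustrated here with hypothetical names; this is not the actual EVAL Health interface) is that the AI layer handles language while the patient-specific recommendation comes from a deterministic, clinician-authored rule it consumes:

```python
# Hypothetical sketch of an AI layer consuming a guideline app; reuses
# PatientContext and example_dosing_rule from the sketch above.
# None of these names reflect an actual EVAL Health API.

def answer_clinical_question(question: str, patient: PatientContext) -> str:
    # The recommendation comes from the deterministic, auditable rule,
    # not from free-form text generation.
    rule_result = example_dosing_rule(patient)
    # The AI's role is reduced to presenting that result in context.
    return f"Q: {question} -> Per the encoded guideline: {rule_result}"

print(answer_clinical_question(
    "Is the standard dose appropriate?", PatientContext(age=80, egfr=45.0)))
```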
We can't skip over the medical research translation backlog and hope AI will tell humans how to do medicine better.
Ethically, we need to do the work first.
--
Tim Michalski / CEO
EVAL Health
hello@eval.health