08 May 2024

Can You Trust AI with Your Health?

Photo by Steve Johnson on Unsplash

AI, or Artificial Intelligence, is an exciting new technology, and businesses worldwide are rushing to shoehorn AI tools into their everyday operations and sales offerings. From AI being included as standard in new phones and computers, to AI tools taking their place in the healthcare and financial services ecosystems, Artificial Intelligence is becoming increasingly pervasive in all walks of life.

Unfortunately, this carries with it a degree of risk, especially in regulated industries where advice is being given, or where a customer’s health is involved.

Artificial Intelligence, as the term is used today, is actually a misnomer. There is nothing “intelligent” about these tools: Large Language Models (or LLMs) are essentially extensions of the chatbots which many companies have been using over the last decades. The current iteration of Artificial Intelligence only seems smarter because, at its core, these tools present synthesized information to the end user.

This is novel, or at least it has been over the last 24 months.

However, there is a rapidly growing understanding that Artificial Intelligence systems are unable to discern the quality of the information they produce without significant investment in human oversight. Even self-proclaimed “AI Optimists” are declaring the current generation of LLMs to be “little more than a party trick.”

The negative outcomes of current AI systems are already being felt. From law to healthcare and even finance, the litany of errors and issues attached to this very new technology is extensive.

Artificial Intelligence in Healthcare

One of the positive aspects of Artificial Intelligence is its potential to make life easier for the whole of humanity. But the issue with using tools that are largely unproven is that mistakes are almost guaranteed to happen, and when it comes to Artificial Intelligence and your health, there is no exception.

From meal-planning tools recommending recipes for chlorine gas, to inherent biases in tools that mean an accurate diagnosis may not be available, to AI tools showing an error rate as high as 80% when diagnosing pediatric patients, the rush to include Artificial Intelligence in healthcare is leading to mistakes. Mistakes which have the potential to be significantly harmful to consumers and patients.

Health is not a financial transaction. It is not a chain of records written in a ledger that can be traced all the way back to the first dollar appearing in your account. Your personal health can change for myriad reasons every day, from not getting enough sleep to having an argument with your spouse. The spike in your blood pressure could have nothing to do with the extra piece of bacon you ate in the morning and everything to do with the big meeting you have this afternoon.

While this is not entirely subjective, in that your vital signs can still be read and analyzed, and tests can still confirm that, yes, your heart rate is elevated, there is still a degree of nuance in interpreting the data. As it stands, LLM-derived Artificial Intelligence systems are not very good at nuance, and may not be particularly helpful.

For firms in the medical and healthcare-adjacent industries that have rushed to implement Artificial Intelligence tools in an effort to assist customers, this raises some serious concerns.

Artificial Intelligence and Healthcare Mistakes

When your doctor makes a mistake, there is a series of systems in place to ensure that the mistake is addressed and any errors are rectified. If a doctor provides an erroneous diagnosis to a patient, Medical Malpractice Insurance provides financial restitution, and the local health authority will likely step in with disciplinary action depending on the severity of the case.

This is to say that when you see a human doctor, you do so secure in the knowledge that this person has received extensive education, passed the relevant exams, and will be held responsible in the event a mistake is made. When it comes to Artificial Intelligence, no such safeguards exist; in fact, the inherent biases within AI systems mean that not only are the safeguards absent, but mistakes are likely to be made.

The same mistakes which would be regulated and managed with a human healthcare professional are largely ignored when it comes to Artificial Intelligence. Even basic questions like “how was the AI trained, and what data was used?” pose serious philosophical issues for this emergent technology. And despite the seriousness of the issues at hand and the increasing proliferation of the technology, there are very few restrictions in place governing the application of Artificial Intelligence tools.

This presents a major problem for many businesses, especially as it relates to their professional liability in the face of negligence.

Healthcare Professional Liability and Artificial Intelligence

When it comes to professional liability concerns in a healthcare setting, it is important to understand that standard Professional Indemnity products do not work: doctors, nurses, hospitals, and other professionals in the healthcare (or healthcare-adjacent) industries satisfy their errors and omissions risks through Medical Malpractice Insurance coverage.

Medical Malpractice policies are specially designed forms of professional indemnity protection for healthcare professionals. In Hong Kong, the terms of a Medical Malpractice Insurance policy are generally worded to specify that the insurance “shall pay on behalf of the insured any loss arising from any claim for civil liability in respect of professional services, provided that such a claim is first made against the insured during the period of insurance.”

The issue here is that Artificial Intelligence may not actually constitute a professional service. This is to say that the AI is not a “professional”; it has not been licensed or certified. Consequently, issues which would normally fall under the remit of a Medical Malpractice Insurance policy cannot be covered when it comes to LLM-based AI systems.

An argument could be made that Artificial Intelligence is not “providing a professional service” but is instead acting as a medical device, no different from a pacemaker or cochlear implant. However, this argument ignores the fact that many Artificial Intelligence tools are evading the clinical trials and regulatory frameworks designed to ensure that medical devices function safely and as intended. A situation therefore emerges in which an untested tool gives advice to a doctor, who in turn may rely on that data and provide faulty information to a patient. Which brings us back to the starting point: who is at fault when the patient suffers a loss, and where does the restitution come from?

Under the current system, without adequate insurance protection, the liability for misuse of an Artificial Intelligence tool in a healthcare setting may fall on the responsible doctor. But without structures in place to fully understand how AI is going to impact a patient, and what the risks of this technology are, the doctor will likely be left to cover that financial responsibility independently.

Healthcare, AI, and Insurance

We are at a very uncertain stage in the implementation of emerging technologies like Artificial Intelligence. These unproven tools are already making errors on non-critical topics, and those errors become far more serious when someone’s health is involved.

As AI starts to touch more industries and businesses, it is important that proper steps are taken to ensure that the risks you are facing are being managed. By asking simple questions like “How does this AI tool fit into my professional liability insurance protection?” and “What is the error rate of this tool, and how does that impact my customers?”, businesses can take steps to insulate themselves from the extensive fallout that will occur when these tools do fail.

From Commercial Goods Liability and Professional Indemnity through to Employees’ Compensation Insurance, the emergence of Artificial Intelligence tools and technologies over the last 24 months is going to play an increasingly important part in the development of flexible and innovative insurance products which will work to ensure that you are able to trust AI in healthcare.

But we’re not there yet.

About the Author

Michael Lamb is an insurance industry professional with many years of experience in the Hong Kong insurance market. Focusing on APAC coverage issues, Michael is able to provide extensive analysis and insight on a range of pressing topics. Previously, Michael provided insurance broker Globalsurance.com with their most highly valued articles and was a key influence in the development of all the content on Pacificprime.com. Michael has a passion for insurance matched by few others in the region.
