The Pitfalls of AI Bias in Healthcare

**AI algorithms in healthcare often inadvertently adopt unwanted biases, leading to improper diagnoses and care recommendations. Explainable AI could be key to resolving this.**

The emergence of artificial intelligence (AI) has contributed to the use of aggregated data in healthcare to develop complex models for diagnostic automation. It has strengthened doctors' practice by tailoring procedures and improving patient engagement. Healthcare AI is expected to reach $36.15 billion by 2025, with a 50.2 percent growth rate. Hospitals are being built on the assumption that AI systems are the future, and start-ups using AI for healthcare have raised more than $5 billion in venture capital[ii].

Many reports have recently emerged describing AI algorithmic bias: a tendency for AI to favor certain groups based on gender, age, and race.

Biased Data Sets for AI Algorithms

Data plays a significant role when it comes to biases creeping into the process. For example, a Canadian company created an auditory test algorithm for neurological disease. The test was over 90 per cent accurate, but reportedly only for native speakers of the English dialect it was trained on; accuracy dropped for speakers with other accents or language backgrounds.

In a similar vein, an algorithm was developed to diagnose malignant skin lesions from images. People with white skin are more likely to develop skin cancer, so far more data were available for that skin type. Because the algorithm was trained on a dataset with very few patients of other skin types, it is unsuitable for clinical use in a hospital serving a patient population with a wide variety of skin tones.
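To illustrate how such an imbalance can skew a model, here is a minimal sketch; all data, group proportions, and feature shifts are invented for illustration. A simple threshold classifier is tuned on pooled data dominated by one group, and its accuracy is then measured separately per group:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lesion dataset: one feature, e.g. lesion contrast.
# Group A (95% of training data): malignant lesions shift the feature upward.
# Group B (5%): same shift, but the whole distribution sits lower,
# standing in for images of darker skin tones.
def make_group(n, malignant_shift=2.0):
    y = (rng.random(n) < 0.5).astype(int)          # 1 = malignant
    x = rng.normal(0.0, 1.0, n) + malignant_shift * y
    return x, y

x_a, y_a = make_group(950)
x_b, y_b = make_group(50)
x_b -= 1.5  # group B's feature distribution is systematically shifted

# "Training": pick the threshold that maximises accuracy on the pooled
# data, which is dominated by group A.
x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])
thresholds = np.linspace(-3, 3, 121)
accs = [((x_train > t).astype(int) == y_train).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(accs))]

def accuracy(x, y, t):
    return float(((x > t).astype(int) == y).mean())

acc_a = accuracy(x_a, y_a, t_best)
acc_b = accuracy(x_b, y_b, t_best)
print(f"group A accuracy: {acc_a:.2f}")
print(f"group B accuracy: {acc_b:.2f}")
```

The pooled threshold sits where it serves the majority group, so accuracy for the underrepresented group is markedly worse, even though the underlying relationship between feature and label is the same for both.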

Lack of Transparency

While most studies of algorithmic bias focus on known factors such as gender, race, and age, recent research shows that algorithms can also acquire hidden biases that are hard to identify. Spotting and fixing such problems is therefore arduous, and developers have very little visibility into them.

In other cases, bias arises from correlations between the variables a model considers, as in the training data for an algorithm designed to assess hospital admission risk for pneumonia patients.
The model gave low-risk scores to patients with asthma and high-risk scores to other pneumonia patients. In the historical data, asthmatic pneumonia patients routinely received more aggressive care and therefore had better outcomes, so the model mistook the effect of treatment for low underlying risk.
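This kind of confounding can be reproduced in a few lines. The probabilities below are invented purely for illustration: asthma raises the true mortality risk, but asthmatic patients almost always receive aggressive care, which lowers that risk, so their observed mortality ends up lower than everyone else's.

```python
import random

random.seed(1)

def simulate_patient():
    # All probabilities are hypothetical, chosen only to show confounding.
    asthma = random.random() < 0.2
    # Toy hospital policy: asthmatics almost always get aggressive care.
    aggressive_care = random.random() < (0.9 if asthma else 0.2)
    base_risk = 0.25 if asthma else 0.15                  # asthma raises true risk
    risk = base_risk * (0.3 if aggressive_care else 1.0)  # care reduces it
    died = random.random() < risk
    return asthma, died

patients = [simulate_patient() for _ in range(100_000)]

def mortality(asthma_group):
    group = [died for asthma, died in patients if asthma == asthma_group]
    return sum(group) / len(group)

print(f"observed mortality, asthma:    {mortality(True):.3f}")
print(f"observed mortality, no asthma: {mortality(False):.3f}")
```

A model trained naively on this data would learn that asthma predicts survival, because the treatment variable that actually explains the outcome is hidden behind it.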

Can AI biases be fixed?

An explainable and interpretable AI allows developers, business users, and end-users to understand why certain decisions are made.
Explainable AI can help create trust in an algorithm, and trust is vital to the healthcare system.
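As a concrete sketch of what an explanation can look like, consider additive feature attributions for a hypothetical linear risk model; the feature names, weights, and patient values below are all invented. Each feature's contribution to the score is reported alongside the prediction, so a reviewer can see what drove it:

```python
import numpy as np

# Hypothetical linear risk model: score = w . x + b (weights illustrative).
feature_names = ["age", "systolic_bp", "asthma"]
w = np.array([0.03, 0.02, -0.8])
b = -2.0

def explain(x):
    """Return the risk score plus each feature's additive contribution."""
    contributions = w * x
    score = float(contributions.sum() + b)
    return score, {name: float(c) for name, c in zip(feature_names, contributions)}

score, attribution = explain(np.array([70, 140, 1]))
print(f"risk score: {score:.2f}")
for name, contribution in attribution.items():
    print(f"  {name}: {contribution:+.2f}")
```

For linear models these contributions are exact; for more complex models, techniques in the same spirit, such as Shapley-value attributions, approximate each feature's contribution to a single prediction. An attribution like the negative asthma weight above is exactly the kind of signal that would let a clinician catch the confounded pneumonia model described earlier.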

Current AI models are limited to providing predictions, but as they advance they may evolve towards explainability coupled with medical reasoning, leading to rational decision-making and greater adoption of, and confidence in, healthcare AI systems.