In a survey of hundreds of healthcare decision-makers, Intel found that the percentage of respondents whose business is using or planning to use artificial intelligence has almost doubled since the onset of COVID-19.
Among the expected use cases for AI: early intervention analytics, support for clinical decision-making and collaboration with specialists. In a blog post about the results, Stacey Shulman, vice president of the Internet of Things Division at Intel, said, "Artificial intelligence in health and life sciences has greatly accelerated."
“From helping clinicians develop personalised protocols to streamlining clinical workloads or unlocking insights in genomics, infusing AI into these industries may be much closer than many initially thought,” she said.
Why does it matter?
Intel conducted an online survey of 200 senior decision-makers at healthcare organisations in April 2018, and then surveyed 230 in July 2020.
In 2018, 37% of respondents said that their business had deployed AI or was preparing to deploy it. In the 2020 survey, 45% said their organisation had done so before the pandemic.
After COVID-19 started to sweep the nation, the number swelled to 84 percent.
Results of the survey also showed that confidence in AI is increasing, with two-thirds of respondents stating that within two years they would trust AI to process medical records and 62 percent saying that they would trust AI to evaluate diagnostics and screening.
Still, respondents expressed concerns. Twenty percent said expense would be the most challenging obstacle to overcome, while 17 percent cited clinicians' lack of faith in AI decisions, and 16 percent said AI technology was still in its nascent stage.
Respondents were also worried that AI would be improperly implemented, that it would be overhyped and that it would be responsible for a fatal mistake.
The Bigger Theme
Although safety concerns were not listed in the Intel survey responses, other experts have cautioned that AI and machine learning may be a double-edged sword.
Some emerging types of threats levelled against the healthcare industry rely on AI and ML to perform complex and dangerous acts.
There’s also the question of bias: AI and ML are not exempt from their creators’ biases. Systems that are not trained on representative data sets are unlikely to be accurate.
On the Record
“While the pandemic is accelerating AI healthcare adoption out of necessity, we must continue to work collaboratively, utilising public-private partnerships and emerging technology solutions to make solutions more accessible and trusted,” said Shulman.