This past month, the ransomware attack on Universal Health Services drew new attention to the threat ransomware poses to health systems, and to what hospitals should do to defend themselves against a similar event.
Beyond being one of the most significant ransomware attacks in healthcare history, security experts say the incident may also be emblematic of the ways bad actors are leveraging machine learning and artificial intelligence.
With some kinds of "early worms," said Greg Foss, senior cybersecurity strategist at VMware Carbon Black, "we saw [cybercriminals] conduct these automated actions, and take information from their environment and use it to automatically disperse and pivot; classify valuable data; and exfiltrate it."
Carrying out those actions in a new environment, Foss said, relies on "using AI and ML at its heart."
"AI and ML also lend themselves to defence in a number of different ways," he said. "This is not something that has been studied much, particularly until recently."
One effective approach involves user and entity behaviour analytics, said Foss: essentially, a system learns a person's normal behaviour and flags deviations from it.
A human resources representative suddenly running commands on their host, for instance, is irregular activity that may indicate a breach, he said.
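The behaviour-analytics idea Foss describes can be sketched in a few lines: build a per-user baseline of observed actions, then flag anything outside that baseline. This is a minimal, hypothetical illustration (the user names, action labels, and flagging rule are all assumptions, not part of any real product), while production UEBA systems use statistical and ML models rather than a simple set lookup.

```python
from collections import defaultdict

class BehaviourBaseline:
    """Toy user-and-entity behaviour analytics (UEBA) sketch."""

    def __init__(self):
        # user -> set of action types observed during the training window
        self.seen = defaultdict(set)

    def learn(self, user, action):
        # Record an action as part of the user's normal behaviour.
        self.seen[user].add(action)

    def is_anomalous(self, user, action):
        # An action never observed for this user is flagged as a deviation.
        return action not in self.seen[user]

baseline = BehaviourBaseline()
# Training window: a (hypothetical) HR representative's normal activity.
for action in ["open_email", "edit_spreadsheet", "print_document"]:
    baseline.learn("hr_rep", action)

print(baseline.is_anomalous("hr_rep", "run_shell_command"))  # True: flag it
print(baseline.is_anomalous("hr_rep", "open_email"))         # False: normal
```

The deviation here, an HR user suddenly running shell commands, mirrors the example Foss gives; a real system would also weigh frequency, timing, and peer-group behaviour before raising an alert.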
AI and ML can also be used to detect subtle behaviour patterns among attackers, he said. Given that phishing emails often play on a would-be victim's emotions, playing up a message's urgency to push someone into clicking a link, Foss noted that automated sentiment analysis can help flag messages that appear abnormally angry or urgent.
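The sentiment-analysis idea above can be sketched with a simple lexicon-based urgency scorer. This is an illustrative assumption, not how any named product works: the word list, threshold, and sample messages are all invented for the example, and real systems would use a trained sentiment model rather than keyword counting.

```python
import re

# Hypothetical urgency lexicon; a real system would learn this from data.
URGENCY_WORDS = {"urgent", "immediately", "now", "suspended", "verify",
                 "final", "warning", "expire"}

def urgency_score(message: str) -> float:
    # Fraction of words in the message that signal urgency or anger.
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in URGENCY_WORDS)
    return hits / len(words)

def flag_for_review(message: str, threshold: float = 0.15) -> bool:
    # Messages scoring above the (assumed) threshold get flagged.
    return urgency_score(message) >= threshold

phish = "URGENT: verify your account immediately or it will be suspended"
benign = "Lunch menu for next week is attached, enjoy"
print(flag_for_review(phish))   # True: abnormally urgent tone
print(flag_for_review(benign))  # False: neutral tone
```

Even this crude score separates the two sample messages; the point of the ML approach Foss describes is to catch emotional pressure that a fixed word list would miss.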