Vahdat said that cardiac care was one area in which AI could greatly improve efficiency and cut costs.
“If we collect data in real time and combine it with their historical data as well as data from millions of similar patients, an intelligent system could predict a heart attack with a high rate of precision,” he said.
“This will allow us to save countless lives and save health care systems around the world huge spend.”
But some experts fear that the fast-paced nature of the still-nascent AI industry could put patient safety at risk.
“We are concerned that in the rush to roll out AI and push the boundaries of technology, there is a risk that important checks and balances that have been established to keep patients safe might be seen as an afterthought, or be bypassed entirely,” Stokes-Lampard said.
Last month, a report by online health publication Stat said that IBM’s Watson supercomputer had made multiple “unsafe and incorrect” cancer treatment recommendations, citing internal company documents. According to Stat, the program had only been trained to deal with a small number of cases and hypothetical scenarios instead of actual patient data. IBM subsequently told CNBC that it has “learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives.”
Stokes-Lampard said that regulators should keep pace with the rapid advances in technology to avoid harm to patients.
She said: “In an ever-changing ‘tech space,’ it is imperative that regulation keeps up with all technological developments, and that it is appropriately enforced, so that patients are kept safe, however they choose to access care.”
But many tech companies — big and small — are mostly averse to new regulation, arguing it could restrain innovation.
“There are a lot of regulations to protect patients already,” Medopad’s Vahdat said. “It isn’t about adding more regulation — it is about adapting and applying existing regulations to better deal with current realities.”
He added: “Patients as a cohort are often vulnerable and we feel responsible to not create hope that is not qualified. We feel strongly that unless our technology is clinically validated it shouldn’t be marketed. Like doctors, health care start-ups should seek to fully understand the ethical implications of creating expectations with patients.”