Many AI devices are already integrated into clinical practice, but a new analysis questions whether some validation processes could introduce algorithmic bias into these tools.
An analysis published in Clinical Radiology attempted to answer this question through a review of FDA-cleared AI devices currently in use in medical practice. As of November 2021, 151 AI devices had been approved by the FDA for medical imaging. While the authors of the new paper say these tools will undoubtedly benefit the field of radiology, they question the external validation process involved in their development and whether a lack of transparency in that process could deter doctors from using AI tools in the future.
“Clinical study design and makeup of a clinical validation dataset can affect the safety and effectiveness of a device and introduce potential biases in clinical care,” explained corresponding author Harrison X. Bai, MD, of Johns Hopkins and colleagues. “It is critical that these devices undergo thorough clinical validation to ensure they are generalizable to a diverse population and image acquisition landscape.”
The researchers used the American College of Radiology’s Data Science Institute AI Central Database to conduct their analysis. Of the 151 algorithms approved as of November 2021, 64.2% reported using clinical data to validate their device. However, only 4% of these reported the makeup of the study population, and only 5.3% reported the specifications of the machines used.