FDA Speeds Up Artificial Intelligence Approvals, Review Finds
The number of US Food and Drug Administration (FDA) approvals of proprietary medical algorithms that are powered by artificial intelligence (AI) for image interpretation is “expanding rapidly,” according to an AI review article published in Nature Medicine on Monday.
The research review article produced by Eric Topol, director and founder of the Scripps Research Translational Institute, found that FDA’s AI approvals ranged from one to two per month last year, compared to a total of just two FDA AI approvals in 2017. FDA Commissioner Scott Gottlieb also indicated last year that the agency is “actively developing a new regulatory framework to promote innovation” in the AI space.
“Yet there have been few peer-reviewed publications from most” of the companies that received FDA AI approvals between 2017 and 2018, Topol said. “Among the studies that have gone through peer review, the only prospective validation studies in a real-world setting have been for diabetic retinopathy, detection of wrist fractures in the emergency room setting, histologic breast cancer metastases, very small colonic polyps and congenital cataracts in a small group of children.”
Topol pointed to an algorithm that powers an IDx device for diabetic retinopathy that received FDA approval last year as “the first prospective assessment of AI in the clinic.” This IDx device is among a select few AI approvals that have been highly touted by FDA officials. Others include an application (app) developed by Viz.ai for the detection of a potential stroke and two applications in the new Apple Watch that are intended to aid patients with atrial fibrillation. These received FDA's OK in 2018.
AI-powered devices from 2017 included AliveCor’s KardiaMobile smartphone app indicated for use on the Apple Watch to aid in atrial fibrillation detection and the Arterys Oncology AI suite. The upwards trend in AI approvals or clearances is reflected in the table below.
In addition, FDA revealed its new test plan for the next phase of its digital health Pre-Certification (PreCert) pilot program on Monday. The focus of PreCert version 1.0 will be to establish processes for software as a medical device (SaMD), which may include software functions that use AI and machine learning algorithms, within FDA's current authorities. The agency indicated additional authority may be required prior to fully implementing the PreCert program for other types of digital health tools, rather than just first-of-its-kind SaMD.
“The regulatory oversight in dealing with deep-learning algorithms is tricky because it does not currently allow continued autodidactic functionality but instead necessitates fixing the software to behave like a non-AI diagnostic system,” Topol wrote. “Instead of a single doctor’s mistake hurting a patient, the potential for a machine algorithm inducing iatrogenic risk is vast. This is all the more reason that systematic debugging, audit, extensive simulation, and validation, along with prospective scrutiny, are required when an AI algorithm is unleashed in clinical practice. It also underscores the need to require more evidence and robust validation to exceed the recent downgrading of FDA regulatory requirements for medical algorithm approval.”
Source: RAPS
Compiled and translated by: 奥咨达