U.S. Senate hearing on screening for medical devices

(by Michael Lavine)

On 13 April 2011, the U.S. Senate’s Special Committee on Aging (http://aging.senate.gov/) heard testimony concerning the FDA’s process for screening medical devices (http://aging.senate.gov/hearing_detail.cfm?id=332494&). Some of the testimony was statistical.

Background: The process by which the FDA screens and approves medical devices underwent a substantial change in 1976. Prior to that time, all devices were screened through a process called Premarket Approval, or PMA. Beginning in 1976, some devices — those deemed to present a lesser risk if they fail — were allowed to be approved through a less stringent process, called either 510(k), after the section of the law in which it was introduced, or Premarket Notification (PMN). Recently, concern has arisen about whether the 510(k) process is sufficiently stringent. That was the topic of the 13 April committee hearing. The law governing the process was substantially modified in 1997, so testimony was presented about devices that had been submitted for approval in 1998 or later.

Testimony: I’ll talk here about the testimony of Ralph Hall (http://aging.senate.gov/events/hr233hr.pdf) and a study prepared by Battelle (http://www.advamed.org/NR/rdonlyres/255F9405-677D-45B1-BAC8-0D4FD5017054/0/510kPremarketNotificationEvaluation.pdf) that was submitted to the committee. Both are concerned with Class I recalls, those that have potentially the most serious medical consequences.

The Battelle study compares the number of devices approved to the number that were later recalled. The main result is summarized on page 2 of the report in a table that compares PMA to 510(k). Of the 2,825 devices approved through PMA in the relevant time period, 24, or 0.85%, were later subject to a Class I recall. Of the 46,690 devices approved through 510(k), 77, or 0.16%, were later subject to a Class I recall. The report highlights the fact that both recall rates are low and that the 510(k) process had the lower rate.
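As a quick check on the arithmetic, here is a minimal sketch in Python; the counts are simply the Battelle figures quoted above.

```python
# Battelle comparison (page 2 of the report): devices approved vs. devices
# later subject to a Class I recall. Counts are as quoted above.
pma_approved, pma_recalled = 2825, 24
k510_approved, k510_recalled = 46690, 77

pma_rate = pma_recalled / pma_approved     # about 0.85%
k510_rate = k510_recalled / k510_approved  # about 0.16%

print(f"PMA Class I recall rate:    {pma_rate:.2%}")
print(f"510(k) Class I recall rate: {k510_rate:.2%}")
```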

The Hall study compares the number of devices submitted for approval to the number that were later recalled. The main results are summarized on pages 2–3 of the report, which say that fewer than 0.5% of 510(k) submissions were later subject to a Class I recall and fewer than 0.5% of PMA submissions were later subject to a Class I recall.

Comment: The testimony is statistical, and its purpose is to say that the 510(k) process works well. But I want to raise questions about its relevance. Battelle compares Pr[recall | approved by PMA] to Pr[recall | approved by 510(k)] while Hall compares Pr[recall | submitted to PMA] to Pr[recall | submitted to 510(k)]. Are those the relevant probabilities? Or should we be concerned about p = Pr[device is caught by screening | device is faulty]? Consider an analogy to screening airline passengers: are we interested in the fraction of air travelers who are carrying weapons, or the probability that a weapon carrier will be caught by screening?

Are data available to estimate p? Yes, at least to a first approximation. Let A be the number of faulty devices that reached the market and were later recalled, and let B be the number of faulty devices caught by screening. Then B/(A+B) is an estimate of p. Of course, not all faulty devices are eventually caught and recalled, even after they’ve been marketed for several years. Thus, A may underestimate the number of faulty devices that reach the market, and B/(A+B) may overestimate p. But at least it’s a start.
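To make that concrete, here is a minimal sketch of the estimate. The recall count A for 510(k) devices is the 77 from the Battelle table; the count of faulty devices caught by screening, B, is a hypothetical placeholder, since that figure does not appear in the testimony quoted above.

```python
# Sketch of the estimator p ~= B / (A + B), where
#   A = faulty devices that reached the market and were later recalled,
#   B = faulty devices caught by the screening process.
# A is the 510(k) Class I recall count from the Battelle table quoted above;
# B is a HYPOTHETICAL placeholder -- that figure is not in the testimony.

def estimate_p(caught_by_screening: int, recalled: int) -> float:
    """Estimate Pr[device is caught by screening | device is faulty]."""
    return caught_by_screening / (caught_by_screening + recalled)

A = 77    # Class I recalls among 510(k)-approved devices (Battelle, p. 2)
B = 300   # hypothetical count of faulty devices caught at screening

print(f"Estimated p = {estimate_p(B, A):.2f}")
# Since A likely undercounts the faulty devices that reach the market,
# this estimate of p tends to be too high.
```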

Another point concerns which devices are screened by PMA and which by 510(k). Critical devices — those with the most serious consequences should they fail — are screened by PMA. Less critical devices are screened by 510(k). Therefore, if a device approved by 510(k) is later recalled, it may not be a Class I recall; it may be a recall of a less serious nature, and thus not included in the Hall or Battelle studies.

All in all, I’m not satisfied that the testimony presented to the Senate committee fully addresses the right questions. I invite your comments.


1 Response to “U.S. Senate hearing on screening for medical devices”


  1. Markk, April 26, 2011 at 3:21 pm

    “Caught by screening” is a bad indicator, and I would not trust it as much as the data that were used and given. As a guy who used to work for a very large medical equipment manufacturer, I can tell you that getting caught at the screening level was bad. The screening process would cause us to change our own process beforehand so we would pass. If a lot of product is getting caught at the screening level, it means either the manufacturers thought they could get by because they thought the screen was bad, or, much more likely, there was a miscommunication about what the screen was actually looking for.

    Recalls are, to me, a much better indicator of whether manufacturers are shipping poor product. They are real failures and don’t reflect gaming or poorly designed testing that is probably out of date as technology changes. So I guess I am disagreeing with you, if I understood what you were saying.

    In reality, these are just gross measures anyway, and I think spending a lot of time on fine-tuning at this level is a misallocation of resources. We are interested in actual issues in use; add resources there to monitor for problems, rather than to pre-screening.


