Miami serology test
-
This looks better than a Facebook ad.
To find test subjects, researchers partnered with Florida Power & Light to randomly generate phone numbers representing a cross-section of the community. UM researchers arrange the tests with those who agree to participate. There are 10 drive-thru testing locations, usually libraries, where Miami-Dade Fire Rescue personnel wearing full protective suits take blood samples through car windows.
-
One other concern is that the specificity of these tests is still unclear, and it may not be as high as it needs to be for areas with low prevalence. For example, the possibility of zero people with antibodies in Santa Clara (IOW, all positives being false positives) was well within the error bands of the test.
-
@jon-nyc said in Miami serology test:
One other concern is that the specificity of these tests is still unclear, and it may not be as high as it needs to be for areas with low prevalence. For example, the possibility of zero people with antibodies in Santa Clara (IOW, all positives being false positives) was well within the error bands of the test.
That's hard to believe. Where did you get that from?
-
So - their actual sample had 50 positives out of 3330. A 98.5% specificity would give an expected value of 50 false positives in a sample that size.
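To spell out that arithmetic (a minimal sketch; 98.5% is just a plausible specificity value well inside the manufacturer's confidence interval quoted below):

```python
n = 3330               # study sample size
specificity = 0.985    # a value well within the manufacturer's 95% CI (see below)
expected_false_positives = n * (1 - specificity)
print(expected_false_positives)  # 49.95 - roughly the 50 positives actually observed
```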
They took two estimates for specificity, one from a test performed by the test manufacturer and one that they performed.
The manufacturer's estimate (based on a sample size of 371) was 99.5%, with a 95% CI of 98.1-99.9. Their own estimate, based on a sample of 30, was 100%, with a 95% CI of 90.5-100.
The ranges they give in their study - in other words, when they give a range of IFR and case prevalence - do not include the confidence intervals; rather, they're based on the two point estimates, 99.5% and 100%. So the lower bound they give is based on a 99.5% specificity, the upper bound on 100%. Confidence intervals be damned.
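For what it's worth, those intervals look like exact binomial (Clopper-Pearson) bounds. A sketch that reproduces them, assuming the underlying counts were 369/371 and 30/30 true negatives (my inference from the quoted percentages, not figures taken from the paper):

```python
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact two-sided (Clopper-Pearson) CI for a binomial proportion."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

# Manufacturer: 369 true negatives out of 371 (369/371 ~ 99.5%)
print(clopper_pearson(369, 371))   # ~ (0.981, 0.999), matching the quoted 98.1-99.9

# Authors' own run: 30/30. With zero failures, the exact one-sided 95% lower
# bound is 0.05**(1/30) ~ 0.905, which matches the quoted 90.5-100 interval.
print(0.05 ** (1 / 30))            # ~ 0.905
```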
All that data is either in the preprint on medRxiv or in the supplement, quoted by Carl Zimmer in a Twitter thread.
Disclaimer: I’ve not seen the supplement
-
Oh, and did I mention that they got the 3330 people by advertising on Facebook?
Did I further mention that the farther people lived from the test site, the higher their chances of testing positive? But there were fewer of them (who wants to drive an hour for a study?), so they had to give them extra weighting.
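For intuition on why that weighting matters, a toy post-stratification sketch (all numbers invented): a stratum with few respondents but a higher raw positive rate gets up-weighted to its population share, pulling the overall estimate up.

```python
# toy example (invented numbers): post-stratification weighting
# each stratum: (name, respondents, positives, share_of_target_population)
strata = [
    ("near test site", 3000, 30, 0.50),   # 1.0% raw positive rate
    ("far from site",   330, 20, 0.50),   # 6.1% raw positive rate, few respondents
]

unweighted = sum(pos for _, n, pos, _ in strata) / sum(n for _, n, _, _ in strata)
weighted = sum(share * pos / n for _, n, pos, share in strata)

print(f"unweighted: {unweighted:.3%}")  # ~1.5%
print(f"weighted:   {weighted:.3%}")    # ~3.5% - the small far stratum drives the shift
```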
-
@jon-nyc said in Miami serology test:
So - their actual sample had 50 positives out of 3330. A 98.5% specificity would give an expected value of 50 false positives in a sample that size.
They took two estimates for specificity, one from a test performed by the test manufacturer and one that they performed.
The manufacturer's estimate (based on a sample size of 371) was 99.5%, with a 95% CI of 98.1-99.9. Their own estimate, based on a sample of 30, was 100%, with a 95% CI of 90.5-100.
The ranges they give in their study - in other words, when they give a range of IFR and case prevalence - do not include the confidence intervals; rather, they're based on the two point estimates, 99.5% and 100%. So the lower bound they give is based on a 99.5% specificity, the upper bound on 100%. Confidence intervals be damned.
All that data is either in the preprint on medRxiv or in the supplement, quoted by Carl Zimmer in a Twitter thread.
Disclaimer: I’ve not seen the supplement
Well, if there's an expected random false positive count of 50 and they find exactly 50 positives, then if you had to bet on the real number of true positives, you would bet zero. Zero would not just be "in the error margin"; it would be the statistically most likely value of the true number of positives.
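To make that concrete, here's a toy likelihood calculation (a sketch under simplifying assumptions: perfect sensitivity, and a specificity chosen so the expected false positive count is exactly 50):

```python
from math import log

n = 3330
observed = 50
fp_rate = observed / n   # specificity such that expected false positives = 50
sensitivity = 1.0        # simplifying assumption

def log_likelihood(prevalence):
    """Binomial log-likelihood of seeing 50 positives given a true prevalence."""
    rate = prevalence * sensitivity + (1 - prevalence) * fp_rate
    return observed * log(rate) + (n - observed) * log(1 - rate)

# The likelihood is maximized at zero true prevalence; it falls off as p grows.
for p in (0.0, 0.005, 0.01, 0.015, 0.02):
    print(f"p = {p:.3f}  logL = {log_likelihood(p):.2f}")
```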