Miami serology test
-
wrote on 25 Apr 2020, 01:38 last edited by
This looks better than a Facebook ad.
To find test subjects, researchers partnered with Florida Power & Light to randomly generate phone numbers in a cross-section of the community. UM researchers arrange the tests with those who agree to do it. There are 10 drive-thru testing locations, usually libraries, where Miami-Dade Fire Rescue personnel wearing full protective suits take blood samples through car windows.
-
wrote on 25 Apr 2020, 01:45 last edited by
You can't just tease us with that...
Results?
-
wrote on 25 Apr 2020, 01:46 last edited by
There aren’t any yet.
-
wrote on 25 Apr 2020, 02:06 last edited by
Indy is using the same method. Random set, invitation only. Obviously it's still not perfect, since not everyone says yes, and the characteristics of those who participate and those who refuse are probably different.
But still pretty good.
-
wrote on 25 Apr 2020, 02:10 last edited by
One other concern is that the specificity of these tests is still unclear, and it's not really as high as it needs to be for areas with low prevalence. For example, the possibility of zero people with antibodies in Santa Clara (IOW, all positives being false positives) was well within the error bands of the test.
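To make the low-prevalence problem concrete, here's a toy calculation; the prevalence, sensitivity, and specificity here are all assumed numbers for illustration, not figures from any actual study:

```python
# Toy numbers: prevalence, sensitivity, and specificity are all
# assumptions for illustration, not figures from any actual study.
prevalence = 0.01      # assume 1% of the population has antibodies
sensitivity = 0.90     # assumed true-positive rate
specificity = 0.985    # assumed true-negative rate
n = 3330               # sample size (same as Santa Clara's)

true_pos = n * prevalence * sensitivity               # ~30 real positives
false_pos = n * (1 - prevalence) * (1 - specificity)  # ~49 false positives

ppv = true_pos / (true_pos + false_pos)
print(f"~{true_pos:.0f} true positives, ~{false_pos:.0f} false positives")
print(f"chance a given positive is real: {ppv:.0%}")  # roughly 38%
```

At 1% prevalence, even a 98.5%-specific test produces more false positives than true ones.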
-
wrote on 25 Apr 2020, 02:32 last edited by
@jon-nyc said in Miami serology test:
One other concern is that the specificity of these tests is still unclear, and it's not really as high as it needs to be for areas with low prevalence. For example, the possibility of zero people with antibodies in Santa Clara (IOW, all positives being false positives) was well within the error bands of the test.
That's hard to believe. Where did you get that from?
-
wrote on 25 Apr 2020, 02:51 last edited by jon-nyc
It was from a thread on twitter; I'll see if I can find it. It's so hard to find twitter threads that are days old, though.
And I think Andrew Gelman's blog discussed it too.
-
wrote on 25 Apr 2020, 03:08 last edited by
Such a test on such a sample size would seem nearly useless, and reporting its results as if they were even potentially important would seem scientifically negligent.
-
wrote on 25 Apr 2020, 03:12 last edited by jon-nyc
So - their actual sample had 50 positives out of 3330. A 98.5% specificity would give an expected value of 50 false positives in a sample that size.
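Quick sanity check on that arithmetic:

```python
n, specificity = 3330, 0.985
print(round(n * (1 - specificity), 2))  # 49.95, i.e. ~50 expected false positives
```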
They took two estimates for specificity, one from a test performed by the test manufacturer and one that they performed.
The manufacturer's estimate (based on a sample size of 371) was 99.5% with a 95% CI of 98.1-99.9. Their own estimate, based on a sample of 30, gave an estimate of 100% with a 95% CI of 90.5-100.
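For what it's worth, those intervals look like exact (Clopper-Pearson) binomial CIs. A sketch that roughly reproduces them; the 369/371 split is my guess at what rounds to 99.5%:

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided binomial CI for x successes out of n trials."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

# Manufacturer: assuming 369/371 known negatives tested negative (~99.5%)
print(clopper_pearson(369, 371))   # ~(0.981, 0.999): the quoted 98.1-99.9

# Their own validation: 30/30 known negatives tested negative (100%)
print(clopper_pearson(30, 30))     # two-sided lower bound is ~88.4%
print(0.05 ** (1 / 30))            # ~0.905: the quoted 90.5 looks one-sided
```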
The ranges they give in their study - in other words, when they give a range of IFR and case prevalence - don't include the confidence intervals; rather, they're based on the two point estimates, 99.5% and 100%. So the lower bound they give is based on a 99.5% specificity, the upper bound on 100%. Confidence intervals be damned.
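To see what that choice does to the bottom line, here's a crude specificity-only correction. I'm assuming perfect sensitivity to keep it simple; the preprint's actual adjustment may differ:

```python
# Crude specificity-only correction: subtract the expected false-positive
# rate from the raw positive rate. Assumes perfect sensitivity, which is
# my simplification, not necessarily what the preprint actually did.
q = 50 / 3330  # raw positive rate, ~1.5%

for spec in (0.995, 1.000, 0.981):  # two point estimates + manufacturer CI floor
    implied = max(0.0, q - (1 - spec))
    print(f"specificity {spec:.1%} -> implied prevalence {implied:.2%}")

# 99.5% -> ~1.00%  (their reported lower bound)
# 100%  -> ~1.50%  (their reported upper bound)
# 98.1% -> 0.00%   (the CI floor they skipped: consistent with zero cases)
```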
All that data is either in the preprint on MedRxiv or the supplement, quoted by Carl Zimmer in a twitter thread.
Disclaimer: I’ve not seen the supplement
-
wrote on 25 Apr 2020, 03:18 last edited by
Oh, and did I mention that they got the 3330 people by advertising on Facebook?
Did I further mention that the further away from the test site people lived, the higher their chances of testing positive? But there were fewer of them (who wants to drive an hour for a study?), so they had to give them extra weighting.
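A toy example of why that weighting is worrisome; all numbers here are invented:

```python
# Toy post-stratification: a small "far" cell with a noisy positive rate
# gets a big population weight and drives the estimate. Numbers invented.
strata = {
    # name: (sampled, positives, population share)
    "near": (3000, 30, 0.60),
    "far":  (330,  20, 0.40),
}

weighted = sum(share * pos / n for n, pos, share in strata.values())
unweighted = (sum(pos for _, pos, _ in strata.values())
              / sum(n for n, _, _ in strata.values()))
print(f"unweighted prevalence: {unweighted:.2%}")  # ~1.5%
print(f"weighted prevalence:   {weighted:.2%}")    # ~3.0%, driven by the n=330 cell
```

A handful of positives in a small, heavily upweighted cell can double the headline number.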
-
wrote on 25 Apr 2020, 03:34 last edited by
@jon-nyc said in Miami serology test:
So - their actual sample had 50 positives out of 3330. A 98.5% specificity would give an expected value of 50 false positives in a sample that size.
They took two estimates for specificity, one from a test performed by the test manufacturer and one that they performed.
The manufacturer's estimate (based on a sample size of 371) was 99.5% with a 95% CI of 98.1-99.9. Their own estimate, based on a sample of 30, gave an estimate of 100% with a 95% CI of 90.5-100.
The ranges they give in their study - in other words, when they give a range of IFR and case prevalence - don't include the confidence intervals; rather, they're based on the two point estimates, 99.5% and 100%. So the lower bound they give is based on a 99.5% specificity, the upper bound on 100%. Confidence intervals be damned.
All that data is either in the preprint on MedRxiv or the supplement, quoted by Carl Zimmer in a twitter thread.
Disclaimer: I’ve not seen the supplement
Well, if there's an expected random false-positive count of 50 and they find 50, then if you had to bet on the real number of positives, you would bet zero. Zero would not just be "in the error margin"; it would be the statistically most likely value of the true number of positives.
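You can check that directly: model the observed positives as binomial and see where the likelihood peaks, using the illustrative 98.5% specificity from above and assuming perfect sensitivity:

```python
import numpy as np
from scipy.stats import binom

spec, n, k = 0.985, 3330, 50  # illustrative specificity; observed positives

# P(any one person tests positive) as a function of true prevalence p,
# assuming perfect sensitivity for simplicity.
p = np.linspace(0, 0.03, 3001)
p_test_pos = p + (1 - p) * (1 - spec)

loglik = binom.logpmf(k, n, p_test_pos)
print(f"most likely true prevalence: {p[np.argmax(loglik)]:.4%}")
# -> essentially zero: 50 observed positives are just about what false
#    positives alone would produce at this specificity.
```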
-
wrote on 25 Apr 2020, 03:35 last edited by
Right
-
wrote on 25 Apr 2020, 03:35 last edited by
That seems too statistically negligent to believe.
-
wrote on 25 Apr 2020, 03:38 last edited by
Reread my posts. I didn't say 98.5 was the number. That was just to point out how high a specificity would still be consistent with an expected 50 positives even if the true number were zero.
I discuss the actual estimates later in the post.
-
wrote on 25 Apr 2020, 03:39 last edited by jon-nyc
Honestly, this effort seemed unprofessional to me, and I'm only an armchair statistician. lol