Miami serology test

This looks better than a Facebook ad.
To find test subjects, researchers partnered with Florida Power & Light to randomly generate phone numbers across a cross-section of the community. UM researchers then arrange tests with those who agree to participate. There are 10 drive-thru testing locations, usually libraries, where Miami-Dade Fire Rescue personnel wearing full protective suits take blood samples through car windows.

You can't just tease us with that...
Results?

There aren’t any yet.

Indy is using the same method: a random sample, invitation only. Obviously still not perfect, since not everyone says yes, and the characteristics of those who participate and those who refuse are probably different.
But still pretty good.

One other concern is that the specificity of these tests is still unclear, and it is not really as high as it needs to be for areas with low prevalence. For example, the possibility of zero people with antibodies in Santa Clara (IOW, all positives being false positives) was well within the error bands of the test.

@jonnyc said in Miami serology test:
One other concern is that the specificity of these tests is still unclear, and it is not really as high as it needs to be for areas with low prevalence. For example, the possibility of zero people with antibodies in Santa Clara (IOW, all positives being false positives) was well within the error bands of the test.
That's hard to believe. Where did you get that from?

It was from a thread on Twitter, I'll see if I can find it. It's hard to find Twitter threads that are days old, though.
And I think Andrew Gelman's blog discussed it too.

Such a test on such a sample size would seem nearly useless, and reporting its results as if they were even potentially important would seem scientifically negligent.

So their actual sample had 50 positives out of 3330. A 98.5% specificity would give an expected value of 50 false positives in a sample that size.
They took two estimates for specificity, one from a test performed by the test manufacturer and one that they performed.
The manufacturer's estimate (based on a sample size of 371) was 99.5%, with a 95% CI of 98.1–99.9. Their own estimate, based on a sample of 30, was 100%, with a 95% CI of 90.5–100.
The ranges they give in their study (in other words, when they give a range of IFR and case prevalence) do not include the confidence intervals; rather, they're based on the two point estimates, 99.5% and 100%. So the lower bound they give is based on a 99.5% specificity, the upper bound on 100%. Confidence intervals be damned.
All that data is either in the preprint on MedRxiv or the supplement, quoted by Carl Zimmer in a Twitter thread.
Disclaimer: I’ve not seen the supplement
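That arithmetic can be sketched directly (sample size and positive count from the study; 98.5% is the illustrative specificity from my earlier point, not the study's estimate):

```python
n = 3330  # people tested in the Santa Clara study

def expected_false_positives(specificity, n):
    """Expected number of false positives if every subject were truly negative."""
    return n * (1 - specificity)

# At an illustrative 98.5% specificity, false positives alone are expected
# to match the 50 observed positives:
print(expected_false_positives(0.985, n))  # ~50 (3330 * 0.015)

# At the study's two point estimates:
print(expected_false_positives(0.995, n))  # ~16.6
print(expected_false_positives(1.0, n))    # 0.0
```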

Oh, and did I mention that they got the 3330 people by advertising on Facebook?
Did I further mention that the further people lived from the test site, the higher their chances of testing positive? But there were fewer of them (who wants to drive an hour for a study?), so they had to give them extra weighting.
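For illustration, that kind of extra weighting works roughly like post-stratification. A sketch with hypothetical numbers (the "near"/"far" shares below are made up, not from the study):

```python
# A group's weight = its share of the target population divided by its
# share of the sample, so under-represented groups count for more.
population_share = {"near": 0.60, "far": 0.40}  # hypothetical geographic strata
sample_share = {"near": 0.80, "far": 0.20}      # far-away residents under-sampled

weights = {g: round(population_share[g] / sample_share[g], 6)
           for g in population_share}
print(weights)  # {'near': 0.75, 'far': 2.0}: each "far" respondent counts double
```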

@jonnyc said in Miami serology test:
So their actual sample had 50 positives out of 3330. A 98.5% specificity would give an expected value of 50 false positives in a sample that size.
They took two estimates for specificity, one from a test performed by the test manufacturer and one that they performed.
The manufacturer's estimate (based on a sample size of 371) was 99.5%, with a 95% CI of 98.1–99.9. Their own estimate, based on a sample of 30, was 100%, with a 95% CI of 90.5–100.
The ranges they give in their study (in other words, when they give a range of IFR and case prevalence) do not include the confidence intervals; rather, they're based on the two point estimates, 99.5% and 100%. So the lower bound they give is based on a 99.5% specificity, the upper bound on 100%. Confidence intervals be damned.
All that data is either in the preprint on MedRxiv or the supplement, quoted by Carl Zimmer in a Twitter thread.
Disclaimer: I’ve not seen the supplement
Well, if there's an expected random false positive count of 50 and they find 50, then if you had to bet on the real number of positives, you would bet zero. Zero would not just be "in the error margin"; it would be the statistically most likely value of the true number of positives.
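A quick simulation of that point, using the illustrative 98.5% specificity from upthread (not the study's actual estimate): test 3,330 truly-negative people over and over, and the false positives alone average about 50.

```python
import random

random.seed(0)
n = 3330         # sample size from the study
fp_rate = 0.015  # 1 - specificity, using the illustrative 98.5% figure

# Simulate many samples of n truly-negative people and count how many
# test positive purely from false positives.
trials = 2000
counts = [sum(random.random() < fp_rate for _ in range(n)) for _ in range(trials)]

mean = sum(counts) / trials
print(mean)  # averages about 50: the observed count, with zero true positives
```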

Right

That seems too statistically negligent to believe.

Reread my posts. I didn't say 98.5% was the number. That was just to point out how high a specificity would still be consistent with an expected value of 50 false positives even if the true number of positives were in fact zero.
I discuss the actual estimates later in the post.

Honestly this effort seemed unprofessional to me and I’m only an armchair statistician. lol