I have previously given examples of how an illusion of efficacy over a period of time can be created for a vaccine intended to prevent infection with a virus. For example, this article shows that even if the vaccine is a placebo (i.e. has no effect at all), it will appear to be effective if there is a delay in reporting infections among those vaccinated; the apparent efficacy is just an inevitable statistical illusion. In this article I showed that the same illusion is created if those infected shortly after vaccination are classified as unvaccinated. I also produced a detailed video about the illusion and its implications for assessing Covid vaccine efficacy and safety.
In assessing the efficacy of Covid vaccines in observational studies (such as the large Israel study which claimed 95% efficacy for the Pfizer vaccine), it is now standard to assume that the vaccine takes 14 days to 'work' and hence to classify a person as 'unvaccinated' for the first 14 days after vaccination. But, as the previous example shows, such an approach inevitably exaggerates efficacy.
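To see how the illusion arises, here is a minimal Python sketch of a placebo 'vaccine' (true efficacy 0%) in which infections during the first 14 days after vaccination are recorded as unvaccinated cases. The cohort sizes and the 1% weekly infection risk are purely illustrative assumptions, not figures from any study:

```python
# Toy model: a placebo "vaccine" (true efficacy 0%) appears highly
# effective purely because infections in the first 14 days (2 weeks)
# after vaccination are counted as "unvaccinated" cases.
# All numbers are illustrative assumptions.

WEEKS = 20           # length of follow-up
N_VAX = 100_000      # people given the placebo vaccine in week 0
N_UNVAX = 100_000    # people never vaccinated
P_INFECT = 0.01      # weekly infection risk, identical for everyone

vax_cases = unvax_cases = 0.0
vax_at_risk, unvax_at_risk = float(N_VAX), float(N_UNVAX)

for week in range(1, WEEKS + 1):
    new_vax = vax_at_risk * P_INFECT      # expected infections, vaccinated
    new_unvax = unvax_at_risk * P_INFECT  # expected infections, unvaccinated
    vax_at_risk -= new_vax
    unvax_at_risk -= new_unvax

    if week <= 2:
        # The misclassification: infections within 14 days of the jab
        # are recorded against the unvaccinated group.
        unvax_cases += new_vax + new_unvax
    else:
        vax_cases += new_vax
        unvax_cases += new_unvax

    apparent_efficacy = 1 - (vax_cases / N_VAX) / (unvax_cases / N_UNVAX)
    print(f"week {week:2d}: apparent cumulative efficacy = {apparent_efficacy:6.1%}")
```

The apparent efficacy starts at 100% and then steadily 'wanes' towards its true value of zero, which is exactly the pattern that gets attributed to declining protection and used to justify boosters.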
Because it is such a critical issue, and lots of people still don't 'get it', I have tried to explain as simply as possible in this short video why assuming a person is 'unvaccinated' until 14 days after vaccination is such a problem:
The video also shows how vaccine effectiveness in observational studies is further exaggerated if the vaccinated are less likely to get tested for the virus than the unvaccinated (as happened in the Israel Pfizer study): the undercounting of vaccinated infections inflates the apparent protection.
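A toy calculation makes the testing-bias point concrete. Suppose the true infection rate is identical in both groups (a placebo), but infected vaccinated people get tested at half the rate of infected unvaccinated people. The testing probabilities below are illustrative assumptions, not figures from the Israel study:

```python
# Toy calculation: differential testing alone manufactures apparent
# "effectiveness" for a placebo. Testing probabilities are illustrative
# assumptions, not figures from any real study.

P_INFECT = 0.05          # true infection rate, identical in both groups
TEST_RATE_UNVAX = 0.80   # chance an infected unvaccinated person is tested
TEST_RATE_VAX = 0.40     # chance an infected vaccinated person is tested

detected_unvax = P_INFECT * TEST_RATE_UNVAX  # detected case rate, unvaccinated
detected_vax = P_INFECT * TEST_RATE_VAX      # detected case rate, vaccinated

apparent_efficacy = 1 - detected_vax / detected_unvax
print(f"true efficacy: 0%, apparent efficacy: {apparent_efficacy:.0%}")  # 50%
```

For a placebo, the apparent efficacy is simply 1 minus the ratio of the two testing rates, so any testing imbalance feeds straight into the headline figure, and it compounds with the misclassification bias above.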
Vaccine efficacy: statistical illusion or biochemical reality? A dispute between SI and BR
SI: Prof Fenton et al show that vaccine efficacies as high as 95% can be produced by uncorrected systematic biases in the analysis of observational data, even when the true efficacy is 0% (or even negative). Under these circumstances, the apparent efficacy decreases over time towards the true value. They claim this may explain the observed waning and the need for vaccine boosters. If true, “vaccine protection” is merely a statistical illusion.
BR: But we can measure antibodies to the antigen provided by the vaccine, and follow their development over time (also T cells and other markers of an activated immune response can, and have been, meas…
I presume your calculations are showing "relative risk reduction" (RRR)? But from what I've read, we need to know the "absolute risk reduction" (ARR) in order to establish efficacy in the real world, and the drug companies tend to publish RRR so that their product looks far, far better than it is in reality. Here are a few articles to illustrate what I mean:

- Dr Peter Doshi's paper - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4125239
- The Lancet article - https://www.thelancet.com/journals/lanmic/article/PIIS2666-5247(21)00069-0/fulltext
- Medicina journal - https://www.mdpi.com/1648-9144/57/3/199/htm
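To make the RRR/ARR distinction concrete, here is a quick calculation using the published counts from the Pfizer phase 3 trial (8 cases among roughly 18,200 vaccinated vs 162 among roughly 18,300 placebo recipients), the same numbers discussed in the articles linked above:

```python
# RRR vs ARR for the headline Pfizer trial result: the same data yield a
# dramatic relative figure (~95%) and a tiny absolute one (<1 percentage point).

cases_vax, n_vax = 8, 18_198            # cases / participants, vaccine arm
cases_placebo, n_placebo = 162, 18_325  # cases / participants, placebo arm

risk_vax = cases_vax / n_vax
risk_placebo = cases_placebo / n_placebo

rrr = 1 - risk_vax / risk_placebo   # relative risk reduction
arr = risk_placebo - risk_vax       # absolute risk reduction

print(f"RRR = {rrr:.1%}")   # ~95.0%
print(f"ARR = {arr:.2%}")   # ~0.84 percentage points
print(f"NNV = {1 / arr:.0f} vaccinations per case prevented")  # ~119
```

So the "95%" headline and the "0.84 percentage point" figure come from exactly the same trial data; they just answer different questions.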
I understand this explanation, and it has blown my mind! Can you explain how this issue should be addressed? What is the correct way to perform the analysis? I guess there are statistical methods to address this? Thanks for your time.
Can anyone show me info regarding the imbalanced testing rates in the cited Israel study? I believe this is it: https://www.nejm.org/doi/10.1056/NEJMoa2101765 I don't see any info on testing rates, and as far as I know the data is not public.
Very nice simple explanation even I can understand! :-)