The 0.77% RD is per million miles, the 80% RRR was a math error and should be 67% (which has no time frame), and the revised 23k figure is per year, assuming people continue to drive the same amount.

aph wrote:
gmalivuk wrote:
What if a proposed safety measure resulted in a decrease of 0.77% in the likelihood of dying in a car accident per million miles driven. Would you say that was a useful safety measure? Would you be in favor of it if the implementation and regulation would have the hefty price tag of a billion US dollars per year?

It's actually an 80% reduction, or nearly 28 thousand lives a year. But the point was that without also being told (or guesstimating, as you did) the initial risk, that single 0.77% number that sort of seems small gives basically no information whatsoever.
How did you get the 80% and the 28k?
(edit: deleted my calculations, need to recalculate; is it per million miles or per year?)
Problems in medical research: Tracking outcome switching
Re: Problems in medical research: Tracking outcome switching
That is a tricky one. I don't know how to do a cost-benefit analysis for public policy measures. Any measure that would save several thousand lives sounds like a very good and worthwhile cause, and counting the money seems like putting profit ahead of people's lives, and just plain wrong. But a useful cost-benefit analysis would compare lives saved vs. lives lost, not lives vs. money.
When you are making a car purchase, you know what the side effects of spending more money on safety features will be: you might get a cheaper apartment, or spend less on food or on medical treatments, and that might increase your risk of dying from other causes. If you had enough data, you could compare the increase in risk of dying from other causes against the reduction in risk of dying in a car crash and then decide what to do. The big relative risk just doesn't enter the calculations, while the risk difference does.
My intuition is that something similar could be done with this example of public policy, and that the money spent should be converted into an increase in the risk of dying from other causes. Then if the number of potentially saved lives is greater than the number of potentially lost lives, the policy should be implemented. I don't think that is even possible to calculate with today's economic models, though.
But OK, you've made your point: in this case it is not enough to just report the size of the effect (it would be just a fraction of a percent for decreasing the risk of car crash deaths); you also need the number of lives saved or a similar measure.
Re: Problems in medical research: Tracking outcome switching
I picked $1bn per year because it's only about $43k per life saved. Lost productivity due to early death is estimated by the FDA at something like $7m per person-lifetime, as I recall, which works out to about $100k per year of life lost. Thus, by the way public health measures are already often evaluated, the benefit is much greater than the cost.
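As a quick sanity check, the arithmetic behind those figures can be sketched as follows (all numbers are taken from the posts: the $1bn budget, the revised 23k-lives-per-year estimate, and the roughly $100k-per-life-year FDA figure as recalled above):

```python
# Back-of-the-envelope check of the figures quoted in the thread.
budget = 1_000_000_000   # $1bn per year, from the original hypothetical
lives_saved = 23_000     # per year, the thread's revised estimate

cost_per_life = budget / lives_saved
print(f"Cost per life saved: ${cost_per_life:,.0f}")  # ~ $43,000

# FDA lost-productivity figure as recalled in the post, per year of life lost:
value_per_life_year = 100_000

# The measure breaks even if each averted death preserves this many life-years:
break_even_years = cost_per_life / value_per_life_year
print(f"Break-even at about {break_even_years:.2f} years of life per death averted")
```

Since an averted traffic death typically preserves far more than half a year of life, the benefit dwarfs the cost under these assumptions.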
Re: Problems in medical research: Tracking outcome switching
The "value of life" for these sorts of calculations in the UK is in the region of £1.2 million. Interestingly, the rail industry is investing closer to £2 million per life saved, because of the higher profile of train crashes.
Re: Problems in medical research: Tracking outcome switching
aph wrote:
qetzal wrote:
It sounds like you're conflating two distinct issues here. Publication bias, where negative results don't get published, can make us think an effect is real even when it's not. That's because we might only see the results of the few studies that randomly produce a false positive result, but we don't see the many more studies that generate a true negative.
Found a relevant xkcd: https://xkcd.com/882/
Well, if only the news were published studies.

qetzal wrote:
That's quite different from the situation where a positive result is real, but not large enough to matter. This is where effect size and clinical significance become relevant.
Yes, that would be another situation in which effect size measures would be relevant: we usually don't calculate or interpret the effect size if the study didn't find statistically significant results, but if we have good reason to suspect that there should be some statistically significant effect, we can use effect size measures to approximate the sample size needed to resolve the issue.

I was referring to the importance of calculating the size of the effect for studies that already found statistically significant results. The p-value tells us the effect was likely not due to chance, but it doesn't tell us the more important information of just how large the effect was.
Actually, the p-value does NOT tell you that the effect was not due to chance. The p-value only tells you how likely your result would be if there's actually no difference between the groups. You can have p < 0.01 and the result could still be almost certainly due to chance.
To illustrate, suppose you're doing drug discovery and you want to find an inhibitor for some new receptor. Suppose also that you have a library of 10,000 random compounds that you can test. You select one and compare receptor activity in control and treated cells, and find that activity is inhibited in the treated cells with p = 0.01. What's the chance that that single, randomly selected compound really inhibits the receptor? Answer: That's not enough information to say.
The answer depends on the expected frequency of active compounds in your library. Let's say you know from experience that on average there would be 10 compounds with true activity against any given receptor in this sort of assay. Now imagine that you test every compound in the library. You expect to get 10 true positives, but you can also expect 100 false positives at p <= 0.01. So the chance that any given positive is a true positive is 10/(10+100) = ~9%. So in this scenario, if you only test one compound and it gives a statistically significant effect at p = 0.01, it's still >90% likely to be a false positive.
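The screening arithmetic above can be sketched directly (numbers as given in the post, including its simplifying assumption of perfect power, i.e. no false negatives):

```python
# Positive predictive value of a single "hit" in the hypothetical screen.
library_size = 10_000   # compounds in the library
true_actives = 10       # expected truly active compounds (from experience)
alpha = 0.01            # significance threshold

inactives = library_size - true_actives             # 9,990 inactive compounds
expected_false_positives = alpha * inactives        # ~100 at p <= 0.01
expected_true_positives = true_actives              # assumes no false negatives

ppv = expected_true_positives / (expected_true_positives + expected_false_positives)
print(f"Chance a hit at p <= {alpha} is a true positive: {ppv:.1%}")  # ~9%
```

So roughly 91% of hits at this threshold would be false positives, exactly as the post concludes.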
Note also that this analysis assumes that random statistical variability is the only source of false positives, and that there are no false negatives. Neither of these is even approximately true in most real-world situations, which only makes things worse.
Re: Problems in medical research: Tracking outcome switching
qetzal wrote:
Actually, the p-value does NOT tell you that the effect was not due to chance. The p-value only tells you how likely your result would be if there's actually no difference between the groups. You can have p < 0.01 and the result could still be almost certainly due to chance.
Note the 'likely' in the sentence you were quoting. The effect is the difference between the samples. If we get p < 0.01, that means our difference highly likely (with 99.00 percent certainty) did not happen due to chance or random factors such as sampling error, but due to an existing difference between population groups. And yes, you can have p < 0.01 be due to chance; by definition, this will happen once in a hundred studies. Or five times out of a hundred for p < 0.05.
Nice illustration, though I don't see how you can do statistics with just one test of one compound. How would you calculate the p-value?
Re: Problems in medical research: Tracking outcome switching
aph wrote:
If we get p < 0.01 that means our difference highly likely (with 99.00 percent certainty) did not happen due to chance or random factors such as sampling error, but due to existing difference between population groups.
This is wrong. As a starting point, try the Wikipedia article on p-value, which states:

There are several common misunderstandings about p-values.

1. The p-value is not the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. It is not connected to either. In fact, frequentist statistics does not and cannot attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily); see Lindley's paradox.
Note that last sentence. You can have a p value that's very small, and yet the null hypothesis of no difference can still be almost certain.
A small p value only tells you that it's unlikely you'd get that result if there's actually no difference between groups. By itself, it says nothing about whether another explanation is more likely.
Re: Problems in medical research: Tracking outcome switching
Riiight... so I'm saying that a small p value means that the difference in samples likely happened due to an actual difference between groups, and you are saying the small p value means that the result is unlikely if there wasn't any difference. It is like almost the opposite meaning.
Re: Problems in medical research: Tracking outcome switching
To be blunt, what you're saying is wrong. Obviously I'm explaining poorly, so please do some reading on your own. It's not easy to grasp, and I freely admit it took me quite a while to understand, but it's worth the effort.
Re: Problems in medical research: Tracking outcome switching
aph wrote:
Riiight... so I'm saying that a small p value means that the difference in samples likely happened due to an actual difference between groups, and you are saying the small p value means that the result is unlikely if there wasn't any difference. It is like almost the opposite meaning

It's not the opposite, but it is very different.
The p value is not the probability that the null hypothesis is true. (It's not the probability that there's no difference between groups in a study like the aspirin one.)
The p value is the probability that the observed results would happen if the null hypothesis is true. (It's the probability that the two groups would have rates as far apart as they do, or farther apart, if there is no difference between them.)
But by itself it tells you nothing about the likelihood that there is an actual difference between groups.
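A minimal simulation makes the distinction concrete (an illustrative sketch, not from the thread): draw both groups from the same distribution, so the null hypothesis is true by construction, and count how often a "significant" result still appears. Under a true null, p-values are uniformly distributed, so about 1% of experiments clear p < 0.01 regardless.

```python
# When there is NO difference between groups, ~1% of experiments still
# produce p < 0.01. A small p says the data would be surprising under the
# null; it is not the probability that the null is true.
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def two_sample_p(n=50):
    """Two-sided z-test p-value for two samples drawn from the SAME N(0,1)."""
    m1 = sum(random.gauss(0, 1) for _ in range(n)) / n
    m2 = sum(random.gauss(0, 1) for _ in range(n)) / n
    z = (m1 - m2) / math.sqrt(2 / n)        # exactly N(0,1) under this null
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

trials = 10_000
significant = sum(two_sample_p() < 0.01 for _ in range(trials))
print(f"'Significant' at p < 0.01 with a true null: {significant / trials:.2%}")
```

Whether any one of those significant results reflects a real effect depends on the prior odds that an effect exists, which is exactly the base-rate point made in the drug-screening example earlier in the thread.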
Re: Problems in medical research: Tracking outcome switching
HES wrote:The "value of life" for these sorts of calculations in the UK is in the region of £1.2 million. Interestingly, the rail industry is investing closer to £2 million per life saved, because of the higher profile of train crashes.
NICE pegs 1 QALY (Quality-Adjusted Life Year) at about £30K ($42K).

In practice they'll often spend quite a bit more on individuals, particularly if they're young and if the treatment has good odds of working.
Give a man a fish, he owes you one fish. Teach a man to fish, you give up your monopoly on fisheries.