1132: "Frequentists vs. Bayesians"
Moderators: Moderators General, Prelates, Magistrates

 Posts: 1
 Joined: Fri Nov 09, 2012 1:23 pm UTC
Re: 1132: "Frequentists vs. Bayesians"
Surely this is a good bet, for the simple reason that if our sun went supernova we'd all be dead by the time the neutrinos hit the detector..?
Re: 1132: "Frequentists vs. Bayesians"
My favourite quote, when reading up on Frequentists and Bayesians:
A frequentist is a person whose long-run ambition is to be wrong 5% of the time.
A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule.
Re: 1132: "Frequentists vs. Bayesians"
JediMaster012 wrote:After reading the comments talking about the slim chance of explosion, why would it need to be a "Bayesian Statistician?" Wouldn't the average person tell you there was no chance of the sun having exploded regardless of what someone or something told you? Isn't that just common sense?
Yes. The average person is a Bayesian statistician.
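The arithmetic behind that "common sense" can be sketched with Bayes' rule directly. Here the prior for the sun having just exploded is an arbitrary, purely illustrative tiny number; the 1/36 is the comic's two-dice lie probability:

```python
# Posterior probability the sun went nova, given the detector said "yes".
# p_nova is an assumed, purely illustrative prior; the detector lies
# exactly when both dice come up six (probability 1/36).
p_nova = 1e-12
p_lie = 1.0 / 36.0

# P(yes) = P(yes | nova) P(nova) + P(yes | no nova) P(no nova)
p_yes = (1 - p_lie) * p_nova + p_lie * (1 - p_nova)
posterior = (1 - p_lie) * p_nova / p_yes
print(posterior)  # tiny: the single "yes" barely moves the prior
```

However small you make the prior, one noisy "yes" multiplies the odds by only 35, which is why betting against the detector is the sane move.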
Pingouin7 wrote:pareidolon wrote:On what I assure you is an entirely unrelated topic,
anyone here believe the world's going to end on December 12th?
Why would anyone believe that?
There is no limit to what the human brain can convince itself to believe.
philip1201 wrote:Not everything which maps countable infinities onto finite areas is a Lovecraft reference.

 Posts: 4
 Joined: Sat Jan 03, 2009 4:01 pm UTC
Re: 1132: "Frequentists vs. Bayesians"
Picking an absurdly large threshold for statistical significance is bad frequentist statistics.
Re: 1132: "Frequentists vs. Bayesians"
benhowt wrote:More worrying is this; turns out one judge has decided that the assumption is so worrying that they won't accept its use in court...even people who get it don't get it...
http://www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage
In mitigation (your honour):
And so he decided that Bayes' theorem shouldn't again be used unless the underlying statistics are "firm".
Who gets to decide what's firm, and what isn't, isn't made clear.
Re: 1132: "Frequentists vs. Bayesians"
The distinction between frequentists and bayesians among statisticians isn't really that stark. No practicing statistician believes that Bayes' theorem is never useful, and no Bayesian statistician believes that "frequentist" concepts like p-values are never useful either. What's more, I think that most statisticians would define a "Bayesian" simply to be someone who applies Bayes' rule to their work most of the time, and a "frequentist" as somebody who doesn't. That means a TON of statistical techniques are not explicitly "Bayesian," yet I doubt those statisticians aren't aware of when Bayes' rule is appropriate for certain problems. By the way, a lot of not-explicitly-Bayesian methods can actually be shown to be equivalent to Bayesian methods under certain conditions, so a lot of the time the two approaches end up yielding the same conclusion.
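One standard instance of that equivalence: for a binomial proportion, the posterior mode under a flat prior coincides with the maximum-likelihood estimate. A minimal sketch with toy numbers:

```python
# For x successes in n binomial trials, the MLE of the success
# probability is x/n. Under a flat Beta(1, 1) prior the posterior is
# Beta(x + 1, n - x + 1), whose mode (a - 1) / (a + b - 2) is again
# x/n: a frequentist and a Bayesian answer that coincide.
n, x = 20, 7
mle = x / n
a, b = x + 1, n - x + 1          # posterior Beta parameters
post_mode = (a - 1) / (a + b - 2)
print(mle, post_mode)  # 0.35 0.35
```

With an informative prior the two answers diverge, which is exactly where the philosophical argument starts to matter.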

 Posts: 82
 Joined: Mon Dec 20, 2010 5:28 am UTC
Re: 1132: "Frequentists vs. Bayesians"
EpicanicusStrikes wrote:J Thomas wrote:What this tells MDs is to ignore rare diseases.
Mindsets like this are lethal. My father died because his lung cancer was treated as pneumonia and not caught until it was too late. My uncle died because his esophageal cancer was treated as GERD until it was too late. My mother died because even though they caught her breast cancer in time, she developed pyoderma gangrenosum which was treated as vasculitis.
The treatment for vasculitis is debridement, which is the best way possible to ensure the spread of PG. But of course they didn't realize that until, once again, it was too late.
No actual comment on the comic. I just really fucking hate the medical industry.
I am sorry about your losses. I would argue that lung cancer and esophageal cancer are not rare and are not indicative of ignoring rare diseases. Unless there are some mitigating circumstances, the MD(s) involved should be sued and possibly removed from the field.
Re: 1132: "Frequentists vs. Bayesians"
@EpicanicusStrikes: If there was evidence that a person had ailment A and not ailment B, then nothing that anyone has said in this thread should compel a physician to continue to assume that they had B.
philip1201 wrote:Not everything which maps countable infinities onto finite areas is a Lovecraft reference.
Re: 1132: "Frequentists vs. Bayesians"
david_h wrote:benhowt wrote:More worrying is this; turns out one judge has decided that the assumption is so worrying that they won't accept its use in court...even people who get it don't get it...
http://www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage
In mitigation (your honour): And so he decided that Bayes' theorem shouldn't again be used unless the underlying statistics are "firm".
Who gets to decide what's firm, and what isn't, isn't made clear.
Yeah, if you read the article, it contradicts the headline, and, I hear, if you track down the original judgement, even the article misrepresents it. Apparently the problem is not with the use of Bayes theorem, nor with the use of "floppy" statistics where the entire plausible range is taken into account, but with an "expert" statistician taking the stand and declaring that his calculations show that it's almost certain the vile miscreant dunnit without bothering to mention that his calculations are based on figures pulled out of thin air...
 EpicanicusStrikes
 Random Boners = True Attraction
 Posts: 130
 Joined: Wed Nov 16, 2011 11:36 am UTC
Re: 1132: "Frequentists vs. Bayesians"
SerialTroll wrote: I would argue that lung cancer and esophageal cancer are not rare and are not indicative of ignoring rare diseases. Unless there are some mitigating circumstances, the MD(s) involved should be sued and possibly removed from the field.
My dad went in the early '70s, and it was likely a rarer condition then than it is now. Plus he was in his late 20s when it occurred, which puts it even further off their radar. As for my uncle? That was just a few years ago, but it's not like esophagoscopies are regularly scheduled for people in their early 40s regardless of symptoms. Most GERD patients are given Prevacid and sent away.
But the condition doesn't even have to be that rare. My point is in substituting statistics for actual diagnostic work. Treating people by way of formula. It's everywhere. Consult the averages, identify the common course and never deviate. Finding a genuine diagnostician outside of places like Mayo is damn near impossible.
That's what I'm saying is a bad thing. And yeah, there have been plenty of opportunities for lawsuits throughout. But what's the point? Spend years in court dragging out the emotional turmoil only to, maybe, get some cash out of it? The doctor will have documentation showing reasonable course and continue to practice. No policies will have been changed, and the industry will continue to treat based on the medical equivalent of actuarial charts.
"@EpicanicusStrikes: If there was evidence that a person had ailment A and not ailment B, then nothing that anyone has said in this thread should compel a physician to continue to assume that they had B."
Ah, but what if there is evidence that ailment A is a misdiagnosis? Such as invasive vasculitis? Radiation treatment does not destroy large muscle groups unless it was applied using a microwave oven. So they had evidence that their ailment A assumption was wrong. That means they need to look further, rather than shrugging and saying "well that's weird" and carrying on as if ailment B doesn't even exist. It's been claimed that statistics are a good tool in diagnosing medical conditions.
I'm ranting because statistics are overused and actual research needs to be employed as well.
Re: 1132: "Frequentists vs. Bayesians"
EpicanicusStrikes wrote:Oktalist wrote:If there was evidence that a person had ailment A and not ailment B, then nothing that anyone has said in this thread should compel a physician to continue to assume that they had B.
Ah, but what if there is evidence that ailment A is a misdiagnosis? Such as invasive vasculitis? Radiation treatment does not destroy large muscle groups unless it was applied using a microwave oven. So they had evidence that their ailment A assumption was wrong. That means they need to look further, rather than shrugging and saying "well that's weird" and carrying on as if ailment B doesn't even exist.
I don't follow. Did you get A and B mixed up at the end there?
Ailment A is the true one. B is the misdiagnosis. I was saying that (i) you are correct, and (ii) you have not contradicted anything that anyone has said here.
If there's conflicting evidence on both sides, then you do more investigation. If you've exhausted all your investigative options, then you may have to bite the bullet and just bet that the more common ailment is the one you've got.
It's been claimed that statistics are a good tool in diagnosing medical conditions.
No, it's been claimed that statistics are a good tool in choosing which tests to perform (the ones that may tell you something useful) and which tests not to perform (the ones that will not tell you anything useful).
Last edited by Oktalist on Fri Nov 09, 2012 3:15 pm UTC, edited 1 time in total.
philip1201 wrote:Not everything which maps countable infinities onto finite areas is a Lovecraft reference.
Re: 1132: "Frequentists vs. Bayesians"
XKCD should be officially renamed "Me So Smart". This strip can be amusing at times but for the most part it's just incredibly pretentious. Some say it used to be better. Maybe. All I know is that now it seems like the purpose of most of the strips is for the author to let us all know how smart he is and for his readers to let others know how smart they are by emailing the strip (i.e. "See how smart I am? I think jokes about Bayesian vs Frequentist statistics are funny! Impressed? Ha! Bet you don't get it, do ya? Tee hee! Me so smart!" etc.).
This strip just isn't funny or witty. I get it; he knows about Bayesian vs Frequentist statistics. We're all mighty impressed down here I can tell you.
Now geeks across the world can email one another this to show one another how smart they are (and what lame senses of humor they have).
I imagine (what's his name, Randall? something?) sitting up at night googling abstruse science terms he saw in a pop sci book or cable documentary, to learn enough about them to name-drop them the next day:
"Say, tomorrow I want to let people know that I know about... what was that thing called again? ... Oh yeah! The fine structure constant! OK, so, how can I work that into a strip? Hmmm..."
Let's be real; this is what xkcd is all about now, isn't it?

 Posts: 1
 Joined: Fri Nov 09, 2012 3:07 pm UTC
Re: 1132: "Frequentists vs. Bayesians"
So I fb'd this for my "nerdier" friends. Someone said they didn't get it and "is that a bad thing?" Of course I told her it depended on her priors. BWAHAHA!
Re: 1132: "Frequentists vs. Bayesians"
Not to derail the current discussion (though it sounds like the problem was less one of doctors using statistics and more one of them using stats incorrectly and not updating based on evidence), and not to attack Randall's art, but isn't it perhaps a bit ironic that it's the frequentist who has his head more firmly attached to his shoulders, so to speak?

 Posts: 109
 Joined: Mon Aug 23, 2010 3:00 pm UTC
Re: 1132: "Frequentists vs. Bayesians"
Looks like Randall stayed up late reading Nate Silver's book.
Personally, I doubt that there is a schism among statisticians like what Silver describes in the book. Probability & statistics is a very large field, and misusing any single aspect of it will always get you in trouble.
 SecondTalon
 SexyTalon
 Posts: 26508
 Joined: Sat May 05, 2007 2:10 pm UTC
 Location: Louisville, Kentucky, USA, Mars. HA!
 Contact:
Re: 1132: "Frequentists vs. Bayesians"
xkcdtag wrote:XKCD should be officially renamed "Me So Smart". This strip can be amusing at times but for the most part it's just incredibly pretentious. Some say it used to be better. Maybe. All I know is that now it seems like the purpose of most of the strips is for the author to let us all know how smart he is and for his readers to let others know how smart they are by emailing the strip (i.e. "See how smart I am? I think jokes about Bayesian vs Frequentist statistics are funny! Impressed? Ha! Bet you don't get it, do ya? Tee hee! Me so smart!" etc.).
This strip just isn't funny or witty. I get it; he knows about Bayesian vs Frequentist statistics. We're all mighty impressed down here I can tell you.
Now geeks across the world can email one another this to show one another how smart they are (and what lame senses of humor they have).
I imagine (what's his name, Randall? something?) sitting up at night googling abstruse science terms he saw in a pop sci book or cable documentary, to learn enough about them to name-drop them the next day:
"Say, tomorrow I want to let people know that I know about... what was that thing called again? ... Oh yeah! The fine structure constant! OK, so, how can I work that into a strip? Hmmm..."
Let's be real; this is what xkcd is all about now, isn't it?
We're on to you, SirMustapha.
ALTERNATE JOKE
Holy shit, this forum has a webcomic!?
heuristically_alone wrote:I want to write a DnD campaign and play it by myself and DM it myself.
heuristically_alone wrote:I have been informed that this is called writing a book.
 EpicanicusStrikes
 Random Boners = True Attraction
 Posts: 130
 Joined: Wed Nov 16, 2011 11:36 am UTC
Re: 1132: "Frequentists vs. Bayesians"
Oktalist wrote:EpicanicusStrikes wrote:Oktalist wrote:If there was evidence that a person had ailment A and not ailment B, then nothing that anyone has said in this thread should compel a physician to continue to assume that they had B.
Ah, but what if there is evidence that ailment A is a misdiagnosis? Such as invasive vasculitis? Radiation treatment does not destroy large muscle groups unless it was applied using a microwave oven. So they had evidence that their ailment A assumption was wrong. That means they need to look further, rather than shrugging and saying "well that's weird" and carrying on as if ailment B doesn't even exist.
I don't follow. Did you get A and B mixed up at the end there?
Ailment A is the true one. B is the misdiagnosis. I was saying that (i) you are correct, and (ii) you have not contradicted anything that anyone has said here.
If there's conflicting evidence on both sides, then you do more investigation. If you've exhausted all your investigative options, then you may have to bite the bullet and just bet that the more common ailment is the one you've got.
It's been claimed that statistics are a good tool in diagnosing medical conditions.
No, it's been claimed that statistics are a good tool in choosing which tests to perform (the ones that may tell you something useful) and which tests not to perform (the ones that will not tell you anything useful).
Keep in mind that I did preface this with a claim that it's not so much about the comic, but rather venom being spewed at the laziness of the medical industry. Perhaps I did get my As and Bs inverted, but my point is in the lack of investigation. Statistics should point towards a valid selection of tests. Unfortunately, statistics are being used in place of tests.
"No need to run tests, my procedure manual says you need Vancomycin because 90% of patients respond well. Here ya's go. Oh, sure, a culture would let us know that it's resistant to Vanc, but that's just research. We don't do that here."
Rhyme has it right: stats being used incorrectly, and procedures being pushed despite contradictory evidence simply because the odds of their stats being misleading are considered tolerably low.

 Posts: 256
 Joined: Wed Feb 25, 2009 5:36 am UTC
Re: 1132: "Frequentists vs. Bayesians"
gogurt wrote:The distinction between frequentists and bayesians among statisticians isn't really that stark. No practicing statistician believes that Bayes' theorem is never useful, and no Bayesian statistician believes that "frequentist" concepts like p-values are never useful either. What's more, I think that most statisticians would define a "Bayesian" simply to be someone who applies Bayes' rule to their work most of the time, and a "frequentist" as somebody who doesn't. That means a TON of statistical techniques are not explicitly "Bayesian," yet I doubt those statisticians aren't aware of when Bayes' rule is appropriate for certain problems. By the way, a lot of not-explicitly-Bayesian methods can actually be shown to be equivalent to Bayesian methods under certain conditions, so a lot of the time the two approaches end up yielding the same conclusion.
The way I find best is to actually divide these things up. If you are dealing with classes of things with certain frequencies (as you would with actuarial tables, for instance), this is generally a more frequentist approach. You are actually describing real-world frequencies (1 in 1 million will get cancer) based on prior frequency statistics and assumptions of continuity. If, however, you are dealing with unique events, like an election, "degrees of certainty" in metaphor are all you really have, but at that point the math is actually not applicable. To say that, for instance, Obama had a 90% chance of winning the day before the election is merely to express a high degree of belief in his victory, but it doesn't actually say anything different than to say he had a 92% or 86% chance.
Bayes' theorem is mathematically useful, and I know of no "frequentist" who rejects the use of it. The associated epistemological interpretation is generally false once one stops being a mathematician or abstract statistician and starts dealing with the real world, where probabilities and their distributions are determined based on frequencies. (The whole training phase of an object recognition system is this building up of frequencies until the expected error is low enough to proceed.) Basically, I think this approach here is the correct one.
 doogly
 Dr. The Juggernaut of Touching Himself
 Posts: 5526
 Joined: Mon Oct 23, 2006 2:31 am UTC
 Location: Lexington, MA
 Contact:
Re: 1132: "Frequentists vs. Bayesians"
collegestudent22 wrote: To say that, for instance, Obama had a 90% chance of winning the day before the election is merely to express a high degree of belief in his victory, but it doesn't actually say anything different than to say he had a 92% or 86% chance.
This is false.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.
Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?
Re: 1132: "Frequentists vs. Bayesians"
As somebody involved academically in medical epidemiology, I feel compelled to contribute to the discussion of statistics w/r/t diagnosis.
First: physicians are not trained to ignore zebras. Doctors love zebras! Zebras are interesting, and your thousandth diabetic of the day isn't. They're explicitly trained to restrain themselves from diagnosing zebras in the absence of compelling evidence because, given that every disease known to man has a heterogeneous set of symptoms, an uncommon manifestation of a very common illness is going to be the correct diagnosis more often than the common manifestation of a very uncommon illness. While everyone who happens to suffer from such a diagnosis is correct in being sad about this, the fact is that, by the numbers, this approach helps far, far more patients than it harms. The reverse is not true. Physicians who go looking for zebras tend to do far, far more harm than good.
I know you want to say, well, just calculate the probability of the common-looking uncommon against... blah. They do not, and for one simple reason: physicians must be at least vaguely competent with far more conditions and illnesses than one could possibly memorize stats for, with uncommon presentations that do not even have descriptive statistics compiled, never mind available for memorization, never mind computing conditional probabilities on the fly. Until computer-aided diagnosis becomes the standard, combined with widespread collection of data from EMRs, this sort of fuzzy, qualitative approach will continue to be the standard. Not because docs don't know it's flawed, or don't care, but because the technology is only just getting there.
Now, beyond that, the statistics you guys hail as creating "useless tests" are screening tests. Yes, screenings are problematic: there's a large swath of the medical community that has become very critical of screening tests. Those criticisms don't really apply as well to diagnostics. And here's why:
Naive probability of having disease X = P
Probability of having that disease just because you walked into a medical office: >P, but we don't know how much
Then the doctor looks at your patient history. Most illnesses don't come out of the blue: approximately 80% of diagnoses can usually be deduced from a patient's history, and honestly, I haven't seen a study on this, but I suspect if you exclude the ER (with its higher proportion of acute infection and trauma) that number would jump for the rest of the medical field. This is your first-order diagnostic test: low specificity, middling sensitivity. It is as much, from a probability standpoint, part of the series of diagnostic tests as anything that comes in a kit.
Then, the doctor looks at you. His glance at your symptoms is your second diagnostic test. Slightly higher specificity, fairly high sensitivity.
Then comes the initial round of diagnostics. Bloodwork and the like. As anyone who's been involved as a patient knows, this is rarely the last round. And yet, dealing with the "accuracy of tests", note that we're now at our /third/ stage of diagnosis, having already attempted to weed out those conditions that are highly improbable given the results of the first two. This, by the way, is how screenings work when they work correctly: you narrow down the target population to as high-risk a demographic as possible, and you ensure that screening positives are followed up by something with a relatively high specificity, to account for the high proportion of false positives that arises.
Never assume that the accuracy of a single test determines whether it's useful or not. If you don't factor in the stages of diagnostics preceding it, you are doing it a great disservice.
Also, doctors never look at the cost of a test in our system, unless it's something they can bill for under FFS. Certainly the idea that a test isn't cost-effective isn't something they care about (well, under FFS at least). I admit that ACOs/capitation plans are better at getting docs to consider whether tests are actually cost-effective.
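The staged picture above can be sketched as sequential Bayesian updating, each stage multiplying the odds by its positive likelihood ratio. Every sensitivity and specificity below is invented purely for illustration:

```python
# Diagnosis as staged Bayesian updating: each stage multiplies the
# odds by its positive likelihood ratio, LR+ = sens / (1 - spec).
# Every number here is invented purely for illustration.
def update_odds(odds, sens, spec):
    return odds * sens / (1 - spec)

prior = 0.01                     # assumed pre-test probability
odds = prior / (1 - prior)
stages = [
    (0.60, 0.50),                # history: middling sens, low spec
    (0.70, 0.70),                # physical exam
    (0.90, 0.95),                # bloodwork
]
for sens, spec in stages:
    odds = update_odds(odds, sens, spec)

posterior = odds / (1 + odds)
print(round(posterior, 3))  # a 1% prior climbs to roughly a coin flip
```

The point the post makes falls straight out of the math: the bloodwork's "accuracy" only looks poor if you judge it against the naive prior rather than against the odds already accumulated by the earlier stages.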

 Posts: 1478
 Joined: Sun Nov 05, 2006 12:49 am UTC
Re: 1132: "Frequentists vs. Bayesians"
Stargazer71 wrote:Looks like Randall stayed up late reading Nate Silver's book.
Dangit, that's what I came here to post. I read it last week, good stuff.
Re: 1132: "Frequentists vs. Bayesians"
In response to the alt-text, probably the hardest (solvable) variant on the two guards problem I know of:
(source: http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_hard;action=display;num=1028840975;start=0)
There are three omniscient gods sitting in a chamber: GibberKnight, GibberKnave, and GibberKnexus, the gods of the knights, knaves, and knexuses of Gibberland. Knights always answer the truth, knaves always lie, and knexuses always answer the xor of what the knight and knave would answer.
Unfortunately, the language spoken in Gibberland is so unintelligible that not only do you not know which words correspond to "yes" and "no", but you don't even know what the two words that represent them are! All you know is that there is only one word for each.
With three questions, determine which god is which.
[Notes:
Standard: (Rules that are generally assumed unless otherwise noted.) The gods only answer yes/no questions. Each god answers in the single word of their language as appropriate to the question; i.e. each god always gives one of only two possible responses, one affirmative and one negative (e.g. they would always answer "Yes" rather than "That would be true"). Each question asked must be addressed to a single specific god; asking one question to all the gods would constitute three questions. Asking a single god multiple questions is permissible. The question you choose to ask and the god you choose to address may be dynamically chosen based on the answers to previous questions.
Specific: Because of possible loop conflicts, you may not ask any questions regarding how a knexus would answer.]
Re: 1132: "Frequentists vs. Bayesians"
collegestudent22 wrote:The way I find best is to actually divide these things up. If you are dealing with classes of things with certain frequencies (as you would with actuarial tables, for instance), this is generally a more frequentist approach. You are actually describing real-world frequencies (1 in 1 million will get cancer) based on prior frequency statistics and assumptions of continuity. If, however, you are dealing with unique events, like an election, "degrees of certainty" in metaphor are all you really have, but at that point the math is actually not applicable. To say that, for instance, Obama had a 90% chance of winning the day before the election is merely to express a high degree of belief in his victory, but it doesn't actually say anything different than to say he had a 92% or 86% chance.
Actually I think you are conflating two different concepts here. What you are referring to is the Bayesian versus frequentist interpretation of probability. That is a separate (kiiiinda related, but not really) topic from Bayesian versus frequentist inference. I don't blame you though, the way statistics is taught nowadays most people never realize the difference between probability and inference. They are two pretty different beasts.
 Username4242
 Posts: 168
 Joined: Fri May 01, 2009 9:03 pm UTC
 Location: (Previously) Montana State UniversityBozeman, Montana.
Re: 1132: "Frequentists vs. Bayesians"
Joke I posted on Facebook two days ago, motivated by people who were treating the upcoming American election as some kind of proof of Nate Silver's credibility:
==========
A Bayesian and an anti-Bayesian walk into a bar and see a pair of dice on the table.
The Bayesian says, "There's only about a three percent chance of rolling double sixes."
The anti-Bayesian says, "If I roll double sixes on the next roll it'll prove you wrong! Look, I did it! Hah! What are the odds of that, BrightLight?"
"About three percent."
=====
This is the first time I've ever had a "Get out of my head, Randal" moment.
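For anyone who wants to check the punchline, the "about three percent" is just the chance of double sixes on two fair dice, 1/36. A minimal sketch:

```python
from fractions import Fraction

# One winning outcome (6,6) out of 6 * 6 equally likely outcomes.
p_double_six = Fraction(1, 36)

print(float(p_double_six))  # 0.0277... i.e. "about three percent"

# The dice are memoryless: the probability of double sixes on the
# *next* roll is the same 1/36 whether or not they just came up --
# which is exactly the point of the punchline.
```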
Coming on Midsummer's Day to a Web Browser Near You: http://www.songsofalbion.com
Re: 1132: "Frequentists vs. Bayesians"
pareidolon wrote:anyone here believe the world's going to end on December 12th?
The world ended for about 50 Mayans on November 7, during the Guatemala earthquake. No doubt the world will end for a few Mayans on December 12th, as it does every day (agree both the Frequentists and the Bayesians) though both might overestimate a bit if they began their observations on November 7.
The claim that the box detects supernova-generated neutrino bursts is disputable. How would one calibrate and test it? Laplace developed the actual mathematics we call "Bayes", and he used it to evaluate the accuracy of astronomical observations. A good Bayesian would very much doubt that the box is >99.99999999999% accurate, which it must be to reliably observe a supernova neutrino event (or even the result of a dice roll).
Chances are, somebody forgot to plug in a cable.
Re: 1132: "Frequentists vs. Bayesians"
miedvied wrote:As somebody involved academically in medical epidemiology, I feel compelled to contribute to the discussion of statistics w/r/t diagnosis.
First: physicians are not trained to ignore zebras. Doctors love zebras! Zebras are interesting, and your thousandth diabetic of the day isn't. They're explicitly trained to restrain themselves from diagnosing zebras in the absence of compelling evidence because, given that every disease known to man has a heterogeneous set of symptoms, an uncommon manifestation of a very common illness is going to be the correct diagnosis more often than the common manifestation of a very uncommon illness. While everyone who happens to suffer from such an illness is correct in being sad about this, the fact is that, by the numbers, this helps far, far more patients than it harms. The reverse is not true. Physicians who go looking for zebras tend to do far, far more harm than good.
Thank you. I was trying to say this, but I couldn't figure out a way to say it that wouldn't make me sound repellant and inhuman.
I know you want to say, well, just calculate the probability of the common-looking-uncommon against... blah. They do not, and for one simple reason: physicians must be at least vaguely competent with far more conditions and illnesses than one could possibly memorize stats for, with uncommon presentations that do not even have descriptive statistics compiled, never mind available for memorization, never mind computing conditional probabilities on the fly. Until computer-aided diagnosis becomes the standard, combined with widespread collection of data from EMRs, this sort of fuzzy, qualitative approach will continue to be the standard. Not because docs don't know it's flawed, or don't care, but because the technology is only just getting there.
This sounds like something we're long overdue for. (By long overdue I mean possibly as much as 10 years, or more likely 5 years.) Ideally, anybody could get onto the internet and describe their symptoms, and whatever of their history they think matters, and get a breakdown of what has already happened with as many cases as fit their description. We would of course have a whole lot of people doing crazy selfdiagnosis, and it would be an excellent chance for practically everybody to learn about statistics.
Now, beyond that, the statistics you guys hail as creating "useless tests" are screening tests. Yes, screenings are problematic: there's a large swath of the medical community that has become very critical of screening tests. Those criticisms don't really apply as well to diagnostics. And here's why: ....
If you have gotten to the point that a particular diagnosis seems to be about one chance in a thousand, at that point a test that's good to one in a hundred won't do a whole lot of good. Maybe it will do some good. But how many one-in-a-thousand possibilities do you have? Depending on how much research has been done on rare conditions that fit, possibly a hundred of them. Do you want to do a hundred one-in-a-thousand tests?
I think you are more likely to do the test if you suspect there's a 1% chance. Or if it's a 10% chance, then very likely. If it's one of three main choices, almost certainly. When it looks like 0.1%, it's a zebra.
My other trite obvious point was that highly specific tests are better, when they're available. If you have a good immunological test or a good genetic test, then it's somewhat unlikely to get a false negative and extremely unlikely to get a false positive. If the particular bacterial or viral genetic material is there in high enough concentration to cause symptoms, then that is not a false positive. So a lot of this back-and-forth wishy-washy stuff goes away, for those particular tests.
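The screening-test point being discussed here is easy to make concrete with Bayes' theorem. The numbers below are purely hypothetical (a 1-in-1000 condition and a test that is 99% sensitive and 99% specific), chosen only to illustrate why a positive result for a rare condition is still usually a false positive:

```python
# Hypothetical illustration only -- not statistics for any real test.
prevalence = 0.001    # P(condition): 1 in 1000
sensitivity = 0.99    # P(positive | condition)
specificity = 0.99    # P(negative | no condition)

# Total probability of a positive result (true + false positives).
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# Bayes' theorem: P(condition | positive result).
posterior = sensitivity * prevalence / p_positive
print(round(posterior, 3))  # ~0.09: roughly 9 in 10 positives are false
```

Even an accurate test is overwhelmed by the base rate here, which is why the same test can be fine as a diagnostic (applied when prior suspicion is already high) and poor as a mass screening tool.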
The Law of Fives is true. I see it everywhere I look for it.
 mathmannix
 Posts: 1445
 Joined: Fri Jul 06, 2012 2:12 pm UTC
 Location: Washington, DC
Re: 1132: "Frequentists vs. Bayesians"
Pingouin7 wrote:pareidolon wrote:On what I assure you is an entirely unrelated topic,
anyone here believe the world's going to end on December 12th?
Why would anyone believe that?
Yeah, everyone knows it's 21 December!
Edit: Also, I love zebras too. They're like horses, only with stripes. And more likely to bite you.
I hear velociraptor tastes like chicken.
Re: 1132: "Frequentists vs. Bayesians"
radtea wrote:This is the first time I've ever had a "Get out of my head, Randal" moment.
What are the odds of that?
philip1201 wrote:Not everything which maps countable infinities onto finite areas is a Lovecraft reference.
Re: 1132: "Frequentists vs. Bayesians"
I think this is a little unfair. As others have noted, it's not as though frequentists are honour-bound to ignore Bayes' Theorem. In fact, they're very well aware of exactly the problem this comic illustrates (though the example most often used is the medical test scenario others have posted about). At least, many of them are, and certainly most (if not all) of those who are actually statisticians, although certainly you do get plenty of scientist/social scientist users of frequentist statistics who are not aware of this (as well as plenty who are). For this reason the frequentist statisticians and more expert users are careful in their phrasing never to say that the p-value is the chance of the null hypothesis being correct (which is recognised to be very much not true), but always to say that it's the chance of observing such a result given that the null hypothesis is correct. (Again there certainly are some less expert users who do fall into that trap.)
I think if this were to happen in real life, the frequentist would formally reject the null hypothesis, but he/she would also be very aware that the sun going nova at this point in time is an unlikely event, so would suggest running the test again several more times. That would bring the chance of rejecting the null hypothesis given that it's actually true down from 1/36 to (1/36)^n, where n is the number of times the test is run. Of course it could still happen that the detector rolled double sixes every time, but it makes it more unlikely. Performing the test 5 times would give a one in 60466176 chance of being wrong. If that 1 in 60466176 chance does come up, well that's just too bad.
Also, I'm not sure whether P(the sun goes nova at this point in time) is something that can be calculated from the astronomical observations available to us? If it is, then that's information that should be equally available to both the frequentist and the Bayesian; the Bayesian will use it as a prior while the frequentist will perform the test and then use Bayes' Theorem to calculate the probability of the sun having gone nova given the test result, and they should get the same thing (although the Bayesian's analysis will have incorporated the uncertainty in the estimate of the prior probability of the sun going nova). Either of them could use the prior probability to calculate how many times it would be necessary to perform the test in order for there to be only a 5% chance of the sun not actually having gone nova given all "YES" results. And then they could perform the test that many times.
If that probability is not something that is available to either, then in the comic the Bayesian is only going on "well, it's much less than 1/36", and as I said, the frequentist would also be aware of the issue. If they did perform the test five times, say, the Bayesian might be less confident that the prior probability of the sun going nova now is less than 1/60466176 and not so willing to make the bet.
In real research, the difference between frequentist and Bayesian approaches is that the Bayesian will incorporate the results of previous research into the analysis as a prior while the frequentist would perform the analysis in isolation (but discuss previous results in the paper) and eventually someone will (hopefully) perform a meta-analysis including that work along with all the rest on the same question. So it's just a question of at what stage of the process it all gets put together. Of course, if there has been no previous research on a particular question, the Bayesian will have to use non-informative priors. The important point is that no good scientist would ever consider one experiment/study/analysis enough to definitively answer a question. It's not great to have a 5% chance of being wrong if you only take one stab at an important question, but it's ok to be wrong 5% of the time if you keep on attempting to answer it.
(Edited because 0.05 != 20%, doh!)
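The arithmetic behind running the test n times is straightforward: the detector only misleads us if it rolls double sixes on every run, so the false-rejection probability shrinks as (1/36)^n. A quick check of the 1-in-60466176 figure:

```python
from fractions import Fraction

p_lie = Fraction(1, 36)  # detector lies on a single run (double sixes)

# Probability it lies on every one of n independent runs.
for n in (1, 2, 5):
    print(n, p_lie ** n)

# Five runs gives exactly the figure quoted in the post above.
assert p_lie ** 5 == Fraction(1, 60466176)
```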
Last edited by neremanth on Tue Nov 13, 2012 9:26 pm UTC, edited 1 time in total.

 Posts: 123
 Joined: Mon Jun 08, 2009 12:56 pm UTC
Re: 1132: "Frequentists vs. Bayesians"
Second best variation off of the old riddle I've seen... mainly because it gets interrupted and is buried in title-text.
My favorite is here:
http://www.giantitp.com/comics/oots0327.html

 Posts: 82
 Joined: Mon Dec 20, 2010 5:28 am UTC
Re: 1132: "Frequentists vs. Bayesians"
xkcdtag wrote:XKCD should be officially renamed "Me So Smart". This strip can be amusing at times but for the most part it's just incredibly pretentious. Some say it used to be better. Maybe. All I know is that now it seems like the purpose of most of the strips is for the author to let us all know how smart he is and for his readers to let others know how smart they are by emailing the strip (i.e. "See how smart I am? I think jokes about Bayesian vs Frequentist statistics are funny! Impressed? Ha! Bet you don't get it, do ya? Tee hee! Me so smart!" etc.).
This strip just isn't funny or witty. I get it; he knows about Bayesian vs Frequentist statistics. We're all mighty impressed down here I can tell you.
Now geeks across the world can email one another this to show one another how smart they are (and what lame senses of humor they have).
I imagine (what's his name, Randall? something?) sitting up at night googling abstruse science terms he saw in a pop sci book or cable documentary to learn enough about it to name-drop it the next day:
"Say, tomorrow I want to let people know that I know about... what was that thing called again? Oh yeah! The fine structure constant! OK, so, how can I work that into a strip? Hmmm..."
Let's be real; this is what xkcd is all about now, isn't it?
Admittedly this wasn't the best strip of all time, but if you don't enjoy it, why do you read it, find the unmarked forums, create a log in, and then post here? That doesn't sound like an intelligent use of time.

 Posts: 82
 Joined: Mon Dec 20, 2010 5:28 am UTC
Re: 1132: "Frequentists vs. Bayesians"
Carteeg_Struve wrote:Second best variation off of the old riddle I've seen... mainly because it gets interrupted and is buried in title-text.
My favorite is here:
http://www.giantitp.com/comics/oots0327.html
I love this comic. Thanks for sharing it!
Re: 1132: "Frequentists vs. Bayesians"
Alright, help me out here... I've been doing statistics and p-values for a while and I think there is an honest error in this comic.
If the machine says yes, there is a 1 in 36 chance (.027) that this is due to the dice and thus that no is the truth.
If the machine says no, there is equally a 1 in 36 (.027) chance that it is actually a yes.
Thus statistics requires a two-tail test, and with the upper and lower bounds each including this .027, that's a total of .054.
P is therefore greater than .05, and the test fails.
Am I missing something here? I know it's pedantic (you could use 3 dice and get around it), but for the sake of accuracy, that bit of logic got me thinking.

 Posts: 1
 Joined: Fri Nov 09, 2012 9:34 pm UTC
Re: 1132: "Frequentists vs. Bayesians"
Hp: The chance that the Sun just exploded, p(exp), is at most 1e-20.
p(exp) = 1e-20
p(nexp) = 1 - p(exp)
p(YES|exp) = 35/36
p(YES|nexp) = 1/36
p(NO|exp) = 1/36
p(NO|nexp) = 35/36
p(YES) = 35/36 * 1e-20 + 1/36 * (1 - 1e-20) ~= 1/36
p(NO) = 1/36 * 1e-20 + 35/36 * (1 - 1e-20) ~= 35/36
p(YES) ~= p(YES|nexp)
p(NO) ~= p(NO|nexp)
Th: there is no correlation between the explosion of the sun and the answer of the machine.
Yes, I'm bored.
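The derivation above is easy to verify numerically. A quick sketch, using the same prior of 1e-20:

```python
p_exp = 1e-20            # prior: the sun just exploded
p_yes_exp = 35 / 36      # p(YES | exploded): machine tells the truth
p_yes_nexp = 1 / 36      # p(YES | not exploded): machine lies

# Law of total probability for each possible answer.
p_yes = p_yes_exp * p_exp + p_yes_nexp * (1 - p_exp)
p_no = (1 / 36) * p_exp + (35 / 36) * (1 - p_exp)

# At this prior the answer carries essentially no information:
# p(YES) is numerically indistinguishable from p(YES | not exploded).
print(abs(p_yes - p_yes_nexp) < 1e-19)  # True
print(abs(p_no - 35 / 36) < 1e-19)      # True
```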
 bmonk
 Posts: 662
 Joined: Thu Feb 18, 2010 10:14 pm UTC
 Location: Schitzoed in the OTT between the 2100s and the late 900s. Hoping for singularity.
Re: 1132: "Frequentists vs. Bayesians"
kissmyawesome wrote:Surely this a good bet for the simple reason that if our sun went supernova we'd all be dead by the time the neutrinos hit the detector..?
Not really. The photon pressure would hold the sun together for a short time, while the neutrinos are released very quickly; hence the detector will give a short warning that the fit is about to hit the shan. Indeed, it has already done so less than 100 million miles away.
SerialTroll wrote:Admittedly this wasn't the best strip of all time, but if you don't enjoy it, why do you read it, find the unmarked forums, create a log in, and then post here? That doesn't sound like an intelligent use of time.
Yeah, but a troll has got to get attention somehow, doesn't he?
Having become a Wizard on n.p. 2183, the Yellow Piggy retroactively appointed his honorable self a Temporal Wizardly Piggy on n.p.1488, not to be effective until n.p. 2183, thereby avoiding a partial temporal paradox. Since he couldn't afford two philosophical PhDs to rule on the title.
Re: 1132: "Frequentists vs. Bayesians"
KaWraith wrote:Alright, help me out here... I've been doing statistics and p-values for a while and I think there is an honest error in this comic.
If the machine says yes, there is a 1 in 36 chance (.027) that this is due to the dice and thus that no is the truth.
If the machine says no, there is equally a 1 in 36 (.027) chance that it is actually a yes.
Thus statistics requires a two-tail test, and with the upper and lower bounds each including this .027, that's a total of .054.
P is therefore greater than .05, and the test fails.
Am I missing something here? I know it's pedantic (you could use 3 dice and get around it), but for the sake of accuracy, that bit of logic got me thinking.
No, that's not quite right.
H0: the sun is fine, it hasn't gone nova (Null hypothesis)
H1: the sun has gone nova! (Alternative hypothesis)
Test statistic: "YES"
P("YES" given the null hypothesis) = P(machine lies) = P(rolling 2 sixes) = 0.027 < 0.05
So: reject the null hypothesis and conclude that the sun has gone nova (although see what I said in my previous post about what a frequentist would really do in this situation).
Two-sided tests can only apply when we have a continuous test statistic, which is not what we have here (our test statistic can be "YES" or "NO"; it's not a number). With a continuous test statistic (which is what we more usually have when we do hypothesis tests), our p-value of interest is not the probability of getting that exact test statistic, and thus that exact correlation or coefficient or whatever (because the probability of getting precisely any value to an infinite number of decimal places is vanishingly small). Instead, the p-value is the probability of getting a test statistic (and thus correlation etc.) that extreme, i.e. of getting that test statistic or a larger one. The one-tail/two-tail distinction arises because there are two different ways of interpreting "larger". We could mean "larger in absolute value", or we could mean "larger, i.e. closer to positive infinity". Which of these is the sensible interpretation depends on what we would consider a plausible result.
For example, if we are interested in whether girls score higher in English exams than boys, before we do our study we might expect that we could either find that girls do better or boys do better (or they both do equally well), given that in the past boys have been found to score better than girls but more recently the reverse is usually found. So if our test statistic is 0.8 (say) the p-value we want would give the probability of a test statistic that's greater than 0.8 or smaller than -0.8 (similarly, if our statistic was -0.8, we'd want the same p-value).
But if we are interested in whether mobile phones cause cancer, since (as far as I'm aware) there's no suggestion that they might instead cure it, we're not interested in test statistics associated with a reduction in cancer with mobile phone use. So if our test statistic was again 0.8, the p-value we want would give just the probability of a test statistic that's greater than 0.8, not the probability of a statistic greater than 0.8 or smaller than -0.8.
To look at it another way, when calculating p-values we are only ever interested in the case where the null hypothesis is true, and we observe what we actually do observe. The p-value is always the probability of observing what we observe, given that the null hypothesis is true.
1/36 is the probability of the machine answering YES given that the sun doesn't go nova: this is what we're interested in.
35/36 is the probability of the machine answering NO given that the sun doesn't go nova: we're not interested in this, but we would be if the machine had answered NO instead of YES.
1/36 is also the probability of the machine answering NO given that the sun goes nova: this is not our p-value, nor does it contribute to our p-value, but it is generally of interest as the Type II error probability.
35/36 is also the probability of the machine answering YES given that the sun goes nova: again, not particularly of interest and not contributing to our p-value.
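Since the test statistic here takes only two values, the whole calculation fits in a few lines. A sketch of the one-tailed reasoning above:

```python
# The only possible observations are "YES" and "NO", so the p-value
# is simply the probability of the observed answer under H0
# (H0: the sun has not gone nova).
p_yes_given_h0 = 1 / 36   # machine lies only on double sixes

# We observed "YES": p-value = 1/36 ~ 0.028.
p_value = p_yes_given_h0
print(p_value < 0.05)  # True: H0 is (naively) rejected

# Doubling to .054 as a "two-tailed" test would is not meaningful here:
# with a binary statistic there is no other tail to add in.
```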
Re: 1132: "Frequentists vs. Bayesians"
bmonk wrote:kissmyawesome wrote:Surely this a good bet for the simple reason that if our sun went supernova we'd all be dead by the time the neutrinos hit the detector..?
Not really. The photon pressure would hold the sun together for a short time, while the neutrinos are released very quickly; hence the detector will give a short warning that the fit is about to hit the shan. Indeed, it has already done so less than 100 million miles away.
Plus, since it's the middle of the night, we've got thousands of miles of rock shielding which will buy us a little time (it'd probably be the atmospheric shockwave that kills us, not the actual radiation wave).
First we'd get the neutrino pulse released from the Sun's core, then the high-energy photon wave released from the Sun's surface (delayed by the energy having to pass through the Sun's mantle to reach free space) would smash into the daylight side of the planet, then the wave of plasma that was the Sun's surface would engulf what's left...
Re: 1132: "Frequentists vs. Bayesians"
So two years ago this forum linked me to HPMOR, which led to a thorough and highly beneficial trip down the rabbit hole of Bayesian reasoning. In light of that, here's a link to An Intuitive Explanation of Bayes' Theorem for anyone interested in seriously improving their ability to hold true beliefs and make good decisions.
Re: 1132: "Frequentists vs. Bayesians"
Aiwendil wrote:That's one really small neutrino detector!
That is so cute. So fun you know that.
Neutrino detectors are huge, hidden deep, and filled with weird stuff; dry cleaning fluid??
Umm. Our sun can't go supernova. Right? I thought our plan was Red Giant then White Dwarf.
Something else will get us first. Yet; We have seen it in our mind's eye.
The Sun has a life of its own. A very self-absorbed and active life.
Life is, just, an exchange of electrons; It is up to us to give it meaning.
We are all in The Gutter.
Some of us see The Gutter.
Some of us see The Stars.
by mr. Oscar Wilde.
Those that want to Know; Know.
Those that do not Know; Don't tell them.
They do terrible things to people that Tell Them.