## 2001: "Clickbait-Corrected p-Value"


### 2001: "Clickbait-Corrected p-Value"

Alt-text: When comparing hypotheses with Bayesian methods, the similar clickbayes factor can account for some harder-to-quantify priors.

In related news, all scientific advancements in 2019 are predicted to center around Donald Trump and/or Kim Kardashian.

(And the phrase "one weird trick" is proved the most effective placebo of all time.)

### Re: 2001: "Clickbait-Corrected p-Value"

(And the phrase "one weird trick" is proved the most effective placebo of all time.)

Most effective placebo? So you are saying that clickbait writers think it works, so in spite of not actually doing anything it still produces better clickbait?


### Re: 2001: "Clickbait-Corrected p-Value"

I know that I know just enough to know that I don't know what the hell this is about. I am guessing it is a sort of wry humour. [confuzzled-face emoticon]

What I know is that Clickbait is something.


- Eternal Density
  **Posts:** 5571 | **Joined:** Thu Oct 02, 2008 12:37 am UTC

### Re: 2001: "Clickbait-Corrected p-Value"

Heheh, nice work. Clever, funny, good comic.

Play the game of Time! | castle.chirpingmustard.com | Hotdog Vending Supplier | But what is this?

In the Marvel vs. DC film-making war, we're all winners.


- Eebster the Great
  **Posts:** 3304 | **Joined:** Mon Nov 10, 2008 12:58 am UTC | **Location:** Cleveland, Ohio

### Re: 2001: "Clickbait-Corrected p-Value"

This "corrected" p-value frequently exceeds unity.

### Re: 2001: "Clickbait-Corrected p-Value"

sotanaht wrote:Most effective placebo? So you are saying that clickbait writers think it works, so in spite of not actually doing anything it still produces better clickbait?

Something like that. Real-world drug placebos are getting better at treating pain, so why not clickbait placebos?

### Re: 2001: "Clickbait-Corrected p-Value"

This... is actually something we should think really deeply about. A systematic way to correct for publication and researcher bias would be *awesome*.

Like, imagine you see a (recent) study saying "Women better at X". Should you trust it? Or should you correct for the fact that a result saying "Men better at X" would be unlikely to get published?

Randall's formula is definitely not it, but maybe something to that effect could be created.


2001/moc.dckx

### Re: 2001: "Clickbait-Corrected p-Value"

@maniexx wrote:a result saying "Men better at X" would be unlikely to get published?

With some exceptions based on X - "Men better at child-rearing" would be more likely to get published than the equivalent about women.

### Re: 2001: "Clickbait-Corrected p-Value"

This is an intriguing idea.

Ok so, the p value represents P(strength of evidence is at least as much as the observed results | H_0) (right?), so then this would end up representing

P(strength at least that of observed results | H_0) * (click(H_1)/click(H_0))

So, the expected clicks on a claim of H_1 if H_0 is true would be click(H_1) * P(strength >= observed results | H_0), and the expected clicks on a claim of H_1 if H_1 is true would be, uh, click(H_1) * ( ??? )

I guess I'll approximate ( ??? ) as 1. I don't know if there is an analogous name for P( evidence strength >= observed evidence | H_1) .

And, the expected clicks on claim of H_0 if H_0 is true would be click(H_0) * ( ...hmmm...) ,

And the expected clicks on claim of H_0 if H_1 is true would be (...),

So, combining these, could get expected number of clicks on correct claims vs expected number of clicks on incorrect claims conditioned on H_1 and on H_0,

So, given prior probabilities for H_0 and for H_1, this could give a value for "expected number of clicks on correct claims minus expected number of clicks on incorrect claims" ?

Or, something like that.

That might require a bunch of committing to a methodology before doing the experiments or something like that?

Then, I guess the p value and other similar values required for the expected value of (clicks on correct claims - clicks on incorrect claims) to be sufficiently high could be computed,

And then one could commit to publish it exactly in those cases where those criteria are met.

That sounds tricky to implement, but theoretically maybe worthwhile?

Edit: I should make it clear that I don't know what I'm talking about. That is probably clear from the many question marks, but,

Epistemic status : You shouldn't trust my conclusions on this unless you can verify for yourself that my reasoning made sense.
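The bookkeeping above can be sketched numerically. All the numbers are made up, and the `power` parameter is my stand-in for the unknown P(strength >= observed | H_1) that I approximated as 1 above; here it is just an explicit knob:

```python
def expected_click_value(prior_h1, power, p_value, clicks_h1, clicks_h0):
    """Expected (clicks on correct claims) - (clicks on incorrect claims),
    assuming we publish the H_1 headline when the test rejects and the
    H_0 headline otherwise. 'power' approximates P(reject | H_1)."""
    prior_h0 = 1 - prior_h1
    # Under H_0: reject with prob p_value (wrong H_1 headline gets clicks),
    # otherwise the correct H_0 headline gets clicks.
    value_h0 = prior_h0 * (clicks_h0 * (1 - p_value) - clicks_h1 * p_value)
    # Under H_1: reject with prob 'power' (correct H_1 headline),
    # otherwise the wrong H_0 headline gets clicks.
    value_h1 = prior_h1 * (clicks_h1 * power - clicks_h0 * (1 - power))
    return value_h0 + value_h1

print(expected_click_value(prior_h1=0.5, power=0.8, p_value=0.05,
                           clicks_h1=10_000, clicks_h0=200))  # 3825.0
```

One could then search for the p-value threshold that keeps this expectation above some floor, and commit in advance to publish only when it is met.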


I found my old forum signature to be awkward, so I'm changing it to this until I pick a better one.

- Eebster the Great

### Re: 2001: "Clickbait-Corrected p-Value"

There is no meaningful way to deal with this formula as long as it produces probabilities greater than one.

### Re: 2001: "Clickbait-Corrected p-Value"

Eebster the Great wrote:There is no meaningful way to deal with this formula as long as it produces probabilities greater than one.

Sure there is. If an event that happens with P=1.0 always happens, then an event with P=2.0 always happens twice. Simple.

### Re: 2001: "Clickbait-Corrected p-Value"

Eebster the Great wrote:There is no meaningful way to deal with this formula as long as it produces probabilities greater than one.

Why not interpret it as something other than a probability?

Like, say, as something reflecting a component contributing to likely accidental misinformation, and therefore something to be reduced.

Alternatively, if you must interpret it as a probability, just put a cap on it at 1, setting it down to 1 in any case that it exceeds 1
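The cap is a one-liner. A minimal sketch (function name and numbers are mine):

```python
def capped_clickbait_p(p_traditional, clicks_h1, clicks_h0):
    # Cap the clickbait-corrected value at 1 so it can still be
    # read as a probability.
    return min(1.0, p_traditional * (clicks_h1 / clicks_h0))

print(capped_clickbait_p(0.04, 10_000, 200))  # 1.0 (uncapped would be 2.0)
print(capped_clickbait_p(0.04, 300, 200))     # ~0.06, cap not triggered
```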


- Eebster the Great

### Re: 2001: "Clickbait-Corrected p-Value"

Mikeski wrote:Eebster the Great wrote:There is no meaningful way to deal with this formula as long as it produces probabilities greater than one.

Sure there is. If an event that happens with P=1.0 always happens, then an event with P=2.0 always happens twice. Simple.

The present definition allows p = infinity.

madaco wrote:Why not interpret it as something other than a probability?

Like, say, as something reflecting a component contributing to likely accidental misinformation, and therefore something to be reduced.

Alternatively, if you must interpret it as a probability, just put a cap on it at 1, setting it down to 1 in any case that it exceeds 1

But then for every random event, P almost certainly equals 1. That also seems wrong. Well, I don't know that it seems "wrong" exactly, but it seems useless anyway.

### Re: 2001: "Clickbait-Corrected p-Value"

Why would p be almost certainly 1?

Surely there is a positive probability that someone would click the null hypothesis headline, and also a positive probability that the regular p value would be less than 1, so, ?



- Eebster the Great

### Re: 2001: "Clickbait-Corrected p-Value"

Yeah, not "almost every" in the technical sense, but in the sense that there are so many clickbait headlines that a subjectively large majority of them will have results greater than 1. Consider the example in the comic. It is not necessary that any actual article be published about anything; the formula is right there. And the likely number of clicks on headlines like "chocolate does not improve athletic performance" is so low that you can easily be multiplying p_traditional by huge factors.

### Re: 2001: "Clickbait-Corrected p-Value"

@maniexx wrote:Like, imagine you see a (recent) study saying "Women better at X". Should you trust it? Or Should you correct for the fact that a result saying "Men better at X" would be unlikely to get published?

Neither of those are likely to get published unless they are talking about chromosomal sex, which would be unlikely to use the terms "men" and "women". And they would also have to be talking about something pretty damn specific to physical development.

"(Women|Men) more likely to x" will be published in a sociology journal with equal likelihood.

Eddie Izzard wrote:And poetry! Poetry is a lot like music, only less notes and more words.