omgryebread wrote:Because of the system's huge bargaining power, they'd be able to negotiate prices of care lower.
Prices of care, while costs for patients, are revenues for care providers, and now we're exactly at the incentives that I'm talking about. To the degree that less money goes to health care, it is less attractive to invest in improving health care. (As mentioned before, total health may improve if incentives are aligned towards improving health rather than providing care. Paying people to floss would probably do more for dental health than fancy new equipment for dentists.)
omgryebread wrote:Drug companies can't charge as much per patient, but they get a lot more patients. Big Pharma was for the PPACA, once they won a few concessions.
This can't go both ways, though. Either costs are lower because pharmaceutical companies are being paid less, or costs are the same because pharmaceutical companies are being paid the same. (Alternatively, Big Pharma benefits while the pharmaceutical industry as a whole suffers; this typically falls under "regulatory capture.")
Qaanol wrote:ITT: Vaniver and CorruptUser support death panels.
Panels are both wasteful and inaccurate. I support death formulas.
Ghostbear wrote:The examples at hand seem to be focused more on the plausible end of the pricing spectrum however, so your answer doesn't really seem to further the conversation.
There are many people who are not willing to agree with that premise, or who object to "the market should set the threshold" with "thresholds are bad." Once you agree that some people should die when it would be possible for them to live, then it's just a question of who and why.
The standard argument for markets is that they assign prices to values - that is, they make tradeoffs explicit. Everyone wants the indigent to be healthy, everyone wants the environment to be spotless, and everyone wants an iPad. But how people will choose to allocate scarce resources among those goals (and future needs) is a difficult problem to solve, and markets are the best way to answer that question in a way that both respects individual choice and incentivizes individuals to promote the values of others. (If I'm better at improving the health of the indigent, and Bob is better at improving the environment, we can each specialize in doing what we do best, leading to both better health and a cleaner environment.)
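The specialization point can be made concrete with toy arithmetic (all productivities below are hypothetical, chosen only to illustrate the gains from each person doing what they do best):

```python
# Hypothetical productivities, per week of work:
#   Alice: 4 units of health improvement OR 1 unit of environmental cleanup
#   Bob:   2 units of health improvement OR 3 units of environmental cleanup

# Each person splitting their time 50/50 between both goals:
split_health = 0.5 * 4 + 0.5 * 2  # 3.0 units of health
split_env = 0.5 * 1 + 0.5 * 3     # 2.0 units of cleanup

# Each person specializing in what they do best:
spec_health = 1.0 * 4  # Alice does only health:  4.0 units
spec_env = 1.0 * 3     # Bob does only cleanup:   3.0 units

print(split_health, split_env)  # 3.0 2.0
print(spec_health, spec_env)    # 4.0 3.0 -- more of both goods
```

Under these (made-up) numbers, specialization produces strictly more of both goods than everyone splitting their effort.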
Requiring individuals to pay the cost of their medical care ensures that everyone (in expectation) gives more to other people than they take from them. (Insurance that lowers the variance of individual costs is very different in impact from coverage that lowers the mean.) Oftentimes, people value the existence of others enough to subsidize it, and so one should expect such subsidies to be part of a market with humans.
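The variance-versus-mean distinction can be shown with a toy simulation (all the risk numbers are hypothetical): pooling many people's risk leaves each person's expected cost unchanged but shrinks the year-to-year spread.

```python
import random
from statistics import mean, pvariance

random.seed(0)

# Hypothetical risk: each person faces a 1% chance of a $100,000
# medical bill in a given year, so the expected annual cost is $1,000.
P, LOSS = 0.01, 100_000

def annual_cost():
    return LOSS if random.random() < P else 0.0

YEARS, POOL = 2_000, 500

# Uninsured: one person bears their own cost, year by year.
solo = [annual_cost() for _ in range(YEARS)]

# Insured: a pool of 500 people splits the pool's total cost evenly.
pooled = [mean(annual_cost() for _ in range(POOL)) for _ in range(YEARS)]

print(mean(solo), mean(pooled))            # both close to 1,000
print(pvariance(solo), pvariance(pooled))  # pooled variance is far smaller
```

Both arrangements cost about $1,000 a year on average; the insurance only removes the small chance of a ruinous year.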
Now, a single-payer institution is probably one of the most efficient ways to improve the health of the indigent, and many voluntary ones have existed over the years. (During the middle ages, when starvation was often a more pressing concern than medical care, churches turning alms into food for the poor can be seen as a single-payer institution, though it's a cleaner example if they contracted out to a baker than having an in-house baker.) There are strong reasons to believe an institution focused on QALYs for everyone will be the best use of charitable medical dollars. For example, it's unlikely that the visibility of a disease will be very strongly correlated to its prevalence times the effectiveness of dollars spent on it; breast cancer research is probably overfunded compared to other cancer research.
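The "QALYs per charitable dollar" comparison is just a ratio; a sketch with entirely hypothetical disease names and numbers shows how visibility can come apart from cost-effectiveness:

```python
# Toy cost-effectiveness comparison (all names and numbers hypothetical).
# Rank causes by QALYs bought per dollar: QALYs per case / dollars per case.
causes = {
    # name: (QALYs gained per case treated, dollars per case treated)
    "highly visible disease": (0.5, 5_000),
    "neglected disease": (2.0, 300),
}

for name, (qalys, cost) in sorted(
        causes.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name}: {qalys / cost:.5f} QALYs per dollar")

# With these made-up numbers the neglected disease buys roughly 67x the
# QALYs per dollar: (2.0 / 300) / (0.5 / 5000) is about 66.7.
```

A funder maximizing QALYs for everyone would rank causes by this ratio rather than by how prominent the disease is.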
Ghostbear wrote:What a bunch of arbitrary limitations. Vaniver's discussions: those without [experience] need not apply?
Fixed. No, I'm not a doctor, but I'm familiar with off-label prescriptions (i.e. the pre-1962 system with some unnecessary limitations). My expectation is that if you haven't heard of the Kefauver-Harris Amendment, you haven't seen any serious examination of whether or not it's been good for American healthcare. (It hasn't.)
As is, the efficacy requirement has costs but no benefits. Previously, a drug needed to go through safety trials to be put on the market, but then efficacy was up to doctors, scientists, and the public to determine. Now, a drug needs to go through safety trials and then efficacy trials. The efficacy trials only determine what goes on the label, though - doctors can prescribe the drug for any purpose.
Oftentimes, though, side effects become primary effects (as with Rogaine, originally developed to treat high blood pressure), or new effects are discovered and published after a few months, and doctors start prescribing the drug based on the study rather than waiting years for the FDA to catch up.
Alternatively, it may be that some drugs treat only a segment of the population. With just a safety trial, you make sure drugs are tolerably dangerous, and then let everyone figure out what it's useful for. It might be that something like depression is actually, say, six different chemical imbalances, each of which is best treated by a different class of drug. The only drugs that made it through the FDA, though, are ones that were effective enough on the sample of depression patients that were in the FDA's study. It's not obvious that we would be able to identify the chemical imbalances from how people respond to the generalist drugs that exist, though it might become obvious if we had access to specialist drugs that weren't effective on the general population.
(Even the safety requirement has downsides: there are treatments for lethal conditions that aren't approved because they have crippling side effects. Many patients, though, see the trade as worthwhile, and the only other alternative offered by the FDA is death.)