My understanding is that the discrepancy between those two examples comes from our emotional morality engine. If something feels wrong in our gut, like killing an innocent stranger to harvest their organs, that's our emotions talking. Our emotions do on-the-spot heuristic calculations and thus don't benefit from perfect information.
However, we also have the ability to do abstract moral calculations, and this is where perfect information helps. It's always going to feel wrong to kill someone in cold blood (hopefully), but if we can justify it, like you did with Ozy's plan (where only a single person gets killed), then that's the reasoning engine talking, and it has overridden the emotions.
In my opinion, the value of morality isn't whether it's intrinsically true, but whether we believe it's true. That's because our belief affects our behavior; thus the value of morality lies in how it makes us behave. So arguing over whether Ozy was justified isn't truth-seeking in the scientific sense; it's really about establishing whether we should be morally outraged if someone presents a similar situation in the future. (Practically, I think morality is similar to economics. We don't argue that higher taxes are more economically true; rather, we say they will produce better results. Well, statements of moral truth are really endorsements of moral policy. But for some reason we just don't store them in our heads that way.)
So if we apply this perspective to the train-track example, perhaps we're OK with it because there's no slippery slope. It's a one-shot example that doesn't open any ugly doors. If we seem to have a problem with trains hurtling uncontrolled down tracks, then in the future we can enforce better safety around train tracks to make sure no one is loitering on them.
But the organ-harvesting example opens a hugely ugly door. Who gets harvested, and who gets saved? Someone has to make that choice. It's scary to think that at any time someone might come knocking with some knives and an icebox. With perfect information, we could see clearly what the social response to a policy like this would be, and my guess is that it wouldn't be good.
Basically, what perfect information gives us is perfect modeling of the future (and this could be in a statistical sense as well, i.e. option A has a greater expected number of lives saved than option B). In the book, we make assumptions about how perfect this modeling is. But my point in that other post is that using this example to guide us on real-life moral issues won't help much. We don't have Ozy or Dr. M's ability to predict/look into the future. Not to mention that we're very corruptible given enough power. Endorsing a policy that allows someone to kill so many people and then lie to cover it up will always go badly.
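To make that statistical reading concrete, here's a minimal sketch of comparing two options by expected lives saved. All of the probabilities and counts below are invented purely for illustration:

```python
# Hypothetical sketch: comparing options by expected lives saved.
# The probabilities and counts are invented for illustration only.

def expected_lives_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * lives for p, lives in outcomes)

option_a = [(0.9, 5), (0.1, 0)]  # 90% chance of saving 5 people, else 0
option_b = [(1.0, 1)]            # exactly 1 person saved for certain

print(expected_lives_saved(option_a))  # 4.5
print(expected_lives_saved(option_b))  # 1.0
# In expectation, option A saves more lives than option B.
```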
EDIT: And let me point out that just because moral calculations get easier with perfect information, that doesn't mean we will all agree. If two people have different ideologies about what the future should look like, perfect information will help each of them independently make decisions that better align with what they want. But what they want will still conflict.
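As a minimal sketch of that point: the two agents below share the exact same forecasts (the "perfect information"), but weigh the forecast outcomes with different value functions, so they choose differently. The options, attributes, and weights are all hypothetical:

```python
# Hypothetical sketch: two agents share the same perfect forecasts but
# rank the options differently because their value weights differ.
# All names and numbers are invented for illustration only.

forecasts = {
    "option_a": {"lives_saved": 5, "liberty_cost": 3},
    "option_b": {"lives_saved": 1, "liberty_cost": 0},
}

def choose(weights):
    """Return the option maximizing this agent's weighted score."""
    def score(option):
        return sum(weights[k] * v for k, v in forecasts[option].items())
    return max(forecasts, key=score)

agent_1 = {"lives_saved": 1.0, "liberty_cost": -0.5}  # weighs lives heavily
agent_2 = {"lives_saved": 1.0, "liberty_cost": -3.0}  # weighs liberty heavily

print(choose(agent_1))  # option_a
print(choose(agent_2))  # option_b
```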