Kit. wrote:Microsoft at its... well, not exactly worst, but you get the idea.
Actually, Alvy Ray Smith wrote the first version of that memo when he was working at Pixar. You might want to rethink your company prejudice.
I don't see how that would help your case, as I also have a prejudice against Pixar.
But anyway, for Pixar, which at the time had no connection to the image-acquisition business (as far as I know), his memo might have had a point (though he still shouldn't have called those entities of his "pixels"). For Microsoft, it wouldn't.
DeGuerre wrote: Kit. wrote:
lgw wrote:The worst way to sample is to divide the image into cells and then represent each cell as a number - that forces aliasing into the stored image.
Then you should probably tell all the imaging-sensor manufacturers around the world that they are doing it wrong.
As is pointed out in the memo, imaging sensor manufacturers are doing it very much right.
I have restored the quote I was replying to.
There have been technologies that didn't "divide the image into cells and then represent each cell as a number" (silver halide film has already been mentioned), but, unfortunately, they have proved to be worse.
DeGuerre wrote:Real sensors sample points (sometimes, in colour or other multispectral sensors, there are multiple distinct but close points), after the physics of the sensor and optical system have convolved the signal with a sampling kernel first.
No, real sensors don't sample points. And saying that "sensor readings can theoretically be mapped onto sample points of some imaginary signal", while technically correct in itself, doesn't help either, because that mapping is exactly how aliasing gets introduced in the first place.
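To make the aliasing point concrete, here is a small NumPy sketch (my own illustration, not from the thread; the frequency and cell counts are arbitrary). Point-sampling a signal above the Nyquist rate folds it into a full-strength low-frequency alias, while averaging over each cell, which is what an integrating sensor element approximates, attenuates the same component by the box filter's sinc rolloff:

```python
import numpy as np

n = 64
f = 0.9  # cycles per sample, well above Nyquist (0.5)

# Point samples at cell centres: the 0.9-cycle signal aliases
# to a full-amplitude 0.1-cycle cosine.
x = np.arange(n)
point = np.cos(2 * np.pi * f * x)

# Cell-integrated "samples": average 100 sub-samples per cell,
# approximating integration over the cell (a box prefilter).
sub = np.linspace(0, n, n * 100, endpoint=False)
fine = np.cos(2 * np.pi * f * sub)
integrated = fine.reshape(n, 100).mean(axis=1)

print(np.abs(point).max())       # 1.0: the alias comes through at full strength
print(np.abs(integrated).max())  # ~0.1: attenuated by |sinc(0.9)|
```

The alias is not removed by the box filter, only attenuated, which is why neither "pure point sampling" nor "cells as little squares" tells the whole story.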
DeGuerre wrote:It's difficult to see how you could construct an imaging sensor which integrates with uniform weight over a square (or rectangular) support. It's even more difficult to see why anyone would bother to construct such a sensor.
So, is it difficult for you to understand why photo sensor manufacturers employ microlenses over square pixels on their sensors?
DeGuerre wrote:Real optical systems tend to be isotropic (which shouldn't be surprising), which squares and rectangles are not. Moreover, they are often well-approximated by a Gaussian for the same reason why the central limit theorem works: a bunch of uncorrelated imperfect optical stages put on top of each other tends to be Gaussian in the limit.
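The central-limit claim in the quote above is easy to check numerically (a quick sketch of my own; the kernel width and number of stages are arbitrary). Convolving a uniform box kernel with itself a few times, standing in for a stack of uncorrelated blur stages, already lands very close to a Gaussian of matching mean and variance:

```python
import numpy as np

box = np.ones(11) / 11.0  # one uniform "blur stage"

kernel = box.copy()
for _ in range(3):  # four stages convolved together in total
    kernel = np.convolve(kernel, box)

# Gaussian with the same mean and variance as the composite kernel.
x = np.arange(kernel.size)
mean = (x * kernel).sum()
var = ((x - mean) ** 2 * kernel).sum()
gauss = np.exp(-(x - mean) ** 2 / (2.0 * var))
gauss /= gauss.sum()

# Maximum pointwise difference is already tiny after four stages.
print(np.abs(kernel - gauss).max())
```

Whether real lenses behave like a stack of uncorrelated stages is exactly the part in dispute below, of course.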
Please don't tell me that you have never observed the real effects of optical aberrations.

Added:
endolith wrote:On the other hand, the imaging sensor in a camera really does use tiny little rectangles, and those are called pixels. But there's no reason why they couldn't be arranged as triangles or hexagons, and they would still be called pixels.
That's the traditional ("normal" for me) meaning of the term. Fujifilm used to make sensors with octagonal pixels, but wasn't particularly successful. There are also some patents for sensors with hexagonal pixels, but I'm not aware of such sensors in production.