I'm afraid the discussion in the article is not quite correct.
There are three main points:
1) The final temperature depends neither on how tightly the incident light
is focussed, nor on the power density in the focus, but only on the spectral
radiance (the brightness temperature) of the incident light.
2) A rock on the moon barely sees the moon's surface, but mostly the empty
sky.
3) The temperature of the object in focus will end up between the
temperatures of the moon's and the sun's surfaces.
Perhaps the easiest way to see a problem in the argumentation is to consider
a parabolic mirror instead of the moon as a reflector: It is clearly cooler
than an object placed in its focus. One could argue that these two reflectors
cannot be compared, because the moon does not reflect specularly. However,
this is actually irrelevant to the principle; it just makes the moon a
horribly inefficient reflector.
I hope the following will be convincing.
The temperature of a body 'A' in equilibrium with the electromagnetic field
is the temperature of all objects visible from 'A', averaged over the whole
solid angle and weighted with their respective emissivities. (Strictly
speaking, it is the fourth powers of the temperatures that are averaged,
since radiated power scales as T^4 by the Stefan-Boltzmann law.)
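To make that rule concrete, here is a minimal Python sketch. The helper and
all names in it are my own; the 6.8e-5 sr figure for the sun's solid angle
and the T^4 weighting are standard:

```python
import math

FULL_SPHERE = 4 * math.pi   # full solid angle around the body, sr
T_SUN = 5778.0              # effective temperature of the sun's surface, K
OMEGA_SUN = 6.8e-5          # solid angle of the sun as seen from Earth, sr

def t_equilibrium(patches):
    """Radiative equilibrium temperature of a body whose field of view is
    covered by (solid_angle_sr, emissivity, temperature_K) patches.
    T_eq^4 is the solid-angle- and emissivity-weighted average of the
    patch temperatures raised to the fourth power."""
    num = sum(omega * eps * temp**4 for omega, eps, temp in patches)
    den = sum(omega * eps for omega, eps, temp in patches)
    return (num / den) ** 0.25

# Sanity check: a bare black body near Earth sees the sun over its natural
# solid angle and 3 K space over the rest of the sky.
patches = [(OMEGA_SUN, 1.0, T_SUN),
           (FULL_SPHERE - OMEGA_SUN, 1.0, 3.0)]
print(t_equilibrium(patches))   # ~279 K, the textbook value
```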
As you correctly pointed out, a lens mainly trades illuminated area for solid
angle of illumination. By focusing the sun on an object you simply modify the
solid-angle distribution over which the object "averages" what it sees. If
the lens is directed toward the sun, the object "sees" the sun over a larger
solid angle and gets hotter.
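Reusing the helper from above, a toy scan over how much of the object's sky
an idealized, lossless lens fills with the sun illustrates the point. Note
that no focusing scheme pushes the temperature past the sun's own 5778 K:

```python
# An idealized lossless lens makes the sun fill a fraction `frac`
# of the object's full 4*pi field of view.
for frac in (5.4e-6, 0.01, 0.5, 1.0):
    omega = frac * FULL_SPHERE
    patches = [(omega, 1.0, T_SUN),
               (FULL_SPHERE - omega, 1.0, 3.0)]
    print(f"sun fills {frac:8.1e} of the sky -> {t_equilibrium(patches):6.0f} K")
# 5.4e-06 -> 279 K, 1.0e-02 -> 1827 K, 5.0e-01 -> 4859 K, 1.0e+00 -> 5778 K
```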
Incidentally, if the lens is directed towards empty sky on a sunny day, the
object in focus will actually cool below ambient temperature.
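The same toy model shows this effect if we let the lens fill half the field
of view with clear sky; the 230 K effective sky temperature is an assumed
round number:

```python
# Lens pointed at clear sky: half the field of view sees the cold
# atmosphere, the other half the 300 K surroundings.
patches = [(FULL_SPHERE / 2, 1.0, 230.0),
           (FULL_SPHERE / 2, 1.0, 300.0)]
print(t_equilibrium(patches))   # ~272 K, below the 300 K ambient
```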
Now let's get to the point: the moon.
The moon is a reflector that weakly couples the sun to our object. In addition,
the moon is a thermal radiator whose equilibrium temperature is defined by the
solid angle of the sun (as seen from the moon) and the sun's temperature (we
can pretty much ignore the temperature of the rest of the universe). The moon's
thermal emissivity is proportional to its "blackness" (roughly 1 − albedo).
On the other hand, the effective emissivity of the sun via the moon is (apart
from some boring prefactors) the product of the sun's original emissivity, the
moon's albedo, the moon's solid angle (as seen from the receiver) and the sun's
solid angle (as seen from the moon). The solid angles enter because the
reflection is assumed to be diffuse.
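Under a crude Lambertian assumption, that product looks as follows per
steradian; the moon's own solid angle then simply enters as the size of the
patch this effective emissivity fills in the receiver's field of view:

```python
ALBEDO_MOON = 0.12            # visual albedo of the moon, roughly
OMEGA_SUN_FROM_MOON = 6.8e-5  # practically the same as seen from Earth, sr

# A Lambertian surface reflects with radiance albedo * E / pi, where E is
# the incident irradiance.  Relative to the sun's own radiance, this gives
# the sunlit moon an effective "sun" emissivity per steradian of:
eps_sun_via_moon = ALBEDO_MOON * OMEGA_SUN_FROM_MOON / math.pi
print(eps_sun_via_moon)       # ~2.6e-6: a spectacularly inefficient reflector
```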
If we now place 'A' under a lens with a very high numerical aperture directed
towards the moon while insulating its back against thermal loss (or if we buy
two mirrors and another lens to illuminate both sides of 'A' with moonlight),
the object is in thermal contact with two reservoirs: The moon (directly) and
the sun (via the moon). The equilibrium temperature is then the temperature of
the moon's surface weighted by its native emissivity and the sun's surface
weighted by its effective emissivity via the moon. The equilibrium temperature
will therefore always end up between the temperatures of the moon's and the
sun's surfaces. It can be increased, e.g., by increasing the reflector's
reflectivity (which reduces its own thermal emissivity at the same time). The
extreme case is a mirror: very high and specular reflectivity (specularity
eliminates the solid-angle factors in the effective emissivity), and barely
any thermal emission of its own.
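Putting the pieces together with the helper and numbers from above (ideal
lossless optics assumed throughout):

```python
T_MOON = 394.0                 # gray-body subsolar temperature, K: follows
                               # from T_SUN * (OMEGA_SUN / math.pi) ** 0.25
                               # and is close to the measured ~390 K
EPS_MOON = 1.0 - ALBEDO_MOON   # thermal emissivity, as argued above

# The optics fill the object's whole field of view with the moon's sunlit
# face: one patch for the moon's own thermal glow, one for the sun mirrored
# (diffusely) off it.  The leftover reflectivity, which would bounce the
# object's own radiation back at it, is neglected here.
patches = [(FULL_SPHERE, EPS_MOON, T_MOON),
           (FULL_SPHERE, eps_sun_via_moon, T_SUN)]
print(t_equilibrium(patches))  # ~407 K: a bit above the moon, far below the sun
```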
So what does the power density in the focus mean? Nothing for the equilibrium
temperature. It just defines how susceptible the effect is to parasitic thermal
coupling (a.k.a. loss).
In conclusion: I agree that it's probably impossible to light a fire by
focussing light from our moon (just a gut feeling at this point), but it would
become possible if we painted the moon with a very bright white paint. It
might also be possible with a highly reflective artificial satellite like the
one depicted in "Die Another Day".