They should really be clearer about what they mean, but I expect they're using a Laplacian pyramid. As further evidence, they cite Denton et al., "Deep generative image models using a Laplacian pyramid of adversarial networks."
The idea is to store a very low-resolution copy of your image plus a series of "difference" images. Each difference image tells you what to add to the lower-resolution copy to get the next higher-resolution version of the image. Since adjacent resolutions look similar, you can imagine that lots of values in each difference image will be close to zero. A sketch of the construction is below.
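Here's a minimal sketch of that build-and-reconstruct cycle using OpenCV's `pyrDown`/`pyrUp` (the function names and the choice of 4 levels are my own, not from the paper; OpenCV's blur kernel is one common choice among many):

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=4):
    """Return [diff_0, ..., diff_{levels-1}, base], finest difference first."""
    pyramid = []
    current = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)                          # blur + downsample by 2
        up = cv2.pyrUp(down, dstsize=current.shape[1::-1])   # upsample back to match
        pyramid.append(current - up)                         # band-pass "difference" image
        current = down
    pyramid.append(current)                                  # very low-res base copy
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid: upsample the base, add each difference back in."""
    current = pyramid[-1]
    for diff in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=diff.shape[1::-1]) + diff
    return current
```

For an image whose sides divide cleanly by 2 at every level, `reconstruct(build_laplacian_pyramid(img))` recovers the original up to floating-point rounding, which is the whole point: the pyramid is just a change of representation, not a lossy compression.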
The "mean intensity" and therefore "illumination" is only really stored at that lowest resolution copy, and doesn't really (usually, they hope) affect the gradient of the image, which is what the laplacian pyramid stores. That's why those authors say it's not sensitive to illumination changes. Does that make sense?