1) Will the error subsurface be flat?
One cannot compute a Hessian, or any such error subsurface, for the pooling layer itself, because common pooling layers like max and average pooling have no learnable parameters (as you mentioned)!
2) But we can speak to the effect of the pooling layer on the error surface of the previous layers, and the effect differs by pooling type. Max pooling produces a relatively sparse gradient flow to the parameters of the previous layers, because only a few of the outputs from those layers are selected during forward propagation; the rest receive zero gradient. Average pooling, by contrast, allows a smoother gradient flow, since every output of the previous layer contributes to the pooled value and therefore receives a share of the gradient.
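To make the sparse-vs-smooth distinction concrete, here is a minimal NumPy sketch (not tied to any particular framework) of the backward pass through a single pooling window. The window contents and upstream gradient are made-up toy values:

```python
import numpy as np

# Toy pooling window of 4 activations from the previous layer.
x = np.array([1.0, 3.0, 2.0, 0.5])
upstream_grad = 1.0  # gradient arriving from the layer above

# Max pooling backward: only the argmax element gets gradient -> sparse flow.
max_grad = np.zeros_like(x)
max_grad[np.argmax(x)] = upstream_grad

# Average pooling backward: gradient is split equally -> smooth flow.
avg_grad = np.full_like(x, upstream_grad / x.size)

print(max_grad)  # [0. 1. 0. 0.]  only the winner (3.0) receives gradient
print(avg_grad)  # [0.25 0.25 0.25 0.25]  every input receives gradient
```

Note that in both cases the total gradient passed back equals the upstream gradient; max pooling just concentrates it on one input per window, while average pooling spreads it over all of them.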