Transforming a feature might result in linearity on the link scale, but it might not. If the true relationship is $\text{logit}\big(\mathbb E[Y\vert X=x]\big) =\beta_0+\beta_1x+\beta_2x^2$, then you have a perfectly valid logistic regression, but you also need that quadratic term to model the relationship well.
Transforming features ($X$) is a separate issue from the link function. You might find that the relationship between the link-transformed expected value and the features works much better when you include something like a quadratic term or a logarithm. However, that has little to do with the skewness of the features and should come down to a combination of domain knowledge and model flexibility (as is the case in linear regression).
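As a sketch of that point, here is a small simulation where the true logit really is quadratic in $x$: a linear-only logistic regression misses the curvature, while adding the $x^2$ column recovers it. The coefficient values and the hand-rolled Newton-Raphson fitter are just for illustration; with real data you would use your statistics package's GLM routine instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)

# True relationship: logit(E[Y | X = x]) = 0.5 + 1.0*x - 2.0*x**2
eta = 0.5 + 1.0 * x - 2.0 * x**2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

def fit_logistic(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1 - p)
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    return beta

def loglik(X, y, beta):
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))

X_lin = np.column_stack([np.ones(n), x])          # intercept + x
X_quad = np.column_stack([np.ones(n), x, x**2])   # intercept + x + x^2

b_lin = fit_logistic(X_lin, y)
b_quad = fit_logistic(X_quad, y)

print("linear-only coefficients:", b_lin)
print("with quadratic term:     ", b_quad)  # roughly (0.5, 1.0, -2.0)
print("log-likelihood gain:", loglik(X_quad, y, b_quad) - loglik(X_lin, y, b_lin))
```

Note that $x$ itself is drawn from a symmetric normal distribution here; the linear-only model fails not because of any skewness in $x$ but because the model is missing a term the true relationship contains.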
In particular, GLMs make no assumptions about features having any particular distribution.