In general, if the agent is simply not allowed to take invalid actions in a given environment (e.g. due to the strict rules of a game, like chess), then it is standard practice to have the environment support that by providing some kind of function or filter for $\mathcal{A}(s)$, the set of actions available in state $s$.
However, the basic Gym interface does not support this, and there are no plans for it to do so.
It is still possible to write an environment that provides this information within the Gym API, using the env.step method to return it as part of the info dictionary:
next_state, reward, done, info = env.step(action)
The info return value can contain arbitrary environment-specific data, so if you are writing an environment where the set of valid actions changes with state, you can use it to communicate that to your agent. The caveat is that both your environment and your agent have to follow a convention you invented, so the same approach will not extend to other environments or be picked up by developers of other agents.
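For concreteness, here is a minimal sketch of that convention. Everything in it - the toy environment, its dynamics, and the "valid_actions" key in info - is invented for illustration and is not part of the Gym API:

    import gym
    import numpy as np
    from gym import spaces

    class CorridorEnv(gym.Env):
        """Toy 1-D corridor: moving off either end is invalid."""

        def __init__(self, size=5):
            self.size = size
            self.action_space = spaces.Discrete(2)         # 0: left, 1: right
            self.observation_space = spaces.Discrete(size)
            self.pos = 0

        def _valid_actions(self):
            # The action set A(s) depends on the current position
            valid = []
            if self.pos > 0:
                valid.append(0)
            if self.pos < self.size - 1:
                valid.append(1)
            return valid

        def reset(self):
            self.pos = 0
            return self.pos

        def step(self, action):
            assert action in self._valid_actions(), "invalid action"
            self.pos += 1 if action == 1 else -1
            done = self.pos == self.size - 1
            reward = 1.0 if done else 0.0
            # Our invented convention: advertise the next state's
            # valid actions in the info dict
            info = {"valid_actions": self._valid_actions()}
            return self.pos, reward, done, info

The agent then restricts its choice to whatever the last info reported:

    env = CorridorEnv()
    state = env.reset()
    info = {"valid_actions": [1]}   # reset() returns no info, so seed it by hand
    done = False
    while not done:
        # A random policy stands in for a real agent here; a learner would
        # instead maximise its action values over this valid subset
        action = int(np.random.choice(info["valid_actions"]))
        state, reward, done, info = env.step(action)

Note the wart in the usage example: because the classic reset() returns only a state, the agent has no official way to learn the initial valid actions - exactly the kind of gap a home-grown convention has to paper over.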
Alternatively, you can expose the valid actions in some other custom way that is not covered by the Gym API - a new method on the environment, a separate "rules engine" for the game, etc. Both the agent and the environment can call it in order to perform their roles correctly.
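A sketch of that approach, reusing the toy environment above (the valid_actions method name is again our own invention, not something Gym defines or calls):

    import numpy as np

    class MaskedCorridorEnv(CorridorEnv):
        def valid_actions(self):
            """Expose the action filter as a public method."""
            return self._valid_actions()

    env = MaskedCorridorEnv()
    state = env.reset()

    q = np.random.rand(env.action_space.n)   # stand-in for learned action values
    mask = np.full(env.action_space.n, -np.inf)
    mask[env.valid_actions()] = 0.0          # 0 for valid, -inf for invalid
    action = int(np.argmax(q + mask))        # an invalid action can never win the argmax
    state, reward, done, info = env.step(action)

One advantage over the info-dict convention is that the agent can query the valid set at any time, including immediately after reset(), rather than only receiving it as a by-product of step().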
The lack of an official API for this may change, but from the linked GitHub issue it seems that the OpenAI developers concluded that a generic interface - one that accounted for all the different kinds of action space - would be more effort than it was worth.