I am trying to apply reinforcement learning in an OpenAI Gym environment that has 6 discrete actions, each with an associated continuous value, e.g. increase parameter 1 by 2.2, decrease parameter 1 by 1.6, decrease parameter 3 by 1, etc.
I have seen in this code that such an action space was implemented as a fully continuous space whose first value is rounded to a discrete action (e.g. action 0 if the value is < 1, action 1 if it is between 1 and 2, and so on).
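To make it concrete, here is a minimal sketch of how I understand that approach; the environment name, bounds, and the mapping from the 6 choices to increase/decrease of 3 parameters are just placeholders I made up for illustration:

```python
import gym
import numpy as np
from gym import spaces


class ParamTuneEnv(gym.Env):
    """Toy environment illustrating the Box-based encoding (all names/bounds are placeholders)."""

    def __init__(self):
        # action[0]: continuous value that is later binned into one of 6 discrete choices
        # action[1]: continuous amount applied to the chosen parameter
        self.action_space = spaces.Box(
            low=np.array([0.0, 0.0], dtype=np.float32),
            high=np.array([6.0, 5.0], dtype=np.float32),
        )
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32
        )
        self.params = np.zeros(3, dtype=np.float32)

    def step(self, action):
        # Bin the first value: 0 if < 1, 1 if in [1, 2), ..., 5 otherwise
        choice = int(min(np.floor(action[0]), 5))
        amount = float(action[1])
        # Placeholder semantics: even choices increase, odd choices decrease a parameter
        sign = 1.0 if choice % 2 == 0 else -1.0
        self.params[choice // 2] += sign * amount
        reward = 0.0  # placeholder reward
        return self.params.copy(), reward, False, {}

    def reset(self):
        self.params = np.zeros(3, dtype=np.float32)
        return self.params.copy()
```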
Does anybody know if the above solution is the correct way to implement such an action space? Or does Gym offer another way?
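While looking around I also came across `gym.spaces.Tuple`, which seems like it could express the discrete choice and the continuous value directly, something like the sketch below (I have not verified whether common RL libraries handle this space):

```python
import numpy as np
from gym import spaces

# One discrete choice out of 6, plus a continuous magnitude for it
action_space = spaces.Tuple((
    spaces.Discrete(6),
    spaces.Box(low=0.0, high=5.0, shape=(1,), dtype=np.float32),
))

# A sampled action then looks like e.g. (3, array([1.6], dtype=float32))
print(action_space.sample())
```

Would this be the idiomatic alternative, or is the rounding approach above preferred in practice?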