I was wondering what the best approach is for training a neural network (or any other machine-learning model) when the order of its inputs does not matter.
For example: f(x1,x2,x3,x4) = f(x2,x1,x3,x4) = f(x2,x4,x1,x3)
My current approach is to randomize the order of the inputs in every training sample (my network has 44 inputs). It kind of works, but it causes the validation loss to jump around a lot from epoch to epoch.
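For clarity, here is a minimal sketch of the augmentation I mean, using NumPy and a made-up batch (the array shapes and the helper name are just for illustration):

```python
import numpy as np

# Hypothetical batch: 3 samples, each with 44 order-invariant inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 44))

def permute_features(X, rng):
    """Independently shuffle the input order of every sample.

    Since f(x1, x2, ...) is invariant to input order, each epoch can
    present the same sample with its 44 inputs in a fresh random order.
    """
    X_aug = X.copy()
    for row in X_aug:
        rng.shuffle(row)  # in-place shuffle of one sample's inputs
    return X_aug

X_shuffled = permute_features(X, rng)
```

Each row of `X_shuffled` contains the same 44 values as the corresponding row of `X`, just in a different order.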
Maybe something related to embeddings?
There are other questions that touch on this, but they deal with a variable number of inputs, typically with an RNN. I'm asking about the simpler case of a fixed number of inputs whose order doesn't matter.
Thanks!