When speed is the concern, a GPU is far better than a CPU.
But if I train a model on a GPU and then deploy the same trained model (with no quantization applied) on a CPU, will this affect the accuracy of my model? Can the accuracy of the same model degrade on a CPU?
My intuition says that GPU vs CPU should not make any difference as far as accuracy is concerned.
But I have one doubt: GPUs and CPUs process information differently internally, since they have different architectures. When a model is trained on a GPU, does exactly the same computation happen as it would on a CPU, just much more slowly? I am not concerned about accuracy during training, but if a model was trained on a GPU, will it perform with exactly the same accuracy on a CPU?
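To make the question concrete, here is a minimal sketch of the comparison I have in mind (assuming a PyTorch model; the model, shapes, and data below are placeholders, not my real setup):

```python
import torch
import torch.nn as nn

# Placeholder model and input, standing in for the actual trained model and test data.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(128, 32)

model.eval()
with torch.no_grad():
    # Run the same weights on CPU and, if available, on GPU.
    cpu_out = model(x)
    if torch.cuda.is_available():
        gpu_out = model.cuda()(x.cuda()).cpu()
        # Raw outputs may differ by tiny floating-point amounts...
        print("max abs difference:", (cpu_out - gpu_out).abs().max().item())
        # ...but do the actual class predictions ever change?
        print("predictions match:", torch.equal(cpu_out.argmax(1), gpu_out.argmax(1)))
```

In other words, are any differences between the two devices only at the level of floating-point rounding, or can they actually change the predictions and hence the measured accuracy?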
