Intro
Recently I managed to get my GPU (Intel Iris Plus Graphics) to accelerate TensorFlow. It took many hours of research and following tutorials (most of which didn't help). But now that I finally have it working, it trains my models much slower than my CPU: the estimated time on the GPU is 1-2 hours per epoch (my model is big; running it in Google Colab with a GPU takes about 6 minutes per epoch), while the estimated time on my CPU is around 40 minutes per epoch.
I really only have two main questions...
- Why?
- Is there anything I can do about it?
Why?
(Please note: you might need more info about how I got this running. I didn't include it here to start with, because I don't know whether you'll need it, and it will take a while to compile and write up in a way that makes sense. If you do need that information, let me know and I will add it to my question.)
What might be causing my GPU to train my models slower than my CPU?
Does TensorFlow use both the CPU and the GPU?
- If so, is there something I need to do to tell it to use both? Or is my GPU just not very good?
- If not, is there a way to tell it to? Or is there some other configuration I need to do to get my GPU running in a way that is optimized for TensorFlow model training?
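For anyone trying to reproduce this, a minimal sketch of how I check which devices TensorFlow can see and compare them (standard `tf.config` and `tf.device` APIs; the matrix size is just an arbitrary benchmark I made up):

```python
import tensorflow as tf

# List the devices TensorFlow can see. An empty GPU list means
# everything silently falls back to the CPU.
print(tf.config.list_physical_devices('CPU'))
print(tf.config.list_physical_devices('GPU'))

# Pin the same op to a specific device to time CPU vs GPU directly.
with tf.device('/CPU:0'):
    a = tf.random.uniform((1000, 1000))
    b = tf.matmul(a, a)  # runs on the CPU regardless of GPU visibility
```

By default TensorFlow places ops on the GPU when one is visible, so timing the same op under `tf.device('/CPU:0')` and `tf.device('/GPU:0')` is a quick way to see which is actually faster on a given machine.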
Is there anything I can do about it?
I touched on this in the "Why?" section, but I wanted to ask about it specifically.
Is there anything I can do to make it train faster? Or is my GPU just not as good as I'd like it to be?
I know one thing I could do is buy a better computer. I've been wanting a desktop with better hardware than my laptop, but I don't currently have the money.
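Until then, the workaround I'm considering is hiding the GPU from TensorFlow entirely, so training stays on the (currently faster) CPU without undoing the GPU setup. A minimal sketch using the standard `tf.config` API (this must run before any tensors or ops are created):

```python
import tensorflow as tf

# Make all GPUs invisible to TensorFlow; subsequent ops run on the CPU.
# Must be called before any tensors/ops are created, or it raises an error.
tf.config.set_visible_devices([], 'GPU')

# Confirm no GPU is visible to the runtime anymore.
print(tf.config.get_visible_devices('GPU'))  # -> []
```

Setting the environment variable `CUDA_VISIBLE_DEVICES=-1` is a similar trick for CUDA builds, but I'm not sure it applies to an Intel GPU backend, so the in-code approach seems safer here.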