Questions tagged [gpu]

In the context of machine learning, Graphics Processing Unit (GPU) questions typically concern hardware requirements, design considerations, or the level of parallelization involved in implementing and running various machine learning algorithms. Due to the size of modern data sets and the complexity of many cutting-edge techniques (deep learning, reinforcement learning, neural networks) applied across use cases (audio, video, signal processing), GPUs are often required to carry out these computations. The processing power of external GPUs can also be accessed through third-party cloud platforms such as Google Colab or Amazon Web Services.

165 questions
44 votes · 4 answers

Multi GPU in Keras

How can we program with the Keras library (or TensorFlow) to partition training across multiple GPUs? Let's say that you are on an Amazon EC2 instance that has 8 GPUs and you would like to use all of them to train faster, but your code is just for a…
Hector Blandin · 579 · 1 · 7 · 11
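
A minimal sketch of how multi-GPU data-parallel training is typically set up in TensorFlow 2.x / Keras, using tf.distribute.MirroredStrategy (the toy model and data are placeholders, not the asker's code):

    # Data-parallel training across all visible GPUs with MirroredStrategy.
    # Assumes TensorFlow 2.x; the model and data are toy placeholders.
    import numpy as np
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()  # picks up every visible GPU
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        # Variables created inside the scope are mirrored on each GPU.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    x = np.random.rand(2048, 20).astype("float32")
    y = np.random.rand(2048, 1).astype("float32")
    model.fit(x, y, epochs=2, batch_size=256)  # each batch is split across replicas
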
41 votes · 3 answers

Choosing between CPU and GPU for training a neural network

I've seen discussions about the 'overhead' of a GPU, and that for 'small' networks, it may actually be faster to train on a CPU (or network of CPUs) than a GPU. What is meant by 'small'? For example, would a single-layer MLP with 100 hidden units…
StatsSorceress · 1,981 · 3 · 14 · 30
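
One practical way to pin down "small" for a specific model is to time the same short training run on each device. A minimal sketch, assuming TensorFlow 2.x and at least one visible GPU (the tiny MLP is a stand-in, not the asker's network):

    # Time identical training runs on CPU and GPU to see where the crossover is.
    import time
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(10_000, 100).astype("float32")
    y = np.random.rand(10_000, 1).astype("float32")

    def train_on(device):
        with tf.device(device):
            model = tf.keras.Sequential([
                tf.keras.layers.Dense(100, activation="relu", input_shape=(100,)),
                tf.keras.layers.Dense(1),
            ])
            model.compile(optimizer="adam", loss="mse")
            start = time.time()
            model.fit(x, y, epochs=3, batch_size=32, verbose=0)
            return time.time() - start

    print("CPU seconds:", train_on("/CPU:0"))
    if tf.config.list_physical_devices("GPU"):
        print("GPU seconds:", train_on("/GPU:0"))
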
39 votes · 7 answers

Using TensorFlow with Intel GPU

Is there any way now to use TensorFlow with Intel GPUs? If yes, please point me in the right direction. If not, please let me know which framework, if any (Keras, Theano, etc.), I can use for my Intel Corporation Xeon E3-1200 v3/4th Gen Core…
James Bond · 1,155 · 2 · 11 · 12
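
One workaround that has been used for running Keras models on Intel integrated GPUs is the PlaidML backend (a separate project, not TensorFlow). A sketch, assuming plaidml-keras is installed and plaidml-setup has been run once to select the Intel device:

    # Keras on an Intel GPU via PlaidML: pip install plaidml-keras, then run
    # plaidml-setup once to pick the device. This swaps out the TensorFlow
    # backend entirely, so it works without CUDA.
    import plaidml.keras
    plaidml.keras.install_backend()   # or set KERAS_BACKEND=plaidml.keras.backend

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(32, activation="relu", input_shape=(10,)),
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(np.random.rand(256, 10), np.random.rand(256, 1), epochs=1)
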
36 votes · 3 answers

How to disable GPU with TensorFlow?

Using tensorflow-gpu 2.0.0rc0. I want to choose whether it uses the GPU or the CPU.
Florin Andrei · 1,080 · 1 · 9 · 13
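
A sketch of the two usual ways to keep TensorFlow 2.x on the CPU, assuming nothing in the process has touched the GPU yet (in 2.0 the second call lives under tf.config.experimental):

    # Option 1: hide CUDA devices from the process entirely.
    # Must be set before TensorFlow initializes the GPU.
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

    import tensorflow as tf

    # Option 2: leave the GPU installed but exclude it from TensorFlow's device list.
    # (tf.config.experimental.set_visible_devices in TF 2.0.)
    tf.config.set_visible_devices([], "GPU")

    print(tf.config.get_visible_devices())  # should now contain only CPU devices
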
23 votes · 3 answers

Should I use GPU or CPU for inference?

I'm running a deep learning neural network that has been trained by a GPU. I now want to deploy this to multiple hosts for inference. The question is: what are the conditions for deciding whether I should use GPUs or CPUs for inference? Adding more…
Dan · 341 · 1 · 2 · 6
16 votes · 5 answers

R: machine learning on GPU

Are there any machine learning packages for R that can make use of the GPU to improve training speed (something like Theano in the Python world)? I see that there is a package called gputools which allows execution of code on the GPU, but I'm…
Simon · 1,071 · 2 · 10 · 28
13 votes · 1 answer

How to make my Neural Network run on GPU instead of CPU

I have installed Anaconda3 and have installed the latest versions of Keras and TensorFlow. Running this command: from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) I find the notebook is running on CPU: [name:…
Deni Avinash · 133 · 1 · 1 · 5
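
A quick sanity-check sketch for this situation (TensorFlow 2.x): if no GPU appears below, the usual causes are having the CPU-only TensorFlow package installed instead of the GPU build, or a missing or mismatched CUDA/cuDNN installation:

    # Check whether TensorFlow can see a GPU and confirm ops actually land on it.
    import tensorflow as tf

    print("Built with CUDA:", tf.test.is_built_with_cuda())
    gpus = tf.config.list_physical_devices("GPU")
    print("Visible GPUs:", gpus)

    if gpus:
        # Keras uses the GPU automatically once it is visible, but pinning an op
        # makes the placement explicit and easy to verify:
        with tf.device("/GPU:0"):
            x = tf.random.uniform((1000, 1000))
            print((x @ x).device)  # should end with .../GPU:0
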
13 votes · 3 answers

CNN memory consumption

I'd like to be able to estimate whether a proposed model is small enough to be trained on a GPU with a given amount of memory. If I have a simple CNN architecture like this: Input: 50x50x3 C1: 32 3x3 kernels, with padding (I guess in reality they're…
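
A back-of-the-envelope sketch of the estimate being asked for, applied to the first layer given in the question (the figures assume 32-bit floats and a hypothetical batch size of 32; a real budget also needs gradients, optimizer state, and framework overhead):

    # Rough memory estimate for the question's first layer:
    # input 50x50x3, C1 = 32 filters of 3x3 with 'same' padding, float32.
    batch_size = 32          # hypothetical, not given in the question
    bytes_per_float = 4

    # Parameters: 3*3*3 weights per filter plus one bias, for 32 filters.
    c1_params = (3 * 3 * 3 + 1) * 32                  # = 896
    c1_param_bytes = c1_params * bytes_per_float      # ~3.5 KiB

    # Activations: 'same' padding keeps the 50x50 spatial size, with 32 channels,
    # stored for every example in the batch (and needed again on the backward pass).
    c1_activations = 50 * 50 * 32 * batch_size
    c1_activation_bytes = c1_activations * bytes_per_float

    print(f"C1 parameters:  {c1_params} (~{c1_param_bytes / 1024:.1f} KiB)")
    print(f"C1 activations: ~{c1_activation_bytes / 1024 ** 2:.1f} MiB per batch")
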
11 votes · 1 answer

GPU Accelerated Data Processing for R in Windows

I'm currently taking a paper on Big Data which has us utilising R heavily for data analysis. I happen to have a GTX 1070 in my PC for gaming reasons. Thus, I thought it would be really cool if I could use that to speed up some of the processing for…
Jesse Maher · 113 · 1 · 5
10 votes · 2 answers

What is the difference between Pytorch's DataParallel and DistributedDataParallel?

I am going through this imagenet example. In line 88, the module DistributedDataParallel is used. When I searched for it in the docs, I couldn't find anything. However, I found the documentation for DataParallel. So, I would like to know…
Dawny33 · 8,226 · 12 · 47 · 104
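
In short: DataParallel is single-process (it scatters each batch across the visible GPUs and gathers results back onto GPU 0), while DistributedDataParallel runs one process per GPU with gradient all-reduce, which is what the ImageNet example uses. A minimal sketch of each wrapper on a toy model (not the example's code):

    # DataParallel vs DistributedDataParallel on a toy model.
    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)

    # DataParallel: one Python process; each batch is split across all visible
    # GPUs and results are gathered on GPU 0, which tends to become a bottleneck.
    if torch.cuda.device_count() > 1:
        dp_model = nn.DataParallel(model.cuda())

    # DistributedDataParallel: one process per GPU, each holding a full replica;
    # gradients are all-reduced during backward. Needs a process group and is
    # usually launched via torchrun, e.g.:
    #
    #   import os
    #   import torch.distributed as dist
    #   dist.init_process_group(backend="nccl")
    #   local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    #   ddp_model = nn.parallel.DistributedDataParallel(
    #       model.cuda(local_rank), device_ids=[local_rank])
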
10 votes · 3 answers

Switching Keras backend Tensorflow to GPU

I use the Keras/TensorFlow combo installed with the CPU option (it was said to be more robust), but now I'd like to try it with the GPU version. Is there a convenient way to switch? Or shall I fully re-install TensorFlow? Is the GPU version reliable?
Hendrik · 8,377 · 17 · 40 · 55
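
The usual answer is that the Keras code itself does not change: replace the CPU-only TensorFlow package with the GPU-enabled build (the separate tensorflow-gpu package in the 1.x era) and Keras picks the GPU up automatically. A short verification sketch after reinstalling:

    # Verify that the installed TensorFlow build can see the GPU; Keras will then
    # use it without any code changes.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print("Built with CUDA:", tf.test.is_built_with_cuda())
    print(device_lib.list_local_devices())  # should include a device_type: "GPU" entry
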
9 votes · 1 answer

After the training phase, is it better to run neural networks on a GPU or CPU?

My understanding is that GPUs are more efficient for running neural nets, but someone recently suggested to me that GPUs are only needed for the training phase. Once trained, it's actually more efficient to run them on CPUs. Is this true?
Crashalot · 223 · 2 · 5
8 votes · 3 answers

Why do I get an OOM error although my model is not that large?

I am a newbie in GPU based training and deep learning models. I am running cDCGAN (Conditional DCGAN) in TensorFlow on my 2 Nvidia GTX 1080 GPUs. My data set consists of around 320,000 images with size 64*64 and 2,350 class labels. If I set my batch…
Ammar Ul Hassan · 185 · 1 · 1 · 5
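
Common first steps for this kind of OOM are lowering the batch size and stopping TensorFlow from pre-allocating the whole card. A sketch for TensorFlow 2.x (in 1.x the equivalent is a ConfigProto with gpu_options.allow_growth = True):

    # Let TensorFlow 2.x allocate GPU memory on demand instead of grabbing it all
    # up front; must run before any op touches the GPU.
    import tensorflow as tf

    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

    # Alternatively, cap a GPU to a fixed budget (here roughly 4 GB):
    # tf.config.set_logical_device_configuration(
    #     tf.config.list_physical_devices("GPU")[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
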
8 votes · 1 answer

Why doesn't training RNNs use 100% of the GPU?

I wonder why training RNNs typically doesn't use 100% of the GPU. For example, if I run this RNN benchmark on a Maxwell Titan X on Ubuntu 14.04.4 LTS x64, the GPU utilization is below 90%: The benchmark was launched using the command: python rnn.py…
Franck Dernoncourt · 5,573 · 9 · 40 · 75
7 votes · 2 answers

What is the best hardware/GPU for deep learning?

What is the best GPU for deep learning currently available on the market? I've heard that the Titan X Pascal from NVIDIA might be the most powerful GPU available at the moment, but it would be interesting to learn about other options. And additionally -…
Igor Bobriakov · 1,071 · 2 · 9 · 11