I'm trying to run a model that uses CUDA, and the Colab notebook I'm using is on a GPU runtime.
If I call print(torch.cuda.is_available()) from a Colab cell itself, it returns True, but when I run a Python file that prints the same condition, it returns False.
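
For reference, this is roughly what I'm comparing (check_cuda.py is just an illustrative name; the real script belongs to the model's codebase):

```python
# Run directly in a Colab cell — this prints True for me:
import torch
print(torch.cuda.is_available())
```

```python
# check_cuda.py — running it from a cell with `!python check_cuda.py`
# prints False on the same GPU runtime:
import torch
print(torch.cuda.is_available())
```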
There is a question (here) asking about the same problem that has been answered previously; however, the answer suggests calling the main function from the notebook directly. While that works around the issue, I still:
- Need to understand the underlying problem and the proper solution
- Even if I call the main function, how can I provide the arguments that would normally be passed on the command line when running the Python file (e.g. --config_file path_to_file)? See the sketch below for the kind of script I mean.
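
To make the second point concrete, the scripts I'm working with look roughly like this (the name train.py and the argument handling are my assumption of the typical argparse pattern, not the exact code):

```python
# train.py (simplified) — normally launched as:
#   python train.py --config_file path_to_file
import argparse

def main(args):
    # Uses args.config_file to set up the model, data, etc.
    print(f"Loading config from {args.config_file}")

if __name__ == "__main__":
    # Arguments are read from the command line here, which is exactly
    # the step I bypass if I import the file and call main() from a cell.
    parser = argparse.ArgumentParser()
    parser.add_argument("--config_file", type=str, required=True)
    main(parser.parse_args())
```

If I just do from train import main in a notebook cell, parse_args() never runs, so it's unclear to me how the --config_file value is supposed to get in.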
PS: I couldn't comment on the above-mentioned question as I don't have the required reputation, so I had to create a new question referencing the older one.