I need to use scipy.optimize module after encoding some data with pytorch. However, scipy.optimize does not take torch.tensor as its input. How can I use scipy.optimize module with data with gradient path attached?
Viewed 2,355 times
For questions regarding the `scipy` package, I insist you post the question on StackOverflow. – Shubham Panchal Aug 03 '19 at 05:32
[This](https://gist.github.com/gngdb/a9f912df362a85b37c730154ef3c294b) works pretty well – McLawrence Nov 18 '19 at 14:51
I wrote [this new wrapper](https://github.com/gngdb/pytorch-minimize) as well because I forgot I wrote that gist. – gngdb Mar 07 '21 at 19:28
2 Answers
Here's a workaround I'm currently using: the model's `forward` accepts either a tensor (during training) or a NumPy array (when called from `scipy.optimize`), and detaches the output in the latter case, since SciPy expects plain numbers rather than tensors on the autograd graph. (Note: the original used `Variable`, which is deprecated; `torch.from_numpy(...).float()` alone is enough in modern PyTorch.)

```python
import torch
from torch.nn import Module, Linear, LeakyReLU

# model definition
class MLP(Module):
    # define model elements
    def __init__(self):
        super(MLP, self).__init__()
        self.hidden1 = Linear(2, 200)
        self.hidden2 = Linear(200, 100)
        self.hidden3 = Linear(100, 1)
        self.activation1 = LeakyReLU()
        self.activation2 = LeakyReLU()

    # forward propagate input; accept NumPy arrays coming from scipy.optimize
    def forward(self, X):
        optimize_flag = False
        if not torch.is_tensor(X):
            optimize_flag = True
            X = torch.from_numpy(X).float()
        X = self.hidden1(X)
        X = self.activation1(X)
        X = self.hidden2(X)
        X = self.activation2(X)
        X = self.hidden3(X)
        if optimize_flag:
            # SciPy called us, so cut the gradient path before returning
            return X.detach()
        return X
```
Just make sure, when you are using `scipy.optimize.minimize` (or any other function that takes an `x0` argument), to pass in a detached tensor, e.g. `minimize(surrogate, x0=x.detach().numpy())`.
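A related pattern (a sketch, not part of this answer: the `value_and_grad` wrapper and the Rosenbrock example are my own illustration) is to let autograd compute the gradient and hand it to SciPy through the `jac` argument, so the optimizer gets exact derivatives instead of finite differences:

```python
import numpy as np
import torch
from scipy.optimize import minimize

def value_and_grad(f):
    """Wrap a scalar torch function so SciPy receives (float, np.ndarray)."""
    def wrapped(x_np):
        x = torch.tensor(x_np, dtype=torch.float64, requires_grad=True)
        y = f(x)
        y.backward()
        return y.item(), x.grad.numpy()
    return wrapped

# Example: the Rosenbrock function written in torch
def rosen(x):
    return torch.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

res = minimize(value_and_grad(rosen), x0=np.zeros(3),
               jac=True, method='L-BFGS-B')
print(res.x)  # close to [1, 1, 1]
```

With `jac=True`, SciPy treats the objective as returning a `(value, gradient)` pair, which is exactly what one `backward()` call gives you.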
Yamen
I wrote a library to do just that: `autograd-minimize`.
You can do something like:
```python
from autograd_minimize.torch_wrapper import torch_function_factory
from autograd_minimize import minimize

# wrap the model and loss into a function SciPy can consume
func, params = torch_function_factory(your_model, your_loss, X_train, y_train)

# minimization
res = minimize(func, params, method='L-BFGS-B')
```
bruno