
I need to use scipy.optimize module after encoding some data with pytorch. However, scipy.optimize does not take torch.tensor as its input. How can I use scipy.optimize module with data with gradient path attached?

Kiritee Gak
Eiffelbear

2 Answers


Here's a workaround I'm currently using:

```python
import torch
from torch.nn import Module, Linear, LeakyReLU

# model definition
class MLP(Module):
    # define model elements
    def __init__(self):
        super(MLP, self).__init__()
        self.hidden1 = Linear(2, 200)
        self.hidden2 = Linear(200, 100)
        self.hidden3 = Linear(100, 1)
        self.activation1 = LeakyReLU()
        self.activation2 = LeakyReLU()

    # forward propagate input
    def forward(self, X):
        # scipy passes NumPy arrays: convert on the way in,
        # and detach on the way out so scipy never sees a graph-attached tensor
        optimize_flag = False
        if not torch.is_tensor(X):
            optimize_flag = True
            X = torch.from_numpy(X).float()
        X = self.hidden1(X)
        X = self.activation1(X)
        X = self.hidden2(X)
        X = self.activation2(X)
        X = self.hidden3(X)
        if optimize_flag:
            return X.detach()
        return X
```

Just make sure, when you are using scipy.optimize.minimize or any other function that takes an x0 argument, to pass in a detached tensor (e.g. minimize(surrogate, x0=x.detach().numpy())).
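A minimal end-to-end sketch of this pattern, using a hypothetical toy network (not the MLP above) and minimizing the squared output so the problem is bounded below:

```python
import torch
from scipy.optimize import minimize

torch.manual_seed(0)
# hypothetical toy network standing in for the model
model = torch.nn.Sequential(
    torch.nn.Linear(2, 16),
    torch.nn.LeakyReLU(),
    torch.nn.Linear(16, 1),
)

def surrogate(x):
    # scipy hands us a float64 NumPy array: convert in, detach out
    x_t = torch.from_numpy(x).float()
    return (model(x_t) ** 2).detach().numpy().item()

x0 = torch.randn(2)
# pass a detached NumPy array as the starting point
res = minimize(surrogate, x0=x0.detach().numpy())
```

Note that because the output is detached, scipy falls back to numerical differentiation here; the gradient path inside the model is only used when you call the model with tensors during training.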

Yamen

I wrote a library to do just that: autograd-minimize

You can do something like:

```python
from autograd_minimize.torch_wrapper import torch_function_factory
from autograd_minimize import minimize

func, params = torch_function_factory(your_model, your_loss, X_train, y_train)

# Minimization
res = minimize(func, params, method='L-BFGS-B')
```
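If you'd rather stay with plain scipy, the underlying technique that such wrappers automate can be sketched by hand: compute the gradient with torch.autograd and return it alongside the objective value via `jac=True`. The `rosen` objective below is just a hypothetical example function:

```python
import numpy as np
import torch
from scipy.optimize import minimize

def value_and_grad(f):
    """Wrap a scalar torch function so scipy gets both f(x) and its gradient."""
    def wrapped(x):
        x_t = torch.tensor(x, dtype=torch.float64, requires_grad=True)
        y = f(x_t)
        y.backward()
        return y.item(), x_t.grad.numpy()
    return wrapped

# toy objective: the Rosenbrock function, minimized at (1, 1)
def rosen(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

res = minimize(value_and_grad(rosen), x0=np.zeros(2), jac=True, method='L-BFGS-B')
```

With an exact gradient from autograd, L-BFGS-B typically converges in far fewer function evaluations than with numerical differentiation.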
bruno