I'm looking for a way to create a conditional loss function that works like this: there is a vector of labels l (the same length as the input x), and for a given input (y_true, y_pred, l) the loss should be:

def conditional_loss_function(y_true, y_pred, l):
    if l == 0:
        return loss_function1(y_true, y_pred)
    else:  # l == 1
        return loss_function2(y_true, y_pred)

It is a kind of semi-supervised loss function.

Tian
    Possible duplicate of [Custom loss function with additional parameter in Keras](https://datascience.stackexchange.com/questions/25029/custom-loss-function-with-additional-parameter-in-keras) – bers Aug 29 '19 at 11:38

1 Answer


You should be able to solve this with currying: write a function that takes the label as input and returns a function which takes y_true and y_pred as input. Note that the label needs to be a constant or a tensor for this to work.

def conditional_loss_function(l):
    def loss(y_true, y_pred):
        if l == 0:
            return loss_function1(y_true, y_pred)
        else:
            return loss_function2(y_true, y_pred)
    return loss

model.compile(loss=conditional_loss_function(l), optimizer=...)
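For intuition, here is a minimal plain-Python sketch of the currying pattern on its own, with trivial stand-in losses (the names loss_function1 and loss_function2 are placeholders, not Keras functions). The outer call fixes l; the returned inner function has exactly the (y_true, y_pred) signature Keras expects for `loss=`:

```python
def loss_function1(y_true, y_pred):   # stand-in, e.g. absolute error
    return abs(y_true - y_pred)

def loss_function2(y_true, y_pred):   # stand-in, e.g. squared error
    return (y_true - y_pred) ** 2

def conditional_loss_function(l):
    # l is captured in the closure; the returned function only
    # needs (y_true, y_pred), matching Keras' loss signature
    def loss(y_true, y_pred):
        if l == 0:
            return loss_function1(y_true, y_pred)
        else:
            return loss_function2(y_true, y_pred)
    return loss

loss_a = conditional_loss_function(0)   # behaves like loss_function1
loss_b = conditional_loss_function(1)   # behaves like loss_function2
print(loss_a(1.0, 3.0))  # 2.0
print(loss_b(1.0, 3.0))  # 4.0
```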

Small working example with different loss function depending on the label:

# imports assumed for this example (Keras 2 with the TensorFlow backend)
import numpy
from keras.layers import Input, Dense
from keras.models import Model
from keras.losses import binary_crossentropy, mean_squared_error

# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
data = dataset[:,0:8]
label = dataset[:,8]

X = Input(shape=(8,))
Y = Input(shape=(1,))   # the labels enter the graph as a second input
x = Dense(12, activation='relu')(X)
x = Dense(8, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)

def custom_loss(l):
    def loss(y_true, y_pred):
        if l == 0:
            return binary_crossentropy(y_true, y_pred)
        else:
            return mean_squared_error(y_true, y_pred)
    return loss

# Compile model
model = Model(inputs=[X, Y], outputs=predictions)
model.compile(loss=custom_loss(Y), optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(x=[data, label], y=label, epochs=150)

# evaluate the model
scores = model.evaluate([data, label], label)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
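When l is a per-sample vector rather than a scalar (as the comments below discuss), a Python `if` inside the loss cannot branch per sample on a symbolic tensor. An element-wise 0/1 mask that blends the two losses achieves the same selection. Here is a plain-numpy sketch of that idea; the bce/mse functions are simplified per-sample stand-ins for the Keras losses, and in actual Keras code the mask would be built with backend ops such as `K.cast(K.equal(l, 0), K.floatx())` or `K.switch`:

```python
import numpy as np

def bce(y_true, y_pred):
    # simplified per-sample binary cross-entropy
    eps = 1e-7
    p = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def mse(y_true, y_pred):
    # simplified per-sample squared error
    return (y_true - y_pred) ** 2

def conditional_loss(y_true, y_pred, l):
    mask = (l == 0).astype(float)   # 1.0 where the label is 0, else 0.0
    # each sample gets bce where l == 0 and mse where l == 1
    return mask * bce(y_true, y_pred) + (1 - mask) * mse(y_true, y_pred)

y_true = np.array([1.0, 1.0])
y_pred = np.array([0.5, 0.5])
l      = np.array([0, 1])
print(conditional_loss(y_true, y_pred, l))  # [0.6931..., 0.25]
```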
Shaido
    I think you may have missed the point. The vector l is almost certainly dynamically changing. – kbrose Mar 01 '18 at 15:11
  • @kbrose: I believe it should work for your case. This GitHub issue has a similar solution: https://github.com/keras-team/keras/issues/2121 – Shaido Mar 01 '18 at 15:43
  • @kbrose You are right, l is a vector which has the same length as the input. – Tian Mar 01 '18 at 16:31
  • @Tian: I added a small working example that should better illustrate the idea – Shaido Mar 02 '18 at 01:31
  • @Shaido I am confused that should "label" here be a tensor? If "label" is a tensor, can we do like "if label == 0:"? – Tian Mar 02 '18 at 04:14
  • @Tian: To use the labels in the loss function, you need to make them into a tensor (easiest by passing them as one of the inputs to the model, as in the example above). In the normal case this is not necessary. The reason we can do `if l == 0:` here is that the `custom_loss` method takes `l` as input and gives back a function with the input/output accepted by `loss=` when compiling the model. – Shaido Mar 02 '18 at 05:21
  • This approach is not compatible with eager execution, see https://stackoverflow.com/questions/57704771/inputs-to-eager-execution-function-cannot-be-keras-symbolic-tensors – bers Aug 29 '19 at 11:38