
When training my CNN model, the prediction results depend on the random initialization of the weights. In other words, with the same training and test data I get different results every time I run the code. By tracking the loss, I can tell whether the result will be acceptable or not. Based on this, I want to know if there is a way to stop the training when the loss starts above a desired value, so that I can re-run it. The min_delta parameter of EarlyStopping does not handle this case.

Thanks in advance

1 Answer


You can extend the base Keras Callback class with a custom on_epoch_end method that compares your metric of interest against a threshold and stops training early.

The linked article provides a code sample with a custom callback class, plus the call that passes it to model.fit:

import tensorflow as tf

# Implement callback function to stop training
# when accuracy reaches ACCURACY_THRESHOLD
ACCURACY_THRESHOLD = 0.95

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Recent Keras versions log this metric as 'accuracy'; older ones used 'acc'
        acc = logs.get('accuracy') or logs.get('acc')
        if acc is not None and acc > ACCURACY_THRESHOLD:
            print("\nReached %2.2f%% accuracy, so stopping training!" % (ACCURACY_THRESHOLD * 100))
            self.model.stop_training = True

# Instantiate a callback object
callbacks = myCallback()

# Load the Fashion-MNIST dataset
mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
# Scale data
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build a simple fully connected model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=20, callbacks=[callbacks])
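Since the question is specifically about stopping when the loss *starts* above a desired value (so the run can be restarted with a fresh initialization), here is a minimal sketch adapting the same idea to check the loss after the first epoch. The class name and LOSS_THRESHOLD value are placeholders, not from the article:

```python
import tensorflow as tf

# Hypothetical threshold: set this to whatever first-epoch loss
# you consider a sign of a bad initialization
LOSS_THRESHOLD = 2.0

class StopOnBadStart(tf.keras.callbacks.Callback):
    """Stop training if the loss after the first epoch exceeds LOSS_THRESHOLD."""
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        loss = logs.get('loss')
        if epoch == 0 and loss is not None and loss > LOSS_THRESHOLD:
            print("\nEpoch 1 loss %.4f > %.4f; stopping so the run can be restarted."
                  % (loss, LOSS_THRESHOLD))
            self.model.stop_training = True
```

You would pass it the same way: model.fit(x_train, y_train, epochs=20, callbacks=[StopOnBadStart()]), then inspect whether training stopped after one epoch and re-run if it did.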

Check the linked article for more details:

https://towardsdatascience.com/neural-network-with-tensorflow-how-to-stop-training-using-callback-5c8d575c18a9

Brandon Donehoo
  • Thank you. Is there any way to access, from the callbacks object, the last logs.get('acc') value that satisfied the condition? – phillipe cauchett Sep 22 '20 at 14:23
  • Hi, not sure I fully understand the question. If it is how to determine what parameters you can access through the logs.get() function, I would try to print list(logs.keys()) in your on_epoch_end method to see what is available. Does that help? – Brandon Donehoo Sep 22 '20 at 14:44
  • Hi, I just want to know: after model.fit, when the training stops thanks to the custom callback, how can I get the accuracy value in this example that exceeded the threshold? In other words, can I add a line of code after model.fit to get the last recorded accuracy value? Thank you. – phillipe cauchett Sep 22 '20 at 15:03
  • Hi again, there's a line in the code which should print the accuracy value when the threshold is reached: print("\nReached %2.2f%% accuracy, so stopping training!!" %(ACCURACY_THRESHOLD*100)). Is that not working for you? – Brandon Donehoo Sep 22 '20 at 15:20
  • Thanks, but what I want is to get the value itself, something like return logs.get('loss') when logs.get('loss') > ACCURACY_THRESHOLD, in order to re-run the training. So, is there any way to get this value from the callbacks object? – phillipe cauchett Sep 22 '20 at 16:41
  • I found the answer here, thanks: https://stackoverflow.com/questions/36952763/how-to-return-history-of-validation-loss-in-keras – phillipe cauchett Sep 22 '20 at 22:52
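For later readers, the approach from that linked answer: model.fit returns a History object whose .history dict keeps one value per completed epoch for every logged metric, so the last recorded loss and accuracy are available after training stops, even when a callback ended it early. A minimal sketch; the toy data and layer sizes here are made up for illustration:

```python
import numpy as np
import tensorflow as tf

# Toy data, purely for illustration
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,))

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# fit() returns a History object; history.history maps each logged
# metric name to a list with one entry per completed epoch
history = model.fit(x, y, epochs=2, verbose=0)
last_loss = history.history["loss"][-1]
last_acc = history.history["accuracy"][-1]
```

If last_loss is above your threshold, you can simply re-build and re-fit the model in a loop until a run starts acceptably.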