
I'm training a Keras model with the layers below. The problem I face is that the validation loss stays constant and does not decrease. The model is meant to classify images from the Diabetic Retinopathy Detection dataset into 5 classes. The code snippet is as follows:

from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model

# ResNet50 backbone with ImageNet weights, no top classifier
input_layer = Input(shape=(224, 224, 3))
base_model = ResNet50(include_top=False, input_tensor=input_layer, weights="imagenet")

# Pooled features fed into a dropout-regularised dense head with a 5-way softmax
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(256, activation="relu")(x)
x = Dropout(0.3)(x)
x = Dense(128, activation="relu")(x)
# x = Dropout(0.5)(x)
out = Dense(5, activation="softmax")(x)

model = Model(inputs=input_layer, outputs=out)

And,

from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

optimizer = keras.optimizers.Adam(learning_rate=3e-4)

# Stop early on a stalled val_loss and halve the learning rate after 3 flat epochs
es = EarlyStopping(monitor="val_loss", mode="min", patience=8, restore_best_weights=True)
rlrop = ReduceLROnPlateau(monitor="val_loss", mode="min", patience=3, factor=0.5, min_lr=1e-6)

callback_list = [es, rlrop]

model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
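
I haven't included the data pipeline here, but the training call is along the lines of the sketch below (a minimal sketch only: the ImageDataGenerator setup, the "data/train" path and the generator names are placeholders, not my exact code):

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.resnet50 import preprocess_input

# Placeholder pipeline: directory iterators with ResNet50 preprocessing and a validation split
datagen = ImageDataGenerator(preprocessing_function=preprocess_input, validation_split=0.2)
train_gen = datagen.flow_from_directory("data/train", target_size=(224, 224),
                                        class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory("data/train", target_size=(224, 224),
                                      class_mode="categorical", subset="validation")

# Train with the callbacks defined above and keep the History object for plotting
history = model.fit(train_gen,
                    validation_data=val_gen,
                    epochs=50,
                    callbacks=callback_list)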

The main problem is the overfitting of the model. Please help; I'm not very experienced with this yet. Let me know if there's anything more I need to add for reference.
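
In case the loss curves are needed, this is roughly how I would plot them from the History object returned by model.fit (a minimal sketch; history is the hypothetical variable holding that return value, as in the training sketch above):

import matplotlib.pyplot as plt

# Plot training vs. validation loss per epoch from the Keras History object
plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("categorical cross-entropy")
plt.legend()
plt.show()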

  • Does this match your problem? [Validation loss is not decreasing](https://datascience.stackexchange.com/questions/43191/validation-loss-is-not-decreasing) – the_strange May 16 '23 at 17:43
  • Please share the training and validation loss plot to properly assess the situation. – noe May 16 '23 at 18:25

0 Answers