I guess there are better keywords for this than the ones I used; in particular, I'm not sure "complexity" is the right word, but I couldn't think of a better one.
Let's say I have an NLP model with 1. an Embedding layer, 2. an LSTM layer, and 3. an output layer with n classes.
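For concreteness, here is a minimal sketch of the kind of model I mean (the class name `TextClassifier` and the parameter names are just placeholders, not my actual code):

```python
import torch
import torch.nn as nn

class TextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        # x: (batch, seq_len) of token ids
        embedded = self.embedding(x)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)  # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])            # (batch, n_classes)
```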
I would like to know if there is an easy way to estimate the time complexity of training the model. For example, to compare an embedding dimension of 100 vs. 200, or an LSTM hidden dimension of 128 vs. 256, for a given sequence length. I suppose I could use %timeit on my train() function (or just on criterion(predicted_label, label).backward()), but if there is a better way, I'd prefer that.
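What I have in mind is something like the sketch below, which times one forward+backward step for different dimensions using `torch.utils.benchmark` (it handles warm-up and CUDA synchronization, unlike plain %timeit). It reuses the `TextClassifier` sketch above, and all the sizes are made-up assumptions:

```python
import torch
import torch.nn as nn
from torch.utils import benchmark

def time_step(embed_dim, hidden_dim, vocab_size=20000, n_classes=4,
              batch_size=64, seq_len=50):
    # Hypothetical sizes; adjust to match the real dataset.
    model = TextClassifier(vocab_size, embed_dim, hidden_dim, n_classes)
    criterion = nn.CrossEntropyLoss()
    x = torch.randint(0, vocab_size, (batch_size, seq_len))
    label = torch.randint(0, n_classes, (batch_size,))

    def step():
        # One training step: forward pass, loss, backward pass.
        model.zero_grad()
        loss = criterion(model(x), label)
        loss.backward()

    timer = benchmark.Timer(stmt="step()", globals={"step": step})
    return timer.timeit(100)  # statistics over 100 runs

print(time_step(embed_dim=100, hidden_dim=128))
print(time_step(embed_dim=200, hidden_dim=256))
```

But this is still just empirical timing of one configuration at a time, which is why I'm asking whether there is a more principled way to estimate how training time scales with these dimensions.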