
I would like to build an ANN for text classification that has an LSTM layer and uses weights obtained from a previously trained Doc2Vec model:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Flatten, Dense

model_doc2vec = Sequential()
model_doc2vec.add(Embedding(vocabulary_dim, 100, input_length=longest_document, weights=[training_weights], trainable=False))
model_doc2vec.add(LSTM(units=10, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_doc2vec.add(Flatten())
model_doc2vec.add(Dense(3, activation='softmax'))
model_doc2vec.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

At the moment I am not able to obtain the weights for the Embedding() layer shown above. I would like to know what the easiest way to get these weights is.

Simone

1 Answer


The weights are nothing but the pretrained word vectors. You can use word2vec or GloVe embeddings and build an embedding matrix from them: essentially, each row of the matrix is the vector for one word in the word2vec/GloVe vocabulary. Please have a look here.
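As a minimal sketch of what such an embedding matrix looks like: assume `word_index` maps each token to an integer id (as a Keras tokenizer would produce) and `word_vectors` maps tokens to 100-dimensional pretrained vectors (for example, the `.wv` lookup of a trained gensim Word2Vec/Doc2Vec model; random vectors stand in for it here). Both names are illustrative, not from the original post.

```python
import numpy as np

embedding_dim = 100
word_index = {"the": 1, "cat": 2, "sat": 3}  # toy tokenizer vocabulary
rng = np.random.default_rng(0)
# Stand-in for pretrained vectors (e.g. a gensim model's .wv lookup)
word_vectors = {w: rng.normal(size=embedding_dim) for w in word_index}

vocabulary_dim = len(word_index) + 1  # +1 so index 0 is reserved for padding
embedding_matrix = np.zeros((vocabulary_dim, embedding_dim))
for word, idx in word_index.items():
    vec = word_vectors.get(word)
    if vec is not None:  # words missing from the pretrained vocab stay all-zero
        embedding_matrix[idx] = vec
```

The resulting `embedding_matrix` is what you would pass as `weights=[embedding_matrix]` to `Embedding(vocabulary_dim, embedding_dim, ...)`, with `trainable=False` to keep the pretrained vectors frozen.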

Gyan Ranjan