
I would like to train a generative model that produces artificial handwritten text. Which architectures would you recommend?

The training input could be either images of handwritten letters (not words) or sequences of pen points for each letter. I thought of using some combination of a GAN and an LSTM/GRU. I have already found:

  1. http://blog.otoro.net/2015/12/12/handwriting-generation-demo-in-tensorflow/

  2. https://distill.pub/2016/handwriting/

I would appreciate any further recommendations.
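For concreteness, the Graves-style models behind both links typically consume handwriting as sequences of pen offsets with a pen-lift flag rather than raw images. A minimal sketch of that format, with made-up coordinates purely for illustration:

```python
import numpy as np

# Hedged sketch of the stroke format used by Graves-style handwriting
# models (as in the two links above): each timestep is (dx, dy, pen_lifted),
# i.e. pen offsets plus a binary end-of-stroke flag. The coordinates below
# are invented; here, the letter "L" drawn as two strokes:
letter_L = np.array([
    [0.0,  0.0, 0],  # pen down at the starting point
    [0.0, -1.0, 0],  # move downward while drawing
    [0.0, -1.0, 1],  # finish the vertical stroke, lift the pen
    [0.5,  0.0, 0],  # pen down again, draw the horizontal stroke
    [0.5,  0.0, 1],  # lift the pen: letter complete
], dtype=np.float32)
print(letter_L.shape)  # (5, 3): 5 timesteps, 3 features each
```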

GrozaiL
  • Interesting question. Is the goal to produce images that look like text or do they have to be words? That's kind of a nuanced but important difference in your question and would dictate your approaches. – I_Play_With_Data Jan 29 '19 at 18:26

3 Answers


I found an implementation of an LSTM-based handwriting generator. Maybe I will reuse some parts of it.
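As a rough illustration of what such an LSTM-based generator involves, here is a minimal Graves-style sketch in PyTorch; the hidden size, mixture count, and all other values are illustrative assumptions, not taken from any particular implementation:

```python
import torch
import torch.nn as nn

class HandwritingRNN(nn.Module):
    """Minimal sketch of a Graves-style handwriting model: an LSTM that,
    at each step, emits the parameters of a mixture of 2D Gaussians over
    the next pen offset (dx, dy) plus a pen-lift probability."""

    def __init__(self, hidden_size=256, num_mixtures=20):
        super().__init__()
        # Each input timestep has 3 features: (dx, dy, pen_lifted).
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size,
                            batch_first=True)
        # Per mixture: weight, mean_x, mean_y, std_x, std_y, correlation
        # (6 params), plus one logit for the pen-lift probability.
        self.head = nn.Linear(hidden_size, 6 * num_mixtures + 1)

    def forward(self, strokes, state=None):
        # strokes: (batch, seq_len, 3)
        out, state = self.lstm(strokes, state)
        return self.head(out), state

model = HandwritingRNN()
dummy = torch.zeros(1, 50, 3)   # one sequence of 50 pen points
params, _ = model(dummy)
print(params.shape)             # torch.Size([1, 50, 121])
```

Training would minimize the negative log-likelihood of the observed offsets under the predicted mixture; sampling then feeds each generated point back in, one step at a time.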

GrozaiL

I suggest implementing a "simple" GAN with convolutional layers. In my opinion, LSTM layers are not necessary here: they add complexity, while convolutional layers alone can achieve state-of-the-art results (and also save training time).

You can train your model(s) on the EMNIST dataset of handwritten letters.
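As a rough sketch of that suggestion, a minimal DCGAN-style generator/discriminator pair for the 28x28 EMNIST letters could look like the following; the latent dimension and all layer sizes are illustrative assumptions, not tuned values:

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed size of the noise vector

# Generator: noise (latent_dim, 1, 1) -> fake 1x28x28 letter image.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, kernel_size=7, stride=1),     # 1x1 -> 7x7
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),    # 14x14 -> 28x28
    nn.Tanh(),
)

# Discriminator: 1x28x28 image -> single real/fake logit.
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),    # 28x28 -> 14x14
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # 14x14 -> 7x7
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 1),
)

z = torch.randn(16, latent_dim, 1, 1)
fake_letters = generator(z)               # (16, 1, 28, 28)
print(discriminator(fake_letters).shape)  # torch.Size([16, 1])
```

If you go the PyTorch route, the dataset itself is available through torchvision.datasets.EMNIST(root, split='letters', download=True).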

Leevo

This paper uses images of words for training:

Adversarial Generation of Handwritten Text Images Conditioned on Sequences https://arxiv.org/abs/1903.00277

brian