Transfer learning between different languages

It looks fine on my end; I haven't seen errors like yours.

If I don't drop any layers, I get:

    Initializing model from /home/ben/Downloads/deepspeech-0.5.1-checkpoint
    Loading layer_1/bias
    Loading layer_1/weights
    Loading layer_2/bias
    Loading layer_2/weights
    Loading layer_3/bias
    Loading layer_3/weights
    Loading lstm_fused_cell/kernel
    Loading lstm_fused_cell/bias
    Loading layer_5/bias
    Loading layer_5/weights
    Loading layer_6/bias
    Traceback (most recent call last):
      File "/home/ben/PycharmProjects/DeepSpeech/DeepSpeech.py", line 893, in <module>
        tf.app.run(main)
      File "/home/ben/PycharmProjects/DeepSpeech/venv/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
        _sys.exit(main(argv))
      File "/home/ben/PycharmProjects/DeepSpeech/DeepSpeech.py", line 877, in main
        train()
      File "/home/ben/PycharmProjects/DeepSpeech/DeepSpeech.py", line 483, in train
        v.load(ckpt.get_tensor(v.op.name), session=session)
      File "/home/ben/PycharmProjects/DeepSpeech/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 2175, in load
        session.run(self._initializer_op, {self._initializer_op.inputs[1]: value})
      File "/home/ben/PycharmProjects/DeepSpeech/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
        run_metadata_ptr)
      File "/home/ben/PycharmProjects/DeepSpeech/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1128, in _run
        str(subfeed_t.get_shape())))
    ValueError: Cannot feed value of shape (29,) for Tensor 'layer_6/bias/Initializer/zeros:0', which has shape '(33,)'

    Process finished with exit code 1
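
The shape mismatch is just the output layer: the 0.5.1 English checkpoint stores layer_6 with 29 output classes (the English alphabet plus the CTC blank), while a graph built for a 32-character alphabet expects 33, so the stored bias cannot be fed into the new variable unless that layer is dropped. If you want to double-check what your checkpoint actually contains, here is a minimal sketch (assuming TensorFlow 1.x, which DeepSpeech 0.5.1 uses; the path is the one from the log above):

    # Minimal sketch: inspect the source checkpoint directly (TensorFlow 1.x
    # assumed, as used by DeepSpeech 0.5.1; the path comes from the log above).
    import tensorflow as tf

    ckpt_path = tf.train.latest_checkpoint(
        '/home/ben/Downloads/deepspeech-0.5.1-checkpoint')
    reader = tf.train.NewCheckpointReader(ckpt_path)

    # Print every stored variable with its shape; layer_6 is the output layer,
    # so its bias length equals the number of output classes of the source model.
    for name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print(name, shape)
    # The English 0.5.1 checkpoint reports layer_6/bias as [29], which is exactly
    # the value that cannot be fed into the new (33,)-shaped layer_6/bias.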

If I drop only the last layer:

    Initializing model from /home/ben/Downloads/deepspeech-0.5.1-checkpoint
    Loading layer_1/bias
    Loading layer_1/weights
    Loading layer_2/bias
    Loading layer_2/weights
    Loading layer_3/bias
    Loading layer_3/weights
    Loading lstm_fused_cell/kernel
    Loading lstm_fused_cell/bias
    Loading layer_5/bias
    Loading layer_5/weights
    Loading global_step
    Loading beta1_power
    Loading beta2_power
    Loading layer_1/bias/Adam
    Loading layer_1/bias/Adam_1
    Loading layer_1/weights/Adam
    Loading layer_1/weights/Adam_1
    Loading layer_2/bias/Adam
    Loading layer_2/bias/Adam_1
    Loading layer_2/weights/Adam
    Loading layer_2/weights/Adam_1
    Loading layer_3/bias/Adam
    Loading layer_3/bias/Adam_1
    Loading layer_3/weights/Adam
    Loading layer_3/weights/Adam_1
    Loading lstm_fused_cell/kernel/Adam
    Loading lstm_fused_cell/kernel/Adam_1
    Loading lstm_fused_cell/bias/Adam
    Loading lstm_fused_cell/bias/Adam_1
    Loading layer_5/bias/Adam
    Loading layer_5/bias/Adam_1
    Loading layer_5/weights/Adam
    Loading layer_5/weights/Adam_1
    I STARTING Optimization
    Epoch 0 |   Training | Elapsed Time: 0:00:02 | Steps: 2 | Loss: 145.359764
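
Conceptually, dropping the last layer just means the initialization loop skips every variable that belongs to that layer, so layer_6 keeps its fresh initializer with the new alphabet's shape instead of being fed the old (29,) values. Here is a rough, self-contained sketch of that idea (TensorFlow 1.x assumed; variable names follow the logs above, and the actual flag handling in the branch may differ):

    # Rough sketch: take every tensor from the source checkpoint except those
    # belonging to dropped layers, which stay freshly initialized so their shapes
    # can follow the new alphabet (TensorFlow 1.x assumed).
    import tensorflow as tf

    reader = tf.train.NewCheckpointReader(tf.train.latest_checkpoint(
        '/home/ben/Downloads/deepspeech-0.5.1-checkpoint'))
    drop_layers = ('layer_6',)  # output layer; its size depends on the alphabet

    restored = {}
    for name in sorted(reader.get_variable_to_shape_map()):
        if name.startswith(drop_layers):
            print('Skipping', name)  # left to its random initializer
            continue
        print('Loading', name)
        restored[name] = reader.get_tensor(name)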

So it looks fine. Which branch are you using? transfer-learning2? https://github.com/mozilla/DeepSpeech/tree/transfer-learning2

Yes, I was using transfer-learning2. I was able to find the issue: I had to additionally pass the parameters '--load init' and '--source_model_checkpoint_dir /model'. Before, I had only given the checkpoint_dir, which apparently wasn't enough. I am not entirely sure why, though.
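
For reference, the invocation that worked ends up looking roughly like this (paths are placeholders, and the usual training flags such as --train_files and --alphabet_config_path, plus the branch's option for dropping the last layer, still have to be supplied):

    python3 DeepSpeech.py \
        --load init \
        --source_model_checkpoint_dir /model \
        --checkpoint_dir /path/to/new/checkpoint_dir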

Anyway, thank you very much for the help.
