How can I hold the model fixed?

This question may not make much sense, but I am trying to do some work in the adversarial space and want to hold the model fixed so that I can backpropagate to (and over) my input audio, changing the audio based on the output of the model. Those familiar with adversarial attacks will recognize what I’m describing.

For the rest, basically… I have my input audio, X, and predictions Y coming out of DeepSpeech (DS). I want to pass X through DS to get Y, modify Y to be something else (instead of the correct word, maybe I want it to be “happy”), and then backpropagate that through DS (without changing the weights) back to my input X, and modify X accordingly.
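In framework terms, that is just gradient descent on the input with the weights frozen. Here is a minimal, framework-free sketch of the idea; the “model” is a stand-in single logistic unit (not DeepSpeech), and `W`, `B`, the target, and the learning rate are all made-up values:

```python
import math

# Fixed "model": a single logistic unit standing in for the network.
# Its weights never change; only the input is optimized.
W = 2.0
B = -1.0

def predict(x):
    """Forward pass through the frozen model."""
    return 1.0 / (1.0 + math.exp(-(W * x + B)))

def attack(x, target, lr=0.5, steps=500):
    """Gradient-descend on the input x until the frozen model outputs `target`."""
    for _ in range(steps):
        y = predict(x)
        # dLoss/dx for Loss = (y - target)^2, by the chain rule:
        # dL/dy = 2(y - t), dy/dz = y(1 - y), dz/dx = W
        grad_x = 2.0 * (y - target) * y * (1.0 - y) * W
        x -= lr * grad_x  # update the input, never W or B
    return x

x_adv = attack(x=0.0, target=0.9)
```

The same loop shape applies to the real setting: the forward pass is DeepSpeech, the loss compares the output to the *desired* transcription, and the update is applied to the audio tensor rather than to any weight.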

Can anyone provide any guidance as to how to do that?

Thank you kindly

https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/train/AdamOptimizer?hl=en#compute_gradients

So then do I just pipe my input into the DeepSpeech model and call compute_gradients for each layer of DeepSpeech?
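You wouldn’t need to do it layer by layer: a single backward pass already chains through every layer down to the input (in TF 1.x terms, one `tf.gradients(loss, input)` call, or `compute_gradients` with the input tensor in `var_list`). A framework-free sketch of that single pass, using a toy two-layer model with made-up weights, checked against a numerical gradient:

```python
import math

# Frozen two-layer toy network: x -> tanh layer -> sigmoid layer -> loss.
# All weights are fixed, illustrative values.
W1, B1 = 1.5, 0.2
W2, B2 = -2.0, 0.5
TARGET = 0.8

def forward(x):
    a = math.tanh(W1 * x + B1)                   # layer 1
    y = 1.0 / (1.0 + math.exp(-(W2 * a + B2)))   # layer 2
    return a, y

def loss(x):
    _, y = forward(x)
    return (y - TARGET) ** 2

def grad_wrt_input(x):
    """One backward pass: the chain rule runs through both layers down to x."""
    a, y = forward(x)
    dL_dy = 2.0 * (y - TARGET)
    dy_da = y * (1.0 - y) * W2   # through layer 2
    da_dx = (1.0 - a * a) * W1   # through layer 1
    return dL_dy * dy_da * da_dx  # dL/dx from a single backward pass

# Sanity check against a central-difference numerical gradient.
x0, eps = 0.3, 1e-6
numeric = (loss(x0 + eps) - loss(x0 - eps)) / (2 * eps)
analytic = grad_wrt_input(x0)
```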

Just wanted to follow up on the best way to approach this?

What’s wrong with the link we shared with you?

I guess it makes sense; I’m just not sure where in the DeepSpeech codebase I’d start to do that. And what is the loss function calculating? It’s not WER directly, I assume?

The loss function computes the loss of the network, so that backpropagation can be performed.

Right - so what is the loss function for DeepSpeech?

CTC loss, as documented.

This is properly documented in https://deepspeech.readthedocs.io/en/v0.6.0/DeepSpeech.html#introduction
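For intuition about what CTC computes, here is a minimal pure-Python version of the CTC forward algorithm (the quantity `tf.nn.ctc_loss` evaluates) — a sketch for illustration, not DeepSpeech’s actual implementation; the vocabulary, blank index, and probabilities below are made up:

```python
import math

def ctc_loss(probs, labels, blank=0):
    """Negative log probability of `labels`, given per-frame distributions
    `probs` (a T x vocab list of lists), summed over every alignment that
    collapses to `labels` after removing repeats and blanks."""
    # Extend the label sequence with blanks: l1 l2 -> _ l1 _ l2 _
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of all alignment prefixes at ext[s], time t
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]

    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                       # stay on the same symbol
            if s >= 1:
                a += alpha[s - 1]              # advance one position
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]              # skip the blank between symbols
            new[s] = a * probs[t][ext[s]]
        alpha = new

    total = alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
    return -math.log(total)
```

For example, with two frames of uniform probabilities over {blank, “h”} and the label [“h”], the alignments that collapse to “h” are h·h, h·blank, and blank·h, so the total probability is 3 × 0.25 = 0.75. Because the loss sums over alignments rather than comparing word strings, it is differentiable, which is what makes the backpropagation-to-input approach above possible; WER is only an evaluation metric.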