Adversarial examples: training input placeholders

I’m a PhD student researching audio adversarial examples, so I’ll be using DeepSpeech to generate attacks against ASR (I’m also looking at verification models).

There are existing attacks against DeepSpeech, but they only work against v0.1. GitHub issues for those attacks:


Would the devs (or anyone else) be interested in adding an optional argument to create_inference_model that lets callers modify the input placeholders, so we can optimise our attacks end-to-end? A sketch of what I mean is below. N.B. This is not about training DeepSpeech itself; it’s about optimising a specific attack method against a pre-trained model checkpoint.
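
Something along these lines, as a minimal sketch: the signature, the feature shape, and the `build_acoustic_model` stand-in are all hypothetical here, not the actual DeepSpeech code. The idea is just that a caller-supplied tensor replaces the placeholder, so gradients can flow back into it:

```python
import numpy as np
import tensorflow as tf

def build_acoustic_model(features):
    # Stand-in for the real acoustic-model graph.
    return tf.layers.dense(features, units=29)  # ~ alphabet size + CTC blank

def create_inference_model(batch_size=1, n_steps=16, n_features=26,
                           input_tensor=None):
    # Proposed change: if the caller supplies a tensor, build the graph on
    # top of it so gradients flow back through it; otherwise keep the usual
    # feed-dict placeholder behaviour.
    if input_tensor is None:
        input_tensor = tf.placeholder(
            tf.float32, [batch_size, n_steps, n_features], name='input_node')
    return input_tensor, build_acoustic_model(input_tensor)

# Attack-side usage: optimise a trainable perturbation instead of feeding data.
benign = tf.constant(np.random.randn(1, 16, 26).astype(np.float32))
delta = tf.Variable(tf.zeros([1, 16, 26]))    # adversarial perturbation
_, logits = create_inference_model(input_tensor=benign + delta)
loss = tf.reduce_mean(logits)                 # stand-in for a real attack loss
train_op = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[delta])
```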

This would be particularly useful in the 0.5 release, since it looks like the MFCC computation and windowing will be handled natively inside the DeepSpeech graph, so attacks won’t require any additional feature-extraction code (the current blocker).
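
For context on why that removes the blocker: once featurisation lives inside the graph, gradients reach the raw waveform directly. A rough sketch of a differentiable pipeline using `tf.signal` (`tf.contrib.signal` in older TF 1.x releases); the window/step/feature parameters below are my assumptions and may not match DeepSpeech’s exact featurisation:

```python
import tensorflow as tf

audio = tf.placeholder(tf.float32, [1, 16000], name='raw_audio')  # 1 s @ 16 kHz

# Windowed STFT -> magnitude spectrogram (differentiable w.r.t. the audio).
stft = tf.signal.stft(audio, frame_length=512, frame_step=320, fft_length=512)
spectrogram = tf.abs(stft)

# Mel filterbank and log compression.
mel_matrix = tf.signal.linear_to_mel_weight_matrix(
    num_mel_bins=40, num_spectrogram_bins=257, sample_rate=16000)
mel = tf.tensordot(spectrogram, mel_matrix, axes=1)
log_mel = tf.log(mel + 1e-6)

# Keep the first 26 coefficients (assuming 26 MFCC features per frame).
mfccs = tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :26]

# Gradients flow from the features back to the waveform -- the property
# needed to optimise an attack end-to-end against the raw audio.
grads = tf.gradients(tf.reduce_sum(mfccs), audio)
```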

I’m currently building my own fork to do this, but figured I’d ask and see whether there’s any interest in doing it on the project side.

Thanks, Dx