Hello everyone,
I saw this gist that runs inference on microphone input with Python (https://gist.github.com/reuben/80d64de15d1f46d34d28c7e83fc5f57e#file-ds_mic-py) and I’ve been trying to get it working in Node for the past couple of days, to no avail.
I’ve tried using the feedAudioContent method like in the Python gist, but I keep getting an “illegal number of arguments” error when I pass a stream as the first argument and the mic buffer as the second.
I’ve also tried to get it working like the official example (https://github.com/mozilla/DeepSpeech/blob/master/native_client/javascript/client.js), and there I’ve gotten slightly further. I replaced the audio buffer read from disk with one recorded in the browser, converted it to WAV, and sent it as an ArrayBuffer with the following recorder settings:
codec: {
sampleRate: 16000,
channels: 1,
app: 2048,
frameDuration: 20,
bufferSize: 2048
}
But whenever I do this, all I get is either a blank inference result, or occasionally just the letter “h”.
I haven’t modified the original script much: instead of reading a buffer from a WAV file with fs, I just have the browser send a buffer.
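One thing I suspect is the sample format: browser audio is usually Float32 at the device rate (often 48 kHz), while DeepSpeech expects 16 kHz, 16-bit, mono PCM, and feeding it the wrong format would explain the blank/“h” results. Here is the kind of conversion I’ve been experimenting with before handing the buffer to the model — plain Node, no DeepSpeech dependency, and the function name and naive every-Nth-sample resampling are my own (no low-pass filtering, so it’s a sketch, not production resampling):

```javascript
// Naive conversion from a browser Float32Array (assumed 48 kHz mono)
// to the 16 kHz, 16-bit little-endian PCM Buffer DeepSpeech expects.
// Decimates by taking every 3rd sample (48000 / 16000 = 3).
function floatTo16kHzPcm(float32, inputRate = 48000, targetRate = 16000) {
  const ratio = inputRate / targetRate; // assumed to be an integer here
  const outLength = Math.floor(float32.length / ratio);
  const out = Buffer.alloc(outLength * 2); // 2 bytes per 16-bit sample
  for (let i = 0; i < outLength; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, float32[Math.floor(i * ratio)]));
    out.writeInt16LE(Math.round(s * 32767), i * 2);
  }
  return out;
}
```

Even with this in place I get the same results, so I’m not sure whether the problem is the resampling or something else entirely.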
Any help on this would be incredibly appreciated.