Hello,
I am serving a model trained with DeepSpeech in TF Serving just fine. However, without the custom decoder my WER is around 16%. I exported the model using the custom decoder, but when I loaded it into TF Serving I received:
Loading servable: {name: speech_detection version: 1} failed: Not found: Op type not registered 'CTCBeamSearchDecoderWithLM' in binary running on c41d408c3a94. Make sure the Op and Kernel are registered in the binary running in this process.
Has anyone resolved this issue?
I figured I could attempt rebuilding TF Serving with the .so files included in the BUILD file, but wanted advice before I started investigating.
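For reference, the kind of BUILD change I have in mind would be a minimal sketch like the following. The target path and name for the DeepSpeech decoder op library are assumptions (they depend on where the custom-op sources are placed in the TF Serving source tree), not verified paths:

```python
# Hypothetical sketch -- paths and target names below are assumptions.
# Idea: link the custom-op kernel library into the model server binary
# so that 'CTCBeamSearchDecoderWithLM' is registered at startup.
# In tensorflow_serving/model_servers/BUILD:
cc_binary(
    name = "tensorflow_model_server",
    # ... existing srcs/linkopts unchanged ...
    deps = [
        # ... existing deps unchanged ...
        # assumed target wrapping DeepSpeech's custom decoder op sources:
        "//deepspeech/native_client:ctc_decoder_with_kernels",
    ],
)
```

The point is that TF Serving is a C++ binary, so loading a .so at SavedModel load time isn't enough; the op and kernel have to be compiled or linked into the server binary itself for registration to happen.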
Thanks,
Dom
P.s.
I understand TF Serving is not supported. I am just wondering if anyone has encountered this. I am happy to make a PR to the docs on how to get the model working in Serving without (and hopefully with) the decoder.