Hi, I have exported a model with a batch size of 32. I want to run inference on 32 audio files at once instead of on individual files.
Our clients only support batch_size=1. For batched inference you can take a look at the evaluate.py code, which works directly from a training checkpoint, or at evaluate_tflite.py, which uses the Python package with an exported (batch_size=1) model and splits the work across processes.
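To make the second approach concrete, here is a minimal sketch of the idea behind evaluate_tflite.py: each worker process loads its own copy of the exported batch_size=1 model and transcribes a share of the files in parallel. The model path, file names, and pool size are assumptions for illustration; the `Model(path)` / `model.stt(audio)` calls follow the deepspeech Python package's API.

```python
import wave
from multiprocessing import Pool

import numpy as np
from deepspeech import Model

MODEL_PATH = "output_graph.tflite"  # hypothetical path to your exported model

_model = None  # one Model instance per worker process


def _init_worker():
    # Each process loads the batch_size=1 model once, since the model
    # object cannot be shared across processes.
    global _model
    _model = Model(MODEL_PATH)


def _transcribe(wav_path):
    # DeepSpeech expects 16 kHz, 16-bit mono PCM audio.
    with wave.open(wav_path, "rb") as wav:
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    return wav_path, _model.stt(audio)


if __name__ == "__main__":
    wav_files = ["audio_%02d.wav" % i for i in range(32)]  # your 32 inputs
    with Pool(processes=4, initializer=_init_worker) as pool:
        for path, transcript in pool.imap_unordered(_transcribe, wav_files):
            print(path, "->", transcript)
```

You still run batch_size=1 inference per call; the throughput gain comes from running several of those calls concurrently across processes.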