We offer a model called "Turbo V2", which is highly optimized for low-latency applications such as chatbots, while maintaining the quality and vocal performance users have come to expect from our other models.
We've consistently measured around 400 milliseconds of latency.
Currently, the model is available only in English. You can read more about it here.
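For latency-sensitive use cases like chatbots, the figure that usually matters is time-to-first-audio: the gap between sending a request and receiving the first audio chunk of the streamed response. The sketch below shows one way to measure it; the `fake_stream` generator is a hypothetical stand-in for a real streaming response, used here only so the example is self-contained.

```python
import time


def time_to_first_chunk(stream):
    """Return (latency_seconds, first_chunk) for an iterable of audio chunks.

    Timing starts when we begin consuming the stream and stops when the
    first chunk arrives, approximating perceived time-to-first-audio.
    """
    start = time.monotonic()
    first = next(iter(stream))
    return time.monotonic() - start, first


def fake_stream():
    """Hypothetical stand-in for a streamed TTS response.

    In practice this would be the chunk iterator of an HTTP audio stream;
    the sleep simulates network plus model delay.
    """
    time.sleep(0.05)
    yield b"audio-chunk-0"
    yield b"audio-chunk-1"


latency, chunk = time_to_first_chunk(fake_stream())
print(f"time to first chunk: {latency * 1000:.0f} ms")
```

Averaging this measurement over repeated requests is how a figure like "around 400 milliseconds" would typically be established.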