Do you offer an AI model for conversational purposes or for chatbots?


Our latest model, Turbo v2.5, is a highly optimized model tailored for low-latency applications. It does this without sacrificing vocal performance, keeping in line with the quality standard that people have come to expect from our models.

We’ve consistently measured latency of around 300 ms, plus network time.

It’s 25% faster than Turbo v2 and 300% faster than Multilingual v2, and it adds Vietnamese, Hungarian, and Swedish to our existing 29 languages, for 32 in total.

You can read more about it here.