What are No-Go Voices?

In our continuing efforts to ensure a positive experience for all users of our platform, we’re taking specific steps to prevent AI voices from being used to spread misinformation. While our terms already prohibit using our platform to impersonate or harm others, we are adding a “no-go voices” safeguard designed to detect and prevent the creation of specific voice clones. We are working to expand this safeguard to other languages and election cycles, and we aim to continually refine it through practical testing and feedback.


What is the purpose of the no-go voices policy?

  • This safeguard restricts the creation of voice clones that approximate the voices of political figures, including those actively involved in presidential elections in the US and UK, as part of our broader commitment to prevent AI voices from being used to fabricate misleading content.

What happens to banned voices?

  • The voices are blocked and are no longer usable.

What action do you take in misuse cases?

  • In accordance with our terms and community rules, we take action appropriate to the violation, which may include warnings, removal of voices, account bans, and, in appropriate cases, cooperation with authorities.

Why don’t you add more voices to the list?

  • We are currently evaluating additional voices and are seeking to work with industry partners to extend these safeguards.

