Using synthesizers
How to control the voice of your application.
Overview
Synthesizers convert text into speech. This guide shows you how to configure and use synthesizers in Vocode.
Supported Synthesizers
Vocode currently supports the following synthesizers:
- Azure (Microsoft)
- Cartesia
- Eleven Labs
- Rime
- Play.ht
- GTTS (Google Text-to-Speech)
- Stream Elements
- Bark
- Amazon Polly
These synthesizers are defined using their respective configuration classes, which are subclasses of the SynthesizerConfig class.
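For example, the config classes used in this guide can all be imported together (a sketch; the vocode.streaming.models.synthesizer module path is an assumption, and the other providers follow the same naming pattern):

```python
from vocode.streaming.models.synthesizer import (
    AzureSynthesizerConfig,
    CartesiaSynthesizerConfig,
    ElevenLabsSynthesizerConfig,
    PlayHtSynthesizerConfig,
)
```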
Configuring Synthesizers
To use a synthesizer, you need to create a configuration object for the synthesizer you want to use. Here are some examples of how to create configuration objects for different synthesizers:
Example 1: Using Eleven Labs with a phone call
In this example, the ElevenLabsSynthesizerConfig.from_telephone_output_device() method is used to create a configuration object for the Eleven Labs synthesizer. The method hardcodes some values, like the sampling_rate and audio_encoding, for compatibility with telephone output devices.
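For example (a minimal sketch; the ELEVEN_LABS_API_KEY environment variable name and the placeholder voice ID are illustrative):

```python
import os

from vocode.streaming.models.synthesizer import ElevenLabsSynthesizerConfig

# from_telephone_output_device() hardcodes sampling_rate and audio_encoding
# to telephone-compatible values, so only voice settings are supplied here.
synthesizer_config = ElevenLabsSynthesizerConfig.from_telephone_output_device(
    api_key=os.getenv("ELEVEN_LABS_API_KEY"),  # illustrative env var name
    voice_id="YOUR_VOICE_ID",  # replace with an Eleven Labs voice ID
)
```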
ElevenLabs Input Streaming
You can try out our experimental implementation of ElevenLabs’ input streaming API by passing experimental_websocket=True into the config and using the ElevenLabsWSSynthesizer, like:
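(A sketch; the ElevenLabsWSSynthesizer import path below is an assumption, and the credentials are illustrative.)

```python
import os

from vocode.streaming.models.synthesizer import ElevenLabsSynthesizerConfig
from vocode.streaming.synthesizer.eleven_labs_websocket_synthesizer import (
    ElevenLabsWSSynthesizer,
)

# Same config as above, with the experimental websocket flag enabled
synthesizer_config = ElevenLabsSynthesizerConfig.from_telephone_output_device(
    api_key=os.getenv("ELEVEN_LABS_API_KEY"),
    voice_id="YOUR_VOICE_ID",
    experimental_websocket=True,
)
synthesizer = ElevenLabsWSSynthesizer(synthesizer_config)
```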
Play.ht v2
We now support Play.ht’s new gRPC streaming API, which runs much faster than their HTTP API and is designed for realtime communication.
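A hedged sketch of opting into the v2 API; the version field name below is hypothetical, and the credential parameters follow PlayHtSynthesizerConfig's typical fields, so check the config class for the exact names:

```python
import os

from vocode.streaming.models.synthesizer import PlayHtSynthesizerConfig

synthesizer_config = PlayHtSynthesizerConfig.from_telephone_output_device(
    api_key=os.getenv("PLAY_HT_API_KEY"),  # illustrative env var names
    user_id=os.getenv("PLAY_HT_USER_ID"),
    voice_id="YOUR_VOICE_ID",
    version="2",  # hypothetical field name for selecting the gRPC streaming API
)
```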
Example 2: Using Cartesia’s streaming synthesizer
We support Cartesia’s low-latency, WebSocket-based streaming API. You can use the CartesiaSynthesizer with the CartesiaSynthesizerConfig to enable this feature, as sketched under Telephony below.
Telephony
In this example, the CartesiaSynthesizerConfig.from_telephone_output_device() method is used to create a configuration object for the Cartesia synthesizer. As in the Eleven Labs example above, the method hardcodes the sampling_rate and audio_encoding for compatibility with telephone output devices.
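For example, a minimal sketch (the CartesiaSynthesizer import path and the CARTESIA_API_KEY environment variable name are assumptions):

```python
import os

from vocode.streaming.models.synthesizer import CartesiaSynthesizerConfig
from vocode.streaming.synthesizer.cartesia_synthesizer import CartesiaSynthesizer

# Telephone-compatible sampling_rate and audio_encoding are hardcoded by
# the factory method; you supply the credentials and the voice.
synthesizer_config = CartesiaSynthesizerConfig.from_telephone_output_device(
    api_key=os.getenv("CARTESIA_API_KEY"),
    voice_id="YOUR_CARTESIA_VOICE_ID",
)
synthesizer = CartesiaSynthesizer(synthesizer_config)
```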
Controlling Speed & Emotions
You can set the speed and emotion parameters in the CartesiaSynthesizerConfig object to control the speed and emotions of the agent’s voice! See this page for more details.
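A sketch, assuming speed and emotion are accepted as keyword arguments by the factory method; the values shown follow Cartesia's voice-control format and may differ for your account:

```python
import os

from vocode.streaming.models.synthesizer import CartesiaSynthesizerConfig

synthesizer_config = CartesiaSynthesizerConfig.from_telephone_output_device(
    api_key=os.getenv("CARTESIA_API_KEY"),
    voice_id="YOUR_CARTESIA_VOICE_ID",
    speed="fast",  # illustrative value; accepted speeds come from Cartesia
    emotion=["positivity:high"],  # illustrative value; accepted tags come from Cartesia
)
```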
Example 3: Using Azure in StreamingConversation locally
In this example, the AzureSynthesizerConfig.from_output_device() method is used to create a configuration object for the Azure synthesizer. The method takes a speaker_output object as an argument and extracts the sampling_rate and audio_encoding from the output device.
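For example (a minimal sketch; the SpeakerOutput and AzureSynthesizer import paths are assumptions, and your Azure Speech credentials are expected to be configured in your environment):

```python
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig
from vocode.streaming.output_device.speaker_output import SpeakerOutput
from vocode.streaming.synthesizer.azure_synthesizer import AzureSynthesizer

# Pick the default speaker; from_output_device() copies its
# sampling_rate and audio_encoding into the config.
speaker_output = SpeakerOutput.from_default_device()

synthesizer = AzureSynthesizer(
    AzureSynthesizerConfig.from_output_device(speaker_output)
)
```

The resulting synthesizer can then be passed as the synthesizer argument when you construct your StreamingConversation, alongside your transcriber and agent.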