Installation

npm install vocode

Or, start from our Replit template.

Usage

Setting up the conversation

Our self-hosted backend lets you expose a websocket route in the same format as our hosted backend, so you can deploy any agent you’d like into the conversation.

To get started, clone the Vocode repo or copy the client_backend app directory.

Environment

Copy the .env.template to .env and add your API keys:

cp .env.template .env

You’ll need:

  • Deepgram (for speech transcription)
  • OpenAI (for the underlying agent)
  • Azure (for speech synthesis)

DEEPGRAM_API_KEY=
OPENAI_API_KEY=
AZURE_SPEECH_KEY=
AZURE_SPEECH_REGION=
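
These keys are read by the backend as environment variables. Before launching the server, it can help to confirm they are actually visible to the process; below is a small illustrative check (the variable names come from the template above, and it assumes the python-dotenv package is installed):

import os

from dotenv import load_dotenv

# Read the .env file created from .env.template into the process environment.
load_dotenv()

REQUIRED_KEYS = [
    "DEEPGRAM_API_KEY",
    "OPENAI_API_KEY",
    "AZURE_SPEECH_KEY",
    "AZURE_SPEECH_REGION",
]

missing = [key for key in REQUIRED_KEYS if not os.getenv(key)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required keys are set.")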

Running with Docker

From the client_backend directory:

docker build -t vocode-client-backend .
docker run --env-file=.env -p 3000:3000 -t vocode-client-backend

Running with Python

pip3 install vocode
uvicorn main:app --port 3000
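
uvicorn serves the app object defined in main.py. If you copied the client_backend app directory from the Vocode repo, you already have this file; otherwise, the sketch below shows roughly what it contains. The module paths, class names, and ConversationRouter defaults are taken from the Vocode repo and may differ between versions, so treat this as an outline rather than the canonical file (it also assumes python-dotenv is installed):

from dotenv import load_dotenv
from fastapi import FastAPI

from vocode.streaming.agent.chat_gpt_agent import ChatGPTAgent
from vocode.streaming.client_backend.conversation import ConversationRouter
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage

# Load DEEPGRAM_API_KEY, OPENAI_API_KEY, AZURE_SPEECH_KEY, and
# AZURE_SPEECH_REGION from the .env file created earlier.
load_dotenv()

app = FastAPI(docs_url=None)

# ConversationRouter wires a transcriber, an agent, and a synthesizer into a
# websocket conversation route; its defaults use Deepgram for transcription
# and Azure for synthesis, which is why those API keys are required.
conversation_router = ConversationRouter(
    agent_thunk=lambda: ChatGPTAgent(
        ChatGPTAgentConfig(
            initial_message=BaseMessage(text="Hello!"),
            prompt_preamble="Have a pleasant conversation about life",
        )
    ),
)

app.include_router(conversation_router.get_router())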

You now have a server with a Vocode websocket route at localhost:3000! From your frontend, you can use the useConversation hook with your self-hosted backend as follows:

const { status, start, stop, analyserNode } = useConversation({
  backendUrl: "<YOUR_BACKEND_URL>", // looks like ws://localhost:3000/conversation or wss://asdf1234.ngrok.app/conversation if using ngrok
  audioDeviceConfig: {},
});
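
If you’d like to sanity-check the route before wiring up the frontend, you can confirm that the websocket endpoint accepts connections from Python. This is only an illustrative connection test using the third-party websockets package; it does not speak Vocode’s audio message protocol:

import asyncio

import websockets  # third-party package: pip install websockets


async def main() -> None:
    # Use your own backendUrl here, e.g. a wss:// ngrok address.
    async with websockets.connect("ws://localhost:3000/conversation"):
        print("Connected to the Vocode conversation route.")


asyncio.run(main())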

Demo installation and setup

Clone the vocode-react-demo repository.

$ git clone https://github.com/vocodedev/vocode-react-demo.git

Run npm install inside the directory to download all of the dependencies.

$ npm install

Set your Client SDK key inside your .env file.

REACT_APP_VOCODE_API_KEY=<YOUR_API_KEY>

Start the application.

$ npm start