Agent Reference
AgentConfig
The initial message the agent should send at the start of the conversation.
Whether the agent should generate responses continuously (True) or only respond once per human input (False).
The maximum number of seconds the agent is allowed to go without speaking before being considered idle.
Whether the human can interrupt the agent while it is speaking.
Whether the agent should end the conversation when it detects the human saying goodbye.
Whether to play filler audio (typing noises or phrases like “uh” and “um”) when the agent is thinking. Can be configured via FillerAudioConfig.
Configuration for sending webhooks from the agent.
Whether to track the sentiment of the agent’s responses.
Configuration for custom actions the agent can take, such as making API calls.
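Taken together, the base options above can be pictured as a single config object. The sketch below is purely illustrative: the field names and default values are assumptions for the sake of the example, not the library's actual API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the base agent options described above.
# Field names and defaults are illustrative assumptions.
@dataclass
class AgentConfigSketch:
    initial_message: Optional[str] = None      # first message the agent sends
    generate_responses: bool = True            # respond continuously vs. once per human input
    allowed_idle_time_seconds: float = 15.0    # max silence before the agent is considered idle
    allow_agent_to_be_cut_off: bool = True     # whether the human can interrupt the agent
    end_conversation_on_goodbye: bool = False  # end the call when the human says goodbye
    send_filler_audio: bool = False            # play "um"/typing noises while thinking

config = AgentConfigSketch(initial_message="Hello! How can I help?")
```

Unspecified fields fall back to their defaults, so a usable config needs only the options you want to override.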
LLMAgentConfig
The preamble text to prepend before the prompt when generating responses. This allows priming the model.
The prompt to use for generating the first response from the agent. If not provided, the agent will generate a response to the first user message without any initial prompt.
The name of the OpenAI model to use, e.g. “text-curie-001”.
The sampling temperature to use for the model. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for those with a well-defined answer.
The maximum number of tokens to generate in the completion.
The response for the agent to give when it is cut off mid-response.
ChatGPTAgentConfig
The preamble text to prepend before the prompt when generating responses. This allows priming the model.
The prompt to use for generating the first response from the agent. If not provided, the agent will generate a response to the first user message without any initial prompt.
The name of the OpenAI model to use, e.g. “gpt-3.5-turbo-0613”.
The sampling temperature to use for the model. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for those with a well-defined answer.
The maximum number of tokens to generate in the completion.
The response for the agent to give when it is cut off mid-response.
Configuration when using the Azure OpenAI API.
Configuration for hitting a vector database for retrieval before generating a response.
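Putting the ChatGPT options together, a minimal sketch might look like the following. The field names here are assumptions chosen to mirror the descriptions above, not the library's verified API.

```python
from dataclasses import dataclass

# Illustrative sketch of the ChatGPT agent options above;
# field names are assumptions.
@dataclass
class ChatGPTAgentConfigSketch:
    prompt_preamble: str                        # preamble text that primes the model
    model_name: str = "gpt-3.5-turbo-0613"      # OpenAI model name
    temperature: float = 0.7                    # 0.9 for creative tasks, 0 for well-defined answers
    max_tokens: int = 256                       # max tokens to generate in the completion
    cut_off_response: str = "Sorry, go ahead."  # response when cut off mid-sentence

config = ChatGPTAgentConfigSketch(
    prompt_preamble="You are a friendly phone assistant for a pizza shop.",
    temperature=0.0,  # deterministic answers for order-taking
)
```

Lowering the temperature to 0 suits a task like order-taking, where there is one well-defined answer per question.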
ChatAnthropicAgentConfig
The preamble text to prepend before the prompt when generating responses. This allows priming the model.
The name of the Anthropic model to use, e.g. “claude-v1”.
The maximum number of tokens to sample for the autocompletion stream. Defaults to 200.
ChatVertexAIAgentConfig
The preamble text to prepend before the prompt when generating responses. This allows priming the model.
The name of the Vertex AI model to use, e.g. “chat-bison@001”.
Whether the agent should generate responses continuously (True) or only respond once per human input (False). Since Google Vertex AI does not support streaming, this must be set to False.
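Because Vertex AI lacks streaming support, a config sketch for it would pin the continuous-generation flag off. As before, field names here are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of the Vertex AI agent options above; names are assumptions.
@dataclass
class ChatVertexAIAgentConfigSketch:
    prompt_preamble: str                # preamble text that primes the model
    model_name: str = "chat-bison@001"  # Vertex AI model name
    generate_responses: bool = False    # Vertex AI doesn't support streaming

cfg = ChatVertexAIAgentConfigSketch(prompt_preamble="You are a helpful assistant.")
```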
EchoAgentConfig
The EchoAgent simply echoes back the user’s messages. It does not take any configuration parameters.
GPT4AllAgentConfig
The preamble text to prepend before the prompt when generating responses. This allows priming the model.
The path to the model weights file.
Whether the agent should generate responses continuously (True) or only respond once per human input (False).
LlamacppAgentConfig
The preamble text to prepend before the prompt when generating responses. This allows priming the model.
Additional kwargs to pass to the LlamaCpp model initializer.
The template to use for formatting the prompt with the conversation history and current user input. Can be a string referring to a built-in template like “alpaca”, or a custom PromptTemplate.
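To illustrate what a prompt template does, here is a sketch of an “alpaca”-style template being filled with conversation history and the current user input. The template text itself is an assumption for the example; the library's built-in “alpaca” template may differ.

```python
# Illustrative "alpaca"-style prompt template; the exact template text
# is an assumption, not the library's built-in version.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{history}\nHuman: {user_input}\n\n"
    "### Response:\n"
)

def format_prompt(history: str, user_input: str) -> str:
    # Substitute the conversation so far and the latest user turn
    # into the template before sending it to the model.
    return ALPACA_TEMPLATE.format(history=history, user_input=user_input)

prompt = format_prompt("Human: Hi\nAI: Hello!", "What's the weather like?")
```

A custom PromptTemplate would play the same role: it decides how the running conversation is serialized into the single text prompt the model sees.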
InformationRetrievalAgentConfig
A description of the intended recipient that the agent is trying to reach.
A description of who the agent is representing in the call.
A description of the goal or task that the agent is trying to accomplish, to provide context for the information retrieval.
A list of the information fields the agent is trying to collect.
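The four options above can be sketched as a config object like the one below. All field names and the sample values are hypothetical, chosen only to mirror the descriptions.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the information-retrieval options above;
# field names are assumptions.
@dataclass
class InformationRetrievalAgentConfigSketch:
    recipient_descriptor: str            # who the agent is trying to reach
    caller_descriptor: str               # who the agent is representing
    goal_description: str                # the task providing context for retrieval
    fields: List[str] = field(default_factory=list)  # information to collect

cfg = InformationRetrievalAgentConfigSketch(
    recipient_descriptor="the billing department of Acme Corp",
    caller_descriptor="an assistant calling on behalf of Jane Doe",
    goal_description="confirm the outstanding balance on an invoice",
    fields=["invoice number", "balance due", "due date"],
)
```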
RESTfulUserImplementedAgentConfig
Configuration for the REST endpoint to hit to get responses from the user implemented agent.
Whether the agent should generate responses continuously (True) or only respond once per human input (False).