AgentConfig

initial_message
Optional[BaseMessage]

The initial message the agent should send at the start of the conversation.

generate_responses
bool

Whether the agent should generate responses continuously (True) or only respond once per human input (False).

allowed_idle_time_seconds
Optional[float]

The maximum number of seconds the agent is allowed to go without speaking before being considered idle.

allow_agent_to_be_cut_off
bool

Whether the human can interrupt the agent while it is speaking.

end_conversation_on_goodbye
bool

Whether the agent should end the conversation when it detects the human saying goodbye.

send_filler_audio
Union[bool, FillerAudioConfig]

Whether to play filler audio (typing noises or phrases like “uh” and “um”) when the agent is thinking. Can be configured via FillerAudioConfig.

webhook_config
Optional[WebhookConfig]

Configuration for sending webhooks from the agent.

track_bot_sentiment
bool

Whether to track the sentiment of the agent’s responses.

actions
Optional[List[ActionConfig]]

Configuration for custom actions the agent can take, such as making API calls.
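
For illustration, here is a minimal sketch that sets these shared fields on a concrete subclass (the base AgentConfig is typically not instantiated directly). The import paths are assumed to be vocode.streaming.models.agent and vocode.streaming.models.message, as in the vocode library; adjust them to your installation.

```python
# A minimal sketch, assuming vocode-style import paths.
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage

agent_config = ChatGPTAgentConfig(
    prompt_preamble="You are a friendly phone assistant.",
    initial_message=BaseMessage(text="Hi! How can I help?"),  # spoken at call start
    generate_responses=True,           # respond continuously
    allowed_idle_time_seconds=15.0,    # considered idle after 15s without speaking
    allow_agent_to_be_cut_off=True,    # the human may interrupt mid-response
    end_conversation_on_goodbye=True,  # end the call when the human says goodbye
    send_filler_audio=True,            # play filler audio while thinking
)
```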

LLMAgentConfig

prompt_preamble
str

The preamble text to prepend to the prompt when generating responses. This allows priming the model.

expected_first_prompt
Optional[str]

The prompt to use for generating the first response from the agent. If not provided, the agent will generate a response to the first user message without any initial prompt.

model_name
str

The name of the OpenAI model to use, e.g. “text-curie-001”.

temperature
float

The sampling temperature to use for the model. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer.

max_tokens
int

The maximum number of tokens to generate in the completion.

cut_off_response
Optional[CutOffResponse]

The response for the agent to give when it is cut off mid-response.
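
A hedged sketch of an LLMAgentConfig using the fields above, including a cut-off response. CutOffResponse is assumed here to hold a list of BaseMessage objects; the exact shape may vary by version.

```python
from vocode.streaming.models.agent import CutOffResponse, LLMAgentConfig
from vocode.streaming.models.message import BaseMessage

llm_agent_config = LLMAgentConfig(
    prompt_preamble="You are a booking assistant for a dental office.",
    model_name="text-curie-001",
    temperature=0.0,  # deterministic output for a well-defined task
    max_tokens=100,
    # Assumed shape: CutOffResponse wraps the messages spoken after an interruption.
    cut_off_response=CutOffResponse(messages=[BaseMessage(text="Sorry, go ahead.")]),
)
```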

ChatGPTAgentConfig

prompt_preamble
str

The preamble text to prepend to the prompt when generating responses. This allows priming the model.

expected_first_prompt
Optional[str]

The prompt to use for generating the first response from the agent. If not provided, the agent will generate a response to the first user message without any initial prompt.

model_name
str

The name of the OpenAI model to use, e.g. “gpt-3.5-turbo-0613”.

temperature
float

The sampling temperature to use for the model. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer.

max_tokens
int

The maximum number of tokens to generate in the completion.

cut_off_response
Optional[CutOffResponse]

The response for the agent to give when it is cut off mid-response.

azure_params
Optional[AzureOpenAIConfig]

Configuration when using the Azure OpenAI API.

vector_db_config
Optional[VectorDBConfig]

Configuration for hitting a vector database for retrieval before generating a response.
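
A sketch of a ChatGPTAgentConfig; the optional Azure and vector database fields are shown as commented-out placeholders, since their config classes depend on your installed version.

```python
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage

chat_gpt_agent_config = ChatGPTAgentConfig(
    initial_message=BaseMessage(text="Hello, thanks for calling!"),
    prompt_preamble="You are a support agent for a small retailer.",
    model_name="gpt-3.5-turbo-0613",
    temperature=0.9,  # leans creative; use 0 for well-defined answers
    max_tokens=256,
    # azure_params=...,      # only when routing through the Azure OpenAI API
    # vector_db_config=...,  # only when retrieval from a vector database is needed
)
```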

ChatAnthropicAgentConfig

prompt_preamble
str

The preamble text to prepend to the prompt when generating responses. This allows priming the model.

model_name
str

The name of the Anthropic model to use, e.g. “claude-v1”.

max_tokens_to_sample
int

The maximum number of tokens to sample in the completion stream. Defaults to 200.
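
A minimal sketch, again assuming the vocode-style import path:

```python
from vocode.streaming.models.agent import ChatAnthropicAgentConfig

anthropic_agent_config = ChatAnthropicAgentConfig(
    prompt_preamble="You are a concise phone assistant.",
    model_name="claude-v1",
    max_tokens_to_sample=200,  # the documented default
)
```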

ChatVertexAIAgentConfig

prompt_preamble
str

The preamble text to prepend to the prompt when generating responses. This allows priming the model.

model_name
str

The name of the Vertex AI model to use, e.g. “chat-bison@001”.

generate_responses
bool

Whether the agent should generate responses continuously (True) or only respond once per human input (False). Since Google Vertex AI does not support streaming, this should be set to False.
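
A sketch showing the streaming caveat in practice:

```python
from vocode.streaming.models.agent import ChatVertexAIAgentConfig

vertex_agent_config = ChatVertexAIAgentConfig(
    prompt_preamble="You are a helpful assistant.",
    model_name="chat-bison@001",
    generate_responses=False,  # Vertex AI does not support streaming
)
```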

EchoAgentConfig

The EchoAgent simply echoes back the user’s messages. It does not take any configuration parameters.
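
Since it takes no parameters, constructing it is a one-liner (import path assumed as above):

```python
from vocode.streaming.models.agent import EchoAgentConfig

echo_agent_config = EchoAgentConfig()  # echoes each user message back
```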

GPT4AllAgentConfig

prompt_preamble
str

The preamble text to prepend to the prompt when generating responses. This allows priming the model.

model_path
str

The path to the model weights file.

generate_responses
bool

Whether the agent should generate responses continuously (True) or only respond once per human input (False).
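
A sketch with a hypothetical local weights path:

```python
from vocode.streaming.models.agent import GPT4AllAgentConfig

gpt4all_agent_config = GPT4AllAgentConfig(
    prompt_preamble="You are a helpful assistant.",
    model_path="/path/to/ggml-model.bin",  # hypothetical path to local weights
    generate_responses=False,
)
```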

LlamacppAgentConfig

prompt_preamble
str

The preamble text to prepend to the prompt when generating responses. This allows priming the model.

llamacpp_kwargs
dict

Additional kwargs to pass to the LlamaCpp model initializer.

prompt_template
Optional[Union[PromptTemplate, str]]

The template to use for formatting the prompt with the conversation history and current user input. Can be a string referring to a built-in template like “alpaca”, or a custom PromptTemplate.
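
A sketch passing kwargs through to the LlamaCpp initializer and selecting the built-in “alpaca” template; the specific kwargs shown are illustrative, not required:

```python
from vocode.streaming.models.agent import LlamacppAgentConfig

llamacpp_agent_config = LlamacppAgentConfig(
    prompt_preamble="You are a helpful assistant.",
    # Illustrative kwargs, forwarded unchanged to the LlamaCpp initializer.
    llamacpp_kwargs={"model_path": "/path/to/model.bin", "n_ctx": 2048},
    prompt_template="alpaca",  # or a custom PromptTemplate instance
)
```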

InformationRetrievalAgentConfig

recipient_descriptor
str

A description of the intended recipient that the agent is trying to reach.

caller_descriptor
str

A description of who the agent is representing in the call.

goal_description
str

A description of the goal or task that the agent is trying to accomplish, to provide context for the information retrieval.

fields
List[str]

A list of the information fields the agent is trying to collect.
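
A sketch of an outbound information-gathering call; fields is shown as a list of strings, matching the description above.

```python
from vocode.streaming.models.agent import InformationRetrievalAgentConfig

info_retrieval_config = InformationRetrievalAgentConfig(
    recipient_descriptor="the front desk at a dental office",
    caller_descriptor="a scheduling assistant calling on behalf of a patient",
    goal_description="confirm the patient's upcoming appointment",
    fields=["appointment date", "appointment time"],
)
```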

RESTfulUserImplementedAgentConfig

respond
RESTfulEndpointConfig

Configuration for the REST endpoint to hit to get responses from the user-implemented agent.

generate_responses
bool

Whether the agent should generate responses continuously (True) or only respond once per human input (False).
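
A sketch wiring the agent to an external REST endpoint. The exact shape of RESTfulEndpointConfig (here assumed to take a url) depends on your version.

```python
from vocode.streaming.models.agent import (
    RESTfulEndpointConfig,
    RESTfulUserImplementedAgentConfig,
)

restful_agent_config = RESTfulUserImplementedAgentConfig(
    # Assumed field: the URL your agent server responds on.
    respond=RESTfulEndpointConfig(url="https://example.com/respond"),
    generate_responses=False,  # one response per human input
)
```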