Use LangChain to determine your agent’s responses.
The `LangchainAgent` uses LangChain's `init_chat_model()` method, described in the LangChain docs. This implementation allows users to create a LangChain agent using a variety of different model providers by passing the relevant `model` and `provider` params into the `LangchainAgentConfig`. For example, if you want to use an OpenAI agent, you would pass in an agent config like the sketch below.
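This is a minimal sketch only; the import path and the exact field names (`model_name` vs. the `model` param named above) are assumptions, so check them against your installed version.

```python
# Sketch only: import path and field names are assumed, not confirmed.
from vocode.streaming.models.agent import LangchainAgentConfig

agent_config = LangchainAgentConfig(
    model_name="gpt-4o",  # OpenAI model that will drive the agent
    provider="openai",    # provider key resolved by init_chat_model()
)
```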
`LangchainAgent` is designed to make it easy to plug in your own custom LangChain chains. You can either:

1. Pass your own chain directly into the `LangchainAgent` constructor, or
2. Subclass `LangchainAgent` and build custom processing to create a chain based off a `LangchainAgentConfig`.
The `LangchainAgent` constructor has a `chain` parameter where you can directly pass your chain. So, to use this in a conversation, you can create a custom `AgentFactory` that builds your chain when initializing the LangChain agent. For example, we will design a factory which makes a custom chain that queries Anthropic Claude Opus to write a poem at each agent turn.
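The sketch below illustrates this pattern. The `chain` constructor parameter is described above; the `AbstractAgentFactory` base class, the import paths, the `agent_config` keyword, and the Claude model string are illustrative assumptions:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

# Assumed import paths; adjust to your project's layout.
from vocode.streaming.agent.abstract_factory import AbstractAgentFactory
from vocode.streaming.agent.langchain_agent import LangchainAgent
from vocode.streaming.models.agent import AgentConfig, LangchainAgentConfig


class PoemAgentFactory(AbstractAgentFactory):
    """Hypothetical factory that hands a prebuilt chain to LangchainAgent."""

    def create_agent(self, agent_config: AgentConfig) -> LangchainAgent:
        if not isinstance(agent_config, LangchainAgentConfig):
            raise ValueError("PoemAgentFactory requires a LangchainAgentConfig")

        # Build a chain that asks Claude Opus for a short poem each turn.
        prompt = ChatPromptTemplate.from_messages(
            [("human", "Write a short poem for the user.")]
        )
        llm = ChatAnthropic(model="claude-3-opus-20240229")
        chain = prompt | llm

        # Passing the chain here bypasses the default chain creation.
        return LangchainAgent(agent_config=agent_config, chain=chain)
```

Wire this factory into wherever your application constructs agents, and every conversation using a `LangchainAgentConfig` will respond through the poem chain instead of the default one.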
You can also build your own chain from the fields of a `LangchainAgentConfig` by subclassing `LangchainAgent` and overwriting the `self.create_chain()` method. This method is called when a `LangchainAgent` is initialized without a `chain` manually passed into the constructor. Within this method, you can directly access the agent config at `self.agent_config` and build your own chain using its fields.

In the example below, we will design an agent that builds a custom chain to query a Gemini LLM to generate a poem on a topic. The topic and LLM setup (provider and model name) are all passed in via the config, allowing for strong customization. As a further example of this customizability, we will confirm the LLM provider is set to Google GenAI and raise an error otherwise.
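A sketch of this pattern follows; the subclassed config, the import paths, and the `"google_genai"` provider key are illustrative assumptions rather than confirmed API details:

```python
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate

# Assumed import paths; adjust to your project's layout.
from vocode.streaming.agent.langchain_agent import LangchainAgent
from vocode.streaming.models.agent import LangchainAgentConfig


class PoemAgentConfig(LangchainAgentConfig):
    # Hypothetical config subclass; your framework may require registering
    # a type identifier for config subclasses.
    topic: str  # what the poem should be about


class PoemLangchainAgent(LangchainAgent):
    def create_chain(self):
        # Runs only when no chain was passed into the constructor; the
        # config is available at self.agent_config.
        if self.agent_config.provider != "google_genai":
            raise ValueError(
                f"Only Google GenAI is supported, got {self.agent_config.provider}"
            )
        prompt = ChatPromptTemplate.from_messages(
            [("human", "Write a poem about {topic}.")]
        ).partial(topic=self.agent_config.topic)
        # init_chat_model() resolves the provider/model pair, e.g. a Gemini
        # model when provider == "google_genai".
        llm = init_chat_model(
            model=self.agent_config.model_name,
            model_provider=self.agent_config.provider,
        )
        return prompt | llm
```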