
Managing LangChain Agents with a Control Plane

Route LangChain agent LLM calls through a control plane for cost tracking, governance, and multi-provider support.

8 min read · Gil Kal · Mar 27, 2026

What you will learn

  • Route LangChain agent LLM calls through the Dobby Gateway
  • Track costs per LangChain agent and chain
  • Switch between LLM providers without code changes
  • Add governance policies to existing LangChain workflows

LangChain + Control Plane

LangChain excels at building agent chains — sequences of LLM calls, tool uses, and retrieval steps. But as chains grow in complexity, so do costs, debugging difficulty, and compliance requirements. A control plane adds the operational layer LangChain does not provide.

Without Dobby

Each LangChain chain calls OpenAI directly. Costs are invisible until the monthly bill. Debugging a 10-step chain means reading logs in 3 different systems.

With Dobby

Every LLM call in every chain flows through one gateway. Per-chain cost tracking. Real-time request feed. One audit trail for all chains.

Integration Code

```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate

# Point LangChain to the Dobby Gateway
llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="https://dobby-ai.com/api/v1/gateway",
    openai_api_key="gk_svc_your_service_key"
)

# Use LangChain as usual — the Gateway handles the rest
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant."),
    ("human", "{input}"),
])

agent = create_openai_tools_agent(llm, tools=[], prompt=prompt)
executor = AgentExecutor(agent=agent, tools=[], verbose=True)

result = executor.invoke({"input": "Analyze Q1 sales data"})
```

One line changes the game: openai_api_base points to the Gateway. Every LangChain call — from simple completions to complex ReAct loops — is now tracked, governed, and auditable.

Multi-Provider Routing

A major advantage of routing through a gateway is provider flexibility. Your LangChain chain can use GPT-4o for complex reasoning, Claude for long documents, and Gemini for simple classification — all through the same endpoint. Switch models by changing a parameter, not your code.

```python
# Same gateway, different models
GATEWAY_URL = "https://dobby-ai.com/api/v1/gateway"
KEY = "gk_svc_your_service_key"

llm_reasoning = ChatOpenAI(model="gpt-4o", openai_api_base=GATEWAY_URL, openai_api_key=KEY)
llm_documents = ChatOpenAI(model="claude-sonnet-4-20250514", openai_api_base=GATEWAY_URL, openai_api_key=KEY)
llm_classify  = ChatOpenAI(model="gemini-2.5-flash", openai_api_base=GATEWAY_URL, openai_api_key=KEY)

# Each call is tracked separately with provider-specific cost
```
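In a larger chain, the per-step model choice is often worth centralizing so it can be tuned without touching chain logic. A minimal sketch — the task categories are illustrative, and the model IDs simply mirror the example above:

```python
# Route each chain step to a model suited to the task.
# Task names here are illustrative; model IDs match the example above.
MODEL_BY_TASK = {
    "reasoning": "gpt-4o",
    "documents": "claude-sonnet-4-20250514",
    "classify": "gemini-2.5-flash",
}

def pick_model(task: str) -> str:
    """Return the model ID for a task type, defaulting to the cheapest."""
    return MODEL_BY_TASK.get(task, "gemini-2.5-flash")

print(pick_model("documents"))  # claude-sonnet-4-20250514
```

Because every model sits behind the same Gateway endpoint, swapping the mapping is a config change, not a code change.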

What You Get

  • Per-chain cost breakdown — see how much each step in your chain costs
  • Provider comparison — which model delivers the best cost-to-quality ratio
  • Real-time monitoring — watch chain execution live as requests flow through
  • Budget enforcement — chains stop when they hit cost limits, not after
  • Audit trail — every LLM call in every chain, queryable and exportable
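To make the "chains stop when they hit cost limits, not after" point concrete, here is a client-side approximation of a budget guard. In practice the Gateway enforces limits server-side; this sketch only illustrates the stop-mid-chain behavior:

```python
class BudgetExceeded(RuntimeError):
    pass

class BudgetGuard:
    """Client-side sketch: halt a chain once accumulated cost hits a cap.
    (The Gateway enforces this server-side; this only illustrates the idea.)"""

    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def record(self, cost_usd: float) -> None:
        self.spent_usd += cost_usd
        if self.spent_usd >= self.limit_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.2f} of ${self.limit_usd:.2f} cap"
            )

guard = BudgetGuard(limit_usd=0.10)
guard.record(0.04)       # under the cap — chain continues
try:
    guard.record(0.07)   # crosses the cap — chain stops here, not at month end
except BudgetExceeded as exc:
    print("stopped:", exc)
```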

LangSmith + Dobby

LangSmith provides detailed chain tracing. Dobby provides cost governance and multi-framework management. They are complementary — use LangSmith for debugging chain logic, use Dobby for cost control, approvals, and fleet management across all your agents (not just LangChain).

Dobby includes a LangSmith integration in the Gateway agent proxy. You can view LangSmith traces alongside Dobby's cost and governance data — best of both worlds.
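Enabling LangSmith tracing alongside the Gateway is just environment configuration. The variable names below are LangSmith's standard ones; the key is a placeholder and the project name is illustrative:

```python
import os

# Turn on LangSmith tracing for all LangChain runs in this process.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls_your_langsmith_key"   # placeholder
os.environ["LANGCHAIN_PROJECT"] = "dobby-gateway-agents"    # illustrative name
```

With tracing on, the same chain invocation produces a LangSmith trace for step-by-step debugging and a Dobby record for cost and governance.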


Ready to try this yourself?

Start free — no credit card required.
