Tags: Architecture, Multi-Framework, CrewAI, LangChain

Managing AI Agents Across Multiple Frameworks

Your team uses CrewAI for orchestration, LangChain for RAG, and OpenAI Assistants for customers. Here is how to manage them all from one place.

Gil Kal · March 24, 2026 · 6 min read

No single agent framework does everything well. CrewAI excels at multi-agent orchestration. LangChain has the deepest tool ecosystem. OpenAI Assistants provide the simplest API for conversational agents. In practice, most teams end up using two or three frameworks — and that is when the management challenge begins. Different dashboards, different logging formats, different cost tracking, different deployment models. The question is not whether to standardize on one framework. It is how to manage the diversity.

Why Teams Use Multiple Frameworks

The reasons are practical, not political. A DevOps team adopts CrewAI because its crew-based model maps naturally to their workflows — define a crew of agents, each with a role and tools, and let them collaborate on complex tasks. The data team picks LangChain because they need complex RAG pipelines with custom retrievers, chunking strategies, and embedding models. The product team chooses OpenAI Assistants because their agents need to be customer-facing and the API is the most polished for conversational interactions.

Each team made a reasonable choice. The problem is that nobody planned for the aggregate. Six months later, the organization has 15 agents across three frameworks, three separate billing accounts for LLM usage, no unified view of what is running, and no consistent way to enforce policies. When the CEO asks how much the company is spending on AI agents, nobody can answer without checking three dashboards and a spreadsheet.

The Four Management Gaps

When agents span multiple frameworks, four gaps emerge that no single framework can close on its own:

  • Visibility — you cannot see all agents across frameworks in one dashboard. Each framework has its own monitoring, its own logs, its own health checks. Correlating an issue across a CrewAI crew and an OpenAI Assistant requires switching between tools
  • Cost attribution — LLM costs land in one billing account with no way to split by agent, team, or project. A single OpenAI invoice covers your product agents, your DevOps agents, and your data pipeline agents with no breakdown
  • Policy enforcement — governance rules exist per-framework but not across the organization. Your CrewAI agents might have approval gates, but your LangChain agents run without any human oversight
  • Incident response — when something breaks, you check three dashboards instead of one. A runaway agent in one framework can exhaust provider quotas that affect agents in every other framework

Protocol-Based Integration

The key insight is that you do not need agents to share a framework — you need them to share a protocol. When all agents communicate through standard protocols like A2A (Agent-to-Agent), MCP (Model Context Protocol), or REST webhooks, a control plane can monitor and govern them regardless of which framework built them. This is the same principle that made the web interoperable: HTTP does not care whether the server behind it is Apache or Nginx.

In practice, protocol-based integration means every agent — whether built with CrewAI, LangChain, OpenAI, or a custom framework — routes its LLM calls through a central gateway and registers its identity with a control plane. The gateway captures cost data. The control plane tracks health, enforces policies, and provides a unified dashboard. The agent's framework becomes an implementation detail, not a management boundary.
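To make the registration step concrete, here is a minimal sketch of the kind of payload an agent might send to a control plane on startup. The endpoint shape and every field name here are illustrative assumptions, not a documented Dobby or A2A schema:

```python
import json

# Hypothetical registration payload. Field names are illustrative --
# the point is that nothing here is framework-specific.
registration = {
    "agent_id": "data-team/rag-pipeline-01",
    "framework": "langchain",          # crewai, openai-assistants, custom, ...
    "protocols": ["mcp", "rest"],      # how the control plane can reach the agent
    "capabilities": ["retrieval", "summarization"],
    "gateway_key": "gk_user_example",  # ties this agent's LLM spend to its identity
}

# Serialized body an agent would POST to the control plane's registration endpoint.
body = json.dumps(registration)
```

Because the payload describes the agent in protocol terms (how to reach it, what it can do) rather than framework terms, the same registration flow works for a CrewAI crew member and a hand-rolled Python script alike.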

The best agent management strategy is framework-agnostic. Standardize on protocols, not on frameworks — and let teams choose the best tool for each job.

Unified Agent Identity

Every agent — regardless of framework — needs a unique identity in your control plane. This identity tracks the agent's name, framework, capabilities, cost history, health status, and policy compliance. When a CrewAI agent makes an LLM call, the control plane logs it under that agent's identity. When a LangChain agent triggers an approval gate, the same identity system routes the approval to the right human. One identity, many frameworks.
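A minimal sketch of such an identity record, assuming a simple in-memory model — the exact fields a control plane stores are an assumption on my part, not Dobby's documented schema:

```python
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    """One identity per agent, regardless of which framework built it."""
    agent_id: str
    framework: str              # "crewai", "langchain", "openai-assistants", ...
    capabilities: list[str]
    total_cost_usd: float = 0.0
    healthy: bool = True

    def record_call(self, cost_usd: float) -> None:
        """Attribute one LLM call's cost to this agent's identity."""
        self.total_cost_usd += cost_usd


# A CrewAI agent and a LangChain agent get the same identity shape.
agent = AgentIdentity("devops/pr-reviewer", "crewai", ["code-review"])
agent.record_call(0.04)
agent.record_call(0.02)
```

Every LLM call and approval event hangs off `agent_id`, which is what makes the cross-framework comparisons in the next paragraph possible.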

Unified identity also enables cross-framework analytics. You can compare the cost-per-task of your CrewAI agents against your LangChain agents. You can see which framework's agents fail most often, which ones require the most human intervention, and which ones deliver the best ROI. Without unified identity, these comparisons are impossible because each framework tracks agents differently.
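With identity records in hand, cost-per-task by framework is a simple aggregation. The numbers below are made up for illustration; in practice the records would come from the control plane's identity store:

```python
from collections import defaultdict

# Toy per-agent records (illustrative numbers only).
agents = [
    {"framework": "crewai",    "cost_usd": 42.0, "tasks": 120},
    {"framework": "crewai",    "cost_usd": 18.0, "tasks": 90},
    {"framework": "langchain", "cost_usd": 75.0, "tasks": 150},
]

# Sum cost and task counts per framework.
totals = defaultdict(lambda: {"cost": 0.0, "tasks": 0})
for a in agents:
    totals[a["framework"]]["cost"] += a["cost_usd"]
    totals[a["framework"]]["tasks"] += a["tasks"]

# Cost per completed task, one number per framework.
cost_per_task = {fw: t["cost"] / t["tasks"] for fw, t in totals.items()}
```

Without a shared identity schema, the `framework`, `cost_usd`, and `tasks` fields would live in three different shapes in three different tools, and this five-line comparison would be a week of spreadsheet work.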

How Dobby Unifies Frameworks

Dobby is designed to be the home for agents from any framework. When an agent makes its first LLM call through the Dobby gateway, auto-discovery registers it in the control plane with its framework, model, and capabilities. From that point, the agent appears in the unified dashboard alongside every other agent in the organization — whether it was built with CrewAI, LangChain, OpenAI Assistants, or a custom Python script.

The integration is lightweight. Any framework that can make an OpenAI-compatible API call can use Dobby's gateway. You change two lines of code — the base URL and the API key — and the agent is connected. No SDK to install, no wrapper classes to write, no framework-specific plugins to configure.

The Gateway Pattern

An Agentic Gateway routes all LLM calls through a single proxy, regardless of which agent or framework initiated the call. This gives you three things instantly: cost tracking per agent, policy enforcement per organization, and the ability to switch LLM providers without changing agent code. Instead of each framework managing its own provider credentials and rate limits, the gateway handles it centrally.

# Any framework, same gateway
from openai import OpenAI

client = OpenAI(
    base_url="https://dobby-ai.com/api/v1/gateway",
    api_key="gk_user_your_gateway_key"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",  # or gpt-4o, gemini-2.5-flash
    messages=[{"role": "user", "content": "Analyze this PR"}]
)

The beauty of the gateway pattern is that it works with frameworks that do not even know about Dobby. A vanilla LangChain agent that uses the OpenAI SDK will work out of the box — just point it at the gateway URL. A CrewAI crew that uses multiple models can route all of them through the same gateway key. The gateway is transparent to the agent code while providing full visibility to the operations team.
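For frameworks you cannot (or do not want to) edit, the OpenAI Python SDK also reads its base URL and key from environment variables at client construction, so an agent can be repointed at the gateway without touching its code. The URL and key below are the example values from this post, not live credentials:

```python
import os

# The OpenAI Python SDK picks these up when a client is created,
# so any framework that instantiates its own OpenAI client will
# route through the gateway -- no code changes in the agent itself.
os.environ["OPENAI_BASE_URL"] = "https://dobby-ai.com/api/v1/gateway"
os.environ["OPENAI_API_KEY"] = "gk_user_your_gateway_key"

# From here, launch the agent process as usual; its OpenAI-compatible
# calls now flow through the gateway.
```

This is handy for third-party agents shipped as containers: set the two variables in the deployment manifest and the agent is connected.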

Start With Visibility

If you are managing agents across multiple frameworks today, start by connecting them all to a single control plane. Do not try to enforce policies on day one — first, get visibility. See which agents are running, what they cost, and how often they fail. That data will tell you where to focus governance efforts. Most teams discover that 80 percent of their agent costs come from 20 percent of their agents, and that is where you apply controls first.
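The 80/20 check itself is a few lines once per-agent costs are in one place. The monthly figures below are invented for illustration:

```python
# Toy monthly cost per agent, in USD (made-up numbers).
costs = [310.0, 145.0, 60.0, 22.0, 14.0, 9.0, 6.0, 4.0, 3.0, 2.0]
costs.sort(reverse=True)

# Share of total spend attributable to the top 20 percent of agents.
top_n = max(1, len(costs) // 5)
share = sum(costs[:top_n]) / sum(costs)
```

In this toy data, two of ten agents account for roughly 79 percent of spend — those two are where token budgets and approval gates pay off first.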

The path from visibility to governance is incremental. Week one: connect all agents to the gateway and see the unified dashboard. Week two: identify the top cost drivers and set token budgets. Month one: roll out approval gates for high-risk actions. Month two: standardize model assignments to cut costs on routine tasks. Each step builds on the data from the previous one, so decisions are informed rather than guessed.

Ready to take control of your AI agents?

Start free with Dobby AI — connect, monitor, and govern agents from any framework.

Get Started Free