How to Connect CrewAI Agents to a Control Plane
Connect your CrewAI crews to Dobby for unified monitoring, cost tracking, and governance. Step-by-step integration guide.
What you will learn
- Connect a CrewAI crew to Dobby via the Gateway
- Track costs and token usage for each crew member
- Set up approval gates for crew task execution
- Monitor crew runs in the unified dashboard
Why Connect CrewAI to a Control Plane?
CrewAI is excellent at orchestrating multi-agent workflows — defining roles, assigning tasks, and managing collaboration between agents. But it was not designed to manage costs, enforce policies, or provide enterprise-grade audit trails.
By routing CrewAI LLM calls through a control plane, you get cost tracking, policy enforcement, and observability without changing how your crews work.
Without a control plane: CrewAI runs autonomously with no cost visibility. A crew of 5 agents burns $200 overnight on GPT-4 calls, and you find out when the invoice arrives.
With Dobby: every LLM call is tracked per crew member. A budget alert fires at $50, and the crew is automatically throttled before exceeding the limit.
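The throttling behavior described above can be sketched as a simple per-crew budget guard. This is an illustrative sketch only; the `BudgetGuard` class, its thresholds, and its return values are hypothetical (Dobby enforces budgets server-side at the Gateway).

```python
# Illustrative sketch of a per-crew budget guard. The class name,
# thresholds, and states are hypothetical, not Dobby's implementation.

class BudgetGuard:
    def __init__(self, alert_at: float, hard_limit: float):
        self.alert_at = alert_at      # fire an alert at this spend
        self.hard_limit = hard_limit  # throttle the crew at this spend
        self.spent = 0.0
        self.alerted = False

    def record(self, cost: float) -> str:
        """Record one LLM call's cost and return the resulting state."""
        self.spent += cost
        if self.spent >= self.hard_limit:
            return "throttled"  # block further LLM calls for this crew
        if self.spent >= self.alert_at and not self.alerted:
            self.alerted = True
            return "alert"      # e.g. send a budget alert to Slack
        return "ok"

guard = BudgetGuard(alert_at=50.0, hard_limit=200.0)
print(guard.record(30.0))   # "ok"        ($30 spent)
print(guard.record(25.0))   # "alert"     ($55 spent, crossed $50)
print(guard.record(150.0))  # "throttled" ($205 spent, crossed $200)
```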
Integration in 3 Steps
1. Create a Gateway API key at dobby-ai.com → Gateway → API Keys. Use a service key (gk_svc_*) for production crews — it supports 500 requests per minute.
2. Configure your CrewAI agents to use the Dobby Gateway as their LLM endpoint. This is a one-line change — point the base URL to the Gateway instead of OpenAI directly.
3. Run your crew. All LLM calls now flow through the Gateway — costs, tokens, and latency are tracked automatically per agent.
Code Example
```python
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

# Point to the Dobby Gateway instead of OpenAI
llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://dobby-ai.com/api/v1/gateway",
    api_key="gk_svc_your_service_key",  # Gateway key
)

# Define agents with the Gateway-routed LLM
researcher = Agent(
    role="Senior Researcher",
    goal="Find comprehensive information",
    llm=llm,
    verbose=True,
)

writer = Agent(
    role="Content Writer",
    goal="Write clear, engaging content",
    llm=llm,
    verbose=True,
)

# Create tasks and crew as usual
task = Task(
    description="Research AI agent governance trends",
    agent=researcher,
    expected_output="A summary report",
)

crew = Crew(agents=[researcher, writer], tasks=[task])
result = crew.kickoff()

# Every LLM call is now tracked in Dobby:
# - Cost per agent (researcher vs writer)
# - Total crew run cost
# - Token usage breakdown
# - Full audit trail
```

Dobby auto-discovers CrewAI agents on their first Gateway call. Each crew member appears in the Agent Fleet dashboard with its own cost tracking, health status, and activity timeline — no manual registration needed.
What You See in the Dashboard
- Agent Fleet — each crew member listed with status, cost, and last activity
- Cost by Agent — researcher vs writer cost comparison
- Live Feed — real-time LLM requests from the crew as they happen
- Audit Trail — full history of every decision and output
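The Cost by Agent view can be understood as a simple aggregation over tracked usage records. The sketch below assumes each record carries an agent role and token counts; the record fields and per-token prices are illustrative, not Dobby's actual data model.

```python
from collections import defaultdict

# Hypothetical usage records as a gateway might track them.
# Field names and per-token prices are illustrative only.
records = [
    {"agent": "Senior Researcher", "input_tokens": 12000, "output_tokens": 3000},
    {"agent": "Content Writer",    "input_tokens": 4000,  "output_tokens": 6000},
    {"agent": "Senior Researcher", "input_tokens": 8000,  "output_tokens": 2000},
]

PRICE_IN = 2.50 / 1_000_000    # $ per input token (illustrative rate)
PRICE_OUT = 10.00 / 1_000_000  # $ per output token (illustrative rate)

# Aggregate cost per crew member, as in the Cost by Agent view
cost_by_agent = defaultdict(float)
for r in records:
    cost_by_agent[r["agent"]] += (
        r["input_tokens"] * PRICE_IN + r["output_tokens"] * PRICE_OUT
    )

for agent, cost in cost_by_agent.items():
    print(f"{agent}: ${cost:.4f}")
```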
Adding Governance to Crews
Once connected, you can add governance layers without changing your CrewAI code. Set budget limits per crew, require approval before high-cost operations, restrict which models crew members can use, and set up Slack alerts for crew completion or errors.
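Such a governance layer could be expressed declaratively, for example along these lines; the YAML schema below is purely illustrative and is not Dobby's actual policy format.

```yaml
# Hypothetical governance policy for a crew (illustrative schema only)
crew: research-crew
budget:
  limit_usd: 200
  alert_at_usd: 50
  on_limit: throttle
models:
  allowed: [gpt-4o, gpt-4o-mini]
approvals:
  require_for:
    - cost_over_usd: 10
notifications:
  slack_channel: "#agent-ops"
  events: [crew_completed, crew_error, budget_alert]
```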
You can also build CrewAI crews directly in Dobby using the YAML Crew Builder — define agents, tasks, and tools in YAML and let Dobby handle the orchestration, monitoring, and governance automatically.
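For comparison, a crew equivalent to the Python example might look like this in the YAML Crew Builder. The field names here are assumptions for illustration, since the builder's actual schema is not shown in this guide.

```yaml
# Illustrative crew definition (field names are assumptions,
# not the documented YAML Crew Builder schema)
agents:
  - role: Senior Researcher
    goal: Find comprehensive information
  - role: Content Writer
    goal: Write clear, engaging content
tasks:
  - description: Research AI agent governance trends
    agent: Senior Researcher
    expected_output: A summary report
```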