Learn how to browse, install, and connect MCP servers using the Agent-CoreX dashboard — then call Claude with only the tools it actually needs.
Model Context Protocol (MCP) is the open standard that lets AI models like Claude connect to external tools and data sources — databases, file systems, APIs, and more. Agent-CoreX makes managing those MCP connections simple with a built-in marketplace and a semantic retrieval API that cuts token waste.
MCP defines a standard way for an AI model to discover and call external tools. Instead of hard-coding integrations, any tool that implements the MCP spec can be plugged in and used by any compatible model.
Think of it like USB-C for AI — one standard, infinite compatibility.
Agent-CoreX provides two things on top of raw MCP:

- A marketplace of ready-to-enable MCP servers, managed from the dashboard.
- A semantic retrieval API that, given a user query, returns only the tools relevant to it.

This means you never have to manually decide which of your enabled servers are relevant to a specific query. The API does that for you.
Sign up at agentcorex.com. After logging in, go to Dashboard → API Keys and create a new key. Copy it — you'll use it in every API call.
```shell
# Store it somewhere safe — treat it like a password
ACX_API_KEY=acx_xxxxxxxxxxxxxxxxxx
```
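In a Node-based project, you can load the key from the environment at startup. A minimal sketch; `ACX_API_BASE` is an assumed variable name for the API base URL used by the later snippets, and the default value here is a placeholder, not a documented endpoint:

```typescript
// Minimal sketch, assuming Node: load the key and base URL from the environment.
// The fallback base URL is a placeholder; use the value from your dashboard.
const ACX_API_KEY = process.env.ACX_API_KEY ?? ""
const ACX_API_BASE = process.env.ACX_API_BASE ?? "https://api.agentcorex.com" // placeholder

if (!ACX_API_KEY) {
  console.warn("ACX_API_KEY is not set; API calls will fail with 401")
}
```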
Go to Dashboard → MCP Servers to browse the full marketplace, organized into categories (databases, file systems, APIs, and more). Each server card shows the details you need to decide whether it fits your setup. Click Enable on any server to add it to your account.
Once you have servers enabled, use the /retrieve_tools endpoint to get only the tools relevant to a specific user query:
```typescript
const response = await fetch(
  `${ACX_API_BASE}/retrieve_tools?query=${encodeURIComponent(userQuery)}&top_k=5`,
  {
    headers: {
      Authorization: `Bearer ${ACX_API_KEY}`,
    },
  }
)

const { tools } = await response.json()
// tools is an array of MCP tool definitions — ready to pass to Claude
```
The top_k parameter controls the maximum number of tools returned. For most queries, 3–5 is enough. This is where the token savings come from: instead of passing all your enabled tools to Claude, you pass only the ones that matter for this query.
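In application code, the call above is easy to wrap in a small helper. `buildRetrieveToolsUrl` and `retrieveTools` are hypothetical names; the endpoint path and parameters match the snippet above:

```typescript
// Hypothetical wrapper around /retrieve_tools; endpoint shape taken from the example above.
function buildRetrieveToolsUrl(base: string, query: string, topK = 5): string {
  return `${base}/retrieve_tools?query=${encodeURIComponent(query)}&top_k=${topK}`
}

async function retrieveTools(
  base: string,
  apiKey: string,
  query: string,
  topK = 5
): Promise<unknown[]> {
  const res = await fetch(buildRetrieveToolsUrl(base, query, topK), {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  if (!res.ok) {
    throw new Error(`retrieve_tools failed: ${res.status}`)
  }
  const { tools } = await res.json()
  return tools
}
```

Keeping the URL construction in its own function makes the query encoding easy to unit-test without hitting the network.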
Pass the retrieved tools directly to Claude's messages API:
```typescript
import Anthropic from "@anthropic-ai/sdk"

const anthropic = new Anthropic()

const message = await anthropic.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  tools: tools, // only the relevant ones from /retrieve_tools
  messages: [{ role: "user", content: userQuery }],
})
```
Claude sees a focused, minimal toolset and responds more accurately with less context noise.
When Claude decides to use a tool, execute it via the /execute_tool endpoint:
```typescript
if (message.stop_reason === "tool_use") {
  const toolUse = message.content.find(block => block.type === "tool_use")

  const execResponse = await fetch(`${ACX_API_BASE}/execute_tool`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ACX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      tool: toolUse.name,
      args: toolUse.input,
    }),
  })

  const result = await execResponse.json()
  // Feed result back to Claude as a tool_result message
}
```
Agent-CoreX routes the execution to the correct MCP server, handles the protocol, and returns the result.
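To close the loop, the execution result goes back to Claude as a `tool_result` content block in a follow-up user turn. A sketch, assuming `toolUse.id` and `result` from the snippet above; `toolResultMessage` is a hypothetical helper name:

```typescript
// Build the follow-up user turn carrying the tool result back to Claude.
// The tool_result block shape is the standard Anthropic messages format.
function toolResultMessage(toolUseId: string, result: unknown) {
  return {
    role: "user" as const,
    content: [
      {
        type: "tool_result" as const,
        tool_use_id: toolUseId,
        content: JSON.stringify(result),
      },
    ],
  }
}
```

Append this turn after the assistant turn that contained the `tool_use` block, then call `anthropic.messages.create` again with the same `tools` array so Claude can read the result and finish its answer.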
Before integrating into your codebase, try it out in Dashboard → Playground. It's the fastest way to verify your server setup and understand which tools get selected for different queries.
For larger setups, Dashboard → Custom Packs lets you group MCP servers into domain-specific bundles — for example, a "data-pack" with your database and file system servers, or a "comms-pack" with Slack and email.
Packs help the retrieval API narrow its search space, which improves both accuracy and speed.
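If pack-scoped retrieval is exposed as a query parameter, a scoped request might look like the sketch below. The `pack` parameter name is an assumption for illustration only; check the API reference for the real name:

```typescript
// Hypothetical: scope retrieval to a single pack. The `pack` query parameter
// is an illustration, not a documented parameter.
function buildPackScopedUrl(base: string, query: string, pack: string, topK = 5): string {
  const q = encodeURIComponent(query)
  const p = encodeURIComponent(pack)
  return `${base}/retrieve_tools?query=${q}&top_k=${topK}&pack=${p}`
}
```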