Developer docs
Build on mybrain
Three integration surfaces let any AI app use mybrain as its memory layer — Model Context Protocol for Claude, drop-in LangChain/Python memory, or OpenAI-compatible chat completions.
Build agents that remember
mybrain is designed to be the memory layer for your AI agents. Your agent reads context before responding, writes memories after each turn — and every downstream session sees the same brain. Two calls per turn is all it takes:
- POST /v1/brain/ask — read relevant context for the user's message.
- POST /v1/brain/remember — write anything new the turn revealed.
// Turn 1: read → respond → write
const { answer: context } = await fetch(
"https://silverline-control-plane.fly.dev/v1/brain/ask",
{
method: "POST",
headers: {
Authorization: "Bearer mbk_live_...",
"Content-Type": "application/json",
},
body: JSON.stringify({ question: userMessage }),
}
).then((r) => r.json());
const reply = await yourLLM.chat({
system: `You know the user. Context: ${context}`,
user: userMessage,
});
await fetch("https://silverline-control-plane.fly.dev/v1/brain/remember", {
method: "POST",
headers: {
Authorization: "Bearer mbk_live_...",
"Content-Type": "application/json",
},
body: JSON.stringify({ content: `User asked: ${userMessage}. I said: ${reply}` }),
});

For OpenAI-style drop-in memory (no code changes), jump to OpenAI-compatible chat completions.
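The read → respond → write loop can be wrapped in a single reusable helper. A minimal sketch: the `llm` callback and the injectable `fetchImpl` parameter are illustrative conveniences for testing, not part of the mybrain API.

```typescript
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ json(): Promise<any> }>;

// One remembered turn: read context, generate a reply, write the turn back.
export async function rememberedTurn(
  userMessage: string,
  llm: (system: string, user: string) => Promise<string>,
  apiKey: string,
  fetchImpl: FetchLike = (globalThis as any).fetch,
  baseUrl = "https://silverline-control-plane.fly.dev"
): Promise<string> {
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
  // 1. Read relevant context for this message.
  const { answer: context } = await fetchImpl(`${baseUrl}/v1/brain/ask`, {
    method: "POST",
    headers,
    body: JSON.stringify({ question: userMessage }),
  }).then((r) => r.json());
  // 2. Respond with the context in the system prompt.
  const reply = await llm(`You know the user. Context: ${context}`, userMessage);
  // 3. Write back what the turn revealed.
  await fetchImpl(`${baseUrl}/v1/brain/remember`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      content: `User asked: ${userMessage}. I said: ${reply}`,
    }),
  });
  return reply;
}
```

Injecting `fetchImpl` keeps the helper unit-testable without hitting the network.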
Overview
All mybrain APIs live at a single base URL. Every request is authenticated with a personal API key and carries rate-limit headers so clients can back off cleanly.
- Base URL
- https://silverline-control-plane.fly.dev
- Default limits
- 20 req/min · 1,000 req/day
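With limits this low, clients should treat a 429 as a signal to back off rather than an error. A minimal retry sketch, assuming the server supplies a Retry-After hint; check the actual rate-limit headers your responses carry before relying on a specific name.

```typescript
// Retry a request a few times, honoring a Retry-After hint on 429
// (the retryAfterSec field here stands in for whatever header the API sends).
export async function withBackoff<T>(
  attempt: () => Promise<{ status: number; retryAfterSec?: number; value?: T }>,
  maxRetries = 3
): Promise<T> {
  for (let i = 0; ; i++) {
    const res = await attempt();
    if (res.status !== 429) return res.value as T;
    if (i >= maxRetries) throw new Error("rate limited: retries exhausted");
    // Prefer the server hint; fall back to exponential backoff (1s, 2s, 4s...).
    const waitMs = (res.retryAfterSec ?? 2 ** i) * 1000;
    await new Promise((r) => setTimeout(r, waitMs));
  }
}
```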
Authentication
Pass your key as a bearer token. Keys start with mbk_live_ and are shown exactly once at creation time.
curl https://silverline-control-plane.fly.dev/v1/brain/summary \
  -H "Authorization: Bearer mbk_live_..."

Scopes: brain:read, brain:write, brain:export.
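Because keys carry a fixed mbk_live_ prefix, a client can fail fast on a missing or malformed key before making any request. A small helper; the MYBRAIN_KEY environment variable name is a convention assumed here, not required by the API.

```typescript
// Build auth headers from an environment variable, failing fast on a bad key.
export function authHeaders(
  key = process.env.MYBRAIN_KEY
): Record<string, string> {
  if (!key || !key.startsWith("mbk_live_")) {
    throw new Error("MYBRAIN_KEY missing or malformed (expected mbk_live_... prefix)");
  }
  return {
    Authorization: `Bearer ${key}`,
    "Content-Type": "application/json",
  };
}
```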
Node.js / TypeScript
Node 18+ has fetch built in — no SDK required. Every endpoint returns JSON and accepts JSON.
// Ask the brain a question in natural language
const res = await fetch("https://silverline-control-plane.fly.dev/v1/brain/ask", {
method: "POST",
headers: {
Authorization: "Bearer mbk_live_...",
"Content-Type": "application/json",
},
body: JSON.stringify({ question: "What are my current priorities?" }),
});
const { answer } = await res.json();
console.log(answer);

// Append a memory
await fetch("https://silverline-control-plane.fly.dev/v1/brain/remember", {
method: "POST",
headers: {
Authorization: "Bearer mbk_live_...",
"Content-Type": "application/json",
},
body: JSON.stringify({
content: "User prefers concise replies.",
metadata: { source: "chat" },
}),
});

// Typed wrapper — drop into any Node service
type AskResponse = { answer: string; sources?: string[] };
export async function askBrain(
question: string,
apiKey = process.env.MYBRAIN_KEY!
): Promise<AskResponse> {
const res = await fetch("https://silverline-control-plane.fly.dev/v1/brain/ask", {
method: "POST",
headers: {
Authorization: `Bearer ${apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ question }),
});
if (!res.ok) throw new Error(`mybrain ${res.status}: ${await res.text()}`);
return res.json();
}

MCP — Claude Desktop & Claude Code
mybrain speaks the Model Context Protocol natively. Drop this block into your Claude Desktop config and the assistant gains three tools: search_memory, add_memory, and get_context.
{
"mcpServers": {
"mybrain": {
"url": "https://silverline-control-plane.fly.dev/mcp",
"headers": {
"Authorization": "Bearer YOUR_MYBRAIN_API_KEY"
}
}
}
}

Test the endpoint:
# List the tools mybrain exposes
curl https://silverline-control-plane.fly.dev/mcp \
-H "Authorization: Bearer mbk_live_..."
# Call a tool directly
curl -X POST https://silverline-control-plane.fly.dev/mcp \
-H "Authorization: Bearer mbk_live_..." \
-H "Content-Type: application/json" \
  -d '{"name":"search_memory","arguments":{"query":"dietary preferences"}}'

LangChain / Python
The mybrain Python package wraps the full API and ships a LangChain-style memory adapter you can hand to any agent expecting a save_context / load_memory_variables contract.
pip install mybrain

from mybrain import MyBrainMemory
memory = MyBrainMemory(api_key="mbk_live_...")
# Save a turn
memory.save_context(
{"input": "I prefer Python over JavaScript"},
{"output": "Got it!"},
)
# Recall relevant context
vars = memory.load_memory_variables(
{"input": "what language do I prefer?"}
)
# vars["mybrain_context"] -> "- User said: I prefer Python ..."

Prefer the raw memory API? Hit the HTTP endpoints directly:
# Add a memory
curl -X POST https://silverline-control-plane.fly.dev/v1/memory/add \
-H "Authorization: Bearer mbk_live_..." \
-H "Content-Type: application/json" \
-d '{"content":"User prefers Python","metadata":{"source":"conversation"}}'
# Semantic search
curl -X POST https://silverline-control-plane.fly.dev/v1/memory/search \
-H "Authorization: Bearer mbk_live_..." \
-H "Content-Type: application/json" \
  -d '{"query":"programming preferences","limit":5}'

OpenAI-compatible chat completions
Point the OpenAI SDK at mybrain and every call is automatically enriched with memories relevant to the last user message — no prompt-engineering required.
from openai import OpenAI
client = OpenAI(
base_url="https://silverline-control-plane.fly.dev/v1",
api_key="mbk_live_...", # your mybrain key, not an OpenAI key
)
response = client.chat.completions.create(
model="claude-sonnet-4-6",
messages=[
{"role": "user", "content": "What should I focus on today?"},
],
)
print(response.choices[0].message.content)

Any model supported by your brain instance works here — Claude, GPT, Grok, Gemini, Llama, and more. The response shape matches OpenAI's chat.completion object exactly.
stream: false (the default).

Endpoint reference
Every endpoint is authenticated with Authorization: Bearer mbk_live_... and enforces the listed scope.
Brain
| Method | Path | Scope | Description |
|---|---|---|---|
| GET | /v1/brain/summary | brain:read | High-level brain summary and sections. |
| POST | /v1/brain/ask | brain:read | Ask a natural-language question against the vault. |
| GET | /v1/brain/search | brain:read | Semantic + keyword search over memories. |
| POST | /v1/brain/remember | brain:write | Append a fact to the user's vault. |
| GET | /v1/brain/timeline | brain:read | Chronological memory timeline (grouped by week). |
| GET | /v1/brain/changes | brain:read | Trend view for a topic across months. |
| GET | /v1/brain/export | brain:export | Full vault dump. |
Memory
| Method | Path | Scope | Description |
|---|---|---|---|
| POST | /v1/memory/add | brain:write | Store a memory with optional metadata. |
| POST | /v1/memory/search | brain:read | Semantic search with similarity scores. |
| GET | /v1/memory/list | brain:read | Paginated list of all stored memories. |
| DELETE | /v1/memory/:id | brain:write | Delete a memory by id. |
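Since /v1/memory/search returns similarity scores, callers often want to threshold and rank hits before putting them into a prompt. A sketch; the content and score field names are assumptions about the response shape, not documented fields.

```typescript
type MemoryHit = { content: string; score: number }; // field names assumed

// Keep only hits above a similarity threshold, highest score first.
export function topHits(hits: MemoryHit[], minScore = 0.75): MemoryHit[] {
  return hits
    .filter((h) => h.score >= minScore)
    .sort((a, b) => b.score - a.score);
}
```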
Other integrations
| Method | Path | Scope | Description |
|---|---|---|---|
| GET | /mcp | brain:read | MCP tool manifest. |
| POST | /mcp | brain:read/write | Invoke an MCP tool (direct or JSON-RPC). |
| POST | /v1/chat/completions | brain:read | OpenAI-format chat completions with auto context injection. |
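POST /mcp accepts either the direct {name, arguments} form shown earlier or a JSON-RPC envelope. Per the MCP specification, tool invocations use a JSON-RPC 2.0 request with method tools/call; a helper that builds that envelope (a sketch of the generic MCP shape, not a mybrain-specific API):

```typescript
// Build a JSON-RPC 2.0 envelope for an MCP tools/call request.
export function mcpToolCall(
  name: string,
  args: Record<string, unknown>,
  id = 1
) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}
```

POST the resulting object as the request body with the same bearer-token headers as the direct form.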
Full request/response shapes and error codes are documented inline in the control-plane source. File an issue there for questions or bugs.