Build on mybrain

Three integration surfaces let any AI app use mybrain as its memory layer — Model Context Protocol for Claude, drop-in LangChain/Python memory, or OpenAI-compatible chat completions.

Build agents that remember

mybrain is designed to be the memory layer for your AI agents. Your agent reads context before responding and writes memories after each turn; every downstream session then sees the same brain. Two calls per turn is all it takes:

  1. POST /v1/brain/ask — read relevant context for the user's message.
  2. POST /v1/brain/remember — write anything new the turn revealed.
javascript
// Turn 1: read → respond → write
const { answer: context } = await fetch(
  "https://silverline-control-plane.fly.dev/v1/brain/ask",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer mbk_live_...",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ question: userMessage }),
  }
).then((r) => r.json());

const reply = await yourLLM.chat({
  system: `You know the user. Context: ${context}`,
  user: userMessage,
});

await fetch("https://silverline-control-plane.fly.dev/v1/brain/remember", {
  method: "POST",
  headers: {
    Authorization: "Bearer mbk_live_...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ content: `User asked: ${userMessage}. I said: ${reply}` }),
});

For OpenAI-style drop-in memory (no code changes), jump to OpenAI-compatible chat completions.

Overview

All mybrain APIs live at a single base URL. Every request is authenticated with a personal API key, and every response carries rate-limit headers so clients can back off cleanly.

Base URL: https://silverline-control-plane.fly.dev
Default limits: 20 req/min · 1,000 req/day
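When a client exceeds those limits, a rate-limited request should be retried with back-off rather than hammered. A minimal Node sketch, assuming standard HTTP semantics (a 429 status and an optional Retry-After header; the exact headers mybrain returns are not documented here, so treat them as assumptions):

```javascript
// Compute how long to wait before retrying a rate-limited request.
function backoffDelayMs(attempt, retryAfterHeader) {
  // Honor an explicit Retry-After (seconds) if the server sent one...
  const retryAfter = Number(retryAfterHeader);
  if (Number.isFinite(retryAfter) && retryAfter > 0) return retryAfter * 1000;
  // ...otherwise fall back to capped exponential backoff: 1s, 2s, 4s, up to 30s.
  return Math.min(1000 * 2 ** attempt, 30_000);
}

// Wrap fetch so 429 responses are retried automatically.
async function fetchWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delay = backoffDelayMs(attempt, res.headers.get("Retry-After"));
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```

Use `fetchWithBackoff` anywhere you would call `fetch` against the API; at 20 req/min, a 30-second cap keeps a busy agent inside the window without stalling it for minutes.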

Authentication

Pass your key as a bearer token. Keys start with mbk_live_ and are shown exactly once at creation time.

bash
curl https://silverline-control-plane.fly.dev/v1/brain/summary \
  -H "Authorization: Bearer mbk_live_..."

Scopes: brain:read, brain:write, brain:export.

Node.js / TypeScript

Node 18+ has fetch built in — no SDK required. Every endpoint returns JSON and accepts JSON.

javascript
// Ask the brain a question in natural language
const res = await fetch("https://silverline-control-plane.fly.dev/v1/brain/ask", {
  method: "POST",
  headers: {
    Authorization: "Bearer mbk_live_...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ question: "What are my current priorities?" }),
});
const { answer } = await res.json();
console.log(answer);
javascript
// Append a memory
await fetch("https://silverline-control-plane.fly.dev/v1/brain/remember", {
  method: "POST",
  headers: {
    Authorization: "Bearer mbk_live_...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    content: "User prefers concise replies.",
    metadata: { source: "chat" },
  }),
});
typescript
// Typed wrapper — drop into any Node service
type AskResponse = { answer: string; sources?: string[] };

export async function askBrain(
  question: string,
  apiKey = process.env.MYBRAIN_KEY!
): Promise<AskResponse> {
  const res = await fetch("https://silverline-control-plane.fly.dev/v1/brain/ask", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`mybrain ${res.status}: ${await res.text()}`);
  return res.json();
}

MCP — Claude Desktop & Claude Code

mybrain speaks the Model Context Protocol natively. Drop this block into your Claude Desktop config and the assistant gains three tools: search_memory, add_memory, and get_context.

json
{
  "mcpServers": {
    "mybrain": {
      "url": "https://silverline-control-plane.fly.dev/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MYBRAIN_API_KEY"
      }
    }
  }
}

Test the endpoint:

bash
# List the tools mybrain exposes
curl https://silverline-control-plane.fly.dev/mcp \
  -H "Authorization: Bearer mbk_live_..."

# Call a tool directly
curl -X POST https://silverline-control-plane.fly.dev/mcp \
  -H "Authorization: Bearer mbk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"name":"search_memory","arguments":{"query":"dietary preferences"}}'
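The curl call above uses the direct invocation form. For clients that speak JSON-RPC instead, the endpoint reference notes both are accepted; a sketch of building the standard MCP `tools/call` envelope (the envelope shape follows the MCP spec, but whether mybrain requires any fields beyond these is an assumption to verify):

```javascript
// Build an MCP JSON-RPC 2.0 "tools/call" request body.
// POST the JSON-stringified result to /mcp with your bearer token.
function mcpToolCall(name, args, id = 1) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const body = mcpToolCall("search_memory", { query: "dietary preferences" });
```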

LangChain / Python

The mybrain Python package wraps the full API and ships a LangChain-style memory adapter you can hand to any agent expecting a save_context / load_memory_variables contract.

bash
pip install mybrain
python
from mybrain import MyBrainMemory

memory = MyBrainMemory(api_key="mbk_live_...")

# Save a turn
memory.save_context(
    {"input": "I prefer Python over JavaScript"},
    {"output": "Got it!"},
)

# Recall relevant context
context = memory.load_memory_variables(
    {"input": "what language do I prefer?"}
)
# context["mybrain_context"] -> "- User said: I prefer Python ..."

Prefer the raw memory API? Hit the HTTP endpoints directly:

bash
# Add a memory
curl -X POST https://silverline-control-plane.fly.dev/v1/memory/add \
  -H "Authorization: Bearer mbk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"content":"User prefers Python","metadata":{"source":"conversation"}}'

# Semantic search
curl -X POST https://silverline-control-plane.fly.dev/v1/memory/search \
  -H "Authorization: Bearer mbk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"query":"programming preferences","limit":5}'

OpenAI-compatible chat completions

Point the OpenAI SDK at mybrain and every call is automatically enriched with memories relevant to the last user message — no prompt-engineering required.

python
from openai import OpenAI

client = OpenAI(
    base_url="https://silverline-control-plane.fly.dev/v1",
    api_key="mbk_live_...",  # your mybrain key, not an OpenAI key
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "user", "content": "What should I focus on today?"},
    ],
)
print(response.choices[0].message.content)

Any model supported by your brain instance works here — Claude, GPT, Grok, Gemini, Llama, and more. The response shape matches OpenAI's chat.completion object exactly.

Heads up: streaming is not yet supported on this endpoint — pass stream: false (the default).
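The same endpoint works from Node without any SDK. A plain-fetch sketch, assuming the OpenAI-style request and response shapes described in this section:

```javascript
// Build an OpenAI-format chat request body for /v1/chat/completions.
function chatRequest(model, userContent) {
  return {
    model,
    messages: [{ role: "user", content: userContent }],
    stream: false, // streaming is not yet supported on this endpoint
  };
}

// Send it and pull the assistant's reply out of the OpenAI-shaped response.
async function chat(apiKey, model, userContent) {
  const res = await fetch(
    "https://silverline-control-plane.fly.dev/v1/chat/completions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(chatRequest(model, userContent)),
    }
  );
  if (!res.ok) throw new Error(`mybrain ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```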

Endpoint reference

Every endpoint is authenticated with Authorization: Bearer mbk_live_... and enforces the listed scope.

Brain

Method  Path                Scope         Description
GET     /v1/brain/summary   brain:read    High-level brain summary and sections.
POST    /v1/brain/ask       brain:read    Ask a natural-language question against the vault.
GET     /v1/brain/search    brain:read    Semantic + keyword search over memories.
POST    /v1/brain/remember  brain:write   Append a fact to the user's vault.
GET     /v1/brain/timeline  brain:read    Chronological memory timeline (grouped by week).
GET     /v1/brain/changes   brain:read    Trend view for a topic across months.
GET     /v1/brain/export    brain:export  Full vault dump.
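Unlike POST /v1/brain/ask, the GET endpoints take their input in the query string. A sketch of building a search URL; the parameter names `query` and `limit` are assumptions here, so check the control-plane source for the real ones:

```javascript
// Build a /v1/brain/search URL with query-string parameters.
// Parameter names are assumed, not confirmed by the endpoint reference.
function searchUrl(base, query, limit = 5) {
  const url = new URL("/v1/brain/search", base);
  url.searchParams.set("query", query);
  url.searchParams.set("limit", String(limit));
  return url.toString();
}

const url = searchUrl(
  "https://silverline-control-plane.fly.dev",
  "dietary preferences"
);
// → https://silverline-control-plane.fly.dev/v1/brain/search?query=dietary+preferences&limit=5
```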

Memory

Method  Path               Scope        Description
POST    /v1/memory/add     brain:write  Store a memory with optional metadata.
POST    /v1/memory/search  brain:read   Semantic search with similarity scores.
GET     /v1/memory/list    brain:read   Paginated list of all stored memories.
DELETE  /v1/memory/:id     brain:write  Delete a memory by id.
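The delete endpoint is the only one addressed by path parameter; the id comes from a prior /v1/memory/list or /v1/memory/search response. A minimal Node sketch:

```javascript
// Build the per-memory URL, encoding the id so unusual ids stay valid.
function memoryUrl(id) {
  return `https://silverline-control-plane.fly.dev/v1/memory/${encodeURIComponent(id)}`;
}

// Delete a memory by id; resolves true when the server accepts the deletion.
async function deleteMemory(apiKey, id) {
  const res = await fetch(memoryUrl(id), {
    method: "DELETE",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.ok;
}
```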

Other integrations

Method  Path                  Scope             Description
GET     /mcp                  brain:read        MCP tool manifest.
POST    /mcp                  brain:read/write  Invoke an MCP tool (direct or JSON-RPC).
POST    /v1/chat/completions  brain:read        OpenAI-format chat completions with auto context injection.

Full request/response shapes and error codes are documented inline in the control-plane source. File an issue there for questions or bugs.