Configuration

Model providers, MCP servers, and settings.
Alpha release: session history is persisted as raw JSON files in ~/.wmind/sessions/. There is no search, no tagging, no smart retrieval, and no way to browse sessions outside the TUI. These limitations will be addressed in future releases.

Configuration File

Working Mind stores its configuration in ~/.wmind/config.json. You can edit it directly or use the --configure wizard.
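As a rough sketch of what the file can contain, here is a minimal config combining the keys documented on this page (agents settings and systemPrompts); the overall shape beyond those keys is an assumption:

```json
{
  "agents": {
    "maxTurns": 20,
    "autoApprove": false
  },
  "systemPrompts": {
    "analyst": "You are a thorough research analyst..."
  }
}
```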

Model Providers

Working Mind supports seven primary providers. Any provider with an OpenAI-compatible API can also be used with custom configuration.
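An OpenAI-compatible endpoint generally needs a base URL and an API key. The exact schema for custom providers is not documented on this page, so the providers key and field names below are hypothetical, shown only to illustrate the kind of entry involved:

```json
{
  "providers": {
    "my-provider": {
      "baseUrl": "https://api.example.com/v1",
      "apiKeyEnv": "MY_PROVIDER_API_KEY"
    }
  }
}
```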

Local Fast (wmind-serve)

npm install -g wmind-serve
wmind-serve start
wmind --model local-fast

No API key needed. 4-5x faster than Ollama on Apple Silicon. All processing on your machine.

Ollama (Local)

ollama pull llama3.1
wmind --model ollama/llama3.1

No API key needed. All processing on your machine.

OpenRouter (Universal)

export OPENROUTER_API_KEY=sk-or-...
wmind --model openrouter/anthropic/claude-sonnet-4-20250514

Access to 200+ models, including those from Anthropic, DeepSeek, and Google, through a single OpenRouter API key.

Together AI

export TOGETHER_API_KEY=...
wmind --model together/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8

Google

export GEMINI_API_KEY=...
wmind --model google/gemini-2.5-flash

OpenAI

export OPENAI_API_KEY=sk-...
wmind --model openai/gpt-5.4

Groq

export GROQ_API_KEY=...
wmind --model groq/llama-3.3-70b-versatile

MCP Servers

MCP (Model Context Protocol) servers provide tools for the agent. The starter pack declares two:

Server        Package                          Required   What It Does
brave-search  @brave/brave-search-mcp-server   No         Web search for current information
firecrawl     firecrawl-mcp                    No         Web scraping from URLs

Memory is built-in (native SQLite, no MCP server needed). Filesystem access is provided by a third MCP server (@modelcontextprotocol/server-filesystem).

Connecting MCP Servers

Use the /mcp-connect command in the TUI to configure servers interactively. You can also set API keys as environment variables:

export BRAVE_API_KEY=...
export FIRECRAWL_API_KEY=...
wmind

The agent auto-connects to any server whose required environment variables are set.
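You can mimic that check from the shell before launching. This small helper is illustrative, not part of wmind; the variable names come from the starter pack's two servers:

```shell
# mcp_keys_set: print the names of MCP-related API keys that are
# currently set in the environment. Helper is our own sketch,
# not a wmind command.
mcp_keys_set() {
  for var in BRAVE_API_KEY FIRECRAWL_API_KEY; do
    eval "val=\${$var:-}"          # indirect variable lookup, POSIX sh compatible
    [ -n "$val" ] && echo "$var"
  done
  return 0
}
```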

Adding Custom MCP Servers

Any MCP server that uses the standard stdio transport works with Working Mind. Add it to your pack's pack.json:

{
  "mcpServers": {
    "my-server": {
      "package": "my-mcp-server-package",
      "required": false,
      "env": {
        "MY_API_KEY": {
          "setting": "MY_API_KEY",
          "sensitive": true,
          "required": true,
          "label": "My API Key",
          "hint": "Get one at https://example.com"
        }
      }
    }
  }
}

Settings

Configure in ~/.wmind/config.json under the agents key:

Setting         Default           Description
maxTurns        20                Maximum agent loop iterations per conversation
autoApprove     false             Auto-approve all tool calls without confirmation
noThinking      false             Disable reasoning/thinking output
maxTokens       (model default)   Maximum tokens per LLM response
thinkingBudget  (model default)   Reasoning token budget for thinking models
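Put together, an agents block using these settings might look like the sketch below. The maxTurns, autoApprove, and noThinking values are the defaults from the table; the maxTokens and thinkingBudget numbers are arbitrary illustrations, since their real defaults depend on the model:

```json
{
  "agents": {
    "maxTurns": 20,
    "autoApprove": false,
    "noThinking": false,
    "maxTokens": 4096,
    "thinkingBudget": 1024
  }
}
```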

Environment Variables

Variable             Description
OPENAI_API_KEY       OpenAI API key
OPENROUTER_API_KEY   OpenRouter API key (access to Anthropic, DeepSeek, etc.)
TOGETHER_API_KEY     Together AI API key
GEMINI_API_KEY       Google AI API key
GROQ_API_KEY         Groq API key
BRAVE_API_KEY        Brave Search API key
FIRECRAWL_API_KEY    Firecrawl API key
MEMORY_FILE_PATH     Custom path for memory SQLite database
WMIND_DEBUG          Set to 1 to enable debug logging
WMIND_PORT           Custom port for wmind-serve (default: 19421)

Custom System Prompts

Override the pack's system prompt:

wmind --prompt "You are a helpful research assistant"

Or save named prompts in your config:

{
  "systemPrompts": {
    "analyst": "You are a thorough research analyst...",
    "writer": "You are a clear technical writer..."
  }
}

Then activate it:

wmind --prompt analyst