Configuration

Sessions are stored in ~/.wmind/sessions/. There is currently no search, no tagging, no smart retrieval, and no way to browse sessions from outside the TUI; these will be improved in future releases.

Configuration File
Working Mind stores its configuration in ~/.wmind/config.json. You can edit it directly or use the --configure wizard.
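As a sketch, a minimal config.json combining settings described later in this document might look like the following. The agents and systemPrompts keys are documented below; the exact top-level layout of the file is an assumption here.

```json
{
  "agents": {
    "maxTurns": 20,
    "autoApprove": false
  },
  "systemPrompts": {
    "analyst": "You are a thorough research analyst..."
  }
}
```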
Model Providers
Working Mind supports 7 primary providers. Any provider with an OpenAI-compatible API also works with custom configuration.
Local Fast (wmind-serve)
npm install -g wmind-serve
wmind-serve start
wmind --model local-fast
No API key needed. 4-5x faster than Ollama on Apple Silicon. All processing on your machine.
Ollama (Local)
ollama pull llama3.1
wmind --model ollama/llama3.1
No API key needed. All processing on your machine.
OpenRouter (Universal)
export OPENROUTER_API_KEY=sk-or-...
wmind --model openrouter/anthropic/claude-sonnet-4-20250514
OpenRouter provides access to 200+ models, including Anthropic, DeepSeek, and Google models, through a single API key.
Together AI
export TOGETHER_API_KEY=...
wmind --model together/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
Google Gemini
export GEMINI_API_KEY=...
wmind --model google/gemini-2.5-flash
OpenAI
export OPENAI_API_KEY=sk-...
wmind --model openai/gpt-5.4
Groq
export GROQ_API_KEY=...
wmind --model groq/llama-3.3-70b-versatile
MCP Servers
MCP (Model Context Protocol) servers provide tools for the agent. The starter pack declares two:
| Server | Package | Required | What It Does |
|---|---|---|---|
| brave-search | @brave/brave-search-mcp-server | No | Web search for current information |
| firecrawl | firecrawl-mcp | No | Web scraping from URLs |
Memory is built-in (native SQLite, no MCP server needed). Filesystem access is provided by a third MCP server (@modelcontextprotocol/server-filesystem).
Connecting MCP Servers
Use the /mcp-connect command in the TUI to configure servers interactively. You can also set API keys as environment variables:
export BRAVE_API_KEY=...
export FIRECRAWL_API_KEY=...
wmind
The agent will auto-connect to servers that have their required environment variables set.
Adding Custom MCP Servers
Any MCP server that uses the standard stdio transport works with Working Mind. Add it to your pack's pack.json:
{
"mcpServers": {
"my-server": {
"package": "my-mcp-server-package",
"required": false,
"env": {
"MY_API_KEY": {
"setting": "MY_API_KEY",
"sensitive": true,
"required": true,
"label": "My API Key",
"hint": "Get one at https://example.com"
}
}
}
}
}
Settings
Configure in ~/.wmind/config.json under the agents key:
| Setting | Default | Description |
|---|---|---|
| maxTurns | 20 | Maximum agent loop iterations per conversation |
| autoApprove | false | Auto-approve all tool calls without confirmation |
| noThinking | false | Disable reasoning/thinking output |
| maxTokens | (model default) | Maximum tokens per LLM response |
| thinkingBudget | (model default) | Reasoning token budget for thinking models |
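For example, the settings above could be written into ~/.wmind/config.json like this. This is a sketch based on the table; the numeric values for maxTokens and thinkingBudget are illustrative, not recommendations.

```json
{
  "agents": {
    "maxTurns": 30,
    "autoApprove": false,
    "noThinking": false,
    "maxTokens": 4096,
    "thinkingBudget": 2048
  }
}
```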
Environment Variables
| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key |
| OPENROUTER_API_KEY | OpenRouter API key (access to Anthropic, DeepSeek, etc.) |
| TOGETHER_API_KEY | Together AI API key |
| GEMINI_API_KEY | Google AI API key |
| GROQ_API_KEY | Groq API key |
| BRAVE_API_KEY | Brave Search API key |
| FIRECRAWL_API_KEY | Firecrawl API key |
| MEMORY_FILE_PATH | Custom path for memory SQLite database |
| WMIND_DEBUG | Set to 1 to enable debug logging |
| WMIND_PORT | Custom port for wmind-serve (default: 19421) |
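These variables can also be set inline for a single run. A minimal sketch (all values are placeholders, not real keys):

```shell
# Placeholder values -- substitute real keys before use
export BRAVE_API_KEY=placeholder-key
export WMIND_DEBUG=1       # enable debug logging
export WMIND_PORT=19421    # default wmind-serve port, shown explicitly
echo "$WMIND_DEBUG"        # prints 1 when debug logging is on
```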
Custom System Prompts
Override the pack's system prompt:
wmind --prompt "You are a helpful research assistant"
Or save named prompts in your config:
{
"systemPrompts": {
"analyst": "You are a thorough research analyst...",
"writer": "You are a clear technical writer..."
}
}
Then activate: wmind --prompt analyst