Getting Started
Install, configure, and run Working Mind. Your knowledge graph starts here.
Alpha Release. Working Mind v0.0.1 is an early development release. The knowledge graph uses SQLite with FTS5 search and pattern-based contradiction detection, but has no semantic search, no fuzzy entity matching, and no conflict resolution beyond known patterns. Message history is persisted as raw JSON with no search, no tagging, and no smart retrieval. These will be improved in future releases. Expect rough edges.
Prerequisites
- Node.js 18+
- An LLM API key -- OpenAI, Anthropic, Google, or OpenRouter -- or Ollama for local models (no key required)
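You can confirm the Node.js prerequisite before installing. This sketch extracts the major version number from `node --version` output (e.g. "v20.11.1"):

```shell
# Check the Node.js prerequisite: wmind needs major version 18 or newer.
version=$(node --version)   # e.g. "v20.11.1"
major=${version#v}          # strip the leading "v"
major=${major%%.*}          # keep only the major version number
echo "Node.js major version: $major"
```

If the printed number is below 18, upgrade Node.js before continuing.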
Install
npm install -g wmind
Configure
wmind --configure
This walks you through selecting a model provider and entering your API key. Keys are stored as environment variables on your machine. Working Mind never sends your keys anywhere except to your chosen LLM provider.
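For reference, this is roughly what the wizard persists. The variable names below follow common provider conventions and are an assumption, not wmind's documented names -- run `wmind --configure` rather than setting keys by hand:

```shell
# Illustrative only: API keys kept in the shell environment.
# Variable names are the usual provider conventions (an assumption here);
# the values shown are placeholders, not real keys.
export OPENAI_API_KEY="sk-..."        # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..." # Anthropic
```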
First Run
wmind
The starter pack loads. The knowledge graph initializes. You see a terminal UI.
What Happens Next
- Type a message. The agent responds.
- Tell it to remember something: "Remember that Project Alpha uses React 19."
- It saves an entity with observations to your knowledge graph.
- Close the app. Reopen it. Ask "What do I know about Project Alpha?"
- It remembers.
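Under the hood, memory like this is plain SQLite with FTS5 keyword search, as the alpha note above says. A minimal sketch of that kind of lookup -- the table and column names are illustrative, not wmind's actual schema:

```shell
# Sketch of FTS5-style recall: store one observation, search it by keyword.
# Schema is illustrative, not wmind's actual one.
sqlite3 demo.db <<'SQL'
CREATE VIRTUAL TABLE observations USING fts5(entity, body);
INSERT INTO observations VALUES ('Project Alpha', 'uses React 19');
SELECT entity FROM observations WHERE observations MATCH 'React';
SQL
```

The final SELECT prints `Project Alpha`: FTS5 tokenizes the body text, so a case-insensitive keyword match on "React" finds the entity. This is also why the alpha release has no semantic search -- MATCH is keyword-based, not meaning-based.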
Using Local Models
Ollama
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.1
# Run Working Mind with Ollama
wmind --model ollama/llama3.1
No API key needed. Everything runs locally.
Local Fast (wmind-serve)
# Install wmind-serve
npm install -g wmind-serve
# Start a local model
wmind-serve start
# Run Working Mind with Local Fast
wmind --model local-fast
4-5x faster than Ollama on Apple Silicon. No API key needed.
Command Line Options
| Flag | Description |
|---|---|
| --configure | Interactive setup wizard |
| --model <spec> | Model to use (e.g., openai/gpt-4o, ollama/llama3.1) |
| --pack <name> | Pack to load (default: starter) |
| --prompt <text> | System prompt override |
| --auto-approve | Auto-approve all tool calls |
| --no-thinking | Disable reasoning/thinking output |
| --max-turns <n> | Maximum agent loop turns (default: 20) |
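These flags compose. For example, a fully local session using the Ollama model from the section above, with a shorter agent loop and no approval prompts:

```shell
# Local model, capped agent loop, tools auto-approved
wmind --model ollama/llama3.1 --max-turns 10 --auto-approve
```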
Next Steps
- Usage -- day-to-day TUI, commands, and sessions
- Pack system -- understand how packs specialize the agent
- Knowledge graph -- how memory works and compounds
- Commands -- builtin and pack slash commands
- Configuration -- model providers, MCP servers, settings
- Architecture -- how Working Mind works under the hood