23 MB.
One Binary.
Your AI Agent.
A single-binary AI agent configured through Markdown.
Ollama local, Claude cloud, or any OpenAI-compatible endpoint.
No runtime dependencies. No code required.
Why LocalKin
Escape the framework tax
Traditional AI Agents
10 GB+ Python environment
Bloated runtimes, slow cold starts, gigabytes before you write a single prompt.
Dependency hell
Version conflicts, broken pip installs, fragile virtualenvs that rot overnight.
Always-online required
Every request leaves your machine. Your data, your prompts — all in someone else's cloud.
Sluggish interpreted code
Python startup lag, high memory footprint, painful on edge devices.
LocalKin
23 MB single binary
Zero runtime dependencies. Drop it in, run it. That's the whole install.
Zero dependencies
No pip, no node_modules, no Docker. One file, every platform.
Local-first with Ollama
Run fully offline by default. Your data never leaves your device. Cloud when you choose.
Go-compiled, instant launch
Native binary. Sub-second startup. Runs on a Raspberry Pi.
The Magnificent 6
What lives inside 23 MB
3 steps
Up and running in 60 seconds
Get your license & download
Purchase Core ($1.99) or Pioneer ($9.99), then download here.
localkin -activate YOUR-LICENSE-KEY
Pick a soul
# 3 pre-configured souls included:
souls/
├── default.soul.md   # Ollama local (qwen3-vl:8b)
├── claude.soul.md    # Claude (claude-sonnet-4-6)
└── cloud.soul.md     # Ollama Cloud (kimi-k2.5)
# Or write your own .soul.md from scratch
Run
# Local — fully offline, private
localkin -soul souls/default.soul.md
# Claude — API key or Pro/Max subscription
localkin -soul souls/claude.soul.md
# Cloud — Ollama cloud models
localkin -soul souls/cloud.soul.md
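A soul file is plain Markdown, so there is nothing to compile or install. As a rough sketch only (the section names and fields below are illustrative assumptions — copy the bundled souls/default.soul.md for the real schema):

```markdown
# my-assistant.soul.md — hypothetical layout, not the documented format

## Provider
provider: ollama          # assumed field: ollama | claude | openai-compatible
model: qwen3-vl:8b        # any model your endpoint serves

## Persona
You are a concise, privacy-first assistant that runs fully offline
and answers in plain language.
```

Point the binary at it with `localkin -soul my-assistant.soul.md`, as in the examples above.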
Release Matrix
Pick your build
- ✓ 3 ready-to-run soul files (local / Claude / cloud)
- ✓ Ollama & Claude providers
- ✓ 6 built-in skills + forge
- ✓ Parallel tool calls
- ✓ Skill store — search & install from terminal
- ✓ Interactive REPL with hot-reload
- ✓ Autonomous mode (-exec)
- ✓ HTTP API & Web Chat UI
- ✓ SQLite persistent memory
- ✓ DuckDuckGo / Brave / SearXNG search
- ✓ Extensible — drop a SKILL.md to add any CLI tool
Tested on Raspberry Pi robot cars.
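To illustrate that SKILL.md extensibility, a skill wrapping an existing CLI tool might look like this. Every heading and field here is an assumption for illustration — check a built-in skill for the actual format:

```markdown
# SKILL.md — hypothetical sketch wrapping the system `uptime` command

## Name
uptime

## Description
Report how long this machine has been running.

## Command
uptime
```

Drop the file into your skills directory and the agent can call the tool — no recompile, no plugin SDK.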
- ✓ Everything in Core, plus:
- ✓ Voice daemon — Streaming VAD + multilingual TTS
- ✓ Three-tier perception — Vision / STT / TTS with local → edge → cloud fallback
- ✓ Robot skills — motor control, camera, ultrasonic, battery
- ✓ LiteLLM provider — DeepSeek, Groq, Together, vLLM, and any OpenAI-compatible endpoint
- ✓ Autonomous soul hot-switch mid-conversation
- ✓ Spatial memory — topological navigation map
- ✓ Skill environment sandboxing with mutex groups
- ✓ Parallel skill execution
- ○ Everything in Pioneer, plus:
- ○ Genesis Protocol — self-bootstrapping robot: bare binary + soul, zero pre-installed skills
- ○ Hardware discovery (probe) — scan /dev, USB, I2C, GPIO automatically
- ○ Dry-run mode — preview genesis flow without creating files (-dryrun)
- ○ Checkpoint resume — interrupted genesis continues from forge_log.jsonl
- ○ Genesis auto-downgrade — switches to lightweight self-test after first boot
- ○ Dynamic tool registration — forged skills available immediately, no restart
Runs on macOS, Linux, Raspberry Pi, and Android. Windows via WSL2. Not available on iOS/iPadOS.
Secure Delivery
Verify & Download
Enter your license key. We'll auto-detect your product tier and deliver the right build.
╔══════════════════════════════════════════════╗
║ LocalKin — Secure Delivery Gateway ║
║ Enter your license key to begin. ║
╚══════════════════════════════════════════════╝