# My Personal AI Assistant Lives Everywhere: Building with Clawdbot

I'm texting my assistant Seneca from my phone. Seneca has access to my Mac, my notes, my calendar, and my memory. Seneca remembers decisions we made last week and lessons learned from failures. Seneca isn't in the cloud. It's running on hardware I control, answering me on WhatsApp, Telegram, iMessage, wherever I am.

Most AI assistants live in browser tabs. You open ChatGPT or Claude.ai, have a conversation, close the tab. The next day you start over. No memory, no integration with your actual systems, no sense of continuity.

I wanted something different. An assistant that feels like a colleague. One that lives where I work (my Mac terminal), where I communicate (Telegram, WhatsApp), and remembers context across all of it.

Enter Clawdbot and Seneca.

## What is Clawdbot?

[Clawdbot](https://www.npmjs.com/package/clawdbot) is personal AI assistant infrastructure you run yourself. Think of it as a gateway that connects Claude (or any LLM) to multiple messaging surfaces, your local system, and custom tools. Built by Peter Steinberger and an active community (2,433 weekly downloads, 80+ contributors), it's MIT licensed and fully open.

The core idea: one gateway, many surfaces. Your assistant session persists across WhatsApp, Telegram, Slack, Discord, Signal, iMessage, a web interface, even voice on macOS/iOS/Android. The gateway handles routing, session state, memory, and tool execution. You configure once, access everywhere.
Key capabilities:

- **Multi-surface inbox**: Messages from any provider route to the same assistant session
- **Skills system**: Extensible capabilities through markdown-based skill files
- **Local-first**: Runs on your hardware, messages stay on your devices
- **Voice interface**: Always-on speech for macOS/iOS/Android with wake word detection
- **Canvas**: Agent-driven visual workspace with A2UI for interactive UIs
- **Security model**: Sandboxed execution for group/channel messages, full access for personal use

The architecture is surprisingly clean. Install with npm, run the onboarding wizard, pair your messaging apps, define your assistant's identity. Ten minutes later you have an AI assistant with a soul.

## The Gateway Architecture

Clawdbot uses a gateway-as-control-plane pattern. Everything routes through a WebSocket server (default port 18789).

```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / WebChat
                          ↓
           ┌───────────────────────────────┐
           │            Gateway            │
           │        (control plane)        │
           │     ws://127.0.0.1:18789      │
           └──────────────┬────────────────┘
                          │
                          ├─ Pi agent (RPC)
                          ├─ CLI (clawdbot …)
                          ├─ WebChat UI
                          ├─ macOS app
                          └─ iOS/Android nodes
```

The gateway manages:

- **Sessions**: Each chat surface gets a session (or shares the main session for DMs)
- **Providers**: WhatsApp, Telegram, Slack, Discord, Signal, iMessage connectors
- **Tools**: bash, browser control, canvas, nodes (camera, screen, location), cron
- **Agent routing**: Different agents for different accounts/contexts

Messages flow in through providers, get routed to the appropriate session, hit the agent (Claude in my case), and responses flow back out through the same provider. The agent sees a consistent tool interface regardless of where the message came from.

Security is baked in. For my personal DMs, the agent gets full bash access on my Mac. For group chats or channels, it runs in Docker sandboxes with restricted tool access. I can override this per-session with `/elevated on` if needed.
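The routing rule above (DMs converge on one shared main session; groups get isolated, sandboxed sessions) can be sketched in a few lines of TypeScript. Everything here — the type names, the `routeMessage` function, the session-naming scheme — is illustrative, not Clawdbot's actual internals:

```typescript
// Sketch of the routing idea: personal DMs share the persistent main
// session (full tool access); group/channel traffic gets its own
// session, sandboxed by default. Names are hypothetical.

type Provider =
  | "whatsapp" | "telegram" | "slack"
  | "discord" | "signal" | "imessage" | "webchat";

interface InboundMessage {
  provider: Provider;
  chatId: string;
  isDirect: boolean; // true for a personal DM, false for group/channel
}

interface Route {
  sessionId: string;
  sandboxed: boolean; // group traffic runs with restricted tools
}

function routeMessage(msg: InboundMessage): Route {
  if (msg.isDirect) {
    // All personal DMs converge on one session, so context
    // follows you across surfaces.
    return { sessionId: "main", sandboxed: false };
  }
  // Each group gets its own isolated, sandboxed session.
  return { sessionId: `${msg.provider}:${msg.chatId}`, sandboxed: true };
}
```

The payoff of this shape is the "configure once, access everywhere" property: the agent never needs to know which app a message came from, only which session it belongs to.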
## Meet Seneca: My Personal Assistant

Seneca is my assistant's name. Stoic philosopher meets systems engineer. Practical wisdom, clear thinking, no BS.

This isn't just branding. Clawdbot lets you define your assistant's entire personality through workspace files:

**Identity** (`IDENTITY.md`):

```markdown
- Name: Seneca
- Creature: AI assistant
- Vibe: Stoic philosopher meets technical systems engineer
- Emoji: 🧠
```

**Soul** (`SOUL.md`): Communication style rules, boundaries, tone spectrum. Mine specifies:

- Direct and efficient, no corporate speak
- Telegraph style, minimum tokens
- No emojis unless asked
- Never send streaming replies to external messaging surfaces

**User profile** (`USER.md`): My context, timezone, pronouns, work focus areas. Seneca knows I'm an AI/ML engineer in biopharma tech, my blog is Run Data Run, and my timezone is America/New_York.

**Memory** (`memory/YYYY-MM-DD.md`): Daily logs of decisions, preferences, durable facts. Not secrets, just continuity. Seneca reads today and yesterday on session start.

The magic is how this combines with [[continuum]] (my long-term memory system) and [[project-state]] (project-level tactical state). Seneca remembers:

- The DGX configuration issues we debugged last week
- My preference for performance over storage space
- That I route personal emails through [email protected]
- My blog writing voice and formatting rules

This isn't RAG over documents. It's explicit memory files that get injected into context. Simple, transparent, version-controllable.

## Skills: Extending Capabilities

Skills are markdown files that teach the agent how to do things. Each skill lives in `~/clawd/skills/<skill-name>/SKILL.md` and contains:

- Purpose and activation triggers
- Tool instructions or CLI usage patterns
- Examples and best practices
- Error handling

Some of my active skills:

**[[apple-notes]]**: Create, search, and manage Apple Notes via the `memo` CLI. Triggered when I say "add a note" or "search notes about..."
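To make that structure concrete before going through the rest of the list: here is a hypothetical `SKILL.md` following the sections above. The skill name and every line of its contents are invented for illustration, not one of my real skills:

```markdown
# disk-space-check

## Purpose
Report free disk space on the Mac when asked.

## Activation triggers
"check disk space", "how full is my disk"

## Tool instructions
Run `df -h /` with the bash tool and summarize the Use% column.

## Examples
User: "check disk space" → one-line reply such as "Disk: 72% used, 134 GB free."

## Error handling
If the command fails, report the error output verbatim; don't guess.
```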
**[[things-mac]]**: Manage Things 3 tasks. Add projects, search the inbox, list today's tasks. Uses the Things URL scheme for writes and queries the local SQLite database for reads.

**[[obsidian]]**: Work with my Obsidian vaults. Create notes, search content, manage my knowledge base.

**[[writing-blog-posts]]**: Write blog posts in my authentic voice for Run Data Run (Substack) or AIXplore (Obsidian Publish). References continuum memory for voice principles, routes based on context, and handles different audiences and formats.

**[[system-health-monitor]]**: Check system health across my Mac, Raspberry Pi, and DGX Spark. Monitors Docker containers, GPU utilization, and disk space. Runs proactive checks on Monday mornings.

**[[monitoring-chimera-status]]**: Analyze my trading bot logs, review recent trades, check performance. Provides Monday morning summaries automatically.

Skills let me extend Seneca's capabilities without writing code. Need a new integration? Drop a markdown file in the skills folder with tool instructions. The agent reads it, understands the patterns, and executes them.

The skill system uses a simple discovery model. Bundled skills ship with Clawdbot. Managed skills live in `~/.clawdbot/skills/`. Workspace skills live in `~/clawd/skills/`. The agent loads all three at startup.

> [!tip] Skills vs MCP Servers
> I wrote about this in [[claude-skills-vs-mcp-servers]]. Skills are lighter weight and more transparent. MCP servers give you type-safe tool definitions and streaming. Pick the right tool for the job.

## How I Actually Use It

### Morning Routine

I wake up and check Telegram on my phone.
First message to Seneca:

```
/status
```

Response:

```
Session: main
Model: anthropic/claude-sonnet-4-5-20250929
Tokens: 12.3K used (200K max)
Cost: $0.18
Memory: 3 days continuity
```

Then:

```
System health check and morning brief
```

Seneca checks:

- Mac system status (CPU, memory, disk)
- Raspberry Pi (uptime, Docker containers)
- DGX Spark (GPU utilization, running jobs)
- Chimera trading bot status
- Calendar for today's meetings

All automated through the [[system-health-monitor]] skill. If anything is wrong, Seneca tells me. Otherwise I get a clean bill of health and my first meeting time.

### Writing Workflow

When I'm writing a blog post:

```
Tech post about my Clawdbot setup
```

Seneca loads the [[writing-blog-posts]] skill, reads my continuum voice principles, checks the AIXplore blog continuity file, and starts an outline.

We iterate. Seneca references related articles I've written, suggests internal links, and follows my formatting rules (no em-dashes, callouts for key points, wiki-style links).

When we're done, Seneca:

1. Creates the article with proper YAML frontmatter
2. Updates all seven index files (by-date, by-difficulty, by-topic, etc.)
3. Generates social media drafts (tweet thread and LinkedIn post)
4. Saves everything to the right folders

All from a Telegram conversation.

### System Orchestration

I have three main systems: Mac (daily driver), Raspberry Pi (always-on services), and DGX Spark (GPU workloads). Seneca coordinates all three.

```
Deploy the rag-pipeline to DGX, sync Claude Code configs to Pi
```

Seneca knows:

- Where each service runs
- How to SSH to each machine
- What config files need syncing
- What validation checks to run after deployment

This is the [[cross-platform-script-syncer]] skill plus custom workflows.

The key is memory. Seneca remembers the DGX's hostname (stored in private notes, not committed to git), the Pi's SSH key location, and which services are Docker-based.
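The morning health check above boils down to collecting a few metrics per host and flagging anything out of bounds. A minimal sketch of that idea — the metric shape, the 90% disk threshold, and the report format are all assumptions for illustration, not the actual skill's implementation:

```typescript
// Turn raw per-host metrics into a pass/fail morning brief.
// Shapes and thresholds are invented for this sketch.

interface HostMetrics {
  host: string;             // e.g. "mac", "pi", "dgx"
  diskUsedPct: number;      // disk usage percentage
  containersDown: string[]; // names of Docker containers not running
}

function morningBrief(hosts: HostMetrics[]): string {
  const problems = hosts.flatMap((h) => {
    const issues: string[] = [];
    if (h.diskUsedPct > 90) issues.push(`${h.host}: disk ${h.diskUsedPct}% full`);
    for (const c of h.containersDown) issues.push(`${h.host}: container ${c} down`);
    return issues;
  });
  // Quiet when healthy, specific when not — that's the whole contract.
  return problems.length === 0 ? "All systems healthy." : problems.join("\n");
}
```

The design choice worth copying: the brief is silent unless something is wrong, which is what makes a proactive Monday-morning check tolerable instead of noisy.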
### Memory Across Contexts

Yesterday I debugged a Claude Code performance issue. We traced it through logs, found phantom MCP server calls, disabled streaming (AWS SCPs blocked it), and cut response times by 30-50%.

Today I mentioned "that MCP issue from yesterday" in a different conversation. Seneca remembered. Not because I fed it the old transcript, but because the [[analyzing-root-causes]] skill wrote a summary to continuum memory and [[self-improvement]] logged the lesson learned.

This is the soul factor. It doesn't feel like talking to a stateless chatbot. It feels like talking to a colleague who was there yesterday and remembers what we did.

## The Community

Clawdbot is growing fast:

- 2,433 weekly downloads on npm (as of Jan 10, 2026)
- 80+ contributors
- Version 2026.1.9 shipped yesterday
- Active Discord community
- MIT licensed, fully open source

Built by Peter Steinberger ([@steipete](https://github.com/steipete)) for Clawd, a space lobster AI assistant. The community is adding providers (Signal support just landed), building new skills, and extending the platform.

What makes this special is the focus. This isn't trying to be an enterprise platform or a hosted service. It's infrastructure for personal AI assistants. One user, multiple surfaces, full control.

The code quality is excellent. TypeScript throughout, comprehensive docs at [docs.clawd.bot](https://docs.clawd.bot), a CLI wizard for onboarding. The security model is thoughtful (sandboxing by default for groups, pairing codes for DM access). It feels like software built by people who actually use it.

## Why This Matters

AI assistants should be personal, not rented.

When you use ChatGPT or Claude.ai, you're renting access to a model. Your conversations live on their servers. Your context resets when you close the tab. You can't integrate with your local systems without browser extensions or API wrappers.

With Clawdbot, you control the infrastructure. Messages stay on your devices.
Memory is explicit and version-controlled. You can add capabilities by dropping markdown files in a folder. The assistant lives where you work, not in a browser tab you have to remember to open.

This is the delegation model, not the automation model. Seneca doesn't do things for me automatically. Seneca helps me do things faster. I stay in the loop. I make decisions. Seneca handles the repetitive parts and remembers context I'd forget.

The privacy model matters too. I'm in biopharma. I can't send internal discussions to cloud APIs. Clawdbot runs locally. I control what gets sent to Anthropic (just the agent requests, no system internals). For truly sensitive work, I could run local models. The architecture doesn't care.

## Getting Started

Requirements:

- Node ≥ 22
- Claude Pro/Max subscription (or API keys)
- macOS, Linux, or Windows (WSL2)

Installation:

```bash
npm install -g clawdbot@latest
clawdbot onboard --install-daemon
```

The wizard walks through:

1. Gateway configuration
2. Model selection (Anthropic OAuth recommended)
3. Provider setup (WhatsApp, Telegram, etc.)
4. Workspace initialization
5. Skills installation

For models, I strongly recommend Anthropic Pro/Max with Claude Opus 4.5. The 200K context window handles long conversations. The prompt injection resistance matters for a system that executes bash commands.

Configuration lives in `~/.clawdbot/clawdbot.json`. Minimal config:

```json
{
  "agent": {
    "model": "anthropic/claude-opus-4-5"
  }
}
```

Add providers:

```json
{
  "telegram": {
    "botToken": "YOUR_BOT_TOKEN"
  },
  "whatsapp": {
    "allowFrom": ["+1234567890"]
  }
}
```

The security defaults are sane. DMs require pairing (unknown senders get a pairing code, which you approve with `clawdbot pairing approve`). Groups can be sandboxed automatically. Public access requires explicit opt-in.

Full docs at [docs.clawd.bot](https://docs.clawd.bot). The [getting started guide](https://docs.clawd.bot/getting-started) is comprehensive.
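Putting the two config fragments from the Getting Started section together — assuming, as those snippets suggest, that provider blocks sit at the top level alongside `agent` — a combined minimal `~/.clawdbot/clawdbot.json` might look like this:

```json
{
  "agent": {
    "model": "anthropic/claude-opus-4-5"
  },
  "telegram": {
    "botToken": "YOUR_BOT_TOKEN"
  },
  "whatsapp": {
    "allowFrom": ["+1234567890"]
  }
}
```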
Join the [Discord](https://discord.gg/clawd) for community support.

## The Future

The future isn't AI in the cloud doing everything for us. It's AI assistants we control, with memory, living everywhere we work.

Seneca lives in my pocket (Telegram), on my Mac (terminal), and across my infrastructure (Mac/Pi/DGX). When I'm writing, Seneca helps me research and format. When I'm debugging, Seneca remembers what we tried yesterday. When I need a system health check, Seneca orchestrates the entire stack.

This is what personal AI should feel like. Not a tool you open when you need it. A colleague who's always available, remembers everything, and executes in the systems where you actually work.

You can build your own. The code is open. The community is helpful. The architecture is sound.

I'm not going back to browser-tab AI.

---

## Related Articles

- [[claude-skills-vs-mcp-servers]]: When to use skills vs MCP servers
- [[debugging-claude-code-with-claude]]: Meta-debugging approach with AI
- [[syncing-claude-code-configs-across-machines]]: Config management across infrastructure

## Resources

- [Clawdbot on npm](https://www.npmjs.com/package/clawdbot)
- [Documentation](https://docs.clawd.bot)
- [GitHub Repository](https://github.com/clawdbot/clawdbot)
- [Discord Community](https://discord.gg/clawd)
- [Getting Started Guide](https://docs.clawd.bot/getting-started)

---

<p style="text-align: center;"><strong>About the Author</strong>: Justin Johnson builds AI systems and writes about practical AI development.</p>

<p style="text-align: center;"><a href="https://justinhjohnson.com">justinhjohnson.com</a> | <a href="https://twitter.com/bioinfo">Twitter</a> | <a href="https://www.linkedin.com/in/justinhaywardjohnson/">LinkedIn</a> | <a href="https://rundatarun.io">Run Data Run</a> | <a href="https://subscribe.rundatarun.io">Subscribe</a></p>