# Syncing Claude Code Configurations Across Multiple Machines: A Practical Guide
If you're running Claude Code across multiple machines—say a Mac for development, a Raspberry Pi for edge testing, and a DGX box for heavy compute—you've probably run into configuration drift. Your carefully crafted agents, slash commands, and hooks get out of sync. Even worse, blindly copying configs can break things when machines need different model endpoints or API configurations.
This guide shows you how to build an intelligent sync system that keeps everything in harmony while respecting machine-specific needs.
<div class="callout" data-callout="success">
<div class="callout-title">Get the Code</div>
<div class="callout-content">
**Full implementation available on GitHub:** [BioInfo/claude-code-sync](https://github.com/BioInfo/claude-code-sync)
Quick install:
```bash
curl -o ~/sync-claude-config.sh https://raw.githubusercontent.com/BioInfo/claude-code-sync/main/sync-claude-config.sh
chmod +x ~/sync-claude-config.sh
```
</div>
</div>
## The Problem: Configuration Drift and the 404 Error
Here's what typically happens: you add a new agent on your Mac, customize some hooks, create a slash command. Then you switch to your DGX box and... none of that is there. You try to copy your `.claude` directory over, and suddenly:
```
API Error: 404 Model not found
```
Why? Because your Mac uses Bedrock via an Azure gateway, but your DGX needs direct Anthropic API access. The model ARN that works on one machine doesn't exist on the other.
<div class="callout" data-callout="warning">
<div class="callout-title">Common Mistake</div>
<div class="callout-content">
Copying the entire `.claude` directory between machines will overwrite machine-specific configurations like model endpoints, API keys, and database connections. This breaks authentication and causes cryptic 404 errors.
</div>
</div>
## What Needs to Sync vs. What Needs to Diverge
**Should sync across all machines:**
- Custom agents (your 18+ specialized agents)
- Slash commands (commit helpers, documentation generators, etc.)
- Skills (like `ai-newsletter`)
- Hooks (auto-formatting, linting, notifications)
- Status line scripts
- Global `CLAUDE.md` instructions
- Plugin configurations
**Must remain machine-specific:**
- `ANTHROPIC_MODEL` - Different model identifiers per machine
- `ANTHROPIC_BEDROCK_BASE_URL` - Only for Bedrock-enabled machines
- `CLAUDE_CODE_USE_BEDROCK` - Feature flag per environment
- `AWS_REGION` - For Bedrock configurations
- `ANTHROPIC_API_KEY` - Direct API credentials
- MCP server connection strings (database URLs, etc.)
**Never sync:**
- Session history and todos
- Debug logs and shell snapshots
- File history and project state
## Building the Smart Sync System
### Architecture Overview
The solution has three key components:
1. **Machine Profile Detection** - Auto-identifies Mac, Pi, or DGX and applies appropriate config template
2. **Conflict Analysis** - Compares local vs remote configs and groups differences
3. **Smart Merge Engine** - Syncs shared configs while preserving machine-specific settings
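To illustrate the first component, detection can key off the OS name and hostname substrings. This is a minimal sketch under that assumption — the hostname patterns and helper name here are mine, not necessarily the script's exact logic:

```python
import platform

# Profile names mirror the MACHINE_PROFILES table used later in the script
PROFILES = {"mac": "bedrock-azure", "pi": "api-direct", "dgx": "api-direct"}

def detect_machine_type(hostname=None, system=None):
    """Guess the machine type from the OS name and hostname substrings."""
    hostname = (hostname or platform.node()).lower()
    system = system or platform.system()
    if system == "Darwin":       # any macOS box is treated as "mac"
        return "mac"
    for name in ("dgx", "pi"):   # check the more specific substring first
        if name in hostname:
            return name
    return "unknown"

profile = PROFILES.get(detect_machine_type(), "api-direct")
```

Once the type is known, the corresponding template (shown later in this guide) is applied to `settings.json`.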
### Prerequisites
First, install a modern bash (macOS still ships the ancient bash 3.2, which lacks the associative arrays this script relies on):
```bash
brew install bash
```
You'll also need passwordless SSH set up:
```bash
# Generate SSH key if you don't have one
ssh-keygen -t ed25519

# Copy to your remote machines
ssh-copy-id [email protected]
ssh-copy-id [email protected]
```
### The Smart Sync Script
Create `~/scripts/sync-claude-config-smart.sh`. The script uses associative arrays (bash 4+) to map machine types to configuration profiles:
```bash
#!/opt/homebrew/bin/bash
# Machine configuration profiles
declare -A MACHINE_PROFILES=(
["mac"]="bedrock-azure"
["pi"]="api-direct"
["dgx"]="api-direct"
)
# Files that need machine-specific handling
MACHINE_SPECIFIC_CONFIGS=(
"settings.json:env.ANTHROPIC_MODEL"
"settings.json:env.ANTHROPIC_BEDROCK_BASE_URL"
"settings.json:env.CLAUDE_CODE_USE_BEDROCK"
".mcp.json:mcpServers.postgresql.env.POSTGRES_CONNECTION_STRING"
)
```
The key innovation is the `merge_configs_preserve_machine_specific()` function using Python for JSON manipulation:
```python
def merge_configs_preserve_machine_specific(base, new, machine):
    # Shallow merge: keys from `new` take precedence
    result = {**base, **new}
    # Merge env sub-dicts so keys present only in `base` survive
    result['env'] = {**base.get('env', {}), **new.get('env', {})}
    # Machine-specific env vars override everything
    result['env'].update(machine.get('env', {}))
    return result
```
<div class="callout" data-callout="tip">
<div class="callout-title">Why Python for JSON?</div>
<div class="callout-content">
While `jq` is great for simple queries, Python's JSON library handles complex merging logic more elegantly. We embed Python scripts directly in bash using heredocs for a single-file solution.
</div>
</div>
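To make the preservation behavior concrete, here is a self-contained sketch of that merge applied to illustrative settings — the values below are invented for the example:

```python
def merge_configs_preserve_machine_specific(base, new, machine):
    # Keys from `new` win, but env vars are merged rather than replaced
    result = {**base, **new}
    result["env"] = {**base.get("env", {}), **new.get("env", {})}
    # Machine-specific env vars override everything
    result["env"].update(machine.get("env", {}))
    return result

# Remote DGX settings, incoming Mac settings, and the DGX machine template
remote = {"env": {"ANTHROPIC_MODEL": "claude-sonnet-4-5"}}
incoming = {
    "env": {"ANTHROPIC_MODEL": "arn:aws:bedrock:us-east-1:123456789:inference-profile/example"},
    "statusLine": {"type": "command", "command": "~/.claude/statusline.sh"},
}
machine = {"env": {"ANTHROPIC_MODEL": "claude-sonnet-4-5"}}

merged = merge_configs_preserve_machine_specific(remote, incoming, machine)
# The status line syncs over, but the DGX keeps its own model identifier
```

The shared `statusLine` config arrives from the Mac, while the machine template's `ANTHROPIC_MODEL` wins over the Bedrock ARN that would otherwise cause a 404.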
### Workflow Commands
Add these aliases to your `.zshrc`:
```bash
# Setup (run once on each machine)
alias cc-setup='sync-claude-config-smart.sh setup-machine'
# Analysis and merging
alias cc-analyze-pi='sync-claude-config-smart.sh analyze pi'
alias cc-merge-pi='sync-claude-config-smart.sh merge pi'
alias cc-smart-pi='sync-claude-config-smart.sh smart-merge pi'
# Same for DGX
alias cc-analyze-dgx='sync-claude-config-smart.sh analyze dgx'
alias cc-merge-dgx='sync-claude-config-smart.sh merge dgx'
alias cc-smart-dgx='sync-claude-config-smart.sh smart-merge dgx'
# Safety
alias cc-backup='sync-claude-config-smart.sh backup'
```
## Usage Patterns
### Initial Setup: Configure Each Machine
Run once on each machine to set up machine-specific configs:
```bash
cc-setup
```
The script auto-detects machine type (Mac, Pi, DGX) and applies the appropriate configuration template. For Pi and DGX, you'll also need to set your API key:
```bash
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.zshrc
source ~/.zshrc
```
### Daily Workflow: Smart Merge
When you've added agents or commands on your Mac and want to sync to other machines:
```bash
# Check what's different first
cc-analyze-pi
# Then sync with smart merge
cc-smart-pi
```
The smart merge:
1. Creates automatic backup
2. Syncs all agents, commands, skills, plugins
3. Merges JSON configs intelligently
4. Preserves model settings, API keys, and machine-specific env vars
### Interactive Conflict Resolution
For complex scenarios where you want control:
```bash
cc-merge-pi
```
This presents four options:
1. **Push local** - Send your configs to remote (overwrites remote)
2. **Pull remote** - Get configs from remote (overwrites local)
3. **Smart merge** - Auto-merge preserving machine settings
4. **Review file-by-file** - Manually decide for each conflict
Option 4 is powerful—it shows side-by-side diffs (using `delta` or `colordiff` if available) and lets you choose per file:
```
[L] Use Local [R] Use Remote [M] Merge intelligently
[S] Skip [Q] Quit review
```
### Conflict Analysis
Before syncing, always analyze:
```bash
cc-analyze-pi
```
Output:
```
╔═══════════════════════════════════════════════════╗
║ Analyzing Conflicts with pi ║
╚═══════════════════════════════════════════════════╝
⚡ CONFLICT: settings.json
ℹ Local vs Remote: +12/-8 lines
⚡ CONFLICT: agents/python-pro.md
ℹ Local vs Remote: +3/-0 lines
▶ Conflict Summary
• settings.json
• agents/python-pro.md
```
This grouped view helps you understand the scope before making changes.
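The `+N/-M` counts above are straightforward to compute with a unified diff. A minimal sketch using Python's `difflib` — the function name and exact formatting are assumptions, not the script's internals:

```python
import difflib

def summarize_conflict(name, remote_text, local_text):
    """Return a '+added/-removed' summary for one conflicting file."""
    added = removed = 0
    for line in difflib.unified_diff(
        remote_text.splitlines(), local_text.splitlines(), lineterm=""
    ):
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return f"{name}: +{added}/-{removed} lines"

print(summarize_conflict("agents/python-pro.md", "line1\n", "line1\nline2\nline3\n"))
```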
## Machine Configuration Templates
### Mac (Bedrock via Azure)
For enterprise setups using Bedrock through an Azure gateway:
```json
{
"env": {
"CLAUDE_CODE_USE_BEDROCK": "1",
"ANTHROPIC_BEDROCK_BASE_URL": "https://your-gateway.azure-api.net/bedrock",
"ANTHROPIC_MODEL": "arn:aws:bedrock:us-east-1:123456789:inference-profile/...",
"AWS_REGION": "us-east-1"
}
}
```
### Pi/DGX (Direct API)
For direct Anthropic API access:
```json
{
"env": {
"ANTHROPIC_MODEL": "claude-sonnet-4-5",
"ANTHROPIC_API_KEY": "${ANTHROPIC_API_KEY}"
}
}
```
The script automatically applies these templates during `cc-setup`.
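One way `cc-setup` can expand the `${ANTHROPIC_API_KEY}` placeholder is plain `${VAR}` substitution against the environment. A sketch under that assumption — the real script may handle this differently:

```python
import json
import string

# Profile templates mirroring the JSON shown above
TEMPLATES = {
    "api-direct": {
        "env": {
            "ANTHROPIC_MODEL": "claude-sonnet-4-5",
            "ANTHROPIC_API_KEY": "${ANTHROPIC_API_KEY}",
        }
    }
}

def render_template(profile, environ):
    """Serialize the template, expand ${VAR} placeholders, and parse it back."""
    raw = json.dumps(TEMPLATES[profile])
    return json.loads(string.Template(raw).safe_substitute(environ))

cfg = render_template("api-direct", {"ANTHROPIC_API_KEY": "sk-ant-example"})
```

`safe_substitute` leaves unmatched placeholders intact, so a missing key shows up visibly in `settings.json` rather than crashing the setup step.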
## Advanced Features
### Automatic Backups
Every sync operation creates a timestamped backup:
```bash
/Users/bioinfo/backups/claude-config/claude-config-20251020-090348.tar.gz
```
The system keeps the last 10 backups automatically. To restore:
```bash
cc-sync list # See available backups
cc-sync restore /path/to/backup.tar.gz
```
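Because the archive names embed a `YYYYMMDD-HHMMSS` timestamp, "keep the last 10" reduces to a lexicographic sort. A sketch of that retention logic — the function name and layout are assumptions:

```python
import glob
import os

def prune_backups(backup_dir, keep=10):
    """Delete all but the newest `keep` archives.

    Timestamped filenames sort chronologically, so a plain sort suffices.
    """
    archives = sorted(glob.glob(os.path.join(backup_dir, "claude-config-*.tar.gz")))
    for old in archives[:-keep]:
        os.remove(old)
    return archives[-keep:]
```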
### Rsync for Directories
For large directories like `agents/` with 18+ files, the script uses `rsync` instead of `scp`:
```bash
rsync -avz --delete "$CLAUDE_DIR/agents/" "${target_host}:~/.claude/agents/"
```
This is more efficient and handles deletions properly.
### Color-Coded Output
The script uses ANSI color codes for clear visual feedback:
- 🟢 Green: Success
- 🔵 Blue: Info
- 🟡 Yellow: Warnings
- 🔴 Red: Errors
- 🟣 Magenta: Conflicts
<div class="callout" data-callout="info">
<div class="callout-title">Implementation Detail</div>
<div class="callout-content">
Unicode symbols (✓, ⚡, ℹ) combined with colors make terminal output scannable. Use `\033[0;32m` for green, `\033[0m` to reset. Store in variables at script start for maintainability.
</div>
</div>
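A minimal sketch of that pattern — codes defined once up front and combined by a small helper (the names here are illustrative):

```python
# ANSI codes defined once at the top, mirroring the script's approach
GREEN = "\033[0;32m"
YELLOW = "\033[1;33m"
RED = "\033[0;31m"
MAGENTA = "\033[0;35m"
RESET = "\033[0m"

def status(symbol, color, message):
    """Wrap a symbol and message in a color, then reset the terminal."""
    return f"{color}{symbol} {message}{RESET}"

print(status("✓", GREEN, "Synced agents/"))
print(status("⚡", MAGENTA, "CONFLICT: settings.json"))
```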
## Troubleshooting Common Issues
### Issue: "404 Model not found" After Sync
**Cause:** Machine-specific model configuration was overwritten.
**Solution:**
```bash
cc-setup # Reapply machine-specific config
```
### Issue: Configs Keep Getting Overwritten
**Cause:** Using `cc-push-*` instead of `cc-smart-*`.
**Solution:** Always use smart merge for regular syncing:
```bash
cc-smart-pi # Not cc-push-pi
```
### Issue: Want to See Exactly What Will Change
**Solution:** Use analyze first:
```bash
cc-analyze-pi # Shows grouped diffs
```
### Issue: Bash Version Too Old
**Error:** `declare: -A: invalid option`
**Solution:**
```bash
brew install bash
# Script checks version and provides this message
```
## Real-World Scenarios
### Scenario 1: Added New Agent on Mac
You created a new specialized agent `database-architect.md`:
```bash
cc-smart-pi # Sync to Pi
cc-smart-dgx # Sync to DGX
```
Both remote machines now have the agent, but keep their own model configs.
### Scenario 2: Modified Hooks on DGX
You improved auto-formatting hooks on DGX and want them on Mac:
```bash
# On Mac
cc-analyze-dgx # See what changed
cc-pull-dgx # Pull the changes
```
Or use interactive merge if you're unsure:
```bash
cc-merge-dgx # Choose option 2 (Pull)
```
### Scenario 3: Complete Standardization
You want all machines to match your Mac setup:
```bash
cc-backup # Safety first
cc-smart-pi # Smart merge to Pi
cc-smart-dgx # Smart merge to DGX
```
Each machine gets your agents, commands, and hooks, but keeps its own connection config.
## Best Practices
1. **Always analyze before syncing** - `cc-analyze-*` is your friend
2. **Use smart merge by default** - Only use push/pull when you're certain
3. **Run `cc-setup` once per machine** - Establishes baseline config
4. **Backups are automatic** - But manual backups before experiments don't hurt
5. **Test on one machine first** - Sync to Pi, verify, then DGX
6. **Document machine differences** - Keep notes on why configs diverge
<div class="callout" data-callout="success">
<div class="callout-title">Pro Tip</div>
<div class="callout-content">
Add `cc-backup && cc-smart-pi && cc-smart-dgx` to a daily cron job or git hook. Your configs stay synchronized automatically while you work.
</div>
</div>
## Extending the System
### Adding New Machines
To add a new machine type:
```bash
# In the script
declare -A MACHINES=(
["pi"]="[email protected]"
["dgx"]="[email protected]"
["gpu-rig"]="[email protected]" # New
)
declare -A MACHINE_PROFILES=(
["mac"]="bedrock-azure"
["pi"]="api-direct"
["dgx"]="api-direct"
["gpu-rig"]="api-direct" # New
)
```
Then add aliases:
```bash
alias cc-smart-gpu='sync-claude-config-smart.sh smart-merge gpu-rig'
```
### Custom Configuration Templates
Add your own profile types:
```bash
get_machine_config_template() {
local machine_type=$1
local profile=${MACHINE_PROFILES[$machine_type]}
case "$profile" in
"bedrock-azure")
# Azure Bedrock config
;;
"api-direct")
# Direct API config
;;
"vertex-ai") # New profile
cat << 'EOF'
{
  "env": {
    "ANTHROPIC_MODEL": "claude-sonnet-4",
    "GOOGLE_CLOUD_PROJECT": "${GCP_PROJECT_ID}",
    "VERTEX_AI_ENDPOINT": "us-central1-aiplatform.googleapis.com"
  }
}
EOF
;;
esac
}
```
### Integration with Git
For team environments, you might want to version control parts of your config:
```bash
cd ~/.claude
git init
git add agents/ commands/ skills/
git commit -m "Team-shared Claude Code configs"
git remote add origin [email protected]:team/claude-configs.git
git push -u origin main
```
Then have the sync script pull from git first, then sync to machines.
## Performance Considerations
For large configurations:
- **Rsync is faster** than scp for directories (10x speedup for 18+ files)
- **Parallel syncing** - Sync to multiple machines concurrently:
```bash
cc-smart-pi & cc-smart-dgx & wait
```
- **Compression** helps over slow connections - rsync's `-z` flag enables this
- **Exclude patterns** prevent syncing unnecessary data:
```bash
--exclude='.claude/debug/*' \
--exclude='.claude/history.jsonl'
```
## Conclusion
Syncing Claude Code configs across multiple machines doesn't have to be fragile. By separating shared configurations (agents, commands, hooks) from machine-specific settings (model endpoints, API keys), you can:
1. Keep your development workflow consistent
2. Avoid cryptic 404 errors from model misconfigurations
3. Safely sync without fear of breaking authentication
4. Recover quickly with automatic backups
The smart sync approach respects machine boundaries while keeping the good stuff synchronized. Your agents, custom commands, and carefully tuned hooks work everywhere, while each machine maintains its own connection to Claude.
Start with `cc-setup` on each machine, then use `cc-smart-*` for day-to-day syncing. You'll wonder how you ever managed without it.
<div class="callout" data-callout="info">
<div class="callout-title">Get Started</div>
<div class="callout-content">
**Clone the repo:** [github.com/BioInfo/claude-code-sync](https://github.com/BioInfo/claude-code-sync)
The full script, documentation, and examples are available with MIT license. Star the repo if you find it useful!
</div>
</div>
## Related Articles
- [[Practical Applications/model-context-protocol-implementation|Implementing the Model Context Protocol]]
- [[custom-modes-quick-start|Quick Start Guide to Custom Modes]]
- [[AI Development & Agents/building-autonomous-research-agent|Building an Autonomous Research Agent]]
- [[debugging-claude-code-with-claude|Debugging Claude Code with Claude: A Meta-Optimization Journey]]
- [[cline-roo-code-quick-start|Cline and Roo Code: Quick Start Guide]]
- [[roo-code-codebase-indexing-free-setup|Supercharging Code Discovery: My Journey with Roo Code's Free Codebase Indexing]]

---

**What's your multi-machine setup like?** Have you solved configuration drift in other development tools? Share your approaches on [[https://twitter.com/bioinfo|Twitter]] or reach out if you're building something similar.
---
<p style="text-align: center;"><strong>About the Author</strong>: Justin Johnson builds AI systems and writes about practical AI development.</p>
<p style="text-align: center;"><a href="https://justinhjohnson.com">justinhjohnson.com</a> | <a href="https://twitter.com/bioinfo">Twitter</a> | <a href="https://www.linkedin.com/in/justinhaywardjohnson/">LinkedIn</a> | <a href="https://rundatarun.io">Run Data Run</a> | <a href="https://subscribe.rundatarun.io">Subscribe</a></p>