# Auto-Sync Twitter Bookmarks to Obsidian with Bird CLI
I bookmark a lot on Twitter/X. AI tools, Claude Code plugins, research papers, workflow tips. The good stuff that shows up in my feed but I don't have time to process immediately.
The problem? Twitter bookmarks are a black hole. You save them, they pile up, and you never see them again.
I wanted these bookmarks flowing into my Obsidian vault automatically. Here's how I built it using Bird CLI and a simple Python script.
## The Problem
Twitter bookmarks are where good ideas go to die. I was bookmarking 10-20 tweets a day about AI tools, agent patterns, and dev workflows, but they just sat there. No search, no integration with my knowledge base, no way to process them systematically.
I needed bookmarks flowing into Obsidian where I could:
- Search across them semantically
- Link them to projects and notes
- Process them with AI for categorization
- Actually reference them later
## The Solution
Bird CLI + Python + LaunchAgent = automated bookmark sync every 2 hours.
### Stack
- **[Bird CLI](https://github.com/steipete/bird)**: A fast terminal client for Twitter/X by [@steipete](https://twitter.com/steipete)
- **Python script**: Polls bookmarks, deduplicates, formats to Obsidian markdown
- **LaunchAgent**: Runs every 2 hours on macOS
### Setup
**1. Install Bird CLI**
Install [Bird CLI](https://github.com/steipete/bird) with Homebrew:
```bash
brew install steipete/tap/bird
```
**2. Authenticate Bird**
Log in to X/Twitter in Safari (Bird reads your browser's session cookies), then verify Bird can see your account:
```bash
bird whoami
```
**3. Create the processor script**
`~/scripts/twitter-bookmark-processor.py`:
```python
#!/usr/bin/env python3
"""
Twitter Bookmark Processor
Polls Bird CLI for new bookmarks and processes them into Obsidian format.
"""
import json
import subprocess
import sys
from datetime import datetime
from pathlib import Path
BOOKMARKS_FILE = Path.home() / "vaults/bioinfo/Reference/Bookmarks.md"
STATE_FILE = Path.home() / ".config" / "bird" / "last_processed.json"
def get_bookmarks(count=50):
"""Fetch bookmarks from Bird CLI."""
try:
result = subprocess.run(
["bird", "bookmarks", "-n", str(count), "--json"],
capture_output=True,
text=True,
timeout=30,
)
        if result.returncode != 0:
            print(f"bird exited with code {result.returncode}: {result.stderr.strip()}", file=sys.stderr)
            return []
bookmarks = json.loads(result.stdout)
return bookmarks if isinstance(bookmarks, list) else []
except Exception as e:
print(f"Failed to fetch bookmarks: {e}", file=sys.stderr)
return []
def load_state():
"""Load the last processed bookmark ID."""
if not STATE_FILE.exists():
return None
try:
with open(STATE_FILE) as f:
return json.load(f).get("last_id")
except Exception:
return None
def save_state(bookmark_id):
"""Save the last processed bookmark ID."""
STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
with open(STATE_FILE, "w") as f:
json.dump({"last_id": bookmark_id, "timestamp": datetime.now().isoformat()}, f)
def format_bookmark(bookmark):
"""Format a bookmark into Obsidian markdown."""
tweet_id = bookmark.get("id", "")
author = bookmark.get("author", {})
username = author.get("username", "")
name = author.get("name", "")
    text = bookmark.get("text", "").replace("&amp;", "&")
quoted = bookmark.get("quotedTweet")
url = f"https://x.com/{username}/status/{tweet_id}"
output = [f"## @{username} - {name}"]
if quoted:
quoted_author = quoted.get("author", {})
quoted_text = quoted.get("text", "")
output.append(f"> *Quoting @{quoted_author.get('username', '')}:* {quoted_text}")
output.append(">")
for line in text.split("\n"):
output.append(f"> {line}")
output.append("")
output.append(f"- **Tweet:** {url}")
output.append("- **What:** [AI analysis placeholder]")
output.append("")
return "\n".join(output)
def main():
bookmarks = get_bookmarks(count=50)
if not bookmarks:
return
last_id = load_state()
# Filter to new bookmarks only
if last_id:
new_bookmarks = []
for bm in bookmarks:
if bm.get("id") == last_id:
break
new_bookmarks.append(bm)
bookmarks = new_bookmarks
    if bookmarks:
        # Format the new bookmarks and prepend them so the newest entries stay on top
        # (the date-header grouping described below is omitted here for brevity)
        entries = "\n".join(format_bookmark(bm) for bm in bookmarks)
        existing = BOOKMARKS_FILE.read_text() if BOOKMARKS_FILE.exists() else ""
        BOOKMARKS_FILE.parent.mkdir(parents=True, exist_ok=True)
        BOOKMARKS_FILE.write_text(entries + "\n" + existing)
        save_state(bookmarks[0].get("id"))
if __name__ == "__main__":
main()
```
Make it executable:
```bash
chmod +x ~/scripts/twitter-bookmark-processor.py
```
**4. Create LaunchAgent for automation**
`~/Library/LaunchAgents/com.bioinfo.twitter-bookmarks.plist`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.bioinfo.twitter-bookmarks</string>
<key>ProgramArguments</key>
<array>
<string>/opt/homebrew/bin/python3</string>
<string>/Users/YOUR_USERNAME/scripts/twitter-bookmark-processor.py</string>
</array>
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin</string>
</dict>
<key>StartInterval</key>
<integer>7200</integer>
<key>RunAtLoad</key>
<true/>
<key>StandardOutPath</key>
<string>/Users/YOUR_USERNAME/.logs/twitter-bookmarks.log</string>
<key>StandardErrorPath</key>
<string>/Users/YOUR_USERNAME/.logs/twitter-bookmarks.log</string>
</dict>
</plist>
```
**5. Load the agent**
```bash
launchctl load ~/Library/LaunchAgents/com.bioinfo.twitter-bookmarks.plist
```
## How It Works
### Polling Strategy
Every 2 hours, the script:
1. Fetches your 50 most recent bookmarks via Bird CLI
2. Checks the last processed bookmark ID from state file
3. Filters to only new bookmarks since last run
4. Formats them as Obsidian markdown
5. Prepends to your bookmarks file
6. Saves the newest bookmark ID
### Deduplication
The script tracks state in `~/.config/bird/last_processed.json`:
```json
{
"last_id": "2010330642714894391",
"timestamp": "2026-01-12T09:40:33.069702"
}
```
On each run, it only processes bookmarks newer than `last_id`, preventing duplicates.
### Format
Bookmarks get formatted as:
```markdown
# Sunday, January 11, 2026
## @username - Display Name
> Tweet text here
> Multiple lines preserved
- **Tweet:** https://x.com/username/status/123456789
- **What:** [AI analysis placeholder]
```
### Chronological Sorting
The script groups bookmarks under date headers and writes them in reverse chronological order (newest first), so your most recent bookmarks always appear at the top of the file.
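The grouping logic isn't shown in the script above, but a minimal sketch looks like this. It reuses `format_bookmark` from the main script, and it assumes each bookmark object carries a `createdAt` ISO timestamp; check your `bird bookmarks --json` output for the actual field name and format:
```python
from collections import defaultdict
from datetime import datetime

def group_by_date(bookmarks):
    """Group formatted bookmarks under '# Weekday, Month D, YYYY' headers, newest day first."""
    groups = defaultdict(list)
    for bm in bookmarks:
        raw = bm.get("createdAt")  # assumed field name and ISO 8601 format
        created = datetime.fromisoformat(raw.replace("Z", "+00:00")) if raw else datetime.now()
        groups[created.date()].append(format_bookmark(bm))
    output = []
    for day in sorted(groups, reverse=True):  # newest date first
        output.append(f"# {day.strftime('%A, %B %d, %Y')}")
        output.extend(groups[day])
    return "\n".join(output)
```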
## Why This Approach Works
### Lightweight
No API keys, no OAuth app setup, none of the official API's strict rate limits. [Bird CLI](https://github.com/steipete/bird) reads cookies from your browser, and the Python script is about 200 lines with zero dependencies beyond the stdlib.
### Resilient
If Bird fails to fetch bookmarks, the script logs the error and exits cleanly. LaunchAgent retries in 2 hours. State file ensures no bookmarks are lost or duplicated.
### Extensible
The "What:" placeholder is perfect for AI analysis. You can pipe bookmarks through Claude to categorize them:
```python
def analyze_bookmark(text):
# Call Claude API to categorize
# "AI tool", "Research paper", "Workflow tip", etc.
pass
```
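Filling that in with the official `anthropic` Python SDK might look like this rough sketch (the model ID is a placeholder and the category labels are just examples):
```python
import anthropic

def analyze_bookmark(text):
    """Ask Claude to pick a single category for a bookmarked tweet."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whichever model you prefer
        max_tokens=20,
        messages=[{
            "role": "user",
            "content": (
                "Categorize this tweet as exactly one of: AI tool, Research paper, "
                f"Workflow tip, Other. Reply with only the category.\n\n{text}"
            ),
        }],
    )
    return response.content[0].text.strip()
```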
Or add link extraction to expand t.co URLs:
```python
def expand_urls(text):
# Resolve shortened URLs
# Extract GitHub repos, papers, blog posts
pass
```
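A stdlib-only sketch of that, following each t.co redirect to its final destination:
```python
import re
import urllib.request

TCO_PATTERN = re.compile(r"https://t\.co/\w+")

def expand_urls(text):
    """Replace t.co short links in a tweet with their resolved destinations."""
    def resolve(match):
        short_url = match.group(0)
        try:
            # urlopen follows redirects; geturl() returns the final URL
            with urllib.request.urlopen(short_url, timeout=10) as response:
                return response.geturl()
        except Exception:
            return short_url  # leave the short link alone if resolution fails
    return TCO_PATTERN.sub(resolve, text)
```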
## Real-World Usage
I've been running this for a few weeks. Here's what I've learned:
### Search is Key
Once bookmarks are in Obsidian, you can search across them. With semantic search via vector embeddings, you can find tweets about "agent orchestration patterns" even if they never used those exact words.
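If you want to experiment with that outside of an Obsidian plugin, here's a rough sketch using `sentence-transformers` (the model choice is arbitrary). Split `Bookmarks.md` on its `## ` headers to get the entries:
```python
from sentence_transformers import SentenceTransformer, util

def semantic_search(entries, query, top_k=5):
    """Rank bookmark entries by embedding similarity to a natural-language query."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # small local model, arbitrary choice
    entry_vecs = model.encode(entries, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, entry_vecs)[0]
    ranked = sorted(zip(entries, scores.tolist()), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]
```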
### Processing Workflow
I have a weekly review where I:
1. Open `Bookmarks.md`
2. Scan new entries (everything since last review)
3. Extract actionable items to TODO lists
4. Link relevant tweets to project notes
5. Archive or delete noise
### Integration Points
Bookmarks feed into:
- **Project notes**: Link relevant tweets to active projects
- **Research notes**: Pull in papers and technical threads
- **Tool tracking**: Monitor new AI tools and frameworks
- **Learning queue**: Tutorials and guides to try later
## Variations
### Different Intervals
For less frequent syncing:
```xml
<key>StartInterval</key>
<integer>21600</integer> <!-- 6 hours -->
```
For near-real-time syncing (every minute):
```xml
<key>StartInterval</key>
<integer>60</integer>
```
### Different Formats
Want a flat list instead of grouped by date?
```python
def format_flat(bookmarks):
output = []
for bm in bookmarks:
output.append(format_bookmark(bm))
return "\n".join(output)
```
### Multiple Collections
Bird supports bookmark folders (collections). Sync different folders to different files:
```bash
bird bookmarks --folder-id abc123 -n 50 --json
```
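A small sketch of wiring that up, reusing `format_bookmark` from the main script. The folder IDs and target paths below are made up, and per-folder dedup state is left out:
```python
import json
import subprocess
from pathlib import Path

# Hypothetical folder-ID -> output-file mapping; substitute your own IDs and paths
FOLDER_TARGETS = {
    "abc123": Path.home() / "vaults/bioinfo/Reference/AI-Tools.md",
    "def456": Path.home() / "vaults/bioinfo/Reference/Papers.md",
}

def sync_folder(folder_id, target):
    """Fetch one bookmark folder via Bird CLI and prepend it to its own note."""
    result = subprocess.run(
        ["bird", "bookmarks", "--folder-id", folder_id, "-n", "50", "--json"],
        capture_output=True, text=True, timeout=30,
    )
    if result.returncode != 0:
        return
    bookmarks = json.loads(result.stdout)
    entries = "\n".join(format_bookmark(bm) for bm in bookmarks)  # from the main script
    existing = target.read_text() if target.exists() else ""
    target.write_text(entries + "\n" + existing)

for folder_id, target in FOLDER_TARGETS.items():
    sync_folder(folder_id, target)
```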
## Troubleshooting
### "No Twitter cookies found in Safari"
Bird reads your session from Safari by default, so make sure you're logged in to X/Twitter there. If you use Chrome or Firefox instead, point Bird at that browser's profile (here, Chrome):
```bash
bird --chrome-profile Default bookmarks -n 5
```
### LaunchAgent not running
Check if it's loaded:
```bash
launchctl list | grep twitter-bookmarks
```
View logs (the plist's `StandardOutPath`/`StandardErrorPath` write here; create `~/.logs` if it doesn't exist):
```bash
tail -f ~/.logs/twitter-bookmarks.log
```
### Bookmarks out of order
The script sorts chronologically. If existing entries are misordered, run a one-time reorganization:
```python
# Parse all date headers
# Sort by actual datetime
# Rewrite file in proper order
```
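Here's a rough one-time pass along those lines. It assumes the file only contains the `# Weekday, Month D, YYYY` headers this script writes, and it drops any section whose header isn't a date, so back up the file first:
```python
import re
from datetime import datetime
from pathlib import Path

BOOKMARKS_FILE = Path.home() / "vaults/bioinfo/Reference/Bookmarks.md"

def reorganize():
    """Re-sort the date sections of Bookmarks.md so the newest day comes first."""
    text = BOOKMARKS_FILE.read_text()
    # Split at each top-level header, keeping the header line with its section body
    sections = re.split(r"(?m)^(?=# )", text)
    dated = []
    for section in sections:
        if not section.strip():
            continue
        header = section.splitlines()[0].lstrip("# ").strip()
        try:
            day = datetime.strptime(header, "%A, %B %d, %Y")
        except ValueError:
            continue  # anything that isn't a date header gets dropped
        dated.append((day, section.rstrip() + "\n"))
    dated.sort(key=lambda pair: pair[0], reverse=True)  # newest first
    BOOKMARKS_FILE.write_text("\n".join(section for _, section in dated))

reorganize()
```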
## Alternatives
### Official Twitter API
Requires developer account, app creation, OAuth flow. Rate limits are strict (75 requests per 15 minutes for bookmark endpoints).
[Bird CLI](https://github.com/steipete/bird) bypasses all of this by reading browser cookies.
### Browser Extensions
Extensions like [Obsidian Web Clipper](https://github.com/kepano/obsidian-web-clipper) can save individual tweets, but require manual clicking.
This approach is fully automated and processes all bookmarks in batch.
### Readwise + Obsidian
Readwise has a Twitter integration, but it's designed for highlights and threads, not bookmarks. Also requires a paid subscription.
## Next Steps
This system handles the sync, but the real value comes from processing. I'm experimenting with:
1. **AI categorization**: Claude analyzes each bookmark and tags it (tool, paper, workflow, etc.)
2. **Link expansion**: Resolve t.co URLs and fetch page titles
3. **Thread extraction**: When a bookmark is part of a thread, fetch the entire thread
4. **Duplicate detection**: Identify when multiple people share the same link
5. **Priority scoring**: Rank bookmarks by relevance to active projects
The syncing is solved. Now it's about turning raw bookmarks into actionable knowledge.
---
### Related Articles
- [[AI-Systems-Architecture/building-knowledge-graphs-from-unstructured-data|Building Knowledge Graphs from Unstructured Data]]
- [[Practical-Applications/obsidian-as-a-second-brain|Obsidian as a Second Brain for AI Practitioners]]
- [[AI-Development-Agents/automating-knowledge-capture-with-ai-agents|Automating Knowledge Capture with AI Agents]]
---
<p style="text-align: center;"><strong>About the Author</strong>: Justin Johnson builds AI systems and writes about practical AI development.</p>
<p style="text-align: center;"><a href="https://justinhjohnson.com">justinhjohnson.com</a> | <a href="https://twitter.com/bioinfo">Twitter</a> | <a href="https://www.linkedin.com/in/justinhaywardjohnson/">LinkedIn</a> | <a href="https://rundatarun.io">Run Data Run</a> | <a href="https://subscribe.rundatarun.io">Subscribe</a></p>