Making AI coding sessions persistent across agents
Drift is a tool that makes AI coding sessions persistent across different AI agents like Claude, GPT, and Gemini by capturing decisions, rejected approaches, and context in a structured markdown brief. It enables seamless handoffs between models without re-explaining work, stores reasoning in git alongside code, and supports local-first, vendor-neutral AI collaboration. The tool integrates with existing workflows via git notes and provides commands to trace, blame, and audit AI-generated code. This turns transient AI chat histories into durable, shareable project memory.
- Drift creates portable markdown briefs that capture AI coding context, decisions, and rejected paths for smooth handoffs between different AI agents.
- It stores AI session data locally and binds it to git commits using git notes, enabling traceability without altering commit history.
- The tool supports reverse lookup with `drift blame` to see which AI agent or human contributed each line of code and why.
- Drift integrates with MCP-compatible clients and runs as a local daemon to continuously capture AI coding activity.
- Users can audit AI contributions per commit with `drift log` and control cost and privacy via configuration, including model choice and data-in-git settings.
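The git-notes binding described in the second bullet can be sketched with plain git commands. The JSON payload and the `drift` notes ref name below are illustrative assumptions, not drift's actual schema:

```shell
# Sketch: attach session metadata to a commit via git notes in a
# dedicated ref, so commit history and plain `git log` stay untouched.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Add OAuth login"
sha=$(git rev-parse HEAD)
# Hypothetical payload; drift's real note format may differ.
git notes --ref=drift add -m '{"agent":"claude-code","accepted":7}' "$sha"
git notes --ref=drift show "$sha"
```

Because the notes live under their own ref (`refs/notes/drift`), they can be fetched, pushed, and merged independently of the branch history they annotate.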
drift_ai: vendor-neutral handoff for AI coding tasks between Claude, GPT, Gemini, DeepSeek, and local LLMs. Reads from Claude Code, Codex, Cursor, and Aider. Local-first.

🧠 AI coding breaks when you switch agents

Claude stalls. Codex refuses. Cursor goes off-track. You spend 30 minutes re-explaining context you already solved: what decisions were made? What approaches already failed? Which file were you editing? None of that survives a session.

❌ Git tracks code, not AI reasoning

Git shows what changed. It does not tell you:

- why it was written
- what was rejected
- which agent produced it
- what context led here

That reasoning disappears.

✅ Drift solves this

Drift is git blame for AI decisions.

⚡ Before / After

Before:

- copy-paste chat history
- re-explain everything
- lose decisions
- the AI repeats mistakes

After:

- structured markdown brief
- decisions and rejected paths included
- resume instantly
- no re-explaining

The problem: your AI coding agent stalled. It refused, hit a rate limit, or just got dumb. Now you need to transfer 30 minutes of context to another agent. Re-pasting a chat history doesn't work; the new agent doesn't know which decisions are settled, which approaches you already rejected, or which file you were halfway through.

`drift handoff` packages your in-progress task into a markdown brief any LLM can absorb cold:

```shell
$ drift handoff --branch feature/oauth --to claude-code
⚡ scanning .prompts/events.db
⚡ extracting file snippets and rejected approaches
⚡ compacting brief via claude-opus-4-7
✅ written to .prompts/handoffs/2026-04-25-1530-feature-oauth-to-claude-code.md
```

The brief lists what you've decided, what you tried and rejected, what's open, and where to resume. Paste it into the next agent and it picks up mid-task without you re-explaining.
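A brief of the kind `drift handoff` produces might be shaped like the following. This is a hypothetical sketch of the sections the text names (decisions, rejected paths, open items, resume point); the file names and details are invented for illustration, not drift's actual format:

```markdown
# Handoff: feature/oauth → claude-code

## Decided
- Use the authorization-code flow; store refresh tokens server-side.

## Tried and rejected
- Implicit flow: rejected, no refresh tokens and weaker security.

## Open
- Token rotation on logout is not implemented yet.

## Resume at
- src/auth/callback.ts: the state-parameter check is half-written.
```

Because the brief is plain markdown, any agent can ingest it as ordinary prompt context with no drift-specific tooling on the receiving side.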
drift is built on top of an attribution engine that watches the local session logs of your AI coding agents (Claude Code, Codex, Cursor, Aider, ...), LLM-compacts each completed session, stores the result in .prompts/ inside your git repo, and binds each session to its matching commit via git notes. The handoff feature is the cross-agent task-transfer wedge; the attribution engine powers `drift blame` and `drift log` underneath.

After installation, `drift log` still shows multi-agent attribution per commit:

```
commit abc1234 — Add OAuth login
💭 [claude-code] 7 events accepted, 0 rejected
💭 [codex] 3 events accepted, 1 rejected
✋ [human] 2 manual edits
```

…and `drift blame` still resolves any line back to its full timeline. See docs/VISION.md for the broader thesis.

Why drift exists

AI coding stopped being a single-agent workflow. A real session today involves:

- Switching agents mid-task: Claude rate-limits, Codex stalls, or the model goes off the rails, refusing tasks, hallucinating tool calls, or producing low-quality output. You move to another agent and burn 10 minutes re-explaining what you'd already decided and rejected.
- Forgetting your own work: a week later, git blame tells you which line you wrote, but not which prompt produced it, which approach was settled, or why.
- Onboarding teammates with zero context: they see the code, but the reasoning lived in someone else's chat history on someone else's laptop.

drift turns that disposable AI trail into durable project memory:

- Capture, locally: `drift capture` (and `drift watch` for live mode) reads the session JSONL your agents already write under ~/.claude/projects/ and ~/.codex/sessions/. Nothing leaves your machine except an optional Anthropic…
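The capture step reads session logs in JSONL form, one JSON object per line, which makes per-agent accounting a simple line-oriented scan. A minimal sketch, with a toy log whose field names are assumptions for illustration (not the agents' real schemas):

```shell
# Build a toy session log in JSONL shape: one JSON object per line.
set -e
log=$(mktemp)
cat > "$log" <<'EOF'
{"agent":"claude-code","type":"tool_call","accepted":true}
{"agent":"claude-code","type":"edit","accepted":true}
{"agent":"codex","type":"edit","accepted":false}
EOF
# Each record occupies exactly one line, so a plain grep counts
# events per agent without a JSON parser.
grep -c '"agent":"claude-code"' "$log"   # 2
grep -c '"agent":"codex"' "$log"         # 1
```

A real implementation would parse each line as JSON rather than grep it, but the one-record-per-line property is what lets a watcher tail these logs incrementally.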
This excerpt is published under fair use for community discussion. Read the full article at GitHub.