
How I use AI in 2026

Federico Paolinelli · 9 min read

How I use AI in my daily work as a maintainer and developer, from coding to triaging PRs and CI failures

Original article: Federico Paolinelli on GitHub

Full article excerpt:

How I use AI in 2026
April 25, 2026 · 10-minute read

It's funny

I had a draft post sitting in my local repo for a while, where I was about to scream about how AI is overestimated. Well, that post aged pretty badly. I never published it, and looking back at the notes I'm glad I didn't. So what I'm going to write today will only be about my current workflow and how I actually use AI in my daily work: no hype, no predictions, just what I've found useful.

My setup

I run Claude Code with --dangerously-skip-permissions inside a libvirt VM. Running it in a VM adds a layer of isolation I'm comfortable with when giving an agent broad permissions to run commands. My configuration and scripts for setting this up live at clauderunner. (A rough sketch of the commands involved appears at the end of this excerpt.)

I work with tmux and keep at most 3 sessions running in parallel, each working on a different task. Beyond that, it becomes hard to keep up: I want to review what each agent produces before moving forward, and three is about the limit where I can do that without losing track. I found Mitchell Hashimoto's suggestion to always have an agent running interesting, and I'm trying to build my own variation of it. It's also true that sometimes I need to stop and gather all the open threads I left hanging, so I don't want to have too many of them.

I also use caveman to cut down Claude's verbosity and reduce the number of tokens a bit. By default it narrates everything it's doing in great detail, which I find more distracting than helpful. I don't need the narration, I just need the results.

Adding new features to the projects I am working on

This is the most obvious use case and probably where I get the most value. I work primarily on MetalLB and OpenPerouter, both of which are non-trivial Go projects with real users, so the bar for quality is high.

I use speckit intensively. Unsurprisingly, the more time I spend upfront drafting a precise spec and carefully reviewing each intermediate artifact (the plan, the task breakdown), the less I need to iterate on the generated code. Vague instructions produce vague code. A well-structured spec acts as a forcing function that keeps the agent on track and significantly reduces the number of correction cycles. Also, if I have a specific structure or architecture in mind and describe it carefully, the quality of the output is much better.

For larger features I enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 (see the sketch after the excerpt) and spin up a team: typically 3 coding agents working in parallel, 1 reviewer, and 1 QE agent writing tests alongside the implementation. For smaller, well-scoped changes a context.md file with hand-written instructions is sufficient (an invented example appears after the excerpt); no need to over-engineer the scaffolding.

Once the code is generated I use diffity to review it. It gives me a convenient way to annotate the diff with comments and then ask the agents to iterate on them. It's a tighter feedback loop than editing files by hand.

So why am I not pushing out one thing after the other? The initial outcome, even after I've reviewed it, is never the finished product. It still has to pass CI (and we know it's painful!), and it still has to survive the GitHub review process. Real reviewers (or other agents) catch things that neither I nor the agent noticed. When a comment is straightforward to address, I just tell the agent to read the review and fix it. For anything more subtle, I stay involved.

On pushing code without reviewing it

I feel it's unfair to push code I…
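A minimal sketch of the VM-isolated setup described under "My setup", assuming a libvirt VM reachable over SSH. The VM name, repo path, and task names are invented for illustration; the post names only the tools (libvirt, tmux, Claude Code) and the --dangerously-skip-permissions flag, and clauderunner presumably automates steps like these:

```sh
# Hypothetical names throughout: "agent-vm" and the paths are not from the post.

# Boot the libvirt VM that sandboxes the agent.
virsh start agent-vm

# One detached tmux session per task, capped at three so each agent's
# output can still be reviewed before moving forward.
for task in bugfix-123 feature-x docs-pass; do
  tmux new-session -d -s "$task" \
    "ssh agent-vm 'cd ~/src/project && claude --dangerously-skip-permissions'"
done
```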
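For the larger-feature flow, the experimental variable the post names can be scoped to a single invocation rather than exported globally; the per-run prefixing here is ordinary shell, not something the post prescribes:

```sh
# Variable name taken from the post; enabling it only for this run
# leaves the default single-agent workflow untouched.
CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude --dangerously-skip-permissions
```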
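And for the smaller, well-scoped changes, a hand-written context.md might look like the following. The goal, scope, and constraints are entirely invented; the post says only that such a file is sufficient for small tasks:

```sh
# Invented example of a hand-written context.md for a small, well-scoped change.
cat > context.md <<'EOF'
Goal: make the health-check retry interval configurable.
Scope: touch only pkg/healthcheck; leave the public API unchanged.
Constraints: default stays 30s; add a unit test covering the new option.
EOF
```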

This excerpt is published under fair use for community discussion. Read the full article at GitHub.
