WeSearch

I corrected my own benchmark claim from 91.5% to 88%. Here's what changed.


A week after shipping a flattering tokens-saved number for my AI context tool, I noticed it was an apples-to-oranges comparison. Here's the workload-matched redo, the smaller honest number, and what I learned about benchmarking small dev tools.

Original article: DEV.to (Top)
Opening excerpt (first ~120 words)

Mohan Krishna Alavala · Posted on Apr 30 · #ai #opensource #benchmarking #tooling

A week ago I shipped v4.4.3 of context-router with a number on the README: "91.5% fewer tokens than code-review-graph." It was true in the narrow sense that both numbers came from real benchmark runs. It was also wrong in every way that matters.

Excerpt limited to ~120 words for fair-use compliance. The full article is at DEV.to (Top).
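The correction in the excerpt comes down to how a tokens-saved percentage is computed. A minimal Python sketch (all token counts here are hypothetical, chosen only to illustrate the shape of the error, not taken from the article's benchmarks) shows how comparing two tools on different workloads can inflate the figure, and how a workload-matched rerun shrinks it:

```python
def reduction_pct(baseline_tokens: int, tool_tokens: int) -> float:
    """Percent fewer tokens used than the baseline tool."""
    return 100.0 * (baseline_tokens - tool_tokens) / baseline_tokens

# Apples-to-oranges: the tool was measured on one workload, the
# baseline on a different (larger) one, so the ratio is meaningless.
mismatched = reduction_pct(baseline_tokens=120_000, tool_tokens=10_200)

# Workload-matched: both tools run on the same set of tasks, and the
# per-task reductions are averaged. (Illustrative numbers only.)
matched_runs = [(54_000, 6_480), (80_000, 9_600), (66_000, 7_920)]
matched = sum(reduction_pct(b, t) for b, t in matched_runs) / len(matched_runs)

print(f"mismatched: {mismatched:.1f}%")  # 91.5% — flattering but invalid
print(f"matched:    {matched:.1f}%")     # 88.0% — smaller, honest
```

The flattering number isn't a lie about the raw runs; it's a lie about what was held constant between them. Pinning both tools to the same task set is what makes the smaller number defensible.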

