WeSearch

What benchmark would you build for “reply quality” in SDR generation? [D]


Working on evaluating AI-generated outbound (SDR-style emails plus follow-ups), and I'm running into a weird problem. Everyone talks about better personalisation or higher reply rates, but when you actually try to benchmark quality it gets messy fast. A few things we've looked at:

a) reply rate (obvious, but noisy, with a delayed signal)
b) positive vs. negative replies (hard to label cleanly at scale)
c) factual accuracy about the prospect/company
d) how much editing a human has to do before…
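For what it's worth, metrics (a), (b), and (d) are cheap to compute once you log drafts alongside what actually got sent. A minimal sketch below, using stdlib only — the field names (`draft`, `sent`, `replied`, `reply_label`) and the use of `difflib` similarity as an edit-effort proxy are my assumptions, not anything standard:

```python
from difflib import SequenceMatcher

def edit_effort(draft: str, sent: str) -> float:
    """Proxy for metric (d): how much a human changed the draft.
    0.0 = sent verbatim, 1.0 = fully rewritten. Uses difflib's
    sequence similarity, which is a crude but dependency-free proxy."""
    return 1.0 - SequenceMatcher(None, draft, sent).ratio()

def benchmark(emails):
    """emails: list of dicts with illustrative keys:
    'draft' (model output), 'sent' (what the SDR sent),
    'replied' (bool), 'reply_label' ('positive'/'negative'/None).
    Returns metrics (a), (b), and (d) from the post."""
    n = len(emails)
    reply_rate = sum(e["replied"] for e in emails) / n
    labeled = [e for e in emails if e["reply_label"] is not None]
    positive_share = (
        sum(e["reply_label"] == "positive" for e in labeled) / len(labeled)
        if labeled else 0.0
    )
    mean_edit = sum(edit_effort(e["draft"], e["sent"]) for e in emails) / n
    return {"reply_rate": reply_rate,
            "positive_share": positive_share,
            "mean_edit_effort": mean_edit}

# Toy data, purely illustrative.
emails = [
    {"draft": "Hi Sam, saw your Series B announcement...",
     "sent": "Hi Sam, saw your Series B announcement...",
     "replied": True, "reply_label": "positive"},
    {"draft": "Hello, quick question",
     "sent": "Hey Priya, quick question about your data stack",
     "replied": False, "reply_label": None},
]
print(benchmark(emails))
```

This sidesteps the delayed-signal problem for (a) only by pretending reply outcomes are already known; in practice you'd need a fixed attribution window. Metric (c), factual accuracy, is the one that resists cheap automation — it needs either human grading or retrieval against a trusted source per prospect.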

Machine Learning
