I spent 6 months testing every major prompting technique. Here's what actually works (and what's overhyped) — with real examples.
I work as an AI engineer, and I've been obsessively documenting my results across GPT-4, Claude, and Gemini. This is the distillation of hundreds of hours of testing. No fluff, just what moved the needle.

TL;DR

- Chain-of-thought still reigns supreme — but only when you scaffold it correctly
- Role prompting alone is weak; combine it with persona + goal + constraint
- XML tags outperform markdown in structured prompts by ~30% accuracy
- Negative examples ("don't do X") are underused and wildly effective
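To make the "persona + goal + constraint" and XML-tag points concrete, here is a minimal sketch of how such a prompt might be assembled. The function name, tag names, and example strings are all illustrative assumptions, not part of any model's API:

```python
# Illustrative sketch: composing a structured prompt that combines
# persona + goal + constraint, using XML-style tags rather than markdown.
# All names here (build_prompt, the tag choices) are hypothetical.

def build_prompt(persona: str, goal: str, constraints: list[str], task: str) -> str:
    """Assemble a structured prompt wrapped in XML-style tags."""
    constraint_block = "\n".join(f"- {c}" for c in constraints)
    return (
        f"<persona>{persona}</persona>\n"
        f"<goal>{goal}</goal>\n"
        f"<constraints>\n{constraint_block}\n</constraints>\n"
        f"<task>{task}</task>"
    )

prompt = build_prompt(
    persona="You are a senior Python reviewer.",
    goal="Find correctness bugs, not style nits.",
    constraints=[
        "Don't rewrite the code wholesale.",   # negative example
        "Don't comment on formatting.",        # negative example
    ],
    task="Review the attached diff.",
)
print(prompt)
```

Note the constraints are phrased as negative examples ("don't do X"), matching the last TL;DR point.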