"Algorithm" has become a catch-all word for whatever the feed product is doing under the hood. In practice it usually means a ranking model trained on engagement: clicks, dwell, scroll, share, return. The model decides which headlines you see and in what order, and the publishers downstream of it bend their reporting toward whatever traits the model rewards. WeSearch is, deliberately, news without that.
This page is the precise version of the claim — exactly which algorithmic layers other news products use that WeSearch doesn't, and what the alternatives are.
Layers we don't run
Ranking model. No model scores headlines for you. No "for you" feed. No Reels-style algorithmic surface. The home feed is sorted by publish time, period.
Engagement-velocity boost. Many algorithmic feeds detect that a story is "going viral" within minutes and amplify it accordingly. We don't. A story trends with us when many distinct anonymous handles react to it, and that surfaces only in the explicitly-labeled trending row, not in the main feed.
Personalization vector. No model that learns your preferences from prior taps and shows you more of the same thing. We don't have a per-reader profile to learn from.
Topic clustering. No semantic-similarity model that groups stories. Categories on WeSearch come from a static directory we maintain — each source is hand-classified into a topic — not from cluster output.
Recommendation system. No "you might also like." No related-stories model. The "more from this source" block on a story page is exactly that — a list of recent stories from the same publisher, by publish time.
Engagement-prediction layer. No model that predicts whether you'll click a headline and uses that to reorder.
Why we don't run them
Each of those layers has the same structural property: it makes the feed reflect what the platform thinks will keep you engaged, which is not the same as what's actually happening. A platform that runs them at scale slowly bends what news is for its readers — toward headlines that test well in the model rather than headlines that are most informative. The longer argument.
What we do instead
- Chronology. Stories sort by publish time, newest first.
- Dedup by URL. The same link arriving from three feeds shows up once.
- Categories from a static directory. You filter manually; the bucket is hand-classified.
- Trending counts, not predicts. Distinct-reactor count over 24 hours.
- Most-discussed counts, not predicts. Comment count over 24 hours.
- Pulse is a window, not a feed. Pulse shows community signal in real time but doesn't reorder anyone's home feed.
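The whole feed pipeline above fits in a few lines, which is the point. Here is an illustrative sketch, not WeSearch's actual code — the `Story` type, its field names, and the shape of the reaction data are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical schema for illustration only.
@dataclass
class Story:
    url: str
    title: str
    published: datetime
    reactions: list = field(default_factory=list)  # (handle, timestamp) pairs
    comment_count: int = 0

def home_feed(stories):
    """Home feed: dedup by URL, then newest first. No scoring model."""
    seen, unique = set(), []
    for s in stories:
        if s.url not in seen:
            seen.add(s.url)
            unique.append(s)
    return sorted(unique, key=lambda s: s.published, reverse=True)

def trending(stories, now):
    """Trending row: ranked by distinct reacting handles in the last 24h.
    A count of what happened, not a prediction of what will engage."""
    cutoff = now - timedelta(hours=24)
    def distinct_reactors(s):
        return len({handle for handle, t in s.reactions if t >= cutoff})
    return sorted(stories, key=distinct_reactors, reverse=True)
```

Note what is absent: no per-reader input anywhere. `home_feed` and `trending` take the same arguments for every reader, so every reader sees the same ordering.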
What about AI on story pages?
We use AI for two narrow things: a 3–5 sentence TL;DR per story page, clearly labeled, and a daily editorial note at /daily, also labeled. Neither affects feed ordering. Neither personalizes. The TL;DR is generated once per story and is the same for every reader who lands on that page.
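"Generated once per story, same for every reader" is just a cache keyed by story, never by reader. A minimal sketch, assuming a hypothetical `generate` callable standing in for the model call — none of these names are WeSearch's real API:

```python
# Summaries keyed by story id; there is no reader id anywhere in this path.
_summaries = {}

def tldr(story_id, generate):
    """Return the story's TL;DR, generating it at most once.
    Every reader who lands on the page gets the identical cached text."""
    if story_id not in _summaries:
        _summaries[story_id] = generate(story_id)  # one model call per story
    return _summaries[story_id]
```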
What about search?
Search ranks by lexical relevance to your query, not by engagement. A search for "kashmir" returns recent stories that mention Kashmir, ranked by how well they match the query. No signal learned from your clicks is injected into the ranking.
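The simplest reading of that contract looks like this — an illustrative sketch, assuming a plain dict per story and a title-only match with recency ordering among matches (the real matching is presumably richer, but the property that matters is that the reader's history appears nowhere):

```python
from datetime import datetime

def search(stories, query):
    """Lexical search sketch: a story matches if the query appears in its
    title, case-insensitively; matches sort newest first. No click feedback,
    no per-reader signal. Field names are illustrative."""
    q = query.lower()
    hits = [s for s in stories if q in s["title"].lower()]
    return sorted(hits, key=lambda s: s["published"], reverse=True)
```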
The constraint, made plain
If we ever introduce an algorithmic layer that affects feed ordering or what reaches you in push notifications, we'll publish that fact prominently and explain what it does. Currently there is no such layer. The home feed today is the same chronological, deduplicated feed it was a year ago and the same one it'll be a year from now.