"Unbiased" is impossible to fully achieve and easy to claim falsely, so let's be honest about what we mean. WeSearch tries to remove the structural biases that make most online news discussion tilt toward outrage, in-group signaling, and audience capture. We can't make individual humans unbiased, but we can refuse to run the systems that amplify the worst version of them.
Three biases we removed by design
1. Source bias from a narrow feed
Most news communities form around a single outlet or a single ideological lens, so the reader's view of "what's happening" is filtered through one editorial line. WeSearch pulls from 700+ sources across the political spectrum, every continent, and every major beat. The home feed mixes left, right, and center outlets from the US, EU, India, China, Africa, and Australia, in chronological order. You can filter to a single source if you want, but the default is the cross-section.
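A chronological cross-source feed is mechanically simple: interleave the sources' time-sorted streams by timestamp, with no weighting and no personalization. The sketch below is hypothetical (the names `merged_feed`, `left`, and `right` are illustrative, not WeSearch's actual code), assuming each source yields `(timestamp, headline)` pairs sorted newest-first:

```python
import heapq
from datetime import datetime, timezone

def merged_feed(sources):
    """Merge several newest-first feeds into one newest-first feed.

    heapq.merge assumes each input is already sorted; with
    reverse=True it expects and preserves descending order,
    so the only ordering signal is the timestamp itself.
    """
    return heapq.merge(*sources, key=lambda item: item[0], reverse=True)

# Two toy "sources", each already sorted newest-first.
left = [(datetime(2024, 5, 2, 9, tzinfo=timezone.utc), "Left outlet story")]
right = [(datetime(2024, 5, 2, 10, tzinfo=timezone.utc), "Right outlet story")]

feed = list(merged_feed([left, right]))
```

Because the merge key is purely the timestamp, no source can buy position in the feed by being louder or more engaging; the newer story simply appears first.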
2. Engagement bias from algorithmic amplification
An engagement-ranked feed rewards outrage because outrage produces taps. Over time, the discussion that survives is the discussion that drove engagement, which selects for the loudest takes and against the careful ones. WeSearch has no ranking model. The thread you see is the thread, by recency or by simple count. There is no boost for outrage and no suppression of nuance.
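The "no ranking model" claim above can be made concrete. In this hypothetical sketch (the `Comment` shape and field names are assumptions for illustration), engagement data may exist in the record, but it deliberately never enters the sort key:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    id: int
    created_at: float  # Unix timestamp
    likes: int         # stored, but never consulted for ordering

def thread_order(comments):
    """Order a thread by recency alone.

    A plain sort on created_at: no engagement score, no boost
    for outrage, no suppression of nuance -- the thread you see
    is just the thread, newest first.
    """
    return sorted(comments, key=lambda c: c.created_at, reverse=True)
```

Note that a heavily liked but older comment sorts below a fresh one; swapping `created_at` for an engagement-weighted score is exactly the design choice this section rejects.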
3. Identity bias from public follower counts
On platforms with public follower counts and persistent public identities, comments are partly performance — saying the thing your audience expects rather than the thing you actually think. WeSearch is anonymous by default. Your handle isn't tied to a real-name profile, doesn't accumulate followers in a leaderboard sense, and gives you nothing to optimize for. The only currency in the room is whether your point lands.
What we still can't do
We can't remove the bias of the underlying publishers — they each have editorial lines we don't control. We can't remove the bias of the readers — humans are partial by nature. We can't make the comment moderator (one human) perfectly impartial. What we can do is publish our standards openly, refuse to run the structural amplifiers above, and apologize plainly when we get specific calls wrong.
How threads stay substantive in practice
The combination of (anonymous handles + no algorithmic boost + cross-spectrum sources + small audience) means the comments under a typical WeSearch story tend to:
- Engage with the actual story rather than the source's reputation.
- Cite specific facts rather than trading identity-based attacks.
- Disagree without escalating, because there's no follower-count incentive to dunk.
- Trail off when the story stops being interesting, rather than getting boosted by a virality model into rage-bait status.
When threads go bad
It happens. We hide comments that target individuals, dox non-public figures, post spam, or constitute incitement. We don't hide comments for being unpopular or wrong; people read with their judgment intact. The full moderation policy is published openly.