19 stories tagged with #deepseek, in publish-time order across the WeSearch catalog. Tag pages update as new stories ingest.
DeepSeek V4 pricing is genuinely silly; did the math and now I'm questioning my entire stack
A 3D Flappy Bird side-scroller game built with DeepSeek V4 Pro
100M tokens for $2.65 (DeepSeek V4 Pro)
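The headline figure is easy to sanity-check. A quick back-of-the-envelope conversion to the more familiar per-million-token rate (this is just arithmetic on the headline number, not an official rate card):

```python
# Convert the headline figure (100M tokens for $2.65) to a per-1M-token rate.
total_tokens = 100_000_000
total_cost_usd = 2.65

cost_per_million = total_cost_usd / (total_tokens / 1_000_000)
print(f"${cost_per_million:.4f} per 1M tokens")  # $0.0265 per 1M tokens
```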
DeepSeek Unveils Newest Flagship AI Model a Year After Upending Silicon Valley
China’s DeepSeek rolls out a long-anticipated update of its AI model - AP News
DeepSeek Vision Coming
From Xiaokang Chen on 𝕏:…
DeepSeek mystery: who is speaking for start-up as CEO Liang Wenfeng remains out of sight?
Researcher Chen Deli is emerging as DeepSeek’s new public face as speculation over the whereabouts of the company’s founder and CEO lingers.…
Kimi K2.6 vs DeepSeek V4 Pro
DeepSeek temporarily slashing prices on V4-Pro by 75%
DeepSeek-V4 arrives with near state-of-the-art intelligence at 1/6th the cost of Opus 4.7, GPT-5.5
No GGUFs for DeepSeek V4-Flash as yet?
Wondering why there aren't any "name brand" (like unsloth, bartowski) GGUFs as yet for DeepSeek V4 Flash?…
China's DeepSeek slashes prices for new AI model - Reuters
anyone actually tried deepseek v4 pro for coding?
So V4 Pro dropped and barely anyone is talking about it. Feels weird, since when Kimi K2.6 came out I saw posts about it everywhere. Anyone here tried V4 Pro for actual code work? Ho…
DeepSeek V4 - almost on the frontier, a fraction of the price
Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two previ…
The exact KV cache usage of DeepSeek V4
Figure 1 of the DSV4 paper seems to imply that DSV3.2 uses ~50GB at 1M context and DSV4 uses ~5GB. ***Numbers updated with the KV cache breakdown from vLLM*** From my own calculations,…
llama.cpp DeepSeek v4 Flash experimental inference
Hi, here you can find experimental llama.cpp support for DeepSeek V4, and here is the GGUF you can use to run inference with "just" (lol) 128GB of RAM. The model, even qu…
Decreased Intelligence Density in DeepSeek V4 Pro
In the V3.2 paper, they mentioned: Second, token efficiency remains a challenge; DeepSeek-V3.2 typically requires longer generation trajectories (i.e., more tokens) to match the ou…
DeepSeek V4 Update
DeepSeek-V4 on Day 0: From Fast Inference to Verified RL with SGLang and Miles
We are thrilled to announce Day-0 support for DeepSeek-V4 across both inference and RL training. SGLang and Miles form the first open-source stack to serve and train DeepSeek-V4 on…