5 stories tagged with #unsloth, in publish-time order across the WeSearch catalog. Tag pages update as new stories ingest.
RSS feed for this tag, or search "Unsloth".
Unsloth solves bug in Mistral Medium 3.5 implementation
"May 1, 2026 Update: We worked with Mistral to fix Mistral Medium 3.5 inference affecting some implementations, and released updated GGUFs with the fix (NOT related to Unsloth or o…
DeepSeek V4: almost on the frontier, at a fraction of the price
Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two …
Are Unsloth models as good as I read?
Has anybody done any comparisons between the models that Unsloth offers and their counterparts? For example: I've been using qwen3.6:35b-a3b Q4_K_M, and on my MBP 64GB I get around…
[Qwen3.6 35b a3b] Used the top config for my setup (8 GB VRAM and 32 GB RAM), and found that the Q4_K_XL model from Unsloth somehow runs slightly faster and uses fewer output tokens than Q4_K_M, despite higher memory usage
Config: CtxSize 131,072; GpuLayers 99; CpuMoeLayers 38; Threads 16; BatchSize/UBatchSize 4096/4096; CacheType K/V q8_0; Tool Context: file mode (tools.kilocode.official.md); Metric M…
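For context, a config like the one above would typically translate into a llama.cpp `llama-server` launch along these lines. This is a sketch under assumptions: the model filename is illustrative (not from the post), and the poster's actual runner and flag spellings may differ.

```shell
# Hypothetical llama.cpp launch approximating the posted config.
# Mapping: CtxSize -> --ctx-size, GpuLayers -> --n-gpu-layers,
# CpuMoeLayers -> --n-cpu-moe (MoE expert layers kept on CPU),
# CacheType K/V q8_0 -> quantized KV cache.
llama-server \
  --model Qwen3.6-35B-A3B-Q4_K_XL.gguf \
  --ctx-size 131072 \
  --n-gpu-layers 99 \
  --n-cpu-moe 38 \
  --threads 16 \
  --batch-size 4096 \
  --ubatch-size 4096 \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```

Offloading MoE expert layers to CPU while keeping attention layers on an 8 GB GPU is a common way to fit a large sparse model into the VRAM/RAM split described in the post.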
MagicQuant (v2.0) - Hybrid Mixed GGUF Models + New Unsloth Dynamic Learned Configs
MagicQuant v2.0 is here. Introducing hybrid GGUF mixed models, utilization of learned Unsloth Dynamic tensors, a new benchmark philosophy that skips the nonsense! Smaller files. Be…