WeSearch
TAG · #VRAM

VRAM coverage.

Every story in the WeSearch catalog tagged with #vram, in publish order, with view counts. Subscribe to the per-tag RSS feed to follow this topic in your reader of choice.

6 stories currently carry this tag; the page updates as new stories are ingested.


RELATED TAGS
#nvidia (1) · #rtx-5070 (1) · #laptop-gpu (1) · #gddr7 (1)
TOM'S HARDWARE

Nvidia quietly launches 12GB RTX 5070 laptop GPU — midrange mobile gaming gets more VRAM amid the RAMpocalypse

The new model will use 3GB modules, so memory bandwidth should stay close to the RTX 5070 8GB mobile part…

3 views · #nvidia · #rtx-5070 · #laptop-gpu
TOM'S GUIDE

Nvidia RTX 5070 laptop GPU officially has 12GB of VRAM — and it’s about time

Nvidia has officially announced the RTX 5070 laptop GPU with 12GB of GDDR7 VRAM. This could be a huge win for mid-range gaming laptops.…

3 views · #nvidia · #rtx-5070 · #laptop-gpu
REDDIT

To 16GB VRAM users, plug in your old GPU

For those who want to run the latest dense ~30b models and only have 16GB VRAM: if you have an old card with 6GB VRAM or more, plug it in. It matters that everything fits on the VRAM, e…

6 views
REDDIT

VRAM.cpp: Running llama-fit-params directly in your browser

Lots of people are always asking on this subreddit if their system can run a certain model. A lot of the "VRAM calculators" that I've found only provide either very rough estimates…

7 views
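The calculator in the story above refines the usual back-of-envelope math: quantized weight bytes plus KV-cache bytes. A minimal sketch of that estimate (the model numbers in the example are illustrative assumptions, not figures from the linked post, and real usage runs higher once activation buffers and runtime overhead are counted):

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     ctx_len: int, n_layers: int, kv_dim: int,
                     kv_bytes: int = 2) -> float:
    """Naive VRAM lower bound: quantized weights + KV cache.

    Ignores activation buffers, CUDA context, and framework
    overhead, so treat the result as an optimistic floor.
    """
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    # K and V tensors, one pair per layer, kv_bytes per element.
    kv_cache_bytes = 2 * ctx_len * n_layers * kv_dim * kv_bytes
    return (weight_bytes + kv_cache_bytes) / 1024**3

# Hypothetical 27B model at ~4.25 bits/weight (Q4_K-ish), 8k context,
# 48 layers, 4096 total KV dimension, fp16 cache:
print(round(estimate_vram_gb(27, 4.25, 8192, 48, 4096, 2), 1))
```

With grouped-query attention the effective `kv_dim` shrinks, which is why modern models fit far longer contexts in the same cache budget.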
REDDIT

[Qwen3.6 35b a3b] Used the top config for my setup (8GB VRAM, 32GB RAM) and found that the Q4_K_XL model from Unsloth somehow runs slightly faster and used fewer output tokens than Q4_K_M, despite higher memory usage

Config CtxSize: 131,072 GpuLayers: 99 CpuMoeLayers: 38 Threads: 16 BatchSize/UBatchSize: 4096/4096 CacheType K/V: q8_0 Tool Context: file mode (tools.kilocode.official.md) Metric M…

5 views
LOCALLLAMA

Quant Qwen3.6-27B on 16GB VRAM with 100k context length

I have experimented with how to run Qwen3.6-27B on my laptop with an A5000 16GB GPU. I have created my own IQ4_XS GGUF "qwen3.6-27b-IQ4_XS-pure.gguf" with the Unsloth imatrix and compar…

5 views