Mini PCs for local LLMs in 2026
The demand for mini PCs capable of running local large language models (LLMs) has surged in 2026, driven by AMD's Strix Halo platform with its unified memory architecture. Prices for top-tier models like the GMKtec EVO-X2 have risen by roughly 60% since late 2025 amid high AI demand and memory cost increases. While these devices can run 70B-parameter models locally, buyers face challenges including inflated pricing and hardware limitations such as power caps on external GPUs.
- The GMKtec EVO-X2 with Ryzen AI MAX+ 395 and 128GB RAM now costs $3,299, up from $2,099 in late 2025.
- AMD's Strix Halo platform enables up to 128GB of unified LPDDR5x memory, allowing full local execution of 70B-parameter LLMs; a rough memory-sizing sketch follows this list.
- Most Strix Halo-based mini PCs limit AMD external GPUs to 120W via OCuLink, reducing their effectiveness for AI inference expansion.
- The MINISFORUM AI X1 Pro-470 and Beelink SER10 MAX are mid-tier options with Ryzen AI 9 HX 470 and 86 TOPS of NPU performance.
- Budget options like the origimagic A3 offer entry-level capability for local LLM use at $609 with upgradeable RAM.
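A quick back-of-the-envelope calculation shows why 128GB of unified memory is the threshold for 70B-parameter models. This is a rough sketch, not from the article: the bytes-per-weight figures and the 15% runtime overhead are assumptions typical of GGUF-style quantized inference.

```python
# Rough memory-sizing sketch for local LLM inference (assumed figures,
# not from the article): bytes per weight for common GGUF-style
# quantizations, plus ~15% overhead for KV cache and runtime buffers.

BYTES_PER_PARAM = {
    "fp16": 2.0,     # full half-precision weights
    "q8_0": 1.0,     # ~8 bits per weight
    "q4_k_m": 0.56,  # ~4.5 bits per weight, rough average
}

def est_memory_gib(params_billions: float, quant: str,
                   overhead: float = 0.15) -> float:
    """Estimate total memory in GiB to run a model of the given size."""
    weights_gib = params_billions * 1e9 * BYTES_PER_PARAM[quant] / 1024**3
    return weights_gib * (1 + overhead)

for quant in BYTES_PER_PARAM:
    print(f"70B @ {quant:7s}: ~{est_memory_gib(70, quant):.0f} GiB")
# 70B @ fp16   : ~150 GiB  -- exceeds 128GB unified memory
# 70B @ q8_0   : ~75 GiB   -- fits
# 70B @ q4_k_m : ~42 GiB   -- fits with plenty of headroom
```

At full fp16 precision a 70B model would overflow even 128GB, which is why these boxes are pitched for quantized inference rather than full-precision workloads.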
Opening excerpt (first ~120 words)
I bookmarked a GMKtec EVO-X2 listing in October last year. 128GB Ryzen AI MAX+ 395, listed at $2,099. I closed the tab, told myself I’d think about it for a week, and went to bed. Six months later I checked again. The exact same SKU is now $3,299. That’s not a typo. The “rampocalypse” (LPDDR5 prices spiking, AI demand, take your pick) has eaten 60% on top of the original price. Corsair quietly raised their AI Workstation 300 by $1,100. Reddit threads on r/LocalLLaMA are full of people kicking themselves for not buying when these things first launched. So here’s the thing. AMD just announced their own in-house Halo Box at AI Dev Day, ships in June. Every mini PC vendor on the planet is now slapping “Ryzen AI MAX+ 395” on something.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is linked from Hacker News: Front Page.