WeSearch
TAG · #GEMMA

Gemma coverage.

Every story in the WeSearch catalog tagged with #gemma, in publish-time order, with view counts. There are currently 13 stories; tag pages update as new stories are ingested. Subscribe to the per-tag RSS feed to follow this topic in your reader of choice.

RSS feed for this tag → or search "Gemma"
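For following the per-tag feed outside a feed reader, a minimal sketch of fetching and parsing an RSS 2.0 feed with only the Python standard library. The feed URL below is hypothetical; substitute the actual RSS link from this page.

```python
# Minimal sketch of consuming a per-tag RSS feed with the Python
# standard library. FEED_URL is a hypothetical placeholder -- use the
# real RSS link from the tag page.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/tags/gemma/rss"  # hypothetical URL

def parse_items(xml_text: str) -> list[dict]:
    """Return title/link/pubDate for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "pubDate": item.findtext("pubDate", default=""),
        })
    return items

def fetch_items(url: str = FEED_URL) -> list[dict]:
    """Download the feed and return its parsed items."""
    with urllib.request.urlopen(url) as resp:
        return parse_items(resp.read().decode("utf-8"))
```

`parse_items` is separated from the network call so the parsing can be reused on any cached copy of the feed.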

RELATED TAGS
#ai (2) · #local-models (2) · #ollama (2) · #coding (2) · #transformers-js (2) · #llms (1) · #agents (1) · #claude (1) · #gemma4 (1) · #qwen3-coder-next (1) · #qwen3-6-35b (1) · #llm (1)
R/LOCALLLAMA

gemma-4-31B-it-DFlash has been released

I guess we'll have to wait until this PR is merged before we can test it.…

4 views
R/HOMELAB

GPU server for hosting Gemma 4 possibilities

5 views
R/LOCALLLAMA

I stumbled on a Gemma 4 chat template bug for tools and fixed it

10 views
R/LOCALLLAMA

great work, Gemma

8 views
R/LOCALLLAMA

I've created a LoRA for Gemma 3 270M making it probably the smallest thinking model?

Here is an example of the output: ==================== THINKING ==================== Here is the thinking process: This is a large community with a wide range of interests User…

17 views
BOING BOING

Cartoonist Gemma Correll's moving and funny book about her lifelong mental illness: Anxietyland

Gemma Correll's Anxietyland covers panic attacks, agoraphobia, depression, and a hospitalization program — and it's also funny.…

10 views
#mental health · #graphic memoir · #anxiety
REDDIT

AMD Radeon RX 6900 XT - ROCm vs Vulkan - Gemma 4 and Qwen 3.5 speed benchmarks

Did some quick tests after building llama.cpp with ROCm 6.4.2 and latest Vulkan for my 6900 XT gemma4 E2B Q4_K ubatch ROCm pp512 Vulkan pp512 ROCm tg128 Vulkan tg128 32 1536.60 142…

10 views
HUGGINGFACE

How to Use Transformers.js in a Chrome Extension

We’re on a journey to advance and democratize artificial intelligence through open source and open science.…

5 views
#transformers.js · #chrome extension · #manifest v3
PATLOEBER

How to run a local coding agent with Gemma 4 and Pi

Set up Gemma 4 running in LM Studio, connected to Pi as the terminal agent…

6 views
#ai · #coding · #local models
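The tutorial above connects a terminal agent to Gemma 4 served by LM Studio, which exposes an OpenAI-compatible API on a local port. A hedged sketch of the client side using only the standard library — the default port (1234) and the model id are assumptions to adjust for your local setup:

```python
# Hedged sketch: call a local LM Studio server over its OpenAI-compatible
# chat-completions endpoint. Assumes LM Studio's default port (1234) and
# a placeholder model id; both depend on your local setup.
import json
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "gemma-4",  # placeholder model id
                       base_url: str = "http://localhost:1234/v1"):
    """Build the JSON payload and POST request for a chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

def ask(prompt: str) -> str:
    """Send the request to the local server and return the reply text."""
    req, _ = build_chat_request(prompt)
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]
```

Keeping request construction separate from the network call makes the payload easy to inspect before pointing it at a running server.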
REDDIT

Most efficient way of running Gemma 4 E4B with multimodal capabilities on a laptop?

The Gemma 4 E4B and E2B models have built-in multimodal capabilities. However, as far as I am aware, llama.cpp does not have proper support for vision and audio inputs (especially a…

10 views
WILLIAMANGEL

Offline Agentic Coding

Offline Agentic Coding: Ollama and Claude code…

5 views
#ai · #llms · #agents
REDDIT

How to run a local coding agent with Gemma 4 and Pi | Patrick Loeber

Tutorial from the Google guy; I use a very similar setup (llama.cpp instead of LM Studio)…

15 views
REDDIT

Speculative decoding with Gemma-4-31B + Gemma-4-E2B enables 120-200 tok/s output speed for specific tasks

So for my project I had been using either Gemini 3 / 2.5 Flash or Flash-lite. None of my use cases are agentic; they are simply LLM workflows for atomic tasks like extracting refere…

14 views