WeSearch
SEARCH · AI MEMORY


19 stories match your query across our 700+ source catalog. Ranked by relevance and recency.


ARXIV.ORG

ZenBrain: A Neuroscience-Inspired 7-Layer Memory Architecture for Autonomous AI Systems

Despite a century of empirical memory research, existing AI agent memory systems rely on system-engineering metaphors (virtual-memory paging, flat LLM storage, Zettelkasten notes), none integrating pr…

· 2 views
GITHUB

Show HN: AI memory with biological decay (52% recall)

Most RAG setups fail because they treat memory like a static filing cabinet. When every transient bug fix or abandoned rule is stored forever, the context window eventually chokes on noise, spiking to…

· 5 views
EUROGAMER

Xbox boss Asha Sharma hints memory costs "will impact" pricing and availability of next-gen Project Helix console

Xbox boss Asha Sharma is closely monitoring the memory crisis as the brand plans for Project Helix's upcoming launch.…

· 9 views
KUBERNETES

A container with 32 millicores sometimes finished builds faster than a 4-core Jenkins server. That felt wrong. Digging into why led to a bigger question: CPU scheduling got dramatically smarter over the past decade, so why does memory still behave like it's 2015?

· 2 views
/R/TECHNOLOGY

Samsung phone division could post its first ever loss as AI drives memory costs higher

· 3 views
REDDIT

Samsung workers threaten strike, demand share of $38 billion AI memory windfall

· 8 views
ARXIV.ORG

HeLa-Mem: Hebbian Learning and Associative Memory for LLM Agents

Long-term memory is a critical challenge for Large Language Model agents, as fixed context windows cannot preserve coherence across extended interactions. Existing memory systems represent conversatio…

· 10 views
SOLIDDARK

Show HN: Gate – AI workers handle dev tickets in a visual workspace

Four AI desks. Named robots with memory. Every token costed and attributed. Rashomon security proxy on every call.…

· 3 views
ARXIV.ORG

AI Identity: Standards, Gaps, and Research Directions for AI Agents

AI agents are now running real transactions, workflows, and sub-agent chains across organizational boundaries without continuous human supervision. This creates a problem no current infrastructure is …

· 2 views
ARXIV.ORG

Thinking Like a Clinician: A Cognitive AI Agent for Clinical Diagnosis via Panoramic Profiling and Adversarial Debate

The application of large language models (LLMs) in clinical decision support faces significant challenges of "tunnel vision" and diagnostic hallucinations when processing unstructured elec…

· 2 views
REDDIT

Got OpenAI's privacy filter model running on-device via ExecuTorch

Been experimenting with running OpenAI's privacy filter model on mobile through ExecuTorch. Sharing in case it's useful to others working on similar problems. Setup: - Runtime: ExecuTorch - Memory foo…

· 5 views
LOCALLLAMA

Comparison of upcoming x86 unified memory systems

AMD Gorgon Halo: summer this year, with 15% faster memory clock speeds/bandwidth than Strix Halo. Intel Nova Lake AX expected early next year. Summer 2027: AMD Medusa Halo, 50% performance improvement …

· 5 views
REDDIT

Qwen 3.6 27B in Claude Code says it will do something then stops and prompts for user reply (not failing a tool call)

I'm running Qwen/Qwen3.6-27B-FP8 via vLLM using this command: vllm serve Qwen/Qwen3.6-27B-FP8 --tensor-parallel-size 4 --gpu-memory-utilization 0.95 --max-num-seqs 8 --enable-auto-tool-choice --tool…

· 6 views
Y COMBINATOR

Terra API (YC W21) Hiring: Applied AI Strategist (Health Intelligence)

What this role actually is: This is not “market research.” No 60‑page decks. No generic “digital health is big” observations. This is a continuous loop: market → signal → implication → decision → shi…

· 6 views
REDDIT

Skymizer Taiwan Inc. Unveils Breakthrough Architecture Enabling Ultra-Large LLM Inference on a Single Card

Source Article excerpt: With a single PCIe card — powered by six HTX301 chips and 384 GB of memory — enterprises can now run 700B-parameter model inference locally at just ~240W per card. The memory-b…

· 3 views
STABLEDIFFUSION

LTX 2.3 Prompt Relay with a messy zombie chase scene (Prompt Relay test)

I just pushed my LTX 2.3 Prompt Relay workflow in ComfyUI to the absolute limit with a new zombie chase test to see if we could fix this. I purposely engineered this scene to fail. We added: Full-body…

· 2 views
AXIOS

Axios Finish Line: Prompt like a pro

I'm going to offer four specific ways for you to get more out of AI this week: better prompting (tonight), improving AI memory (Tuesday), starting a business using AI (Wednesday) and running a busines…

· 7 views
REDDIT

VRAM.cpp: Running llama-fit-params directly in your browser

Lots of people are always asking on this subreddit if their system can run a certain model. A lot of the "VRAM calculators" that I've found only provide either very rough estimates or are severely lim…

· 5 views
ARS TECHNICA - ALL CONTENT

Report: Samsung execs worried company could lose money on smartphones for the first time

The AI-driven memory shortage is hitting Samsung's bottom line.…

· 5 views