Singapore's Foreign Minister Builds an AI "Second Brain" Using NanoClaw
Singapore's Foreign Minister Vivian Balakrishnan has developed a personal AI "second brain" using the open-source NanoClaw assistant together with the LLM Wiki pattern, running on a Raspberry Pi to support diplomatic work. The system processes speeches, articles, and web content into a structured knowledge graph, enabling it to answer questions, draft materials, and deliver briefings. Designed for privacy, it operates locally without relying on cloud services, with on-device processing for voice and isolated data containers. Balakrishnan emphasizes the strategic advantage AI offers diplomats who integrate it practically into their workflows.
April 25, 2026 · OfficeChai Team

Many politicians across the world are talking about how they want to promote AI, but a Singaporean politician is building AI bots to help with his daily work. Dr. Vivian Balakrishnan, Singapore’s Minister for Foreign Affairs, has publicly shared that he has built a personal AI assistant he describes as a “second brain” for a diplomat — one that answers every question, researches topics, drafts speeches, provides daily briefings, and condenses information on demand. “It has become invaluable — I don’t dare switch it off!” he wrote in a Facebook post.

Who Is Vivian Balakrishnan?

Dr. Balakrishnan is not your typical politician dabbling in tech buzzwords. A trained ophthalmologist educated at the Anglo-Chinese School and National Junior College, he earned a President’s Scholarship to study medicine at the National University of Singapore in 1980, later becoming a Fellow of the Royal College of Surgeons of Edinburgh in 1991. He has served in Singapore’s Cabinet for over two decades and is currently the country’s top diplomat.

That a minister of his standing is not just endorsing AI but actually building and running his own system — on a Raspberry Pi, no less — is a signal worth paying attention to.

What He Built: NanoClaw on a Raspberry Pi

The system is built on two open-source foundations. The first is NanoClaw, a self-hosted Claude assistant created by developer Gavriel Cohen. It runs locally on a Raspberry Pi, connects to messaging channels like WhatsApp, Telegram, Slack, and Discord, processes voice notes and images, and runs scheduled tasks — all without relying on a cloud service.

The second is the LLM Wiki pattern conceived by Andrej Karpathy, the former Tesla Director of AI. Karpathy has written extensively about how standard LLMs suffer from a form of amnesia — they forget everything between sessions.
His wiki pattern addresses this by extracting structured knowledge from raw sources rather than indexing them wholesale, building a compounding knowledge base over time. Balakrishnan has combined both into a system that ingests his speeches, articles, and web clips, synthesises them into a structured knowledge graph, and surfaces relevant information automatically every time he interacts with the assistant.

The Technical Architecture

The full technical write-up, which Balakrishnan published as a GitHub Gist, reveals a surprisingly sophisticated stack for a side project.

At its core is a three-layer design. Raw sources — speeches, articles, and web clips saved via the Obsidian mobile app — feed into a custom knowledge graph tool called mnemon, which stores discrete facts as structured nodes in a SQLite database. These nodes are then synthesised into human-readable wiki pages, organised by entity, concept, and timeline, and browsable in Obsidian on macOS and iOS.

The key insight: rather than doing simple retrieval-augmented generation (RAG), which fetches chunks of raw text, mnemon stores synthesised facts. Every time Balakrishnan asks a question, the system runs a semantic query against the knowledge graph and injects the most relevant facts as context before the AI responds — making the assistant progressively smarter as more material is ingested.

For privacy, the system is deliberately self-contained. Vector embeddings that power the semantic search run locally using Ollama on the…
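The pattern the article describes can be sketched in a few lines: store discrete facts as rows in SQLite, embed each fact, and at question time retrieve the nearest facts and prepend them to the prompt before the model answers. To be clear, everything below is a hypothetical illustration of that fact-store-plus-context-injection idea, not mnemon's actual code: the `FactStore` class and `build_prompt` function are invented names, and the word-hashing `toy_embed` is a dependency-free stand-in for a real local embedding model such as one served by Ollama.

```python
import hashlib
import math
import sqlite3

def toy_embed(text, dim=64):
    """Stand-in for a real local embedding model: hashes words into a
    fixed-size vector so the sketch needs no external dependencies."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class FactStore:
    """Minimal mnemon-style store: discrete facts as rows in SQLite,
    with an in-memory embedding index for semantic lookup."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS facts "
            "(id INTEGER PRIMARY KEY, entity TEXT, fact TEXT)")
        self.vectors = {}  # fact id -> embedding

    def ingest(self, entity, fact):
        cur = self.db.execute(
            "INSERT INTO facts (entity, fact) VALUES (?, ?)", (entity, fact))
        self.vectors[cur.lastrowid] = toy_embed(fact)
        return cur.lastrowid

    def query(self, question, k=3):
        """Semantic query: rank stored facts by similarity to the question."""
        qv = toy_embed(question)
        ranked = sorted(self.vectors.items(),
                        key=lambda kv: cosine(qv, kv[1]), reverse=True)
        ids = [fid for fid, _ in ranked[:k]]
        if not ids:
            return []
        placeholders = ",".join("?" * len(ids))
        rows = self.db.execute(
            f"SELECT fact FROM facts WHERE id IN ({placeholders})", ids)
        return [row[0] for row in rows.fetchall()]

def build_prompt(store, question):
    """Inject the most relevant stored facts as context before the
    assistant answers, so the model sees accumulated knowledge."""
    facts = store.query(question)
    context = "\n".join(f"- {f}" for f in facts)
    return f"Known facts:\n{context}\n\nQuestion: {question}"
```

The point of the sketch is the division of labour: ingestion synthesises sources into small, queryable facts, while answering is just retrieval plus prompt assembly, which is why the assistant gets progressively smarter as more material is ingested.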
This excerpt is published under fair use for community discussion. Read the full article at OfficeChai.