nashsu/llm_wiki
// archived 2026-04-27

Tags: #LLM #Knowledge Graph #RAG #Tauri #React

// summary

LLM Wiki is a cross-platform desktop application that automatically transforms your documents into an organized, interlinked knowledge base. It uses a two-step chain-of-thought ingestion process to maintain a persistent wiki that stays current as your sources evolve. Graph-based insights, vector semantic search, and Obsidian compatibility help users manage and discover knowledge effectively.

// technical analysis

LLM Wiki is a cross-platform desktop application that develops Andrej Karpathy's abstract knowledge-base pattern into an automated system for managing personal information. A two-step chain-of-thought ingestion process feeds a persistent knowledge graph, so fragmented, static documents are continuously synthesized and interlinked into a coherent, self-updating wiki. The project emphasizes structural integrity and human-in-the-loop oversight, using a multi-signal relevance model and an asynchronous review system to keep the generated knowledge accurate and actionable.
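The two-step ingestion described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the `call`-style LLM interface, the prompt wording, and the `WikiPage` structure are all hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical LLM interface: takes a prompt string, returns text.
LLM = Callable[[str], str]

@dataclass
class WikiPage:
    title: str
    body: str

def ingest(source_text: str, llm: LLM) -> WikiPage:
    """Two-step chain-of-thought ingestion (sketch).

    Step 1 analyzes the source in isolation; step 2 generates the
    wiki page from that analysis rather than from the raw text,
    keeping extraction and writing concerns separate.
    """
    # Step 1: analysis pass -- extract topics, entities, and claims.
    analysis = llm(
        "Analyze this document. List its main topic, key entities, "
        f"and factual claims:\n\n{source_text}"
    )
    # Step 2: generation pass -- write the page from the analysis only.
    body = llm(
        "Write a concise, interlinked wiki page based on this "
        f"analysis:\n\n{analysis}"
    )
    title = body.splitlines()[0].lstrip("# ").strip() if body else "Untitled"
    return WikiPage(title=title, body=body)
```

With a stub in place of a real model, this shows only the control flow; the real pipeline presumably also records source traceability for each generated page.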

// key highlights

01
Two-step chain-of-thought ingestion improves content quality by separating source analysis from wiki page generation.
02
A four-signal relevance model builds the knowledge graph, combining signals such as explicit links, shared sources, and community membership to map relationships between concepts.
03
Louvain community detection enables the automatic discovery and visualization of knowledge clusters within the user's data.
04
Deep Research capabilities allow the system to identify knowledge gaps and autonomously perform web searches to synthesize new information.
05
The application provides full Obsidian compatibility, allowing users to leverage the generated wiki as a standard local vault.
06
A dedicated Chrome extension enables one-click web clipping that automatically triggers the ingestion pipeline for seamless knowledge capture.
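The multi-signal relevance model in highlight 02 might combine its signals along these lines. The signal names, the weights, and the linear scoring are illustrative assumptions only; in particular, the "embedding" entry is a guessed fourth signal, since the source names just three.

```python
# Illustrative weights for combining graph signals into one relevance
# score; the real model's signals and weighting are not documented here.
WEIGHTS = {
    "link": 0.4,       # explicit wiki links between pages
    "source": 0.3,     # pages derived from the same source document
    "community": 0.2,  # pages in the same detected community
    "embedding": 0.1,  # hypothetical fourth signal: vector similarity
}

def relevance(signals: dict[str, float]) -> float:
    """Weighted sum of per-signal scores, each expected in [0, 1]."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
```

For example, a strong link plus a half-strength shared source scores 0.4 + 0.15 = 0.55; a linear combination like this is the simplest way to make edge strength tunable per signal.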
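Louvain community detection (highlight 03) is available off the shelf; a minimal sketch with networkx, using the built-in karate-club graph as stand-in data for a page graph:

```python
import networkx as nx

# Stand-in graph; in LLM Wiki the nodes would be wiki pages and the
# edges their detected relationships.
G = nx.karate_club_graph()

# Louvain greedily maximizes modularity to find clusters of densely
# connected nodes; a fixed seed makes the partition reproducible.
communities = nx.community.louvain_communities(G, seed=42)

# Every node lands in exactly one community.
assert sorted(n for c in communities for n in c) == sorted(G.nodes)
```

Each element of `communities` is a set of node IDs, which maps directly onto the cluster coloring a graph view would render.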

// use cases

01
Automated document ingestion and wiki page generation with source traceability
02
Knowledge graph visualization for discovering connections and identifying knowledge gaps
03
Deep research integration for web-based information gathering and automated synthesis
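The vector semantic search mentioned in the summary reduces to nearest-neighbor lookup over embeddings. A minimal cosine-similarity sketch with toy two-dimensional vectors; the real app would use embeddings from its configured LLM provider, and the page titles here are invented:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec: list[float], index: dict[str, list[float]], k: int = 3):
    """Return the k page titles whose vectors are closest to the query."""
    ranked = sorted(index, key=lambda t: cosine(query_vec, index[t]), reverse=True)
    return ranked[:k]

# Toy "embeddings" for three hypothetical pages.
index = {
    "Graphs": [1.0, 0.0],
    "Vectors": [0.0, 1.0],
    "Hybrid": [0.7, 0.7],
}
```

A query vector near `[1.0, 0.1]` would rank "Graphs" first, then "Hybrid"; in practice the index would hold high-dimensional vectors and use an approximate-nearest-neighbor store rather than a full sort.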

// getting started

To begin using LLM Wiki, download the appropriate installer for your operating system (macOS, Windows, or Linux) from the project's releases. Upon launching the application, configure your preferred LLM provider and API keys in the Settings panel. You can then start building your knowledge base by importing folders of documents or using the Chrome extension to clip web content directly into your project.
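Provider settings of the kind described above typically look something like the fragment below; every key name and value here is a guess for illustration, not the app's actual settings schema.

```json
{
  "provider": "openai",
  "api_key": "sk-...",
  "model": "gpt-4o",
  "wiki_path": "~/Documents/llm-wiki"
}
```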