// archived 2026-04-27
hydropix

TranslateBooksWithLLMs

AI · LLM · Translation · Ollama · Python · Docker

// summary

TranslateBooksWithLLMs is a versatile tool for translating books, subtitles, and documents of any length using local or cloud-based AI models. It features an intelligent chunking system that preserves the original formatting, styles, and structure, and automatic checkpoints let users resume interrupted translations at any point. The software supports multiple file formats, including EPUB, SRT, DOCX, and TXT, and offers both a user-friendly web interface and a robust command-line tool.

// technical analysis

Translate Books with LLMs is a specialized tool designed to facilitate the translation of long-form documents, books, and subtitles by leveraging various AI models. Its architecture centers on an intelligent chunking system that manages large files while maintaining context, ensuring that formatting, styles, and structural elements like EPUB tags or SRT timecodes remain intact. By supporting both local execution via Ollama and various cloud-based API providers, the project offers a flexible trade-off between privacy, hardware requirements, and model performance.
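The chunking idea can be illustrated with a short sketch. The function below is a simplified illustration, not the project's actual implementation: it splits on paragraph boundaries so that no chunk exceeds a chosen size, which keeps each request within the model's context window while never cutting a paragraph in half.

```python
def chunk_text(text, max_chars=2000):
    """Split text into chunks at paragraph boundaries, never exceeding max_chars.

    Illustrative sketch: the real tool also tracks EPUB tags and SRT timecodes.
    """
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate  # paragraph still fits in the current chunk
        else:
            if current:
                chunks.append(current)
            current = para  # start a new chunk with this paragraph
    if current:
        chunks.append(current)
    return chunks
```

Because splits only ever happen between paragraphs, rejoining the chunks with blank lines reconstructs the original document exactly.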

// key highlights

01
Supports processing of documents of any length, from single pages to thousand-page novels, using an intelligent chunking system.
02
Ensures perfect preservation of file formatting, including EPUB styles, document structure, and synchronized SRT timecodes.
03
Features a built-in checkpoint system that allows users to pause and resume translation progress at any time.
04
Offers broad compatibility with multiple AI providers, including local options like Ollama and cloud services like OpenRouter, OpenAI, and Gemini.
05
Provides a user-friendly web interface alongside a robust command-line interface for advanced configuration and automation.
06
Includes optional features such as literary refinement passes and text-to-speech generation using Edge-TTS.

// use cases

01
Translating long-form books and documents while maintaining original formatting and structure.
02
Synchronizing and translating subtitle files like SRT while preserving timecodes.
03
Running private, local translations using Ollama or connecting to various cloud-based LLM providers.
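The subtitle use case hinges on separating what must be translated from what must not. A minimal sketch, assuming the standard SRT layout of index, timecode, and text (this is illustrative, not the project's parser): only the text portion of each block is passed to the translator, while indices and timecodes pass through untouched.

```python
import re

# One SRT block: index line, timecode line, then text until a blank line.
SRT_BLOCK = re.compile(
    r"(\d+)\s*\n"                                                   # subtitle index
    r"(\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3})\s*\n"   # timecode line
    r"(.*?)(?=\n\n|\Z)",                                            # subtitle text
    re.S,
)

def translate_srt(srt_text, translate_fn):
    """Translate subtitle text while leaving indices and timecodes untouched."""
    def repl(match):
        index, timecode, text = match.groups()
        return f"{index}\n{timecode}\n{translate_fn(text)}"
    return SRT_BLOCK.sub(repl, srt_text)
```

Because the timecode line is copied verbatim into the output, the translated file stays perfectly synchronized with the video.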

// getting started

To begin, download the pre-built executable for your operating system or clone the repository to run from source using Python. Ensure Ollama is installed if you intend to use local models, then launch the application to access the web interface at http://localhost:5000. For advanced users, the project also supports deployment via Docker or direct execution through the command line.
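When local models are used, translation requests go to the Ollama server, whose HTTP API listens on port 11434 by default. A minimal sketch of such a call follows; the model name and prompt wording are illustrative assumptions, not the project's actual prompts.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(text, target_lang="French", model="mistral"):
    """Build the request body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": (f"Translate the following text into {target_lang}, "
                   f"preserving all formatting and markup:\n\n{text}"),
        "stream": False,  # ask for one complete response instead of a token stream
    }

def ollama_translate(text, **kwargs):
    """Send one chunk to a local Ollama server and return the translated text."""
    data = json.dumps(build_payload(text, **kwargs)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping the endpoint and payload for a cloud provider's API is what makes the local-versus-cloud choice a configuration detail rather than an architectural one.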