The Speed Difference That Changes Everything
You type a sentence. How long until you see it in the PDF?
That delay shapes your entire writing experience. A one-second delay keeps you in flow. A thirty-second delay breaks your concentration. A two-minute delay makes you check email, lose your train of thought, and dread the next compile.
Here's what that delay looks like across different approaches:
| Approach | Typical Delay | Your Experience |
|----------|---------------|-----------------|
| Traditional LaTeX | 5-30 seconds | Frustrating wait |
| Cloud editors (off-peak) | 5-15 seconds | Tolerable |
| Cloud editors (peak) | 30-120 seconds | Productivity killer |
| Thetapad (incremental) | 0.5-2 seconds | Flow state |
The difference isn't magic—it's architecture. Let's break down how compilation actually works and why Thetapad achieves sub-second updates.
Understanding Traditional Compilation
To appreciate the optimizations, you need to understand what LaTeX does during a full compile:
The Multi-Pass Process
LaTeX requires multiple passes through your document:
Pass 1: Read document structure
- Parse all commands
- Collect section headings
- Note label definitions
- Record citations needed
Pass 2: Process bibliography (BibTeX/Biber)
- Read .bib files
- Resolve citations
- Generate bibliography
Pass 3: Resolve cross-references
- Match \ref to \label
- Build table of contents
- Generate lists of figures/tables
Pass 4: Finalize content
- Fix page numbers
- Finalize TOC entries
- Complete index
Pass 5: Generate PDF
- Embed fonts
- Convert images
- Build final output

A 50-page document might take 20 seconds for each pass. Four passes means 80 seconds just for the core compilation, before fonts and images.
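The reason for the multiple passes is a fixed point: each pass writes auxiliary data (labels, TOC entries) that the next pass reads, so the engine must re-run until that data stops changing. This is what tools like latexmk automate. A minimal sketch of the loop, using a toy document model in Python (all names here are illustrative, not real LaTeX internals):

```python
def run_pass(source, aux):
    """One toy 'LaTeX pass': record labels defined in the source and
    resolve references against the aux data from the PREVIOUS pass."""
    new_aux = {}
    output = []
    for line in source:
        if line.startswith("label:"):          # e.g. "label:sec:intro=1"
            name, number = line[6:].split("=")
            new_aux[name] = number
            output.append(f"Section {number}")
        elif line.startswith("ref:"):          # e.g. "ref:sec:intro"
            # Unknown on the first pass -> rendered as "??", just like LaTeX
            output.append(aux.get(line[4:], "??"))
        else:
            output.append(line)
    return output, new_aux

def compile_to_fixpoint(source, max_passes=5):
    """Re-run passes until the auxiliary data stabilizes."""
    aux, output = {}, []
    for n in range(1, max_passes + 1):
        output, new_aux = run_pass(source, aux)
        if new_aux == aux:      # aux stable -> all references resolved
            return output, n
        aux = new_aux
    return output, max_passes

# A forward reference forces a second pass:
output, passes = compile_to_fixpoint(["ref:sec:intro", "label:sec:intro=1"])
# output == ["1", "Section 1"], passes == 2
```

Real LaTeX does vastly more work per pass, but the shape is the same: the cost multiplies with every pass the document needs.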
Why Changes Trigger Full Rebuilds
Traditional LaTeX lacks change detection. Every compilation starts from scratch:
- Changed a word in Chapter 5? Recompile all chapters.
- Fixed a typo in a figure caption? Regenerate all figures.
- Added a citation? Run the entire bibliography process.
This "rebuild everything" approach made sense when LaTeX was designed in the 1980s. Today, it's needlessly wasteful.
Cloud Editor Compilation
Cloud editors like Overleaf add network latency to the already slow compilation:
The Request-Response Cycle
Your keystrokes
↓ (network latency: 50-200ms)
Server receives change
↓ (queue time: 0-60s depending on load)
Your compile starts
↓ (compilation: 5-30s)
PDF generated
↓ (network latency: 50-200ms)
PDF download
↓ (rendering)
You see the result

The Queue Problem
Cloud services share resources among users. During peak times—conference deadlines, end-of-semester rush—you compete for compilation slots:
- Free tier users wait behind paying users
- The same document that compiled in 5 seconds at 3 AM takes 2 minutes at 3 PM
- Server capacity can't scale infinitely
Timeout Limits
Complex documents may exceed server time limits:
- Extensive TikZ diagrams
- Large bibliographies with hundreds of entries
- Heavy use of computational packages
- Many high-resolution images
When you hit a timeout, you must simplify your document or upgrade your plan.
Thetapad's Architecture
Thetapad reimagines LaTeX compilation from first principles. The core insight: most of your document doesn't change most of the time.
Local-First Processing
Compilation runs on your device using WebAssembly:
Your keystrokes
↓ (no network)
Local LaTeX engine processes change
↓ (incremental: only what changed)
PDF updated
↓ (no network)
You see the result

Benefits:
- Zero network latency: No round-trips to remote servers
- No queuing: Your CPU is dedicated to your work
- No timeouts: Compile as long as needed
- Works offline: Full functionality without internet
Incremental Compilation
The key innovation: we track exactly what changed and only recompile that.
Traditional approach:
Chapter 1: Unchanged → Recompile anyway (8s)
Chapter 2: Unchanged → Recompile anyway (8s)
Chapter 3: You fixed a typo → Recompile (8s)
Chapter 4: Unchanged → Recompile anyway (8s)
Chapter 5: Unchanged → Recompile anyway (8s)
Total: 40s

Incremental approach:
Chapter 1: Unchanged → Use cached result (0s)
Chapter 2: Unchanged → Use cached result (0s)
Chapter 3: You fixed a typo → Recompile (1.5s)
Chapter 4: Unchanged → Use cached result (0s)
Chapter 5: Unchanged → Use cached result (0s)
Total: 1.5s

That's a 26× improvement for a single-chapter change. The savings scale with document size: 200-page theses see even larger gains.
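The lookup behind that comparison can be sketched as keying each chapter's compiled output by a hash of its source: unchanged source means an unchanged hash, which means a cache hit. This is an illustrative Python sketch, not Thetapad's actual implementation:

```python
import hashlib

cache = {}  # sha256 of chapter source -> compiled output

def compile_chapter(name, source):
    """Stand-in for the expensive per-chapter compile."""
    return f"<pdf fragment for {name}>"

def build(chapters):
    """Recompile only chapters whose source hash isn't already cached."""
    outputs, recompiled = [], []
    for name, source in chapters:
        key = hashlib.sha256(source.encode()).hexdigest()
        if key not in cache:
            cache[key] = compile_chapter(name, source)
            recompiled.append(name)
        outputs.append(cache[key])
    return outputs, recompiled
```

On the first build every chapter is compiled; after editing only chapter 3, a second build recompiles only chapter 3 and reuses the cached fragments for the rest.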
Dependency Graph Analysis
We build a graph of how document components depend on each other:
main.tex
├── preamble.tex
│ ├── packages (cached after first load)
│ └── custom commands
├── frontmatter/
│ ├── title.tex
│ └── abstract.tex
├── chapters/
│ ├── chapter1.tex
│ │ ├── section1.1 → fig1.pdf
│ │ └── section1.2 → fig2.pdf
│ ├── chapter2.tex
│ │ └── section2.1 → tab1.tex
│ └── chapter3.tex (YOU EDITED THIS)
│ └── section3.1
└── references.bib → bibliography

When you edit chapter3.tex:
- Check if the change affects cross-references (section numbers, figure refs)
- If not, recompile only Chapter 3
- Merge the new Chapter 3 output with cached chapters
- Update the final PDF
Most changes don't affect cross-references, so most changes are truly incremental.
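One way to sketch that dependency walk (with hypothetical names, not Thetapad's code): store reverse edges from each file to the files that include it, then mark everything downstream of an edit as dirty.

```python
# file -> files that depend on it (the reverse of the include graph above)
dependents = {
    "fig1.pdf": ["chapter1.tex"],
    "chapter1.tex": ["main.tex"],
    "chapter2.tex": ["main.tex"],
    "chapter3.tex": ["main.tex"],
    "references.bib": ["main.tex"],
}

def dirty_set(edited):
    """Everything that must be rebuilt after editing `edited`."""
    dirty, stack = set(), [edited]
    while stack:
        node = stack.pop()
        if node in dirty:
            continue
        dirty.add(node)
        stack.extend(dependents.get(node, []))
    return dirty
```

Editing chapter3.tex dirties only chapter3.tex and main.tex; dirtying main.tex here means re-stitching the final PDF from cached fragments, not recompiling the sibling chapters.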
Intelligent Caching
We cache at multiple levels:
Package caches:
Packages like amsmath, graphicx, and hyperref are processed once and cached. Their format files persist across browser sessions.
Auxiliary file caches:
Files like .aux, .toc, and .bbl are cached. Only invalidated when their source data changes.
Figure caches: Converted images are hashed and cached. A 5MB PNG that takes 2 seconds to convert becomes a 0ms cache hit.
Font caches: Font subsets are expensive to generate. We cache them aggressively across all documents using the same fonts.
Cache invalidation: Caches are invalidated precisely. Changing a figure caption doesn't invalidate the figure image cache—only the text cache for that caption.
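Precise invalidation comes down to keying each cache entry by the content that actually produced it, namespaced by kind, so an unrelated edit can never evict it. A toy sketch under that assumption (names are illustrative):

```python
import hashlib

def cache_key(kind, content):
    """Key = the kind of work plus a hash of the exact input bytes."""
    return (kind, hashlib.sha256(content).hexdigest())

cache = {}
image_bytes = b"<pretend this is 5 MB of PNG data>"
caption = b"Figure 1: throughput before optimization"

# Image conversion and caption typesetting are cached under separate keys.
cache[cache_key("figure-image", image_bytes)] = "<converted image>"
cache[cache_key("caption-text", caption)] = "<typeset caption>"

# Editing the caption changes only the caption key; the expensive image
# conversion entry is untouched and remains a cache hit.
new_caption = b"Figure 1: throughput after optimization"
image_still_cached = cache_key("figure-image", image_bytes) in cache
caption_needs_work = cache_key("caption-text", new_caption) not in cache
```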
Parallel Processing
Modern devices have multiple CPU cores. We use them:
Main thread: Parse document structure, coordinate work
Worker 1: Compile chapters 1-2
Worker 2: Compile chapters 3-4
Worker 3: Process figure conversions
Worker 4: Build bibliography
→ All workers finish
→ Merge results
→ Generate final PDF

For a 10-chapter document, parallel compilation can approach a 4× speedup on a quad-core machine.
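The fan-out/merge above can be sketched with Python's standard thread pool (the worker split is illustrative; in a browser the same shape maps to Web Workers):

```python
from concurrent.futures import ThreadPoolExecutor

def compile_chunk(name):
    """Stand-in for compiling a chapter pair or running a side task."""
    return f"<output of {name}>"

tasks = ["chapters 1-2", "chapters 3-4", "figures", "bibliography"]

with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves task order, so merging is simple concatenation
    results = list(pool.map(compile_chunk, tasks))

final_pdf = " + ".join(results)
```

The merge step is cheap because each worker's output is position-independent until final assembly; the speedup is bounded by the slowest chunk, which is why balanced chapter splits matter.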
Reference Tracking Optimization
Cross-references are tricky because they can cause cascading changes:
\section{Introduction}\label{sec:intro}
...
\section{Methods}\label{sec:methods} % If you add this...
...
See Section~\ref{sec:intro} % ...this number might change

We optimize this:
- Predict stability: Most references don't change between compiles
- Fast-path stable refs: If no structural changes, skip re-resolution
- Lazy re-evaluation: Only recalculate references when actually needed
- Diff-based updates: Compare old and new reference tables; only update changes
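The diff-based update in the last bullet can be sketched as comparing the old and new label tables and touching only the references whose targets moved (illustrative Python, not the production code):

```python
def changed_labels(old, new):
    """Labels whose resolved number changed, appeared, or disappeared."""
    keys = set(old) | set(new)
    return {k for k in keys if old.get(k) != new.get(k)}

old = {"sec:intro": "1", "sec:results": "2"}
# A new Methods section was inserted before Results:
new = {"sec:intro": "1", "sec:methods": "2", "sec:results": "3"}

stale = changed_labels(old, new)
# Only pages citing \ref{sec:results} (plus the new \ref{sec:methods})
# need re-typesetting; pages citing \ref{sec:intro} are untouched.
```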
Benchmark Results
We test against real-world documents, not synthetic benchmarks.
Test Documents
- Small: 5-page conference paper, 3 figures, 20 citations
- Medium: 50-page journal article, 15 figures, 100 citations
- Large: 200-page thesis, 50 figures, 300 citations, extensive math
Test Conditions
- Cold compile: First compile, empty caches
- Warm compile: Repeat full compile, populated caches
- Incremental: Single-line text change
Results: 200-Page Thesis
| Scenario | Thetapad | Traditional | Cloud (off-peak) | Improvement |
|----------|----------|-------------|------------------|-------------|
| Cold compile | 18s | 45s | 50s | 2.5-2.8× |
| Warm compile | 8s | 45s | 50s | 5.6-6.2× |
| Incremental | 1.2s | 45s | 50s | 37-42× |
The incremental case—the common case during active writing—shows the largest gains.
Results: 50-Page Journal Article
| Scenario | Thetapad | Traditional | Improvement |
|----------|----------|-------------|-------------|
| Cold compile | 8s | 20s | 2.5× |
| Warm compile | 4s | 20s | 5× |
| Incremental | 0.8s | 20s | 25× |
Even medium-sized documents see order-of-magnitude improvements.
Optimizing Your Documents
You can help the compilation system work efficiently:
Use Include for Chapters
% Good: clear chapter boundaries
\include{chapters/introduction}
\include{chapters/methodology}
\include{chapters/results}
% Less optimal: \input inlines content without compilation boundaries
\input{chapter1}
\input{chapter2}
\input{chapter3}

The \include command creates clear boundaries for incremental compilation. Each included file can be compiled independently.
Externalize TikZ Graphics
TikZ diagrams are computationally expensive. Externalize them:
\usetikzlibrary{external}
\tikzexternalize[prefix=tikz-cache/]
% First compile: generates PDF for each diagram
% Subsequent compiles: uses cached PDFs

This moves expensive TikZ rendering to a one-time cost.
Use Vector Formats
Vector PDFs are faster than rasterized images:
% Fast: vector PDF
\includegraphics{diagram.pdf}
% Slower: requires conversion
\includegraphics{diagram.png}

If you have PNG or JPG figures, convert them to PDF outside LaTeX when possible.
Keep the Preamble Stable
Preamble changes invalidate many caches:
% Changing this...
\usepackage{newpackage}
% ...invalidates caches that depend on package state

Add packages early in your project. Don't experiment with preamble changes during heavy editing sessions.
What Slows Down Compilation
Understanding bottlenecks helps you avoid them:
Large unoptimized images: A 20MB photograph takes seconds to process. Resize before importing.
Complex TikZ without externalization: Each compile regenerates every diagram. Externalize them.
Frequent preamble changes: Invalidates format caches. Stabilize early.
Unusual package combinations: Some packages interact in ways that defeat caching. Test combinations early.
Very deep nesting: Extremely nested environments can slow parsing. Consider flattening structure.
The Writing Experience
What does sub-second compilation mean in practice?
Continuous preview: Type → See result → Adjust → See result → Continue
The tight feedback loop enables iterative refinement. You see exactly how your formatting looks as you write.
Experimentation: "Does this equation look better inline or displayed?" "How does this table fit on the page?" "What if I use a different citation style?"
When each experiment costs one second instead of one minute, you try more things and find better solutions.
Large document confidence: Your 200-page thesis compiles as responsively as a 5-page paper. You don't avoid making changes because you dread the compile time.
Conclusion
Fast compilation isn't a luxury—it's a fundamental part of a productive writing workflow. The architecture that enables it:
- Local-first: Eliminates network latency entirely
- Incremental: Rebuilds only what changed
- Intelligent caching: Remembers expensive computations
- Parallel processing: Uses all available CPU cores
- Dependency tracking: Knows exactly what depends on what
The result: LaTeX that keeps up with your thinking. Type, see, refine, repeat—without waiting.
Experience compilation speed that doesn't interrupt your flow. Thetapad's architecture makes instant previews possible for documents of any size.