The Speed Difference
You type. You see the result. That's the ideal LaTeX experience.
But "you see the result" can mean:
- Traditional: Wait 5-30 seconds
- Cloud editors: Wait 5-60 seconds (depending on load)
- Thetapad: Sub-second for incremental updates
How does this work?
Traditional Compilation
Standard LaTeX compilation:
source.tex
↓
[First pass: Build structure]
↓
[BibTeX: Process citations]
↓
[Second pass: Resolve references]
↓
[Third pass: Finalize]
↓
[Font processing]
↓
[PDF generation]
↓
output.pdf

Every change = full recompilation = 5-30 seconds.
Cloud Editor Compilation
Shared server model:
Your change
↓
[Network upload]
↓
[Queue behind other users]
↓
[Full compilation on shared CPU]
↓
[Network download]
↓
PDF display

Added delays:
- Network round-trip: 50-200ms
- Queue time: 0-30+ seconds (peak load)
- Resource contention
Thetapad's Approach
Local-First Processing
Most work happens on your machine:
Your change
↓
[Local incremental compile]
↓
PDF display

Benefits:
- No network latency
- No queuing
- Dedicated CPU
Incremental Compilation
Don't recompile unchanged content:
Traditional:
Chapter 1: Unchanged → Recompile anyway
Chapter 2: Unchanged → Recompile anyway
Chapter 3: You edited this → Recompile
Chapter 4: Unchanged → Recompile anyway

Incremental:
Chapter 1: Unchanged → Use cached result
Chapter 2: Unchanged → Use cached result
Chapter 3: You edited this → Recompile only this
Chapter 4: Unchanged → Use cached result

Result: 80-90% time savings on large documents.
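The chapter-level skip logic above can be sketched as a content-hash cache. This is an illustrative model, not Thetapad's actual code; `compile_chapter` is a stand-in for the real (expensive) compile step.

```python
import hashlib

def compile_chapter(source: str) -> str:
    # Stand-in for the real, expensive LaTeX compile of one chapter.
    return f"<pdf:{source[:10]}>"

def content_hash(source: str) -> str:
    # Fingerprint a chapter's source text.
    return hashlib.sha256(source.encode()).hexdigest()

def incremental_compile(chapters: dict, cache: dict) -> dict:
    """Recompile only chapters whose content hash changed; reuse the rest."""
    results = {}
    for name, source in chapters.items():
        key = content_hash(source)
        entry = cache.get(name)
        if entry and entry["hash"] == key:
            results[name] = entry["output"]   # unchanged: reuse cached result
        else:
            results[name] = compile_chapter(source)
            cache[name] = {"hash": key, "output": results[name]}
    return results
```

Editing one chapter then leaves every other chapter's cache entry untouched, which is where the 80-90% savings comes from.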
Smart Caching
We cache:
- Compiled auxiliary files (.aux, .toc, .bbl)
- Processed figures
- Font subsets
- Package initializations
Cache invalidation is intelligent—only what changed gets rebuilt.
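One simple way to get this invalidation behavior is to fold the preamble hash into every cached unit's key, so a preamble edit changes all keys at once while a chapter edit changes only its own. The helpers below (`cache_key`, `h`) are illustrative, not Thetapad's actual implementation.

```python
import hashlib

def h(text: str) -> str:
    # Short, stable fingerprint of a piece of source text.
    return hashlib.sha256(text.encode()).hexdigest()[:16]

def cache_key(unit_source: str, preamble: str) -> str:
    # Folding the preamble hash into each unit's key means any preamble
    # change invalidates every cached unit; a unit change invalidates one.
    return h(h(preamble) + h(unit_source))
```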
Parallel Processing
Modern CPUs have multiple cores. We use them:
[Parse document structure] ← Main thread
[Compile chapter 1] ← Thread 2
[Compile chapter 2] ← Thread 3
[Process figures] ← Thread 4
[Build bibliography] ← Thread 5
↓
[Merge results]
↓
PDF output

For multi-chapter documents, this yields a significant speedup.
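The fan-out/merge step can be sketched with a worker pool. `compile_unit` is a stub standing in for the real engine; in practice a build would likely shell out to separate compiler processes rather than use threads, but the shape is the same: compile independent units concurrently, then merge results in document order.

```python
from concurrent.futures import ThreadPoolExecutor

def compile_unit(name: str) -> str:
    # Stand-in for compiling one chapter, figure set, or bibliography.
    return f"{name}.compiled"

def parallel_build(units: list) -> list:
    """Compile independent units on separate workers, merge in order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves input order, so the merge step stays trivial.
        return list(pool.map(compile_unit, units))
```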
Technical Details
Dependency Graph
We build a dependency graph of your document:
main.tex
├── preamble.tex (rarely changes)
├── chapter1.tex
│ ├── fig1.pdf
│ └── fig2.pdf
├── chapter2.tex
│ └── fig3.pdf
└── references.bib (occasional changes)

When chapter2.tex changes:
- Recompile only that branch
- Reuse everything else
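The "recompile only that branch" rule is a dirty-set walk over reverse dependency edges: when a file changes, it and everything that includes it must be rebuilt, and nothing else. A minimal sketch, using edges that mirror the tree above:

```python
def dirty_set(changed: str, rdeps: dict) -> set:
    """Return every node to rebuild when `changed` is edited.

    `rdeps` maps a file to the set of files that include it
    (reverse dependency edges)."""
    dirty, stack = set(), [changed]
    while stack:
        node = stack.pop()
        if node not in dirty:
            dirty.add(node)
            # Anything that includes this node is also stale.
            stack.extend(rdeps.get(node, ()))
    return dirty
```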
Reference Tracking
Cross-references require multiple passes:
See Section~\ref{sec:methods} % First pass: unknown
...
\section{Methods}\label{sec:methods} % Collected

We optimize this:
- Track reference changes
- Skip re-passes when refs unchanged
- Predict stable references
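The rerun decision can be modeled as a fixed-point loop over the label table: repeat compilation until no label's resolved value changes, then stop early. This is roughly what tools like latexmk do by comparing .aux contents between runs; `compile_pass` below is a hypothetical callback standing in for one full pass.

```python
def resolve_references(compile_pass, max_passes: int = 5):
    """Repeat compilation until the label table stabilizes.

    `compile_pass` takes the label table from the previous pass and
    returns the table produced by this pass."""
    labels = {}
    for i in range(max_passes):
        new_labels = compile_pass(labels)
        if new_labels == labels:      # references stable: skip further passes
            return new_labels, i + 1
        labels = new_labels
    return labels, max_passes
```

When an edit leaves all labels unchanged, the loop exits after a single confirming pass instead of the traditional fixed two or three.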
Font Handling
Font processing is expensive. We:
- Cache font subsets
- Pre-process common fonts
- Reuse across compiles
Figure Optimization
Image conversion takes time. Our approach:
- Convert once, cache result
- Detect changes via hash
- Skip unchanged figures
figure.png (unchanged) → Use cached PDF
figure.png (modified) → Reconvert, update cache

Benchmark Methodology
Our speed claims come from real testing:
Test Setup
- Documents: 5-page, 50-page, 200-page
- Content: Text, equations, figures, citations
- Conditions: Cold start, warm cache, incremental
Metrics
- Cold compile: First compile, no cache
- Warm compile: Subsequent full compile
- Incremental: Single-line change
Results (200-page thesis)
| Scenario | Thetapad | Traditional | Improvement |
|----------|----------|-------------|-------------|
| Cold compile | 18s | 45s | 2.5x |
| Warm compile | 8s | 45s | 5.6x |
| Incremental | 1.2s | 45s | 37x |
The incremental case shows the biggest wins.
Enabling Fast Compilation
For Best Results
- Structure your document:
  \include{chapter1} % Not \input
  \include{chapter2}
  \include creates clear boundaries for incremental compilation.
- Externalize TikZ:
  \usetikzlibrary{external}
  \tikzexternalize
- Use PDF figures: Vector PDFs embed directly, while rasterized images often need conversion first.
- Keep preamble stable: Preamble changes invalidate all caches.
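Taken together, a document laid out for fast incremental compilation might look like this minimal sketch (file names are illustrative):

```latex
% main.tex -- structured for incremental compilation
\documentclass{book}
\usepackage{tikz}
\usetikzlibrary{external}
\tikzexternalize % cache TikZ pictures as standalone PDFs

\begin{document}
\include{chapter1} % \include, not \input: clear cache boundaries
\include{chapter2}
\end{document}
```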
What Slows Things Down
- Very large images (optimize before importing)
- Complex TikZ without externalization
- Frequent preamble changes
- Unusual package combinations
Comparison Summary
| Factor | Thetapad | Cloud Editors | Local Traditional |
|--------|----------|---------------|-------------------|
| Network latency | None | Yes | None |
| Queue time | None | Variable | None |
| Caching | Intelligent | Basic | None |
| Incremental | Yes | Limited | No |
| Parallelization | Yes | Limited | No |
The User Experience
What does this mean in practice?
Writing flow:
- Type a sentence
- See updated PDF in ~1 second
- Continue typing
Iteration speed:
- Try formatting change
- See result immediately
- Decide: keep or revert
Large documents:
- 200-page thesis
- Make a change
- Preview updates in seconds, not minutes
Conclusion
Fast compilation isn't magic—it's architecture:
- Local-first: No network delays
- Incremental: Rebuild only what changed
- Caching: Remember expensive computations
- Parallelization: Use all available CPU cores
The result: LaTeX compilation that keeps up with your thinking.
Experience the speed difference. Your next document will compile faster.