What Each Field Has

Chapter 3 · Seely 2025 · Bilò et al. 2022 · Sullivan & Wiegand 1998

R-D optimization, impossibility classes, Hodge decomposition. Nobody has all three.

Three inventories

The previous chapter showed the one genuine parallel. Each field has tools the others lack, and the open problems live in the gaps.

Video codecs have:
- Rate-distortion optimization: min D + λR, a formal Lagrangian framework.
- Error accumulation + reset: quantization error grows through P-chains; I-frames reset it.
- Graceful degradation: error concealment approximates missing frames.

TVG theory has:
- Journey optimality: foremost, fastest, and shortest are formally distinct.
- Impossibility classes: 13 classes; some cannot support certain operations.
- Temporal spanners: sparse subgraphs preserving journeys, measured in edge count, not bits.

Sheaf cohomology has:
- Removable / irreducible split: Hodge decomposition, im D vs. ker Dᵀ.
- Max-flow / min-cut (sheaf): H⁰ = flows; duality gaps = exactness failures.
- Learning gradient factorization: ∂E/∂W = (harmonic error) · (source activation)ᵀ.
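As a toy illustration of the codec column's Lagrangian objective min D + λR, the sketch below selects among candidate encodings by minimizing J = D + λR. The mode names and distortion/rate numbers are invented for illustration, not taken from any codec.

```python
# Hypothetical Lagrangian rate-distortion mode selection: among candidate
# encodings, pick the one minimizing J = D + lambda * R.
# All numbers below are invented for illustration.

def best_mode(candidates, lam):
    """candidates: list of (name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

modes = [
    ("intra",  4.0, 1200),   # low distortion, expensive in bits
    ("inter",  6.5,  300),   # motion-compensated prediction, cheaper
    ("skip",  15.0,   10),   # nearly free in bits, high distortion
]

# Small lambda weights distortion; large lambda weights rate.
print(best_mode(modes, 0.001)[0])  # intra
print(best_mode(modes, 0.01)[0])   # inter
print(best_mode(modes, 0.05)[0])   # skip
```

Sweeping λ traces out the operating points of the rate-distortion curve, which is exactly how the Lagrangian framework turns a constrained bit-budget problem into an unconstrained one.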

The sheaf coboundary: prediction error, algebraically

Seely (2025) constructs a cellular sheaf over a computational graph. The coboundary operator δ⁰: C⁰ → C¹ maps activations (0-cochains) to prediction errors (1-cochains):

(δ⁰s)ₑ = sᵥ − Wₑ sᵤ,  for an edge e from u to v

This has the same algebraic form as motion-compensated subtraction: current frame minus predicted frame. The predictive-coding energy is E = ½‖δ⁰s‖², and inference is gradient descent under the sheaf Laplacian.
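A minimal numerical sketch of the coboundary and energy on a two-edge chain u → v → w. The stalk dimension, restriction maps, and activations are all invented here; this is an illustration of the algebra, not Seely's implementation.

```python
import numpy as np

# Toy sheaf coboundary on a chain u -> v -> w, following (δ⁰s)_e = s_v − W_e s_u.
# Dimensions, weights, and activations are arbitrary stand-ins.

d = 3                                 # stalk dimension at every vertex
rng = np.random.default_rng(0)
W_uv = rng.normal(size=(d, d))        # restriction map along edge u -> v
W_vw = rng.normal(size=(d, d))        # restriction map along edge v -> w
s_u, s_v, s_w = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)

# Coboundary: one prediction error per edge (a 1-cochain).
delta_s = np.concatenate([s_v - W_uv @ s_u, s_w - W_vw @ s_v])

# Predictive-coding energy E = ½‖δ⁰s‖².
E = 0.5 * delta_s @ delta_s

# One inference step: gradient descent on the interior node s_v
# (it appears in both edge errors, hence the two terms).
grad_v = (s_v - W_uv @ s_u) - W_vw.T @ (s_w - W_vw @ s_v)
s_v_new = s_v - 0.01 * grad_v
delta_new = np.concatenate([s_v_new - W_uv @ s_u, s_w - W_vw @ s_v_new])
E_new = 0.5 * delta_new @ delta_new   # lower than E for a small enough step
```

The gradient step is exactly "inference as gradient descent on E": with a sufficiently small step size, each update of a free node lowers the total prediction-error energy.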

With clamped inputs and outputs (known source, known target), the Hodge decomposition splits the forcing vector into a removable component (in im D — better encoder choices can eliminate it) and an irreducible component (in ker Dᵀ — no choice of activations can remove it). The irreducible part lives in H¹, the first cohomology group.
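The removable/irreducible split is an orthogonal decomposition, so it can be illustrated with ordinary least squares: the residual of projecting onto im D is automatically in ker Dᵀ. The matrix D and forcing vector b below are arbitrary stand-ins, not the paper's operators.

```python
import numpy as np

# Orthogonal split of a forcing vector b into im D (removable) and
# ker Dᵀ (irreducible). D and b are random stand-ins for illustration.

rng = np.random.default_rng(1)
D = rng.normal(size=(5, 3))
b = rng.normal(size=5)

# Least squares finds the projection of b onto im D.
x, *_ = np.linalg.lstsq(D, b, rcond=None)
b_removable = D @ x               # a better choice of x can cancel this part
b_irreducible = b - b_removable   # no choice of x can touch this part

# The normal equations Dᵀ(b − Dx) = 0 say the residual lies in ker Dᵀ.
print(np.allclose(D.T @ b_irreducible, 0))          # True
print(np.allclose(b_removable + b_irreducible, b))  # True
```

The two printed checks are precisely the decomposition in the text: the removable part is what optimization over activations can eliminate, and the residual, annihilated by Dᵀ, is the cohomological obstruction.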

Temporal spanner: compressing for edge count

A temporal α-spanner removes edges while keeping journey distances within a multiplicative stretch factor α. The objective is fewer edges, not fewer bits. No encoding is specified, no bit budget is given.
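A minimal sketch of the notions involved, under our own simplifying assumptions (directed contacts with distinct time labels, processed in one time-ordered pass); this is not the construction from Bilò et al.

```python
from math import inf

# Earliest-arrival ("foremost") times in a temporal graph given as
# (u, v, t) contacts, plus a stretch check for a candidate sparse subgraph.
# Assumes directed contacts with distinct times, so one sorted pass suffices.

def earliest_arrival(edges, source, n):
    arr = [inf] * n
    arr[source] = 0
    for u, v, t in sorted(edges, key=lambda e: e[2]):   # time order
        if arr[u] <= t:                                 # contact is usable
            arr[v] = min(arr[v], t)
    return arr

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 5)]
full = earliest_arrival(edges, 0, 3)                    # [0, 1, 2]
sub  = earliest_arrival(edges[:2], 0, 3)                # drop (0, 2, 5)

alpha = 1.0  # here the sparser subgraph preserves arrival times exactly
print(all(s <= alpha * f for s, f in zip(sub, full)))   # True
```

Dropping the late direct contact (0, 2, 5) changes nothing, because the journey 0 → 1 → 2 already arrives at time 2: two edges suffice where three were given, with stretch 1. That is the spanner objective in miniature, and note that no bits were counted anywhere.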
The invariance boundary

The temporal state machines paper establishes an invariance constraint for pure race logic: any function must satisfy f(t₁+δ,...,tₙ+δ) = f(t₁,...,tₙ)+δ. Compositions that break this symmetry require memory (state). Tropical multiplication — adding two time codes — cannot be done in a single stateless pass.
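The invariance constraint can be checked in a couple of lines: min (the stateless race primitive, tropical addition) commutes with a global time shift, while adding two time codes (tropical multiplication) shifts the output by 2δ instead of δ. The numbers are arbitrary.

```python
# Time-shift invariance check: f(t1 + d, t2 + d) == f(t1, t2) + d.

delta = 7
t1, t2 = 3, 5

# Stateless race primitive: min is shift-equivariant.
print(min(t1 + delta, t2 + delta) == min(t1, t2) + delta)  # True

# Tropical multiplication: shifting both inputs shifts the sum by 2*delta.
print((t1 + delta) + (t2 + delta) == (t1 + t2) + delta)    # False
```

This is the whole argument in miniature: the sum picks up one δ per input, so no stateless single-pass circuit can realize it, and the extra δ has to be absorbed by stored state.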

This draws a line between stateless temporal composition and stateful computation that requires buffering. The two-phase protocol — store the wavefront, then apply it — is suggestive of the I-frame/P-frame cycle (write a checkpoint, then compute deltas), though that analogy is ours, not the paper's.

The gaps

The spanner compresses by counting edges. The codec compresses by counting bits. The sheaf separates removable from irreducible error but has no temporal dynamics and no rate constraint. Each field has exactly what the others are missing. The next chapter states the open problems that live in those gaps.
