Overview

Quantum Measurement via Flux Partition and Redundancy

Quantum measurement emerges from two separable mechanisms: flux partition into orthogonal channels yields Born rule weights, while redundant environments select which basis gets measured. No additional axioms needed. Just conservation, decoherence, and information theory working together.

Why tackle measurement differently?

The measurement problem haunts quantum mechanics: why do we see definite outcomes with Born rule probabilities in specific bases? Standard approaches either postulate the Born rule or assume preferred bases. This framework derives both from physical mechanisms: flux conservation determines weights, environmental redundancy determines basis. The result is testable, falsifiable, and mechanistic.

Two mechanisms, two questions

\[ \boxed{\text{Flux Partition} \rightarrow \text{Born Weights}} \qquad \boxed{\text{Environmental Redundancy} \rightarrow \text{Basis Selection}} \]

By separating these questions, we can solve each with well-understood physics rather than invoking new axioms.

Mechanism 1: Flux partition yields weights

Conservation of the probability current, \(\partial_t \rho + \nabla \cdot \mathbf{j} = 0\), means the flux into each orthogonal detection channel directly yields that channel's weight:

\[ \text{Weight}_i = \int_0^{T} \int_{S_i} \mathbf{j} \cdot d\mathbf{S} \, dt = |c_i|^2 \]

where \(S_i\) is the surface bounding detection channel \(i\).

Born rule emerges from current conservation, not postulation.
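
To make this concrete, here is a minimal numeric sketch (our own illustration, with \(\hbar = m = 1\) and grid parameters chosen for convenience, not taken from the paper): a superposition of a right-moving and a left-moving Gaussian packet, with Born weights 0.7 and 0.3, evolves freely, and the time-integrated current through a surface at \(x = 0\) recovers the right channel's weight.

```python
import numpy as np

# Channels: the half-lines x < 0 and x > 0, separated by a "surface"
# at x = 0. Born weights |c1|^2 = 0.7 (right-mover), |c2|^2 = 0.3.
N, L = 2048, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def packet(x0, k0, w=2.0):
    """Normalized Gaussian packet centered at x0 with momentum k0."""
    psi = np.exp(-((x - x0) ** 2) / (4 * w ** 2) + 1j * k0 * x)
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

psi = np.sqrt(0.7) * packet(-30.0, +2.0) + np.sqrt(0.3) * packet(-30.0, -2.0)

dt, steps, i0 = 0.05, 800, N // 2        # x[i0] == 0.0, total time T = 40
weight_right = 0.0
for _ in range(steps):
    # free evolution for one step (exact in momentum space)
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))
    # probability current j = Im(psi* d_x psi) at the channel surface
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
    weight_right += np.imag(np.conj(psi[i0]) * dpsi[i0]) * dt

print(f"time-integrated flux into right channel: {weight_right:.3f}")  # ~0.70
```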

Mechanism 2: Environmental redundancy selects basis

The measurement basis isn't arbitrary. It's selected by environmental dynamics. We define a pointer functional that quantifies measurement quality:

\[ P[\{\Pi_i\}] = \frac{S \times I \times R}{1 + B} \]

The basis that maximizes \(P\) becomes the measurement basis. No assumption required.
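
This overview doesn't spell out the component functionals \(S\), \(I\), \(R\), and \(B\), so the sketch below (our own toy, not the paper's construction) scores trial bases by a single stand-in ingredient: how much information one environment fragment carries about the system outcome. In a model where \(\sigma_z\) has been redundantly copied into environment qubits, the score is maximized by the \(z\) basis, which illustrates the optimization pattern.

```python
import numpy as np

# Toy system+fragment state: sigma_z has been copied into several
# environment qubits; tracing out all records but one leaves this
# classically correlated mixture (basis |00>, |01>, |10>, |11>).
rho = np.diag([0.7, 0.0, 0.0, 0.3])

def proj(theta):
    """Projectors onto the system basis at angle theta from sigma_z."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    dn = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return [np.outer(v, v) for v in (up, dn)]

def fragment_info(theta):
    """Mutual information (bits) between a theta-basis measurement on
    the system and a z-basis measurement on the fragment."""
    p = np.array([[np.trace(np.kron(Ps, Pe) @ rho).real
                   for Pe in proj(0.0)] for Ps in proj(theta)])
    ps, pe = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 1e-12
    return np.sum(p[nz] * np.log2(p[nz] / (ps @ pe)[nz]))

# Scan trial bases and keep the maximizer, mirroring "maximize P".
thetas = np.linspace(0.0, np.pi / 2, 7)
best = thetas[int(np.argmax([fragment_info(t) for t in thetas]))]
print(f"selected basis angle: {best:.2f} rad")   # 0.00 -> the z basis wins
```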

Stern-Gerlach model (concrete test)

We model a spin-1/2 particle in a Stern-Gerlach apparatus with controlled decoherence:

\[ H = \frac{p^2}{2m} \otimes I_2 + F x \otimes \sigma_z \]
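
For readers who want to experiment, here is one way to build that Hamiltonian on a position grid (a sketch assuming \(\hbar = m = 1\) and a second-order finite-difference kinetic term; grid sizes are arbitrary):

```python
import numpy as np

# Position grid; F = 0.6 matches the concrete prediction below.
N, Lbox, F = 256, 40.0, 0.6
dx = Lbox / N
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)

# Kinetic term -(1/2) d^2/dx^2 as a second-order finite difference.
K = (np.diag(np.full(N, 1.0 / dx ** 2))
     + np.diag(np.full(N - 1, -0.5 / dx ** 2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx ** 2), -1))

# H = p^2/2m (x) I_2 + F x (x) sigma_z: the two spin components feel
# opposite linear potentials and accelerate apart.
sigma_z = np.diag([1.0, -1.0])
H = np.kron(K, np.eye(2)) + np.kron(np.diag(F * x), sigma_z)   # 2N x 2N
```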

The environment consists of Gaussian collisions that scramble momentum while preserving local position information:

\[ \rho \rightarrow (1-\kappa)\rho + \kappa \mathbb{E}_q[U(q)\rho U(q)^\dagger] \]

where \(U(q) = \exp(iq X)\) and \(q \sim \mathcal{N}(0, \sigma_q^2)\).
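
In the position representation this channel has a closed form: averaging \(U(q)\) over the Gaussian \(q\) multiplies each off-diagonal element \(\rho(x, x')\) by the characteristic function \(\exp(-\sigma_q^2 (x - x')^2 / 2)\), so coherence decays with separation while the diagonal survives. A minimal sketch:

```python
import numpy as np

def collision_channel(rho, x, kappa, sigma_q):
    """One collision step: rho -> (1-kappa) rho + kappa E_q[U rho U+].
    The Gaussian average damps off-diagonals analytically."""
    d = x[:, None] - x[None, :]                    # all pairs x - x'
    damping = np.exp(-0.5 * (sigma_q * d) ** 2)    # E[exp(iq(x - x'))]
    return (1.0 - kappa) * rho + kappa * damping * rho

# e.g. on a pure state: rho = np.outer(psi, psi.conj()) on the grid x,
# then rho = collision_channel(rho, x, kappa=0.1, sigma_q=1.5)
```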

Phase boundary: when does position win?

Scanning the collision rate \(\lambda\), fragment number \(n_{\text{frag}}\), and evolution time \(T\), we find a sharp transition between momentum-preferred and position-preferred regimes. The critical collision rate follows a scaling law:

\[ \lambda_c \approx A \frac{F^\alpha}{T^\beta n_{\text{frag}}^\gamma} \]
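
Given \(\lambda_c\) values from parameter scans, the exponents can be extracted by ordinary least squares on the log-linear model \(\log \lambda_c = \log A + \alpha \log F - \beta \log T - \gamma \log n_{\text{frag}}\). A sketch with synthetic data (hypothetical numbers, used only to show the fit recovers known exponents):

```python
import numpy as np

def fit_scaling(F, T, n_frag, lam_c):
    """Least-squares fit of log lam_c = log A + a*log F - b*log T - g*log n."""
    X = np.column_stack([np.ones_like(F), np.log(F),
                         -np.log(T), -np.log(n_frag)])
    (logA, alpha, beta, gamma), *_ = np.linalg.lstsq(X, np.log(lam_c),
                                                     rcond=None)
    return np.exp(logA), alpha, beta, gamma

# Sanity check on synthetic scans with known exponents (1.0, 1.5, 0.5):
rng = np.random.default_rng(1)
F, T, n = (rng.uniform(0.5, 5.0, 300) for _ in range(3))
lam_c = 2.0 * F ** 1.0 / (T ** 1.5 * n ** 0.5) * rng.lognormal(0, 0.05, 300)
print(fit_scaling(F, T, n, lam_c))   # ~ (2.0, 1.0, 1.5, 0.5)
```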

Concrete prediction: At \((\lambda, n_{\text{frag}}, T, \sigma_q) = (5, 30, 8, 1.5)\) with \(F = 0.6\), the functional selects the position basis with distinguishability ratio \(D_{\text{pos}}/D_{\text{mom}} \approx 1.5\).

Verification: statistics tell the story

At the crossover point, the measurement statistics confirm the basis selection: the position basis provides roughly 50% better discrimination (\(D_{\text{pos}}/D_{\text{mom}} \approx 1.5\)), exactly as the pointer functional predicts.

"How is this different from decoherence?"

Decoherence explains why coherence disappears, but not which basis survives or why outcomes carry specific probabilities. Our approach adds the missing pieces in a three-layer explanation: decoherence suppresses interference between branches, environmental redundancy selects the measurement basis, and flux partition fixes the outcome weights.

Experimental predictions

The framework makes concrete, testable predictions:

  1. Phase boundary: \(\theta^*(\lambda, n_{\text{frag}}, T; F, \sigma_q)\) follows scaling laws
  2. Peak separation: \(\Delta x \propto F T^2\) (classical Stern-Gerlach)
  3. Broadening: \(\sigma \propto \sqrt{\lambda T}\) (collision-driven diffusion; see the numeric check after this list)
  4. Memory kernels: Basis-dependent correlation times \(\tau_c(\text{pos}) \neq \tau_c(\text{mom})\)
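
As one numeric check, prediction 3 follows directly from the collision model above if we read the broadening as the momentum spread (our reading, not the paper's simulation): Poissonian Gaussian kicks at rate \(\lambda\) give \(\sigma_p = \sigma_q \sqrt{\lambda T}\). A quick Monte Carlo confirmation:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, sigma_q, trials = 5.0, 8.0, 1.5, 100_000
n_kicks = rng.poisson(lam * T, size=trials)          # collisions per run
# a sum of n i.i.d. N(0, sigma_q^2) kicks is N(0, n * sigma_q^2)
p_final = rng.normal(0.0, sigma_q * np.sqrt(n_kicks))
print(p_final.std(), sigma_q * np.sqrt(lam * T))     # both ~9.49
```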

Experimental platforms

Several experimental platforms can, in principle, test these predictions.

Falsifiability (the crucial test)

If the pointer functional \(P\) fails to select the basis that maximizes redundancy and distinguishability in controlled experiments, the mechanism is wrong. This makes the theory genuinely falsifiable, unlike interpretational approaches.

Algorithmic implementation

The framework provides a concrete algorithm for predicting measurement outcomes (a toy end-to-end run follows the steps):

  1. Evolve system-environment dynamics (Schrödinger equation)
  2. Fragment environment into overlapping pieces
  3. Compute pointer functional over trial bases
  4. Select optimal basis (maximize \(P\))
  5. Calculate flux partition into channels
  6. Predict probabilities from flux weights
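
As promised above, here is a toy end-to-end run of the six steps on a qubit with a three-qubit environment (a hypothetical minimal model, not the Stern-Gerlach simulation; the pointer functional is again replaced by the fragment-information stand-in, and for a qubit the flux-partition step reduces to projector expectation values):

```python
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.3)

# Step 1: evolve -- the branching interaction copies sigma_z into each
# environment qubit: |0> -> |0,000>, |1> -> |1,111>.
psi = np.zeros(16, dtype=complex)
psi[0b0000], psi[0b1111] = a, b
state = psi.reshape(2, 2, 2, 2)              # axes: (S, e1, e2, e3)

# Step 2: fragment the environment -- here, one qubit per fragment.
def rho_system_fragment(frag):
    keep = [0, 1 + frag]
    out = np.tensordot(state, state.conj(),
                       axes=([i for i in range(4) if i not in keep],) * 2)
    return out.reshape(4, 4)

def proj(theta):
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    dn = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return [np.outer(v, v) for v in (up, dn)]

# Step 3: score trial bases by total fragment information (stand-in
# for the pointer functional P); fragments are read out in the z basis.
def score(theta):
    total = 0.0
    for frag in range(3):
        rho = rho_system_fragment(frag)
        p = np.array([[np.trace(np.kron(Ps, Pe) @ rho).real
                       for Pe in proj(0.0)] for Ps in proj(theta)])
        ps, pe = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 1e-12
        total += np.sum(p[nz] * np.log2(p[nz] / (ps @ pe)[nz]))
    return total

# Step 4: select the basis that maximizes the score.
thetas = np.linspace(0.0, np.pi / 2, 7)
theta_star = thetas[int(np.argmax([score(t) for t in thetas]))]

# Steps 5-6: partition the state into the selected channels and read
# off the predicted probabilities.
rho_S = np.tensordot(state, state.conj(), axes=([1, 2, 3],) * 2)
probs = [np.trace(P @ rho_S).real for P in proj(theta_star)]
print(theta_star, probs)                     # 0.0, approximately [0.7, 0.3]
```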

Conceptual impact

This approach transforms measurement from a mysterious collapse into a mechanistic process.

Connection to temporal geometry

As in the other papers in this series, measurement emerges from flux conservation; here the conserved flux is probability current rather than energy flux. The same mathematical structure that drives gravitational time evolution also governs quantum measurement dynamics.

Bottom line: Quantum measurement isn't mysterious. It's flux partition plus environmental redundancy. Born weights emerge from current conservation, measurement basis from information-theoretic optimization. The result is testable, falsifiable, and mechanistic. No additional axioms required.