Hierarchical Parameter Recovery

Foundational Report 11

Categories: foundations, validation, h_m01

Parameter recovery analysis of the h_m01 hierarchical model at the alignment-study factorial scale (J=18 cells, P=7 regression coefficients). Validates identifiability, calibration, and sampler health under the tightened priors established in Report 10.

Author: Jeff Helzner

Published: May 12, 2026

0.1 Introduction

Report 10 validated the tightened priors for the hierarchical h_m01 model. Following the workflow established for the flat model in Report 4, we now ask:

Given the tightened priors and the \(6 \times 3\) alignment factorial, can we recover the true hyperparameters from data simulated under the prior?

Parameter recovery for h_m01 must simultaneously validate three components:

  1. The population-level regression \((\gamma_0, \boldsymbol{\gamma}, \sigma_{\text{cell}})\).
  2. The cell-level sensitivities \(\alpha_1, \ldots, \alpha_{18}\).
  3. The shared utility simplex \(\boldsymbol{\delta}\).

It must also verify that the non-centred parameterisation (Report 9 §3) yields well-behaved HMC geometry under realistic prior draws.
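To make the non-centred idea concrete, here is a minimal numpy sketch of the generative structure it targets, assuming the log-link form described for h_m01 (the actual Stan code lives in Report 9; the design matrix and prior scales below are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes matching the report: J = 18 cells, P = 7 dummies.
J, P = 18, 7
X = rng.integers(0, 2, size=(J, P)).astype(float)   # stand-in treatment coding
gamma0, gamma, sigma_cell = 2.5, rng.normal(0, 0.3, P), 0.3

# Centred form samples log(alpha_j) ~ Normal(gamma0 + X @ gamma, sigma_cell)
# directly. The non-centred form instead samples standard-normal offsets and
# shifts/scales them, which decouples sigma_cell from the cell effects and
# removes the funnel geometry that HMC struggles with.
z = rng.standard_normal(J)                  # z_j ~ Normal(0, 1)
log_alpha = gamma0 + X @ gamma + sigma_cell * z
alpha = np.exp(log_alpha)                   # cell-level sensitivities, all > 0
```

Under the non-centred form the sampler explores (gamma0, gamma, sigma_cell, z) rather than (gamma0, gamma, sigma_cell, log_alpha), so the posterior scale of z stays roughly unit regardless of how small sigma_cell becomes.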

0.2 Study Design

We use the same alignment factorial as the prior predictive analysis: factors=[6, 3], reference_indices=[0, 0], treatment coding \(\Rightarrow\) \(J = 18\), \(P = 7\). Each iteration draws a fresh parameter vector from the simulation prior, simulates \(M_{\text{per-cell}} = 20\) problems per cell (360 total), and fits the inference model.

Recovery is run live inside the report; Quarto caches the result so the analysis re-executes only when this report changes.

import os
import tempfile

from IPython.display import Image

from utils.study_design_hierarchical import HierarchicalStudyDesign
from analysis.hierarchical_parameter_recovery import HierarchicalParameterRecovery

study = HierarchicalStudyDesign.from_factorial(
    factors=[6, 3], reference_indices=[0, 0],
    K=3, D=2, R=10, M_per_cell=20,
    min_alts_per_problem=2, max_alts_per_problem=4,
    feature_dist="normal", feature_params={"loc": 0, "scale": 1},
    design_name="h_m01_recovery",
)
study.generate()

output_dir = tempfile.mkdtemp(prefix="h_m01_recovery_")

recovery = HierarchicalParameterRecovery(
    study_design=study,
    output_dir=output_dir,
    n_mcmc_samples=2000,
    n_mcmc_chains=4,
    n_iterations=20,
)
recovery.run()

def _img(path, width=720):
    return Image(filename=os.path.join(output_dir, path), width=width)

Recovery configuration:
  n_iterations: 20
  n_mcmc_samples: 2000
  n_mcmc_chains: 4
  J: 18
  K: 3
  D: 2
  P: 7
  M_total: 360

Sampler settings: 4 chains × 2000 post-warmup draws, adapt_delta=0.95, max_treedepth=12. The adapt-delta bump mitigates the handful of divergences that remain even after prior tightening; see Sampler diagnostics below.

0.3 Recovery Diagnostics

0.3.1 Sampler health across iterations

The recovery loop ran 20 independent simulate-and-fit iterations. For each iteration we scanned the CmdStan diagnostic log:

 iter  divergences  transitions  E-BFMI ok
    1            0         8000       True
    2            0         8000       True
    3            2         8000       True
    4            0         8000       True
    5            0         8000       True
    6            0         8000       True
    7            0         8000       True
    8            0         8000       True
    9            0         8000       True
   10            0         8000       True
   11            0         8000       True
   12            0         8000       True
   13            0         8000       True
   14            0         8000       True
   15            0         8000       True
   16            0         8000       True
   17            0         8000       True
   18            0         8000       True
   19            0         8000       True
   20            1         8000       True

Total divergences across 20 iterations: 3 out of 160000 transitions (0.0019%).
E-BFMI satisfied in all 20 iterations: True.

Two of the 20 iterations recorded divergences (two in iteration 3, one in iteration 20) out of 8,000 transitions each; the remaining 18 were fully clean. All 20 satisfied E-BFMI and respected max_treedepth=12. This is a dramatic improvement over the pre-tightening regime, where 18 of 20 iterations showed systemic divergences, and is consistent with healthy HMC geometry at the factorial scale.
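The headline rate follows directly from the diagnostic table:

```python
# Divergence counts per iteration, transcribed from the table above.
divergences = [0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
transitions_per_iter = 8000  # 4 chains x 2000 post-warmup draws

total_div = sum(divergences)
total_trans = transitions_per_iter * len(divergences)
rate_pct = 100 * total_div / total_trans
print(total_div, total_trans, f"{rate_pct:.4f}%")  # 3 160000 0.0019%
```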

0.4 Recovery Statistics

The per-parameter recovery summary is computed across all 20 iterations: for each parameter we report the bias (mean posterior-mean minus true), RMSE, 90 % CI coverage (proportion of iterations in which the 90 % credible interval covers the true value), and the mean CI width.
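The four summary statistics can be computed per parameter from the stacked posterior draws. A self-contained sketch (the helper name and array layout are assumptions, not the project's actual API):

```python
import numpy as np

def recovery_stats(true_vals, draws, level=0.90):
    """Bias, RMSE, CI coverage, and mean CI width for one parameter.

    true_vals: (n_iter,) true value per simulate-and-fit iteration.
    draws:     (n_iter, n_draws) posterior draws per iteration.
    """
    true_vals = np.asarray(true_vals)
    draws = np.asarray(draws)
    post_mean = draws.mean(axis=1)
    # Equal-tailed credible interval at the requested level.
    lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2], axis=1)
    return {
        "bias": float((post_mean - true_vals).mean()),
        "rmse": float(np.sqrt(((post_mean - true_vals) ** 2).mean())),
        "coverage": float(((lo <= true_vals) & (true_vals <= hi)).mean()),
        "ci_width": float((hi - lo).mean()),
    }
```

Applied to well-calibrated draws, coverage should land near the nominal level up to Monte Carlo noise, which motivates the note below the tables.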

Population-level parameters:

     param   bias  rmse  coverage  ci_width
    gamma0  0.032 0.214      1.00     0.851
   gamma_1  0.047 0.451      0.75     1.165
   gamma_2  0.048 0.344      0.95     1.169
   gamma_3 -0.004 0.364      0.90     1.150
   gamma_4  0.072 0.410      0.85     1.165
   gamma_5 -0.029 0.360      0.80     1.139
   gamma_6 -0.018 0.273      0.95     1.008
   gamma_7  0.019 0.278      0.90     1.016
sigma_cell -0.039 0.124      1.00     0.512
  param   bias  rmse  coverage  ci_width
delta_1 -0.037 0.147      0.95     0.405
delta_2  0.037 0.147      0.95     0.405
   param   bias   rmse  coverage  ci_width
 alpha_1 -0.865  5.423      0.85    14.408
 alpha_2  2.486  5.416      0.85    17.034
 alpha_3  0.753  4.875      1.00    16.446
 alpha_4  1.527  6.356      0.85    21.077
 alpha_5  1.822  4.645      0.95    25.622
 alpha_6  0.839  8.069      0.75    24.287
 alpha_7  1.446  6.590      0.90    23.820
 alpha_8  4.582 11.628      0.85    26.847
 alpha_9  2.546  4.221      1.00    24.956
alpha_10  3.232  6.321      0.90    23.323
alpha_11  3.949 11.487      0.90    29.064
alpha_12  2.952  8.293      0.95    27.958
alpha_13  0.992  7.482      0.85    16.274
alpha_14  0.724 10.329      0.80    20.180
alpha_15  0.652  5.222      0.80    17.758
alpha_16  0.104  5.926      0.85    16.696
alpha_17  0.570  6.987      0.85    20.066
alpha_18  0.643  8.444      0.90    18.724

Alpha RMSE range: 4.22 – 11.63
Alpha coverage range: 0.75 – 1.00
Note: Monte Carlo noise in coverage estimates

With only 20 iterations, the Monte Carlo standard error on a 90 % coverage estimate is roughly \(\sqrt{0.9 \times 0.1 / 20} \approx 0.067\). Coverages anywhere in \([0.80, 1.00]\) are statistically consistent with the nominal 0.90 target. Any systematic miscalibration is better detected by SBC (Report 12), which uses 100 simulations and rank-uniformity tests.

0.4.1 Visual recovery

_img("recovery_summary/regression_recovery.png")

Figure 1: Recovery of the population-level regression parameters \((\gamma_0, \boldsymbol{\gamma}, \sigma_{\text{cell}})\). Each point is one iteration; the diagonal represents perfect recovery.

_img("recovery_summary/alpha_recovery.png")

Figure 2: Recovery of the cell-level sensitivities \(\alpha_j\) across the 18 cells over 20 iterations.

_img("recovery_summary/delta_recovery.png")

Figure 3: Recovery of the shared utility simplex \(\boldsymbol{\delta}\).

0.5 Interpretation

0.5.1 Population effects

  • Reference log-sensitivity \(\gamma_0\): bias \(\approx 0.03\), coverage 1.00, CI width 0.85. Recovered cleanly, with the tightest interval among the regression parameters.
  • Prompt main effects \((\gamma_6, \gamma_7)\): biases within \(\pm 0.02\), coverage 0.90–0.95, CI widths \(\approx 1.0\). These are the best-identified of the regression dummies because each prompt dummy gets evidence from six cells (one per LLM).
  • LLM main effects \((\gamma_1, \ldots, \gamma_5)\): biases within \(\pm 0.08\), coverage 0.75–0.95, CI widths \(\approx 1.15\). The CIs are wider than for the prompt effects because each LLM dummy is supported by only three cells (one per prompt).
  • Cell-level noise \(\sigma_{\text{cell}}\): slight negative bias (\(-0.04\)) with coverage 1.00 — expected partial-pooling shrinkage when the true \(\sigma_{\text{cell}}\) is small relative to posterior residual uncertainty.

0.5.2 Cell-level sensitivities

All 18 cell-level \(\alpha_j\) are recovered with biases in the single digits on a scale where true values are typically 10–15. RMSEs of 4.2–11.6 and CI widths of 14–29 reflect the finite-sample limits of 20 problems × \(R = 10\) observations per cell; coverages cluster near the nominal 0.90 target and are all statistically consistent with it (see the Monte Carlo noise note above).

0.5.3 Utility simplex

\(\boldsymbol{\delta}\) is recovered with coverage 0.95 and narrow CI widths (\(\approx 0.41\)). Pooling across all 360 problems gives very strong evidence about the shared utility increments, just as in the flat model.
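The exactly mirrored biases for \(\delta_1\) and \(\delta_2\) in the table above (\(-0.037\) and \(+0.037\)) are forced by the simplex constraint: with two increments summing to one, any estimation error in one component is offset one-for-one in the other. A minimal illustration with hypothetical estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-element simplex estimates: delta_hat = (d, 1 - d).
true_delta1 = 0.6
delta1_hat = np.clip(true_delta1 + rng.normal(0, 0.1, 1000), 0.0, 1.0)
delta2_hat = 1.0 - delta1_hat   # simplex constraint

bias1 = float(delta1_hat.mean()) - true_delta1
bias2 = float(delta2_hat.mean()) - (1.0 - true_delta1)
# The constraint forces bias2 == -bias1, whatever the estimator does.
```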

0.6 Comparison to the Pre-Tightening Regime

For context, the original, wider priors produced:

Table 1: Effect of prior tightening on h_m01 recovery quality.
Metric                 Original priors   Tightened priors
Divergent iterations   18 / 20           2 / 20 (0.002 % of transitions)
α RMSE (typical)       11 – 144          4.2 – 11.6
α CI width (typical)   50 – 230          14 – 29
E-BFMI satisfied       frequently no     always yes
90 % coverage on γ₀    unreliable        1.00

The gains come from preventing the prior predictive from routinely placing \(\alpha\) in the flat-likelihood regime \(\alpha > 100\), where even large changes in the underlying parameters induce negligible changes in choice probabilities.
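The flat-likelihood regime is easy to see numerically. In a softmax sketch of the sensitivity-scaled choice rule (illustrative utilities, not the project's actual likelihood), doubling \(\alpha\) from 100 to 200 barely moves the choice probabilities:

```python
import numpy as np

def choice_probs(alpha, utilities):
    """Softmax choice probabilities at sensitivity alpha."""
    z = alpha * np.asarray(utilities, dtype=float)
    z = z - z.max()            # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

u = [0.50, 0.45, 0.30]         # hypothetical alternative utilities
p_100 = choice_probs(100.0, u)
p_200 = choice_probs(200.0, u)
# Both vectors are essentially a point mass on the best alternative,
# so the likelihood carries almost no information about alpha up here.
```

By contrast, at small \(\alpha\) (say 1 vs 2) the same utilities produce visibly different probability vectors, which is why keeping the prior mass below the saturation regime restores identifiability.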

Note: Provenance and Reproducibility

This report runs HierarchicalParameterRecovery live on every build (Quarto caches the result). The same analysis can be reproduced from the command line via

python scripts/run_hierarchical_parameter_recovery.py \
    --config configs/h_m01_parameter_recovery_config.json

Configuration: 20 iterations × 4 chains × 2000 draws, adapt_delta=0.95, max_treedepth=12, \(M_{\text{per-cell}} = 20\).

0.7 Summary

The hierarchical parameter recovery analysis confirms that under the tightened priors:

  1. HMC geometry is healthy: a 0.002 % divergence rate, E-BFMI always satisfactory, no treedepth saturation.
  2. Population-level effects are identifiable: near-zero biases for \((\gamma_0, \boldsymbol{\gamma})\), with every coverage estimate statistically consistent with the nominal 0.90 target (MCSE \(\approx 0.067\) at 20 iterations); prompt effects are identified ~15 % more sharply than LLM effects, as expected from the \(6 \times 3\) information structure of the design.
  3. Cell-level sensitivities recover cleanly: modest biases and CI widths consistent with the 200-observation-per-cell information budget.
  4. Partial pooling works as intended: \(\sigma_{\text{cell}}\) shrinks mildly toward zero, with full coverage of the true value.

Report 12 provides the formal calibration check via Simulation-Based Calibration — the SBC analogue of the present, finite-iteration parameter recovery study.

Citation

BibTeX citation:
@online{helzner2026,
  author = {Helzner, Jeff},
  title = {Hierarchical {Parameter} {Recovery}},
  date = {2026-05-12},
  url = {https://jeffhelzner.github.io/seu-sensitivity/foundations/11_hierarchical_parameter_recovery.html},
  langid = {en}
}
For attribution, please cite this work as:
Helzner, Jeff. 2026. “Hierarchical Parameter Recovery.” SEU Sensitivity Project, May 12. https://jeffhelzner.github.io/seu-sensitivity/foundations/11_hierarchical_parameter_recovery.html.