T_tau_gradr
This test checks the implementation of the control use_T_tau_gradr_factor, which modifies the radiative gradient so that regions of low optical depth have a temperature that follows the \(T(\tau)\) relation specified by atm_T_tau_relation. This is useful if you'd like to include regions of small optical depth as if they're part of the interior model.
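For orientation, both controls sit in the &controls namelist. The following is a minimal, hypothetical inlist fragment, not this test's actual inlist; 'Eddington' is just one possible value of atm_T_tau_relation:

    &controls
       use_T_tau_gradr_factor = .true.   ! modify gradr so low-tau layers follow the T(tau) relation
       atm_T_tau_relation = 'Eddington'  ! hypothetical choice; the test cycles through all options
    /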
The test changes atm_T_tau_relation every 10 steps to cycle through all the available options. At each step, it computes the root-mean-squared difference (rms, stored as T_rms) between the stellar model's temperature profile and the target \(T(\tau)\) relation in layers with \(\tau < 0.1\), and compares this to a target value set by x_ctrl(1).
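The check itself amounts to a simple rms over the low-\(\tau\) layers. The standalone sketch below is not the test's source: it assumes the Eddington relation \(T^4 = \tfrac{3}{4}T_{\rm eff}^4(\tau + \tfrac{2}{3})\) as the target and uses made-up profile values purely to show the calculation:

    ! Standalone sketch of the rms comparison (hypothetical values, Eddington target).
    program t_tau_rms_sketch
       implicit none
       integer, parameter :: dp = selected_real_kind(15)
       integer, parameter :: nz = 5
       real(dp) :: tau(nz), T_face(nz), Teff, T_check, sumsq, T_rms, tol
       integer :: k, n

       Teff = 5772.0_dp                        ! hypothetical effective temperature [K]
       tau    = [1.0e-3_dp, 1.0e-2_dp, 5.0e-2_dp, 2.0e-1_dp, 1.0_dp]
       T_face = [4.858e3_dp, 4.870e3_dp, 4.945e3_dp, 5.19e3_dp, 6.10e3_dp]
       tol = 10.0_dp                           ! plays the role of x_ctrl(1)

       sumsq = 0.0_dp
       n = 0
       do k = 1, nz
          if (tau(k) >= 0.1_dp) cycle          ! only layers with tau < 0.1 enter the sum
          T_check = Teff*(0.75_dp*(tau(k) + 2.0_dp/3.0_dp))**0.25_dp  ! Eddington T(tau)
          sumsq = sumsq + (T_face(k) - T_check)**2
          n = n + 1
       end do
       T_rms = sqrt(sumsq/max(1,n))

       if (T_rms > tol) then
          print *, 'FAIL: T_rms = ', T_rms, ' exceeds tolerance ', tol
       else
          print *, 'ok: T_rms = ', T_rms
       end if
    end program t_tau_rms_sketch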
If new atm_T_tau_relation options are added to MESA, they must be added to this test case by hand. That is, the implementation does not automatically track all the available options.
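This is because the relation names are hard-coded in the test's source rather than queried from MESA at run time. The sketch below is a hypothetical illustration of such a hard-coded cycle; the list of names is an assumption and may not match the options currently available in MESA:

    ! Hypothetical sketch: cycle a hard-coded list of T(tau) relation names every 10 steps.
    ! A newly added MESA option would have to be appended to this list by hand.
    program cycle_relations_sketch
       implicit none
       character(len=20), parameter :: relations(4) = [ character(len=20) :: &
          'Eddington', 'solar_Hopf', 'Krishna_Swamy', 'Trampedach_solar' ]
       integer :: model_number, i
       do model_number = 1, 40
          i = mod((model_number - 1)/10, size(relations)) + 1
          if (mod(model_number, 10) == 1) print *, model_number, trim(relations(i))
       end do
    end program cycle_relations_sketch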
The target \(T(\tau)\) relation is stored in the extra profile column T_check, which should be compared to T_face rather than T, because the optical depth tau is evaluated at the cell faces.
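For context, an extra profile column like T_check is normally filled from the data_for_extra_profile_columns hook in run_star_extras. The following is a hypothetical outline of such a hook, not the test's actual source, and assumes the Eddington relation purely for illustration:

    ! Hypothetical outline of filling a 'T_check' column (sketch only, Eddington target).
    subroutine data_for_extra_profile_columns(id, n, nz, names, vals, ierr)
       use star_def, only: star_info, maxlen_profile_column_name
       use star_lib, only: star_ptr
       use const_def, only: dp
       integer, intent(in) :: id, n, nz
       character(len=maxlen_profile_column_name) :: names(n)
       real(dp) :: vals(nz,n)
       integer, intent(out) :: ierr
       type(star_info), pointer :: s
       integer :: k
       ierr = 0
       call star_ptr(id, s, ierr)
       if (ierr /= 0) return
       names(1) = 'T_check'
       do k = 1, nz
          ! tau is evaluated at the cell face, so compare this column with T_face, not T
          vals(k,1) = s% Teff*(0.75d0*(s% tau(k) + 2d0/3d0))**0.25d0
       end do
    end subroutine data_for_extra_profile_columns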
If the test fails because T_rms is slightly larger than the tolerance, there are two possibly benign explanations.

1. The interpolation error in T_face contributes too much to T_rms. The tolerance can be increased or the mesh resolution increased (see the inlist sketch after this list).
2. There are layers included in the sum that are becoming convective, in which case the temperature gradient won't follow the (radiative) \(T(\tau)\) relation. The sum can be restricted to smaller optical depths.
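If the first explanation applies, the fix is a matter of inlist settings. A hypothetical example (the values shown are arbitrary):

    &controls
       mesh_delta_coeff = 0.5   ! finer mesh reduces the interpolation error in T_face
       x_ctrl(1) = 20           ! or relax the tolerance on T_rms instead
    /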
If the test fails because T_rms is much larger (orders of magnitude larger) than the tolerance, then there might be a bug in the implementation of T_tau_gradr_factor.
Most options are deliberately left at their default values because they shouldn’t influence the test’s result.
Last-Updated: 2021-05-27 (commit 6086259) by Warrick Ball