Run Options tab

The Run Options tab allows users to determine how Phoenix executes a model. For individual modeling, jobs are always run locally using the Naive pooled run method, which cannot be changed. Population modelers have many more run options.

For detailed explanations of Phoenix Model run methods, see “Run Modes”.

Individual modeling run options

Population modeling run options

Run Modes

Simple run mode table options

Predictive Check Options

Simulation Options

Individual modeling run options

The following options are displayed on the Run Options tab when the Population? checkbox is unchecked.

[Image: Run Options tab (individual modeling)]

Note: Computing the standard errors for a model can be done as a separate step after model fitting. This allows the user to review the results of the model fitting before spending the time computing standard errors.

For engines other than QRPEM:
– After fitting the model, accept all of the final estimates of the fitting.
– Set the number of iterations to zero.
– Rerun with a method selected for the Stderr option.

For QRPEM, use the same steps, but also request around 15 burn-in iterations.

none (no standard error calculations are performed)

Central Diff for the second-order derivative of f uses the form:

$$f''(x) \;\approx\; \frac{f(x+h) - 2f(x) + f(x-h)}{h^{2}} \tag{1}$$

Forward Diff for the second-order derivative of f uses the form:

$$f''(x) \;\approx\; \frac{f(x+2h) - 2f(x+h) + f(x)}{h^{2}} \tag{2}$$
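For illustration only (not part of Phoenix), the following minimal Python sketch implements the two approximations. The function f, the test point, and the fixed step h are all hypothetical; note that the engine itself uses a relative step size (see the SE Step option under the population run options).

```python
def second_derivative(f, x, h=1e-4, scheme="central"):
    """Approximate f''(x) by finite differences.

    'central' uses (f(x+h) - 2 f(x) + f(x-h)) / h**2   -- Eq. (1), error O(h**2)
    'forward' uses (f(x+2h) - 2 f(x+h) + f(x)) / h**2  -- Eq. (2), error O(h)
    """
    if scheme == "central":
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
    return (f(x + 2.0 * h) - 2.0 * f(x + h) + f(x)) / h**2

f = lambda x: x ** 3  # f''(2) = 12 exactly
print(second_derivative(f, 2.0, scheme="central"))  # 12.0 (central is exact for cubics)
print(second_derivative(f, 2.0, scheme="forward"))  # ~12.0006 (O(h) error)
```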

Population modeling run options

The following options are displayed on the Run Options tab when the Population? checkbox is checked.

[Image: Run Options tab (population modeling)]

Not all run options are applicable for every run method. Some options are made available or unavailable depending on the selected run method. For detailed explanations of Phoenix Model run options, see “Run Modes”.

FOCE L-B (First-Order Conditional Estimation, Lindstrom-Bates)

FOCE ELS (FOCE Extended Least Squares)

FO (First Order)

Laplacian 

Naive pooled 

IT2S-EM (Iterated two-stage expectation-maximization)

QRPEM (Quasi-Random Parametric Expectation Maximization)

If the NonParametric checkbox is selected, then the N NonPar field is made available.

In the N NonPar field, type the maximum number of iterations of nonparametric computations to complete during the modeling process.

Note: When a grid is selected, loading the grid can take some time and it may seem that the application has stopped responding.

Make sure that you have adequate disk space on the grid for execution of all jobs. A job will fail on the grid if it runs out of disk space.

none (no standard error calculations are performed)

 

Central Diff, which uses the form:

$$f''(x) \;\approx\; \frac{f(x+h) - 2f(x) + f(x-h)}{h^{2}} \tag{3}$$

Forward Diff, which uses the form:

$$f''(x) \;\approx\; \frac{f(x+2h) - 2f(x+h) + f(x)}{h^{2}} \tag{4}$$

Hessian: The Hessian method of parameter uncertainty estimation evaluates the uncertainty matrix as R⁻¹, where R⁻¹ is the inverse of the second-derivative matrix of the −2 log-likelihood function. This is the only method available for individual models.

Sandwich: Sandwich estimators for standard errors have both advantages and disadvantages relative to estimators based on just the second derivatives (Hessian matrix) of the log likelihood. The main advantage is that, in simple cases, sandwich estimators are robust to covariance model misspecification, but not to mean model misspecification. The main disadvantage is that they can be less efficient than the simpler Hessian-based estimators when the model is correctly specified. (The generic sandwich form is sketched after this list.)

Fisher Score: The Fisher Score method is fast and robust, but less precise than the Sandwich and Hessian methods.

Auto-detect: When selected, NLME automatically chooses the standard error calculation method. Specifically, if both Hessian and Fisher score methods are successful, then it uses the Sandwich method. Otherwise, it uses either the Hessian method or the Fisher score method, depending on which method is successful. The user can check the Core Status text output to see which method is used.
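For reference, the generic sandwich form (standard in the statistical literature; this section does not spell out Phoenix's exact convention, and scaling conventions vary across references) is

$$\widehat{\operatorname{Var}}(\hat{\theta}) \;=\; R^{-1} S\, R^{-1},$$

where R is the second-derivative matrix of the −2 log-likelihood, as in the Hessian method, and S is a cross-product matrix built from the per-subject score (gradient) contributions.

The Auto-detect rule amounts to the following selection logic (an illustrative Python sketch, not the Phoenix API; the behavior when neither method succeeds is an assumption):

```python
def choose_stderr_method(hessian_ok: bool, fisher_ok: bool) -> str:
    """Pick the standard error method the way Auto-detect is described above."""
    if hessian_ok and fisher_ok:
        return "Sandwich"      # both ingredients succeeded
    if hessian_ok:
        return "Hessian"
    if fisher_ok:
        return "Fisher Score"
    return "none"              # assumption: no standard errors reported
```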

In the LAGL nDig field, enter the number of significant decimal digits for the LAGL algorithm to use to reach convergence. This is used with the FOCE ELS and Laplacian run methods. LAGL, or Laplacian General Likelihood, is a top-level log-likelihood optimization that applies to a log-likelihood approximation summed over all subjects.

In the SE Step field, enter the standard error numerical differentiation step size. SE Step is the relative step size used for computing numerical second derivatives of the overall log-likelihood function for model parameters when computing standard errors. This value affects all run methods except IT2S-EM, which does not compute standard errors.

In the BLUP nDig field, enter the number of significant decimal digits for the BLUP estimation to use to reach convergence. This is used with all run methods except Naive pooled. BLUP, or Best Linear Unbiased Predictor, is an inner optimization that is done on a local log likelihood for each subject. BLUP optimization is done many times over during a modeling run.

In the Modlinz Step field, enter the model linearization numerical differentiation step size. Modlinz Step is the step size used for numerical differentiation when linearizing the model function in the FOCE approximation. This option is used by the FOCE ELS and FOCE L-B Run methods, the IT2S-EM method when applied to models with Gaussian observations, and the Laplacian method when the FOCEhess option is selected and the model has Gaussian observations.

In the ODE Rel. Tol. field, enter the relative tolerance value for the Max ODE.

In the ODE Abs. Tol. field, enter the absolute tolerance value for the Max ODE.

In the ODE max step field, enter the maximum number of steps for the Max ODE.
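These fields play the same roles as the tolerance and step controls of general-purpose ODE solvers. For intuition only, here is a SciPy sketch (SciPy is not what Phoenix uses, and SciPy's max_step caps the step size, whereas the ODE max step field caps the number of steps):

```python
from scipy.integrate import solve_ivp

# One-compartment elimination: dA/dt = -ke * A, A(0) = 100
ke = 0.1
sol = solve_ivp(lambda t, A: -ke * A, t_span=(0.0, 24.0), y0=[100.0],
                rtol=1e-6,     # relative tolerance, cf. ODE Rel. Tol.
                atol=1e-8,     # absolute tolerance, cf. ODE Abs. Tol.
                max_step=0.5)  # step-size cap (an analogy for ODE max step)
```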

The following are additional advanced options available only for the QRPEM method.

normal: Multivariate normal (MVN)

double-exponential: Multivariate Laplace (MVL). The decay rate is exponential in the negative of the sum of absolute values of the sample components. The distribution is not spherically symmetric, but concentrated along the axes defined by the eigenvectors of the covariance matrix. MVL is much faster to compute than MVT.

direct: Direct sampling.

T: Multivariate t (MVT). The MVT decay rate is governed by the degrees of freedom: lower values correspond to slower decay and fatter tails. Enter the number of degrees of freedom in the Imp Samp DOF field. A value between 4 and 10 is recommended, although any value between 3 and 30 is valid.

mixture-2: Two-component defensive mixture (see T. Hesterberg, “Weighted average importance sampling and defensive mixture distributions,” Tech. Report No. 148, Division of Biostatistics, Stanford University, 1991). Both components are Gaussian, have equal mixture weights of 0.5, and are centered at the previous iteration's estimate of the posterior mean. Both components have a variance-covariance matrix that is a scaled version of the estimated posterior variance-covariance matrix from the previous iteration: one component uses a scale factor of 1.0, while the other uses a scale factor determined by the acceptance ratio. (A sampling sketch follows this list.)

mixture-3: Three-component defensive mixture. Similar to the two-component case, but with equal mixture weights of 1/3 and scale factors of 1, 2, and the factor determined by the acceptance ratio.
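A minimal NumPy sketch of sampling from the two-component defensive mixture (illustrative, not the Phoenix implementation; the fixed scale2 value stands in for the acceptance-ratio-driven factor):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_defensive_mixture(mean, cov, n, scale2=2.0):
    """Draw n samples from 0.5 * N(mean, cov) + 0.5 * N(mean, scale2 * cov)."""
    mean, cov = np.asarray(mean), np.asarray(cov)
    L = np.linalg.cholesky(cov)                        # cov = L @ L.T
    comp = rng.integers(0, 2, size=n)                  # equal weights 0.5 / 0.5
    scale = np.where(comp == 0, 1.0, np.sqrt(scale2))  # per-sample std-dev factor
    z = rng.standard_normal((n, mean.size))
    return mean + scale[:, None] * (z @ L.T)

# Center the sampler at the previous iteration's posterior mean and covariance:
samples = sample_defensive_mixture(np.zeros(2), np.eye(2), n=1000)
```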

Note: ISAMPLE, Imp Samp Type, and Acceptance ratio can all be used to increase or decrease the coverage of the tails of the target conditional distribution by the importance sampling distribution.

Run Modes

Only the Simple and Simulation run modes are available for individual models. (Refer to “Run Modes” for more details about the following modes.)

[Image: Best scenario selected on the Scenarios tab]

Covariate Stepwise and Shotgun searches generate only the Overall worksheet as output. To generate the full results, change the Run Mode to Scenarios in the Run Options tab and re-execute. Since the best scenario is selected automatically after the covariate search, it will be used during the Scenarios run.

Simple run mode table options

If the Simple run mode is selected, users can add extra tables to the output. For example, users can specify an output table whose rows represent instances where particular covariates are set, particular dosepoints receive a dose, or particular observables are observed.

[Image: Add custom table options]

All model variables, including stparms, fixefs, secondary, and freeform assigned variables, can be added as output in the table.

Predictive Check Options

Main tab

Observation tabs

Main tab 

[Image: Predictive Check Options area, Main tab]

None: do not apply a correction

Proportional: use the proportional rule. Choosing this option displays the Pred. Variance Corr. option. If this checkbox is turned on, a prediction-variability corrected observation will be calculated and used in the plots.

Additive: use the additive rule. Choosing this option displays the Pred. Variance Corr. option. If this checkbox is turned on, a prediction-variability corrected observation will be calculated and used in the plots.

Note: If the observations do not typically have a lower bound of zero, the Additive option may be more appropriate.
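The correction rules themselves are not defined in this section. For reference, the prediction corrections commonly used for prediction-corrected visual predictive checks (Bergstrand et al.) take the following form; it is an assumption here that Phoenix's Proportional and Additive options follow this convention:

$$y^{\mathrm{prop}}_{ij} \;=\; y_{ij}\cdot\frac{\widetilde{PRED}_{b}}{PRED_{ij}}, \qquad y^{\mathrm{add}}_{ij} \;=\; y_{ij} + \bigl(\widetilde{PRED}_{b} - PRED_{ij}\bigr),$$

where $y_{ij}$ is an observation, $PRED_{ij}$ is its population prediction, and $\widetilde{PRED}_{b}$ is the median population prediction in the observation's bin $b$.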

Observation tabs 

For each continuous observation in the model, a separate tab is made available.

[Image: Predictive Check Options area, Observations tab]

t is time (the default)

TAD is time after dose

PRED is population (zero-eta) prediction

other... displays a field to type in the name of any other variable in the model

Prediction intervals, or the quantiles of the predictive check simulation, might not be very smooth if there are a lot of time deviations in the dataset. The predictive check therefore offers the option of binning the independent variable.

None: Every unique X-axis value is treated as its own bin (the default method)

K-means: Observations are initially grouped into bins according to their sequence order within each subject. Then the K-means algorithm rearranges the contents of the bins so every observation is in the bin that has the nearest mean X value, or “center of gravity.” Starting with an initial set of bins containing observations (with a mean X value of these observations), the K-means algorithm:

1) transfers each observation to the bin with the closest mean to the observation, and
2) recalculates the mean of each bin. This is repeated until no further observations change bins. Bins that lose all their observations are deleted.

Note that the algorithm is sensitive to the initial set of bins. (A sketch of the loop follows the list below.)

Explicit centers: Specify a series of X values to use as the center values for each bin. Observations are placed into the nearest bin.

Explicit boundaries: Specify a list of X value boundaries between the bins. Observations are placed in the nearest bin. The center value of each bin is taken as the average X value of the observations in the bin.

In the case of Explicit centers and Explicit boundaries, the numerical values, separated by commas, are automatically sorted into ascending order and duplicates are eliminated. In all cases, bins having no observations are eliminated.
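As a minimal illustration of the K-means binning loop described above (a NumPy sketch that takes explicit initial centers; Phoenix seeds the bins from each subject's sequence order, which is not reproduced here):

```python
import numpy as np

def kmeans_bins(x, centers, max_iter=100):
    """1-D K-means binning: assign each x to the nearest bin center, then
    recompute each center as its bin's mean X, repeating until assignments
    stabilize. Bins that lose all observations are dropped."""
    x = np.asarray(x, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(max_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == k].mean() for k in np.unique(labels)])
        if new.size == centers.size and np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

times = np.array([0.0, 0.1, 0.9, 1.1, 3.8, 4.2, 4.3])
centers, labels = kmeans_bins(times, centers=[0.0, 1.0, 4.0])
```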

Select Treat BQL as LLOQ to replace BQL data less than the LLOQ value with the LLOQ value in Observations and related worksheets.

Select BQL Fraction from the menu to have the amount of BQL data checked and its fraction compared with the quantile level. If the fraction of BQL data is more than the defined quantile, the corresponding observed data are not shown in the VPC plot or in the PredCheck_ObsQ_SimQCI/PredCheck_SimQ and Pop PredCheck ObsQ_SimQCI/Pop PredCheck SimQ worksheets.

[Image: Quantile graph]

Users can optionally choose to calculate confidence intervals for the prediction intervals, or predictive check quantiles. Since each simulated replicate is like the original dataset, the quantiles are first obtained at each stratum-bin-time for each replicate. For each stratum-bin-observed quantile, this yields a cloud of values, one per replicate. Then quantiles of the quantiles are calculated by stratum-bin-time over all replicates, corresponding to a confidence interval of the simulated quantiles.
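A minimal NumPy sketch of this quantiles-of-quantiles computation on synthetic data (the array shapes, the 95% quantile, and the 90% confidence level are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# sims[r, b, i]: simulated observation i in stratum-bin b of replicate r
sims = rng.lognormal(size=(200, 10, 30))

q95 = np.quantile(sims, 0.95, axis=2)     # the 95% quantile within each replicate
ci_lo = np.quantile(q95, 0.05, axis=0)    # quantiles of the quantiles across
ci_hi = np.quantile(q95, 0.95, axis=0)    # replicates: a 90% CI per bin
median_q = np.quantile(q95, 0.50, axis=0)
```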

With predictive check, there is one level of simulated quantiles of observations (e.g., the values entered in the Quantile % field). During simulation, the model is purposely perturbed so that the predicted values of the observations fall in a range. The intent is to determine whether the range includes the actual observations that came in on the original data. The Pop PredCheck ObsQ_SimQ plot shows the simulated quantiles:

[Image: Pop PredCheck ObsQ_SimQ plot]

The blue dots are the actual original observations. The red lines are 5%, 50%, and 95% quantiles from the actual original observations. The black lines are the 5%, 50%, and 95% quantiles from the simulated observations. The example plot above shows a good match between them.

Sometimes the user wants to determine the confidence level of the simulated quantiles. To see how much the simulated quantiles are themselves variable, check the Quantile checkbox. This provides an additional plot of confidence intervals of the simulated quantiles (Pop PredCheck ObsQ_SimQCI plot).

[Image: Pop PredCheck ObsQ_SimQCI plot]

In place of each simulated quantile black line, there are two black lines, representing the 10% and 90% confidence intervals of that quantile. The shading aids in visualizing the variation. Notice how the red lines fall inside the black lines (i.e., within the shaded area), which is a positive result.

If there are additional observations from a questionnaire involved, the observation cannot be predicted; only the likelihood of each possible answer can be observed or predicted. If, for example, there are four answers to the questionnaire (0, 1, 2, 3), then there will be four observations or predictions. The following image shows the Pop PredCheck ObsQ_SimQ plot for observation 0.

[Image: Pop PredCheck ObsQ_SimQ plot for a categorical observation]

The red line is the actual fraction of observations that were 0. The black lines are the 5%, 50%, and 95% quantiles of the predicted probability of seeing 0. Note that the red line falls between the black lines most of the time.

Simulation Options

[Image: Simulation options, Tables tab]

A Phoenix Model can also perform a Monte Carlo population PK/PD simulation for population models when the Simulation run mode is selected. Simulations can be performed with built-in, graphical, or textual models. If a table is requested, two separate simulations are performed automatically: one for the predictive check and another for each table. A simulation table contains only a time series and a set of variables. If one of the variables is a prediction, like C, then its observed variable, like CObs, is also included and simulated with observational error.
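As a toy illustration of a prediction and its observed counterpart simulated with observational error (a hypothetical one-compartment IV bolus model, not Phoenix's internals):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 24.0, 50)  # cf. "# points" and "Max X Range"

n_subjects = 20
# Between-subject variability on the elimination rate constant
ke = 0.1 * np.exp(rng.normal(0.0, 0.3, size=n_subjects))
C = 100.0 * np.exp(-np.outer(ke, t))                   # prediction C, one row per subject
CObs = C * (1.0 + 0.1 * rng.standard_normal(C.shape))  # CObs: C plus proportional error
```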

Specify the number of simulated data points to generate in the # points field. The maximum number of simulation points allowed is 1,000. The value entered applies to all dependent vari­ables.

Use the Max X Range field to specify the maximum time or independent variable value used in the simulation.

In the Y variables field, specify the desired output variable(s) to capture from the model. The captured variable(s) is displayed, in # points equal time/X increments, in the simulation work­sheet.

Check the Sim at observations? checkbox to have the simulation occur at the times that observations happen as well as at the given time points.

Add other simulation tables using the other tools in the same manner as under Predictive Check.

For Population models:

Specify the number of simulated replicates to generate in the # replicates field. The maximum number of replicates allowed is 10,000.

If desired, designate a directory for the result files in the Copy result files to directory field or use the Browse button. If a directory is defined, a csv file with simulation results for all replicates will be placed there.

A worksheet and a simulation file are created in the results: PredCheckAll and Rawsimtbl01.csv. The simulation file is created externally because, depending on the number of replicates, it can be very large and affect performance. The PredCheckAll worksheet contains the predictive check simulated data points and Rawsimtbl01.csv contains the simulated data. All other results correspond to the model fit, because Phoenix fits the model before performing simulations.

 

