First Order is applicable to Gaussian data only. The FO engine is similar to the FOCE ELS algorithm in that it requires a single top-level minimization of an objective function representing an approximate negative marginal log likelihood. However, the FO objective is simpler and much faster to evaluate than the FOCE ELS objective and does not require any nested conditional optimizations. Whereas the FOCE approximation used in FOCE ELS and FOCE L-B requires a model linearization around the current optimal ETAs computed in a conditional step, the FO approximation always linearizes around ETA=0. This means that FO gains speed by omitting all conditional steps, but in doing so sacrifices accuracy. FO is often the fastest and most reliably convergent method, but produces results with the poorest statistical quality in terms of bias and RMSE with respect to true values.
In addition, for purposes of computing statistical weights in the residual error model, FO uses the population prediction obtained with ETA=0. This is less accurate than the individual prediction used by other methods and contributes to the relatively poor statistical quality of FO estimates.
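The FO approximation described above can be sketched for a single subject. This is a minimal illustration, not the engine's actual implementation: the function `f(theta, eta)` is a hypothetical user-supplied structural model, the Jacobian in ETA is taken by forward differences, and an additive residual error with variance `sigma2` stands in for a general residual error model. Note that both the linearization point and the residual weighting use the population prediction `f(theta, 0)`:

```python
import numpy as np

def fo_neg2ll(theta, y, f, omega, sigma2, n_eta, h=1e-5):
    """Approximate -2 log marginal likelihood for one subject under the
    First Order (FO) method.

    FO linearizes the model f in eta around eta = 0:
        y ~ N( f(theta, 0),  Z @ omega @ Z.T + sigma2 * I )
    where Z is the Jacobian of f with respect to eta at eta = 0 and the
    residual variance is evaluated at the population prediction.
    """
    eta0 = np.zeros(n_eta)
    pred = f(theta, eta0)                    # population prediction (ETA = 0)
    # Forward-difference Jacobian of f in eta at eta = 0
    Z = np.empty((len(y), n_eta))
    for j in range(n_eta):
        e = eta0.copy()
        e[j] = h
        Z[:, j] = (f(theta, e) - pred) / h
    V = Z @ omega @ Z.T + sigma2 * np.eye(len(y))   # marginal covariance
    r = y - pred
    _, logdet = np.linalg.slogdet(V)
    return logdet + r @ np.linalg.solve(V, r) + len(y) * np.log(2 * np.pi)
```

For a model that is exactly linear in ETA this expression is the exact -2 log marginal likelihood; the bias FO incurs in practice comes from applying the same formula to nonlinear models.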
Iterations for FO simply correspond to iterations of the quasi-Newton optimization algorithm. In principle only a single pass through the quasi-Newton method is required, but the Model implementation repeats the optimization, restarting from the final results of the previous run, until successive runs yield log likelihoods that agree to within a tolerance of 0.001.
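The restart scheme just described can be sketched as follows. This is only an illustration of the stopping logic, assuming a BFGS quasi-Newton optimizer; the engine's actual optimizer, restart count, and convergence checks may differ:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_with_restarts(objective, x0, tol=0.001, max_restarts=20):
    """Repeat a quasi-Newton (BFGS) minimization, each run starting from
    the previous run's final point, until two successive runs agree in
    objective value to within `tol`.
    """
    res = minimize(objective, x0, method="BFGS")
    for _ in range(max_restarts):
        prev = res.fun
        res = minimize(objective, res.x, method="BFGS")
        if abs(res.fun - prev) <= tol:
            break
    return res
```

Restarting from the previous solution costs little when the first run has already converged, but guards against premature termination of a single quasi-Newton pass.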
The Laplacian engine is applicable to both Gaussian data and data modeled with general user-supplied likelihoods such as count, categorical, and time-to-event data. The Laplacian method has the same basic structure as FOCE ELS in that a top-level minimization of an approximation to the negative marginal log likelihood is performed, but the details of the approximation are different and more computationally complex than those of FOCE ELS.
The Laplacian engine is based on approximating the marginal log likelihood with a Laplacian approximation to the integral of the joint likelihood. This replaces the joint likelihood with a "nearby" Gaussian function, called the Gaussian kernel, that can be integrated analytically. Determining the approximating Gaussian function requires both a conditional step and a numerical evaluation of the second derivatives of the joint log likelihood function at each point where the top-level objective function is evaluated. This is usually more computationally intensive and less numerically reliable than a FOCE ELS objective function evaluation. Laplacian is often regarded as slightly more accurate, but slower and numerically less reliable, than FOCE algorithms in Gaussian data cases where FOCE is applicable.
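The two steps above (the conditional step that locates the mode, and the numerical second derivatives that define the Gaussian kernel) can be sketched for a single subject. This is a generic illustration of the Laplace approximation, not the engine's implementation: `neg_joint_ll(eta)` is a hypothetical negative joint log likelihood in the subject's ETAs, and the Hessian is formed by finite differences:

```python
import numpy as np
from scipy.optimize import minimize

def laplace_neg2ll(neg_joint_ll, n_eta, h=1e-4):
    """Laplace approximation to -2 log of the marginal likelihood
        L = integral of exp(-neg_joint_ll(eta)) d eta.

    Step 1 (conditional step): find the mode eta* of the joint likelihood.
    Step 2: numerically evaluate the Hessian H of neg_joint_ll at eta*,
    which defines the Gaussian kernel, then integrate it analytically:
        -2 log L ~= 2 * neg_joint_ll(eta*) + log det(H) - n_eta * log(2*pi)
    """
    res = minimize(neg_joint_ll, np.zeros(n_eta), method="BFGS")
    eta_star = res.x
    # Finite-difference Hessian of the negative joint log likelihood
    H = np.empty((n_eta, n_eta))
    for i in range(n_eta):
        for j in range(n_eta):
            ei = np.zeros(n_eta); ei[i] = h
            ej = np.zeros(n_eta); ej[j] = h
            H[i, j] = (neg_joint_ll(eta_star + ei + ej)
                       - neg_joint_ll(eta_star + ei)
                       - neg_joint_ll(eta_star + ej)
                       + neg_joint_ll(eta_star)) / h**2
    _, logdet = np.linalg.slogdet(H)
    return 2 * res.fun + logdet - n_eta * np.log(2 * np.pi)
```

When the joint log likelihood is exactly quadratic in ETA the approximation is exact; the per-evaluation cost of the mode search plus the numerical Hessian is what makes each top-level objective evaluation more expensive than in FOCE ELS.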
Note: The Laplacian method with FOCE Hess selected is the same as NONMEM's FOCE engine when NAGQ=1. When FOCE Hess is not selected, it is similar to NONMEM's Laplacian engine.
© Certara USA, Inc. All rights reserved.