Control modes configuration

Phoenix can implement parallel computing in two distinct ways to make the most of computational grids and multicore computers. A description of the two methods, along with their advantages and challenges, can help in selecting the best approach for each project.

PBM (Parallelizing By Model): Individual NLME models are sent to individual computation cores for execution. One example of PBM would be a 200-replicate bootstrap, which requires 200 independent NLME models to be run. Using PBM, the 200 models would be distributed across 200 separate compute nodes, with each model running on a single compute node from initial estimates to final parameters. PBM is useful when many NLME models must be executed simultaneously.
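
The distribution pattern behind PBM can be pictured outside of Phoenix with a short, hypothetical sketch: many independent estimation jobs, one per core, sharing nothing while they run. The function run_replicate and the pool size below are illustrative placeholders, not Phoenix APIs.

    # Conceptual sketch of PBM (not Phoenix code): each bootstrap replicate is a
    # complete, independent NLME estimation that occupies one core from start to end.
    from concurrent.futures import ProcessPoolExecutor

    def run_replicate(replicate_id):
        # Hypothetical placeholder: resample the data set, estimate the model,
        # and return the final parameter estimates for this replicate.
        return {"replicate": replicate_id, "status": "converged"}

    if __name__ == "__main__":
        # 200 independent models, one per worker; no communication between them.
        with ProcessPoolExecutor(max_workers=200) as pool:
            results = list(pool.map(run_replicate, range(200)))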

PWM (Parallelizing Within Model): An individual NLME model is spread across multiple computation cores for execution. One example of PWM would be a simple estimation of a single PK/PD model. Using PWM, that one model could be spread across 50 computation cores, allowing minimization to complete more quickly than it would on the local computer alone. PWM is useful for models that require long run times to achieve convergence.
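
PWM, by contrast, splits the work inside a single estimation across cores. One natural point of parallelism in NLME estimation is the per-subject contribution to the objective function, since subjects are conditionally independent. The sketch below assumes that structure purely for illustration; subject_loglik, total_loglik, and the data layout are hypothetical and are not Phoenix internals.

    # Conceptual sketch of PWM (not Phoenix code): one model, with the per-subject
    # log-likelihood contributions evaluated in parallel at every optimizer step.
    from concurrent.futures import ProcessPoolExecutor
    from functools import partial

    def subject_loglik(theta, subject_data):
        # Hypothetical placeholder: one subject's contribution to the
        # log-likelihood for the current parameter vector theta.
        return 0.0

    def total_loglik(theta, subjects, pool):
        # Farm the subject-level evaluations out to the workers and sum them;
        # the optimizer driving this objective still sees a single model.
        return sum(pool.map(partial(subject_loglik, theta), subjects))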

Phoenix supports both PBM and PWM, as well as a combination of the two. An example of combining PBM and PWM can be seen in a stepwise covariate search. During the first step, assume there are 8 models to be run (the base model plus 7 possible covariates). Phoenix will run all 8 models simultaneously (PBM), with each model using 20 computation cores (PWM). Combining PBM and PWM can be extremely powerful for reducing overall run times during complex PK/PD model development.
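
The nesting in this example can be pictured as two layers of parallelism: an outer layer launching the 8 candidate models at once (PBM) and an inner layer giving each model its own 20 cores for estimation (PWM), for a total demand of 160 cores. The sketch below only illustrates this core accounting; the model names and estimate_model function are hypothetical.

    # Conceptual sketch of combining PBM and PWM in one covariate-search step
    # (not Phoenix code): 8 candidate models run concurrently, each of which
    # would itself be parallelized across 20 cores as in the PWM sketch above.
    from concurrent.futures import ProcessPoolExecutor

    CANDIDATE_MODELS = ["base"] + ["base+cov%d" % i for i in range(1, 8)]
    CORES_PER_MODEL = 20  # inner PWM width; total demand is 8 * 20 = 160 cores

    def estimate_model(model_name):
        # Hypothetical placeholder for a full PWM-parallelized estimation.
        return model_name, "objective function value"

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=len(CANDIDATE_MODELS)) as pool:
            step_results = dict(pool.map(estimate_model, CANDIDATE_MODELS))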

The following outlines the parallelization method implemented for each run mode and each computation platform supported in Phoenix NLME 8.4:

Simple:

Windows: MultiCore (PBM), MPI (PWM)
Linux: MultiCore (PBM), MPI (PWM), SGE(_MPI)/LSF(_MPI)/TORQUE(_MPI)/SLURM(_MPI) (PWM)

Scenarios Run:

Windows: MultiCore (PBM), MPI (PWM and PBM), LSF_MPI (PWM and PBM)
Linux: MultiCore (PBM), MPI (PWM and PBM), SGE(_MPI)/LSF(_MPI)/TORQUE(_MPI)/SLURM(_MPI) (PWM and PBM)

Stepwise Cov Search Run:

Windows: MultiCore (PBM), MPI (PWM and PBM)
Linux: MultiCore (PBM), MPI (PWM and PBM), SGE(_MPI)/LSF(_MPI)/TORQUE(_MPI)/SLURM(_MPI) (PWM and PBM)

Shotgun Cov Search Run:

Windows: MultiCore (PBM), MPI (PWM and PBM)
Linux: MultiCore (PBM), MPI (PWM and PBM), SGE(_MPI)/LSF(_MPI)/TORQUE(_MPI)/SLURM(_MPI) (PWM and PBM)

Profile Run:

Windows: MultiCore (PBM), MPI (PWM and PBM)
Linux: MultiCore (PBM), MPI (PWM and PBM), SGE(_MPI)/LSF(_MPI)/TORQUE(_MPI)/SLURM(_MPI) (PWM and PBM)

Predictive Check Run: No parallelization method implemented.

Simulation Run: No parallelization method implemented.

Selecting the appropriate grid mode in the Phoenix Preferences dialog is critical to achieving the desired type of parallelization for the submitted run mode. For example, an NLME model submitted in Simple run mode to a compute grid set to Linux/SGE will run on a single core of the grid, whereas the same model submitted to a compute grid set to Linux/SGE_MPI will be parallelized using PWM.
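
As a quick sanity check before submitting a long run, it can help to map the selected grid mode to the parallelization a Simple run will actually receive, following the example above. The mapping and helper function below are illustrative only and are not a Phoenix API.

    # Illustrative only (not a Phoenix API): the grid mode chosen in Preferences
    # determines the parallelization a Simple run receives, per the example above.
    PARALLELIZATION_FOR_SIMPLE_RUN = {
        "Linux/SGE": "single core on the grid (no within-model parallelization)",
        "Linux/SGE_MPI": "PWM (the model is parallelized across grid cores via MPI)",
    }

    def expected_parallelization(grid_mode):
        return PARALLELIZATION_FOR_SIMPLE_RUN.get(grid_mode, "unknown grid mode")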

