August 2023: Entry for PHX-21179 added.
May 2023: Entry for INTG-4276 added.
August 2022: Entries for SUPPORT-1651 and PHX-20430 added.
March 2022: Entries fixed in Phoenix 8.3.5: QC 16931, QC 18125, QC 18126, QC 18152, QC 18155, QC 18159, QC 18172.
March 2022: Entries for QC 17920, QC 18046, QC 18103, QC 18165, QC 18172, QC 18180, and QC 18182 added.
January 2022: Entries for QC 18125, QC 18126, QC 18152, QC 18155, and QC 18159 added.
December 2021: Entry for QC 16931 added.
November 2021: Removed QC 18110 and QC 18111 (fixed in 8.3.4). QC 18119 was also fixed in 8.3.4.
October 2021: Entries for QC 18110 and QC 18111 added. Removed QC 18053 (fixed in 8.3.3).
July 2021: Entries for QC 18070, QC 18075, QC 18078, QC 18080, and QC 18087 added.
February 2021: Entry for QC 17741 added.
January 2021: Entries for PHX-8008 and PHX-8011 added.
September 2020: Entries for QC 18048, QC 18049, and QC 18053 added.
June 2020: Phoenix 8.3 release. Known issues are grouped into the following categories:
Column Properties
Data
Data Wizard
Plots
Reporter
Tables
BQL
Dependencies
File I/O
History
ggquickeda
JMS
Licensing
Locking
NONMEM
Object Browser
PKS
Projects
PsN
R Shell
User Interface
Word Export
Convolution
Levy Plot
Workflow
Engines
Grid Computing
Model Comparer
User Interface
Bioequivalence
LinMix
NCA
Nonparametric Superposition
WinNonlin Classic Modeling
Issue with Phoenix 8.3.4 installation (SUPPORT-1651): If you encounter the following error when installing Phoenix 8.3.4, please contact support (E-mail: support@certara.com; Web: https://certaracommunity.force.com/support/s/).
"Microsoft Visual C++ Runtime 12.0 could not be installed... Phoenix Setup cannot continue."
An unhandled exception occurs when changing a column name to an existing column name that differs only in case (QC 18075).
The Unit Builder does not correctly convert units involving the prefix deca [dk] (QC 12027): When the prefix deca [dk] is used in unit conversion, the results are incorrect. The conversion process is using 1.E+2 instead of 1.E+1. In addition, the abbreviation of [dk] is incorrect and should be [da].
Two unique numbers that are identical out to the 14th significant digit or greater are treated as the same number (QC 12034, QC 13582): Computer accuracy is limited to about 14 significant digits. This can cause two unique numbers with more than 14 significant digits to be treated as the same number. As an example, using the NCA Slopes Selector when the time data has more than 14 significant digits, and two points differ at the 14th digit (e.g., 240.333333333333 and 240.333333333334), one point appears selected for the time range but it is not used in the Slopes Select. Furthermore, the point representing a number, such as 240.333333333333, is selectable in the Slopes Selector and can show as the End Time for a user-specified range, but will not be used in the slope calculation, causing Lambda_Z_Lower and Lambda_Z_Upper in the output to be off by 1e-14.
One workaround is to click on Slopes and paste the full number. Another possibility is, prior to NCA, to round the time data so it has 14 significant digits or fewer (use a custom transformation in the Data Wizard with a formula such as round(time,10) or roundsig(time,14)).
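The same rounding can be sketched outside Phoenix; the following is a minimal R illustration (the vector name time is an assumption), where base R's signif() plays the role of the roundsig() transformation:
    time <- c(240.333333333333, 240.333333333334)
    signif(time, 14)   # both values round to 240.33333333333, removing the 1e-14 discrepancy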
Numeric columns that contain more than 17 digits/characters are not displayed correctly (QC 12164): Very large integers lose precision due to rounding to significant digits. The workaround is to change the column type to Text.
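The root cause is standard double-precision arithmetic; a quick R illustration of the limit:
    x <- 99999999999999999   # a 17-digit integer stored as a double
    x + 1 == x               # TRUE: adjacent doubles at this magnitude are 16 apart, so adding 1 is lost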
Dataset is deleted when “Refresh from Source” is used after Edit in Excel with formulas saved (QC 15105): The workaround is to right-click the dataset in the Object Browser and select Edit in Excel again, make a change to the dataset (this is important), and then save the changes back to Phoenix; the worksheet will be restored.
Subject IDs with many digits get changed to G8 format, even if imported as text (QC 15528): Numeric subject IDs with many digits (e.g., 100010901) are changed by the worksheets to G8 format by default (e.g., 1.0001090E+08) even if they are imported as text. Using the G9 format retains the subject ID digits.
A new “Selection (Exclude)” filter cannot be added after executing Data Wizard; Filter Selection worksheet is empty (QC 11771): In Data Wizard Filter steps, in order to create selection-based exclusions, check Retain intermediate results, or the selection worksheet may be blank when attempting to make the selections.
History worksheet contains the source worksheet history when using “Copy to Data Folder” from Data Wizard Results (QC 14255): The History sheet of a Data Wizard result worksheet that has been copied to the Data folder contains all of the history records from the source worksheet that was mapped into the Data Wizard.
Substitution function fails for question marks (QC 17741): When trying to remove question marks ("?") from a dataset using the Data Wizard's Substitution function, the process fails. The workaround is to edit the dataset manually or use a different application to remove the question mark characters.
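If an external tool is used, a hedged R sketch along these lines would work (the file and column names are assumptions):
    d <- read.csv("dataset.csv", stringsAsFactors = FALSE)
    d$Conc <- gsub("?", "", d$Conc, fixed = TRUE)   # fixed = TRUE treats "?" as a literal character
    write.csv(d, "dataset_clean.csv", row.names = FALSE)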
Font size in a plot annotation gets very small when the plot is exported (QC 18048): There is no known workaround for this problem.
Bringing the regression line to the front on semi-log scale generates an unexpected line (QC 18049): There is no known workaround for this problem.
**Issue Corrected in Phoenix 8.3.5** Reporter object in Phoenix 8.3 is wrapping data in Table cells differently than Phoenix 8.1 (QC 18155): A different padding constant is used in Phoenix 8.3. To obtain the same table appearance as in Phoenix 8.1, set the padding width to 0.38 inches.
HTML type tables do not use case-sensitive comparisons when determining whether to combine rows together in a table (QC 6540): The Table object does not use case-sensitive comparisons when sorting and grouping data.
Phoenix does not display Diagram workflow arrows when using "Copy/Paste" or "Move to New Workflow" (QC 17920): There is no known workaround for this issue.
BQL Tool does not properly handle a duplicate name for a new column (QC 18046): If a duplicate name is entered, an uninformative error message appears stating, "An item with the same key has already been added." Enter a unique name and resubmit.
Exporting dependencies does not export column units to CSV file (QC 18087).
Word Export fails when exporting a plot with annotation (QC 18165): Remove the annotation and exporting works without issue.
When data is pasted into the * row of the data grid, the history does not record the row information (QC 7696): In order for a row-paste operation to be entered into the history, the data to be copied must be selected by highlighting the individual cells and pasted by selecting the destination row number. The data can be pasted by selecting the destination cells, but the history entry will not be created. Similarly, the data can be copied by selecting a row number and then pasted, but there will be no history entry.
Moving contents of selected rows to another set of rows does not get correctly added as an event in the History tab (QC 12008): Audit events are not captured correctly when rows are dragged from one location to another in a worksheet.
Integral plugin for Phoenix 8.3.5 prevents ggquickeda from working (PHX-20430): At this time, ggquickeda objects are not compatible with the Integral 22.4.1 plug-in and should, therefore, not be used if you are working with Integral. Should you need to use ggquickeda for some projects, you may need a second Phoenix installation without the Integral plugin, but be aware that these projects will most likely not open in a Phoenix installation where the Integral plugin is installed.
JMS Merge fails with a project containing only one plot object (PHX-8008): If a project contains a single object that is a plot object, sending that object to JMS to run remotely and merging the results back to Phoenix generates an error loading project message. Clicking OK in the message results in an empty Object Browser. The workaround is to add a second object of any type to the workflow before sending it to JMS.
User is not warned before closing and loses project modifications if framework license was lost and Phoenix left open (QC 8510): If a Named User Server license is disconnected while there are still unsaved projects open, the user must ensure that a license has been retrieved before attempting to close Phoenix. If the user closes Phoenix without first re-acquiring a license, there will be no ability to save, and all changes will be lost.
Locking a workflow that has an open object does not lock the open object (QC 18070/CS00205853): When a workflow that has an open object is locked, the workflow itself is locked. However, the open window of the operational object remains unlocked, and modifications can still be applied up until the operational object window is closed. The changes are not captured in the history of the object, but they are saved in the workflow.
NONMEM TIME input required for Dose-effect model when not necessary for analysis (QC 10052): When doing analysis with the NONMEM shell using a Dose-Effect model, TIME is required as an input field in the data even though it is not used in the analysis. The workaround is to add a dummy TIME column and map it.
NONMEM Input statements containing equal signs are not recognized (QC 10658): If the NONMEM input statement contains equal signs, like $INPUT ID TIME DV=DROP LNDV=DV MDV AMT RATE EVID DOSE, the Phoenix mappings do not recognize the drop option and will still present a DV variable in the mappings. In addition, a statement like LNDV=DV will show up as LNDV_DV. The workaround is to modify the input statement in the NONMEM control file by manually removing any DROP variables (they will not be mapped and thus not used) and providing just the name of the variable that is needed. For example, the statement above should be modified to $INPUT ID TIME DV MDV AMT RATE EVID DOSE and the column LNDV mapped to DV.
NONMEM simulation script results in Scatterplot error (QC 12382/CRM 139546): Phoenix code that gets added to the script results in an incorrect identification of a NONMEM comparison occurring, when the script is actually calling for a NONMEM simulation only. As a result, Phoenix looks for a table from which to generate a scatterplot for the comparison. This table is not created by the script, and the script never successfully executes.
Third party shell objects are marked as out-of-date after simply selecting them and navigating away (QC 18103): A selected third party object (e.g., R Shell or SAS Shell) becomes marked as out-of-date when you navigate away from the selected object, even though no changes to the object have been made. Re-executing the object will mark it as up-to-date.
Saving a new version of a PKS scenario can fail if the URL to the PKS middle tier is changed from the URL used when the scenario was saved (QC 15086): Loading a PKS scenario and submitting the workflow to the JMS for processing can prevent the scenario from being saved if it was saved using the old PKS middle-tier URL and then loaded using the new URL. The workaround is to load the scenario, run it locally, and save it to the PKS again. The new URL is used to verify the existence of objects in PKS prior to the save.
Error getting latest version of out-of-date dependent objects loaded from other scenarios (QC 15362): If a scenario (A) has dependencies on ScenarioDatasets or ScenarioObjects from other scenarios, and those objects go out-of-date because a new version of those scenarios is created, then scenario (A) will display the option to get the latest versions, but it fails when trying to get them. The workaround is to load the scenario and use Refresh from Source to get the latest version of the dependent objects.
Phoenix Code folder contains empty files after uploading to PKS (PHX-8011): This issue occurs if you are uploading projects to PKS that contain items in the Code folder (e.g., SAS scripts). The workaround is to install the Integral Plugin version 1.1.1 or later.
**Issue Corrected in Phoenix 8.3.5** Phoenix projects occasionally become corrupted during the save process (QC 16931)
Symptoms: In some cases, a user saves a Phoenix project, but when the user tries to reload the project, an error message occurs saying that the project is corrupted and cannot be loaded back into Phoenix.
Cause: When a Phoenix project is saved, the files in the user’s temp directory that are used to create the project are compressed into a zip file. If the compression process is not successful, the newly saved project can become corrupt. Potential causes are:
• One or more temporary files that are compressed into the saved project file are locked by an active virus scanner during the project save process.
• The user’s temp directory is on a different file system (e.g., OneDrive or SharePoint) than the file system where the project is being saved.
Solution: A potential workaround to prevent projects from being corrupted during the save event is to configure Phoenix to use a temp area on the local file system where Phoenix is installed, and that is outside of the user profile. An example procedure is provided below, which redirects Phoenix to use C:\PHXTEMP as its temporary directory. To use a temp directory other than C:\PHXTEMP, modify steps 3, 8, 12, and 16 with the alternate temp path.
1. Close the Phoenix application.
2. Grant the required end-user “Full Control” or “Modify” permissions to the Phoenix installation directory in C:\Program Files (x86)\Certara\Phoenix, and its subfolders.
3. Create a new folder that will function as the Phoenix temp directory, e.g., C:\PHXTEMP.
4. Navigate to the folder C:\Program Files (x86)\Certara\Phoenix\application.
5. Create a backup of the Phoenix.exe.config and Phoenix32.exe.config files, i.e., copy them to a location outside of the Phoenix installation directory.
6. Open the Phoenix.exe.config file located in the Phoenix installation directory, in a Text editor such as Notepad.
7. Find the string
<services bindSubDirectories="true" tempDirectoryRoot="">
8. Enter the new temporary directory between quotes:
<services bindSubDirectories="true" tempDirectoryRoot="C:\PHXTEMP">
9. Save the changes in the Phoenix.exe.config file, and close the file.
10. Open the file named Phoenix32.exe.config.
11. Find the string:
<services bindSubDirectories="true" tempDirectoryRoot="">
12. Enter the new temporary directory between quotes:
<services bindSubDirectories="true" tempDirectoryRoot="C:\PHXTEMP">
13. Save the changes in the Phoenix32.exe.config file, and close the file.
14. Open Phoenix.
15. At the top of the application, select Edit > Preferences > General > Services.
16. Verify that the RootTempDirectory has the C:\PHXTEMP path.
17. Create a new project, then try to save and re-load the project, and verify successful reloading.
PsN plug-in does not work with shortcuts to model files that contain spaces in path (QC 11880): For PsN Shell objects, if the user selects to “Bring Model Files In as Shortcuts” the path to the model file cannot contain any spaces, e.g., C:\Program Files.
**Issue Corrected in Phoenix 8.3.5** R script changes are not retained when using “Start Development Environment” within the Phoenix 8.3 R Shell object (QC 18125): In an R Shell object in Phoenix, when a script is opened in RStudio (e.g., using the Start button in the bottom panel), changes that are made, saved to disk, and followed by exiting RStudio do not appear when the script is reloaded in Phoenix. If the script is reloaded into RStudio, the changes are there. A workaround is to edit the R scripts directly in RStudio or another R program and then import the script into Phoenix.
Worksheet does not allow input after dragging down (QC 10725): When using Copy Down in grids, sometimes the cells become read-only. To enter data, try double-clicking in the cell or selecting the cell and then entering the data in the field that displays the data for the cell at the top of the grid.
Resizing of the Phoenix window and fonts on Windows 10 can lead to the inability to select some buttons or tabs (QC 16904): Try changing the size of text, apps, and other items to 100% (right-click on the Desktop and choose the Display menu item to make the change), then restart your computer to apply the change.
To set the scaling only for the Phoenix application on a high DPI display, browse to the Phoenix installation folder (by default, “C:\Program Files (x86)\Certara\Phoenix\application”), right-click on “Phoenix.exe”, select the “Properties” menu item, and choose the “Compatibility” tab. For Windows 7, select [x] on the “Disable display scaling on high DPI settings” option. For Windows 8 or Windows 10, select [x] on the “Override high DPI scaling behavior. Scaling performed by:” option, and then select “Application” or “System (Enhanced)” in the drop-down menu. Then restart your computer to apply changes.
Running Phoenix with Windows Text Size (DPI Scaling) greater than 100% results in overlapping UI (QC 17302): This issue is caused by Phoenix not being DPI aware and is most frequently seen on laptops. If you encounter overlapping controls in Phoenix, try reducing the scaling to 100%.
Word Export sometimes does not launch with large projects (PHX-21179): The workaround is to select a workflow, an operational object, or a dataset in the project and then launch Word Export, retrying these steps until it launches.
IVIVC hangs on prediction step (QC 17919): On occasion, certain datasets may cause freezing in Phoenix 8.x when using the Predict PK button on the Prediction tab, caused by a deadlock in the code. To avoid losing unsaved work, please be sure to save your project before this step and use the workaround, which is to click the green Execute button (in the Phoenix toolbar) at this stage of your workflow, instead of the Predict PK button.
The Exponential Terms values disappear from copy/paste when input data set has a sort mapped (QC 8811): If the convolution input has a sort variable and the UIR source is internal, copy/pasting the convolution object will result in the UIR internal source being reset. The workaround is to publish the internal UIR source so that it is now external, and then copy/paste will just point to the UIR source.
Word Export in Phoenix takes much longer than in WinNonlin and, as a result, the system becomes unresponsive (QC 9772): When using Word Export on Levy Plots and IVIVC Workflow objects, the list of output includes all internal objects, and the objects are not readily discernible. Word Export is not recommended for IVIVC and Levy Plots.
Settings are missing in Settings text output and History tab (QC 10628): Settings are not contained in the results; look at the Properties panel to see the settings.
Levy plot created even if units in Values column are different (QC 10652): Levy Plot does not prevent or warn the user if the units in the Values source columns do not match.
Levy plot results are incorrect if data has sort variables, but sort variables are not used (QC 10681): Duplicate time points for a given profile are not handled properly for Levy Plots. The workaround is to use descriptive statistics to average the profiles to ensure distinct unique time points per profile.
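For example, a minimal R sketch of the averaging step (the file and the column names Time, Conc, and Formulation are assumptions):
    d <- read.csv("invivo.csv")
    avg <- aggregate(Conc ~ Formulation + Time, data = d, FUN = mean)   # one mean value per time point
    write.csv(avg, "invivo_mean.csv", row.names = FALSE)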
Problem using multiple sort variables and user-specified matched values (QC 10896): In a Levy Plot object, if sort variables and user-specified matched values are used, the following error is encountered if the sort variables contain missing or null values: Bad conversion from DBNULL to Boolean. The workaround is to fill in any missing or null values.
Levy plots can fail to execute if there is a blank in a Formulation column (QC 12370): Levy Plots do not support blank/empty cells in the Formulation column; a blank cell will prevent execution of the Levy Plot object.
Prediction Dose units do not match In Vivo Dose units when external dosing worksheet is used (QC 9809): If dosing units are specified by an external worksheet for In Vivo and the Prediction Dosing is an internal worksheet, the Prediction Dosing internal worksheet will not reflect the Dosing units of the External Dosing for In Vivo. The dosing unit in the prediction stage is assumed to be the same as the In Vivo stage.
Pred Conv Out plot - All plots are not on one page (QC 10245): For the IVIVC Convolution Output Plots, the plots are not grouped by Formulation when there is a sort variable. Each plot is displayed on a separate tab.
Executing IVIVC steps by buttons in Properties panel does not cause execution of source feeders (QC 10419): The workaround is to use the execute button on the IVIVC workflow object to ensure sources are executed prior to IVIVC workflow verification.
Settings missing in History tab description when workflow is executed (QC 10633): When the IVIVC object is executed, the settings are not recorded on the History tab; however, they can be found in the Settings text output files:
Correlation.Correlation.Settings
Prediction.Conv.Settings
Prediction.Corr.Sim.Settings
Prediction.Diss.Settings
Prediction.Observed.Baseline.Nca.Settings
Prediction.Predicted.Nca.Prediction.Settings
Validation.Average.Vivo.Settings
Validation.Conv.Settings
Validation.Corr.Sim.Settings
Validation.Observed.Baseline.Nca.Settings
Validation.Predicted PK.Nca.Settings
Validation.Settings
Vitro.Dissolution.Settings
Vivo.Deconvolution.Settings
Vivo.UIR.Settings
Executions of individual steps are not recorded on the History tab (QC 10637): If the IVIVC workflow is executed using the Execute button in the toolbar, the History tab will record this event; however, when individual steps are executed (Fit Dissolution, Validate Correlation, Predict PK, etc.), they are not recorded in the History tab.
Levy Plot: Formulation column in data set used even if not mapped (QC 10689): If the sources for Levy Plot have the column Formulation, and no columns are mapped to Formulation in the sources, the columns called Formulation will be used even though not mapped and the Levy Plot will fail if the formulations in the columns do not match.
Unable to complete Validation if UIR step previously failed due to bad mapping (QC 10832): If the UIR step previously failed due to incorrect column mappings (for example, the Sort and Values column were inadvertently switched), the Validate Correlation and Predict PK steps will continue to fail, even after the In Vivo Data mappings are corrected. The workaround is to make a copy of the In Vivo data set, pre-fix all of the column names with X (Time column is renamed XTime), then map this data set to the In Vivo Data setup and map columns appropriately. The Validate Correlation and Predict PK steps will now execute successfully.
Prediction does not complete if Prediction Dissolution setup is completed before Prediction Data setup (QC 11024): In an IVIVC object, the following error is encountered in the Prediction step if an external worksheet is used for Prediction Estimates and this mapping is completed before the mapping for Prediction Data:
Failed Validation
1 or more columns required for Name in Units
1 or more columns required for Preferred in Units
IVIVC tool can enter a broken state that cannot be fixed (QC 13266/CRM 142192): The IVIVC tool has restrictions on a column named Formulation. If this column is mapped to a Sort variable in InVivo Data (even just temporarily), it will cause the IVIVC object to break and become unable to perform Prediction. If this happens, the only solution is to create a new IVIVC object, reapply all the settings, rename any Sort input column named Formulation, remap, and execute.
Objects using results of partially executed IVIVC object fail verification (QC 13307): Even if Prediction is not being done, the Target Formulation may need to be set in order to use output from IVIVC downstream. Not setting the Target Formulation on the Prediction tab will prevent IVIVC from passing verification, which will prevent use of output in other objects. Selecting any value in the Target Formulation dropdown should get IVIVC to pass verification so that downstream objects will execute using the IVIVC results.
Cannot generate Levy plots (QC 13844): If Formulation is mapped to the InVitro Formulation, it can cause the Levy Plots to not be generated when requested. The workaround is to recreate the IVIVC object and not map a column named Formulation to the InVitro Formulation mapping.
Formulation data that is not mapped in the InVitro Formulation panel is still included in the output plots (QC 14874): The IVIVC toolkit creates IVIVC plots by default (Correlation overlay, Levy plot, and Fabs vs Fdiss). These plots are incorrectly including other formulations (e.g., Target) that are checked to not be included in the “InVitro Formulation” panel (i.e., set to None in mappings window). This could lead to a confusing interpretation of those plots because, in some cases, formulations displayed can be made using different technologies. (The IVIVC correlation model itself and any results from it do not include these other formulations if they are set to None.) As a workaround, the user can use the resulting worksheets and create plots after filtering the formulation that should not be included (e.g., for the Levy plot, filter unwanted data from the “Levy Plots.Tvivo vs Tvitro.Levy Plot Values” worksheet; for the Correlation overlay plot, filter unwanted data from the “Correlation.Abs vs Diss Data” worksheet). In addition, the user must specify the data to plot the line of unity.
IVIVC item is marked out-of-date even though all results are up-to-date (QC 15110): In cases where a prediction has been partially set up at one time and then removed, the IVIVC object will appear as being out-of-date, even though there is no Prediction output yet.
Val Cor Sim plots have incorrect Y-axis label (should be Fabs instead of Cp) (QC 18005/CRM 00169719): This issue only occurs if the Validate Correlation button on the Options tabs of the IVIVC Workflow is used. The workaround is to use the Execute icon in the Phoenix toolbar.
Validate Correlation may fail when using InVivo option for “Do not generate mean profiles” (QC 18036): This issue only occurs in Phoenix 8.3. It may occur if the IVIVC Workflow was previously either setup to use, or executed with, Generate Mean Profiles. As a workaround, if Validate Correlation fails after changing to Do Not Generate Means, go to the Setup for InVivo. Even if there appears to be a mapping for Sort, change the mapping to None and then back to Sort. Then run Validate Correlation again.
Clean.log is not time stamped (QC 49).
Uninstalling previous versions of the PLW on Windows 7 (SP1) fails to remove a registry entry, which causes the PLW installer to abort (QC 138): If you should encounter this issue, please contact Support for assistance. (Note that the registry entry is properly removed when PLW is uninstalled.)
Domain field should not be filled in when the machine does not reside on a domain (QC 190): Currently, the Domain field defaults to the localhost machine name when that machine does not reside on a domain. This field should be empty under such conditions.
User Management - In the Add User(s) dialog, entering incorrect information in the Domain Specification section and then using the Group pull-down causes an exception (QC 203): When this exception occurs, users can click Continue and return to the dialog where the domain information can be corrected.
Installing PLW 3.0 fails with “Runtime error in setup script” (QC 212): There is a dll that is missing from the PLW 3.0 installer package, causing the installation to fail with the message 'Runtime error in setup script, Source File: PharsightLicensingServer, Line Number: 131'. If you encounter this problem, please contact Certara Support.
Severe performance problem reported with PLW 3.0 user management involving over 200 users (QC 213): When adding a new user to a Group, opening the Groups list and selecting a Group for the user used multiple GB of memory, and eventually PLW froze. A possible workaround is to edit the lsreserv list directly.
Multicore mode does not report all Bootstrap results in the Status window Text tab (QC 17929): For other modes, such as local or localMPI, all Bootstrap results are reported in the Text tab of the Status window. Multicore mode only reports the simple estimation results if the Bootstrap Estimate initial model parameters? option is checked. If the option is unchecked, then no results are reported.
When the overall number of fixed effects (including frozen and those not enabled for the current scenario run) exceeds number of thetas + 100, a memory issue could appear that stops any optimization (QC 18080/CS00210525): To avoid this issue, the number of frozen fixed effects, frozen sigmas, and covariate-parameter bounds should not exceed 100.
**Issue Corrected in Phoenix 8.3.5** IWRES and epsilon shrinkage are calculated incorrectly for models with two or more observe() statements (QC 18172): The IWRES and epsilon shrinkage calculation is only correct for the first observed variable listed in the model.
Simulation table and Simulation output do not match; there is a small difference at time of dose (QC 12109): When generating tables from model fitting runs, if reporting times in a table are simultaneous with dosing times, the table reporting may actually occur before the doses, even though they are at identical times. This is simply an artifact of simultaneity, for which there is no clearly correct behavior.
If a crash occurs while using MPI in NLME, other MPI processes may be left running, which can cause unpredictable behavior in NLME (QC 14279): If one of the MPI processes crashes, other MPI processes and the parent process mpiexec.exe may be left running. This can leave NLME in an unpredictable state, such as being unusually slow to execute or being unable to execute another model. To fix the problem, mpiexec.exe must be stopped from the Task Manager. This will stop any other MPI processes. The problem should only occur with an engine crash, not when using the Stop Execution or Stop Early buttons, or when NLME ends with an error message.
For some models, requested pred values are reported as zeros in the table, if no other triggers except time are chosen (QC 18039): The workaround is to add one of the other triggers (covariate, dose, observe).
Remote platform NLME execution does not work on Windows 8.1 with R 4.0.0 installed (QC 18040): A bug in R 4.0.0 prevents the Certara.NLME8 package from working properly on Windows 8.1 and causes an error to be generated during execution. To work around this issue, use an R version other than 4.0.0.
QRPEM shows non-zero etas for the subjects with zero observations (QC 18182): All etas are updated irrespective of the number of observations. There is no known workaround.
Jobs canceled from Phoenix (while they are in a wait state in the queue) remain in the queue (QC 17632, QC 17635): In some grid configurations, if the number of available cores specified for a grid exceeds the total number of available cores, it can cause the job to remain in the queue. If the job cannot be canceled from within Phoenix, then a direct cancellation through ssh is required. Care must be taken especially for burstable grids, where additional resources (slots) can be requested but not used. Periodic monitoring of the running jobs for the current user is recommended.
Open MPI issue on remote Linux server (QC 17986): For some grid configurations, the number of calculated MPI cores for a particular job cannot exceed the total number of hosts available on the grid. This can cause the software to ask for more hosts to do the computation than are available and result in the job freezing or exiting with an error. In such cases, it is advised to switch to the grid mode without MPI.
Comparison Result cannot be loaded into an R object, but works directly in R (QC 13965): When using the R object for accessing the Model Comparer results for NLME or NONMEM, the default .csv file generated for importing into R will not work. The columns beginning with # are preventing the default .csv file from loading into R. The workaround is to not have these output columns generated in the Model Comparer or explicitly state the columns to import with acceptable names via the commenting mechanism for mapping in the R tool. For example, to import all the columns generated by the Model Comparer into an R script, use a script similar to the following and map the columns:
attach(compare.df) #WNL_IN Hide Compare Name Sort Method Description Lineage LogLik -2(LL) AIC BIC -2(LL)Delta AICDelta BICDelta NumParms NumObs NumSubj pvalue
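When working in standalone R instead, a hedged alternative is to load the exported file and drop the offending columns by name (the file name is an assumption):
    df <- read.csv("comparison_result.csv", check.names = FALSE)
    df <- df[, !startsWith(names(df), "#")]   # remove the columns whose names begin with #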
Incorrect identification of models (QC 17364): If multiple Phoenix Model objects have the same names in different workflows, when the Model Comparer is used, all models with the same names will be used in the comparison. Even if the models with duplicated names are not checked, the checkboxes will be checked at execution. The workarounds are to not use the same names for different model objects, or to check the Hide checkboxes next to the model objects that are not wanted for the comparison.
Random Effects setup can change upon model update (QC 8140): The order of the random effects and associated initial estimates can change if the user selects a different model parameterization after the random effects have been set up. The order of the random effects is not remembered if the structural model is changed. Caution is advised in double-checking the random effects entries after a model is changed.
If Parameter.Mappings is forgotten when using a worksheet for initial estimates, the values on the bottom tab are used without a warning (QC 8529).
When going from a PK/Emax or PK/Indirect Built-in Model to a Graphical model, if Freeze PK is selected for Built-in, the parameters do not stay frozen in the Graphical model (QC 8543): The workaround is to select the boxes in the Graphical Editor to freeze the individual PK parameters.
Numbers typed in text fields with commas for decimals breaks PML (QC 8925): Where numbers can be entered in data fields, generally either comma (,) or period (.) can be used as a decimal point. (It will be converted to period.) However, there are fields where sequences of numbers, separated by commas, can be entered, such as the sequence of times in a table specification. In those fields, the comma character cannot be used as a decimal point, because it acts as a delimiter between numbers.
NLME cannot run when imported data is in a format that has commas (QC 9552): In Phoenix, the default display format is not one that uses commas as decimal points. In general, US format numbers should be used in worksheets. That is, using the period character (.) as a decimal point, and no thousands separator.
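If a source file uses comma decimals, one hedged R sketch for converting a text column to US-format numerics before import (the file and column names are assumptions):
    d <- read.csv("data_eu.csv", stringsAsFactors = FALSE)
    d$Conc <- as.numeric(gsub(",", ".", d$Conc, fixed = TRUE))   # "1,23" becomes 1.23
    write.csv(d, "data_us.csv", row.names = FALSE)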
Changing the PML code requires rebuilding of the dosing sheet (QC 11306): When changing a model from individual to population or vice versa, and changing between Sort and ID mapping, if there is a built-in dosing worksheet, care must be taken to rebuild the worksheet. Otherwise, it retains the prior mapping, causing a verification error when attempting to run the model.
When running in graphical mode, sequence blocks cannot be entered in the procedure block (QC 11933).
A fixed effect term involving a categorical covariate is not recognized in the secondary parameter definition for a Built-in model (QC 14223): Secondary parameters depending on categorical covariate effects do not work for Built-in or Graphical models. The user interface does not accept them. However, they do work for Textual models. For example, if “sex” is a categorical covariate having values 0 and 1, and it modifies column “V,” then there is a fixed effect named “dVdsex1.” This fixed effect is not recognized in the secondary parameter definition; however it will work in a Textual model.
Parsing of a model fails to remove fixef parameters if they are deleted from the Textual model (QC 14256): When doing a Profile of a Textual model, it is possible to get extra copies of fixed effects appearing in the “Fixed Eff” list. There does not seem to be a way to recover from this, other than returning to a Built-in or Graphical model.
When “Stop Early” is executed, the “Warnings and Errors” output does not clearly state that the execution was stopped early (QC 14380): If the user reviews this output later, the user should check the Overall output to see the Return Code of 6, which will show that the fitting was not allowed to run to convergence.
Issues scrolling and expanding covariate addition panel in the Structural tab (QC 17805): The scrollbar on the right side for selecting the covariates from unused list does not appear, even after changing the settings to small text and maximum resolution. In addition, expanding the lower page does not expand the covariate panel, which could lead to mis-mappings.
Opening a project file in which the Phoenix Model object involved switching from built-in to textual mode can cause Phoenix to hang (QC 17967): Override statements are hidden by default. However, when a model is switched from built-in or graphical to textual mode, these statements can mistakenly become visible in the model text and cause Phoenix to hang when loading the project. To work around this issue, delete the override statement from the text mode before saving the project.
Validation Suite installation check for Phoenix 8.3.4 and 8.3.5 fails after Integral Plugin 23.4.1 is installed (INTG-4276): An update to Phoenix by the “Integral 23.4.1 for Phoenix 8.x” installer causes one Validation Suite test (“PHX Installation”) to fail in the WinNonlin and NLME test sets. The update by the installer enables Phoenix to open Word files uploaded from Integral. The failure is observed if the Validation Suite is run after installing Integral Plugin 23.4.1 and occurs because the updated file has a different checksum than the Validation Suite expects. There is no issue if the user upgrades to Phoenix 8.4.
Changing a column name after setting up Bioequivalence leaves the object in a state where it cannot be re-executed (QC 13010): If the name of a classification variable is changed, the prior column name still appears in the Classification Variables field and cannot be removed, and this causes the execution to fail. The workaround is to finalize the column names before sending data to the Bioequivalence or LinMix objects.
Loss of accuracy in LinMix/Bioequivalence confidence intervals for small sample sizes (NDF < 5) (QC 4698): The CIs in Linear Mixed Effects Modeling and Bioequivalence use an approximation for the t-value which is very accurate when the degrees of freedom is at least five, but loses accuracy as the degrees of freedom approaches one. The degrees of freedom are the number of observations minus the number of parameters being estimated (not counting parameters that are within the singularity tolerance, i.e., nearly completely correlated).
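For reference, exact t critical values can be cross-checked in R; note how quickly the quantile grows as the degrees of freedom approach one:
    qt(0.975, df = 1:5)   # 12.706  4.303  3.182  2.776  2.571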
Best-fit Lambda Z range values are not reported consistently in the output when multiple points are excluded at the start of the data or if there are zero-valued points at the start of the range (QC 11227): When excluded or zero-valued points are at the beginning of the Lambda_z range, Lambda_z_lower is reported in some output as the earlier time value, which includes the unused initial points, yet in other output, it is reported as the first point that is used. For example, if values range from 0.8 to 2 and points before 1.4 are excluded, some of the output reports the Lambda_z range as (0.8, 2), whereas other output lists the range as (1.4, 2).
NCA Core Output and Final Parameters results are missing the dose normalization in the dose units (QC 17907): The Core Output, Final Parameters, and Final Parameters Pivoted worksheets do not have the dose normalization in the units. Only Dosing Used has the correct units.
**Issue Corrected in Phoenix 8.3.5** When a user saves an un-executed NCA object created with Phoenix 8.3.x outside of Phoenix, then reloads the project and executes it, the system gives an error (QC 18126): One workaround for this issue is to remap the sort variables for the NCA and then execute. A second workaround is to execute the NCA before saving the project.
**Issue Corrected in Phoenix 8.3.5** An issue has been reported where executing NCA within Phoenix 8.3 generates a “Phoenix has stopped working” error (QC 18152/CRM 215821): This error may occur in NCA objects with a large number of profiles, due to a stack overflow. To work around this issue, determine if any Sorts can be removed or try dividing the data into separate NCA executions.
**Issue Corrected in Phoenix 8.3.5** If an NCA execution involving many profiles is started while viewing a panel with mapped data, the NCA will fail to run (QC 18159): Upon execution, Phoenix tries to update the panel being viewed at the same time, which leads to the error. The workaround is to not be viewing any panel in NCA with data mapping when the execution is started. For example, start the execution when viewing the Units panel.
When NCA user-defined parameter is a function of null value, result shows zero value instead of null (QC 18180): There is no known workaround for this issue.
NPS quits trying to compute Lambda Z if the last three points fail to compute Lambda Z, which differs from NCA (QC 13804): NCA and NPS behave differently in computing Lambda Z for a specific case, and they should be the same. When NCA tries to compute Lambda Z, if the last 3 points fail to yield a Lambda Z, it continues checking further back in the dataset to see if a larger group of points ending with the last point will yield a valid Lambda Z. NPS appears to quit if the last three points fail to compute Lambda Z (such as the last three points going uphill). This defect has existed in all versions of Phoenix. The workaround is to execute NCA on the data using the default settings, and then map the NCA Slopes result to the Terminal Phase setup in the NPS object.
NPS and NCA Best-Fit Lambda Z calculations differ (QC 14394): In a case where input data was given to significant precision (eight significant digits or more), the Best Fit algorithms in NCA and in NPS (Nonparametric Superposition) were not generating exactly the same Lambda Z values, although the results were very similar.
A “Processing halted due to MATH FUNCTION ERROR” error has been reported when running a very long-term simulation with a short half-life in NPS (QC 18078): The math function error occurs when the predicted concentration becomes too small to represent, i.e., when taking the exponential of a large negative number. No predicted output will be generated. The only known workaround is to use a shorter output time range.
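The underflow itself can be reproduced in R; an illustrative sketch (the half-life and time values are assumptions):
    lambda <- log(2) / 0.5   # elimination rate constant for a 0.5 h half-life
    exp(-lambda * 600)       # about 1e-361, which underflows to 0 in double precision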
Inconsistent behavior is possible when using external worksheets for a PK Model object (QC 7122): When using a PK operational object, external worksheets for stripping dose, units and initial estimates can be accessed in different ways. The differences will occur if there is more than one row of information on these external worksheets that correspond to one or more individual profiles of data of the Main input worksheet.
In such cases, the stripping dose for PK models will be determined as the first value found on that external worksheet, whereas the units and initial parameters will be based on the last row found on those external worksheets (for any given profile). To avoid any confusion stemming from these differences, it is suggested that external worksheets maintain a one-to-one row-based correspondence to the Main input profiles whenever possible.
Initial Estimates grid for Dissolution models does not accept new initial values after changing the Fixed option to Estimated (QC 9741): For the WinNonlin Generated Initial Parameter Values option with Dissolution models, to avoid getting pop-up warnings that “WinNonlin will determine initial estimate” when using the Initial Estimates internal worksheet setup, the user should delete the initial values and change the menu option from Estimated to Fixed before entering the initial estimate.
Once the dropdown is changed from Fixed to Estimated, the user will not be able to delete the initial value entered. However, it will not be used, and WinNonlin will estimate the initial value as requested.