Deconvolution through convolution methodology
Phoenix deconvolution uses the basic principle of deconvolution through convolution (DTC) to determine the input function. The DTC method is an iterative procedure consisting of three steps. First, the input function is adjusted by changing its parameter values. Second, the new input function is convolved with the unit impulse response function, cδ(t), to produce a calculated drug level response. Third, the agreement between the observed data and the calculated drug level data is quantitatively evaluated according to some objective function. The three steps are repeated until the objective function is optimized. DTC methods differ mainly in the way the input function is specified and the way the objective function is defined. The objective function may be based solely on weighted or unweighted residual values, that is, observed minus calculated drug levels. Purely residual-based DTC methods ignore the behavior of the calculated drug level response between the observations.
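To make the three steps concrete, the following is a minimal numerical sketch of a DTC loop in Python. The exponential unit impulse response, the first-order absorption input shape, and all names are illustrative assumptions, not Phoenix's actual internals.

```python
# Minimal sketch of the deconvolution-through-convolution (DTC) loop.
# All functional forms and names here are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 24.0, 97)   # sampling grid (hours)
dt = t[1] - t[0]

def c_delta(t):
    """Assumed unit impulse response (one-compartment disposition)."""
    return np.exp(-0.3 * t)

def input_function(t, params):
    """Trial input function with adjustable parameters (step 1)."""
    dose_rate, ka = params
    return dose_rate * ka * np.exp(-ka * t)

def predicted_levels(params):
    """Convolve the trial input with the unit impulse response (step 2)."""
    f = input_function(t, params)
    return np.convolve(f, c_delta(t))[: t.size] * dt

def objective(params, observed):
    """Unweighted residual sum of squares (step 3)."""
    return np.sum((observed - predicted_levels(params)) ** 2)

# The optimizer repeats steps 1-3, adjusting the input-function
# parameters until the objective is minimized.
observed = predicted_levels([10.0, 0.8])          # stand-in for measured data
fit = minimize(objective, x0=[5.0, 0.5], args=(observed,),
               method="Nelder-Mead")
```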
The more modern DTC methods, including Phoenix’s approach, consider both the residuals and other properties, such as the smoothness of the total fitted curve, in the definition of the objective function. The deconvolution method implemented in Phoenix is novel in the way the regularization (smoothing) is implemented. In some other deconvolution methods, regularization is done through a penalty function approach that involves a measure of the smoothness of the predicted drug level curve, such as the integral of the squared second derivative.
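For contrast, a penalty-function objective of the kind just described might look like the sketch below, where the roughness term numerically approximates the integral of the squared second derivative. The weight name `lam` is an assumption for illustration.

```python
# Hedged sketch of penalty-function regularization: the objective augments
# the residual sum of squares with a roughness penalty based on the
# integral of the squared second derivative of the fitted curve.
import numpy as np

def penalized_objective(observed, fitted, t, lam):
    rss = np.sum((observed - fitted) ** 2)
    d2 = np.gradient(np.gradient(fitted, t), t)  # numerical second derivative
    roughness = np.trapz(d2 ** 2, t)             # integral of squared curvature
    return rss + lam * roughness                 # lam trades fit vs. smoothness
```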
The Phoenix deconvolution method instead introduces the regularization directly into the input function through a convolution operation with the dispersion function, fd(t). In essence, a convolution operation acts like a “washout of details”, that is, a smoothing due to the mixing inherent in the convolution operation. Consider, for example, the convolution operation that leads to the drug level response c(t). Due to the stochastic transport principles involved, the drug level at time t is made up of a mixture of drug molecules that started their journey to the sampling site at different times and took different lengths of time to arrive there. Thus, the drug level response at time t depends on a mixture of prior input. It is exactly this mixing in the convolution operation that provides the smoothing. The convolution operation acts essentially as a low-pass filter on the input information: the finer details, or higher frequencies, are attenuated relative to the more slowly varying, low-frequency components.
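This low-pass behavior can be seen numerically. In the hedged sketch below (not Phoenix code), a signal containing a slow and a fast sinusoidal component is convolved with an arbitrary unit-area exponential kernel, and the fast component is attenuated far more strongly.

```python
# Numerical illustration of convolution as a low-pass filter.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
signal = np.sin(0.5 * t) + 0.5 * np.sin(20.0 * t)  # low + high frequency

kernel = np.exp(-t / 0.5)
kernel /= np.trapz(kernel, t)                      # unit area: pure mixing

smoothed = np.convolve(signal, kernel)[: t.size] * dt
# The 20 rad/s ripple is strongly damped in `smoothed`, while the slow
# 0.5 rad/s component passes through nearly unchanged.
```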
Thus, the convolution of the precursor function with the dispersion function results in an input function, f(t), that is smoother than the precursor function. Smoothing is controlled through the dispersion function parameter δ. Decreasing the value of δ results in a decreasing degree of smoothing of the input function; similarly, larger values of δ provide more smoothing. As δ approaches zero, the dispersion function approaches the so-called Dirac delta “function”, resulting in no change to the precursor function.
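The sketch below illustrates this behavior under the assumption of a unit-area exponential dispersion kernel, chosen purely for illustration (the actual form of fd(t) is not specified here): a small δ leaves a sharp-edged precursor almost unchanged, while a larger δ rounds it off.

```python
# Sketch of how the dispersion parameter delta controls smoothing,
# assuming an exponential kernel normalized to unit area.
import numpy as np

t = np.linspace(0.0, 12.0, 1201)
dt = t[1] - t[0]
precursor = np.where((t > 1.0) & (t < 3.0), 1.0, 0.0)  # sharp-edged precursor

def dispersed_input(delta):
    fd = np.exp(-t / delta)
    fd /= np.trapz(fd, t)                  # normalize to unit area
    return np.convolve(precursor, fd)[: t.size] * dt

nearly_unchanged = dispersed_input(0.05)   # small delta: edges preserved
heavily_smoothed = dispersed_input(2.0)    # large delta: edges washed out
```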
The smoothing of the input function, f(t), provided by the dispersion function, fd(t), is carried forward to the drug level response in the subsequent convolution operation with the unit impulse response function, cδ(t). Smoothing and fitting flexibility are inversely related. Too little smoothing (too small a δ value) results in too much fitting flexibility, which leads to a “fitting to the error in the data”; in the most extreme case, the result is an exact fit to the data. Conversely, too much smoothing (too large a δ value) results in too little flexibility, so that the calculated response curve becomes too “stiff” or “stretched out” to follow the underlying true drug level response.