Research Article: Variability Attribution for Automated Model Building

Date Published: March 8, 2019

Publisher: Springer International Publishing

Author(s): Moustafa M. A. Ibrahim, Rikard Nordgren, Maria C. Kjellsson, Mats O. Karlsson.

http://doi.org/10.1208/s12248-019-0310-5

Abstract

We investigated the possible advantages of using linearization to evaluate models of residual unexplained variability (RUV) for automated model building, in a fashion similar to the recently developed method “residual modeling.” Residual modeling, although fast and easy to automate, cannot identify the impact of implementing the needed RUV model on the imprecision of the remaining model parameters. We tested six RUV models with 12 real data examples. Each example was first linearized; then, we assessed the agreement in improvement of fit between the base model and its extended models for linearization and conventional analysis, in comparison with the performance of residual modeling. Afterward, we compared the estimates of parameter variabilities and their uncertainties obtained by linearization with those from conventional analysis. Linearization accurately identified and quantified the nature and magnitude of RUV model misspecification, similar to residual modeling. In addition, linearization identified the direction of change and quantified the magnitude of change in variability parameters and their uncertainties. The method is implemented in the software package PsN for automated model building/evaluation with continuous data.

Partial Text

Nonlinear mixed-effects (NLME) modeling, commonly known as the population approach, is increasingly used to describe longitudinal data from preclinical/clinical experiments, either to improve the efficiency of the drug-development process and subsequent dosing, or to increase the understanding of the underlying pathophysiological system (1). In contrast to the naive pooling approach, which ignores individual differences, and the two-stage approach, which does not distinguish between subject-level and observation-level variability, NLME models allow pooling of sparse data from different subjects while simultaneously quantifying multiple levels of variability, thanks to their mixed-effects nature. In a mixed-effects analysis, population parameters are included in a model as fixed effects, and the variability within the population as random effects. Random effects can capture variability at both the subject and observation levels, as inter-individual variability (IIV), between-occasion variability, between-study variability, and residual unexplained variability (RUV). This ability to separate different sources of variability is particularly critical in many clinical applications, e.g., therapeutic drug monitoring.
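As a minimal sketch of this decomposition (all parameter values and the one-compartment model below are illustrative, not taken from the paper): a population fixed effect is perturbed by a subject-level random effect (IIV), and each observation additionally carries residual unexplained variability (RUV).

```python
import numpy as np

rng = np.random.default_rng(1)

# Population (fixed-effect) parameters -- illustrative values only
theta_cl, theta_v = 5.0, 50.0   # typical clearance (L/h) and volume (L)
omega_cl = 0.3                  # SD of log-normal IIV on clearance
sigma = 0.1                     # SD of proportional RUV

dose = 100.0
times = np.array([1.0, 2.0, 4.0, 8.0])

def simulate_subject():
    # Subject-level random effect eta_i shifts the fixed effect
    eta_i = rng.normal(0.0, omega_cl)
    cl_i = theta_cl * np.exp(eta_i)
    # One-compartment IV-bolus prediction for this subject
    conc = (dose / theta_v) * np.exp(-cl_i / theta_v * times)
    # Observation-level proportional residual error
    eps = rng.normal(0.0, sigma, size=times.shape)
    return conc * (1.0 + eps)

obs = np.array([simulate_subject() for _ in range(20)])
print(obs.shape)  # (20, 4): 20 subjects x 4 sampling times
```

The two noise sources act at different levels: eta_i is drawn once per subject, eps once per observation, which is what lets an NLME fit attribute variability to the correct level.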

Linearization was successfully applied to all examples, as supported by the similarity in OFV between each linearized base model and its NLME base model. All examples were extended successfully to the different RUV models, except for the AR1 and t-distribution error models with the Clomethiazole and IGI models. All examples benefitted significantly from one or more of the RUV extensions, except for the Daunorubicin model. Across all examples, the agreement between ΔOFVlin and ΔOFVNLME was good, as shown in Fig. 2. Compared with the performance of residual modeling in predicting ΔOFVNLME, linearization surpassed CWRESI, IWRES, and NPDE over all ranges of ΔOFVNLME, and performed better than CWRES at most ranges of ΔOFVNLME, except at low ranges (~ 10) where CWRES was slightly better. Linearization accurately identified the most important RUV extension for all examples, in agreement with conventional analysis, surpassing CWRES modeling, which reversed the order of the first and second most important extensions for two examples, the Ethambutol and Disufenton sodium models. Linearization also identified the RUV extensions yielding a significant improvement of fit in all examples, in agreement with conventional analysis, whereas CWRES modeling missed only the t-distribution error model with the Asenapine model, as shown in Fig. 2. The Asenapine model is the only example with an IIV-on-residuals model as the base model, which may be sufficient to explain outliers and would make the t-distribution error model less important. The median ratio ΔOFVlin/ΔOFVNLME was 0.95 among models with significant improvement, compared with a median ratio ΔOFVCWRES/ΔOFVNLME of 0.8.

Fig. 2 Plot of absolute ΔOFVNLME versus absolute ΔOFV for CWRES, CWRESI, IWRES, linearization, and NPDE among the real data examples for the six extended RUV models
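For context, improvement of fit is judged by the drop in objective function value (ΔOFV) against a likelihood-ratio-test cutoff, and the agreement metric above is a median of per-example ΔOFV ratios. A sketch of that bookkeeping, with made-up numbers (not the paper's data):

```python
from statistics import median

# Hypothetical (dOFV_linearized, dOFV_conventional-NLME) pairs -- illustrative only
pairs = [(12.1, 12.8), (45.0, 47.3), (6.9, 7.4), (102.5, 99.8)]

# For one extra RUV parameter, the LRT cutoff at p < 0.05 is
# chi-square(df=1, 0.95) ~= 3.84
CUTOFF = 3.84

# Keep only extensions that significantly improve the conventional fit
significant = [(lin, nlme) for lin, nlme in pairs if nlme > CUTOFF]

# Median ratio summarizes how well linearization tracks the full analysis
ratio = median(lin / nlme for lin, nlme in significant)
print(round(ratio, 3))  # 0.948 for these made-up pairs
```

A ratio near 1 indicates that the linearized analysis reproduces the improvement of fit seen in the conventional NLME analysis, which is the sense in which the paper reports 0.95 for linearization versus 0.8 for CWRES modeling.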

In this paper, we explored whether using linearization to identify and quantify RUV model misspecification, similar to residual modeling (5), can provide additional advantages. Residual modeling assesses whether RUV extensions are required to address an RUV misspecification. It does so in an extremely fast and robust way, thanks to the simple nature of models for residual data. In the case of multiple dependent variables, residual modeling evaluates the RUV extensions separately for each dependent variable, identifying which variable needs which extension, and thereby reducing the risk of ending up with an over-parameterized NLME model. However, estimation on residual data has shortcomings, as residual modeling cannot inform on the rest of the NLME model parameters. Implementing a needed RUV extension in an NLME model would be expected to improve the uncertainties of Ω and, subsequently, of θ, as the latter is a function of the former. Linearization, in contrast to residual modeling, uses the partial derivatives of the model with respect to the empirical Bayes estimates η̂i from the fit of the NLME model. It estimates the RUV model, incorporating any extension, and the random-effects components given the same data as the NLME model. Thus, linearization can estimate explicitly the random effects and their uncertainties in the base and extended models, and implicitly the magnitude and direction of change in the random effects and their uncertainties, which is what we have shown here.
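The core idea of the linearization step can be illustrated with a toy first-order Taylor expansion of a nonlinear prediction around the empirical Bayes estimate η̂i; the model, names, and values below are illustrative and are not the actual PsN/NONMEM implementation.

```python
import numpy as np

# Toy one-parameter model: the prediction is a nonlinear function of the
# random effect eta (model and values are illustrative only)
def f(eta, t=2.0, dose=100.0, v=50.0, cl_pop=5.0):
    cl = cl_pop * np.exp(eta)
    return (dose / v) * np.exp(-cl / v * t)

eta_hat = 0.15  # empirical Bayes estimate from a hypothetical base-model fit
h = 1e-6
# Numerical partial derivative of the prediction w.r.t. eta at eta_hat
dfdeta = (f(eta_hat + h) - f(eta_hat - h)) / (2 * h)

def f_lin(eta):
    # First-order expansion around eta_hat: exact at eta_hat,
    # approximate (but cheap and linear in eta) elsewhere
    return f(eta_hat) + dfdeta * (eta - eta_hat)

for eta in (0.15, 0.25):
    print(eta, round(f(eta), 4), round(f_lin(eta), 4))
```

Because the linearized model is linear in the random effects, refitting it with an extended RUV model is fast and stable, while still propagating changes into the random-effects estimates and their uncertainties, which is the advantage over residual modeling described above.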

