Modeling Methods

Under construction

Model-based analysis is more an art than a science. -Lewis B. Sheiner, MD

Important considerations

Impact of a model [1,2]

  • What is the modeling used for (e.g., bridging, dose selection, SmPC parameters)?
    • Does the conclusion align with the aim?
  • What data is available?
    • Rich data
    • Sparse data
  • What is the structural model?
    • Reasonable parameter estimates and RSEs?
    • Graphical evaluation (VPC first; see the sketch after this list)
    • Covariate evaluation
  • Exposure-response analysis is generally uninformative when only a single dose level has been studied, even if dosing is weight-adjusted
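
As a concrete illustration of the graphical evaluation step, below is a minimal, schematic sketch of a visual predictive check (VPC): simulate many replicate trials from the fitted model and compare simulated percentiles with the observed ones. The one-compartment oral model, all parameter values, and the "observed" data here are hypothetical placeholders, and numpy/matplotlib are assumed to be available.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
t = np.array([0.5, 1, 2, 4, 8, 12, 24.0])        # sampling times (h)
dose, ka, cl, v = 100.0, 1.0, 5.0, 30.0          # hypothetical typical values
omega_cl, sigma = 0.3, 0.15                      # BSV on CL (SD), proportional residual SD

def conc(t, ka, cl, v, dose):
    """One-compartment model with first-order absorption."""
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def simulate_trial(n_subj):
    cl_i = cl * np.exp(rng.normal(0, omega_cl, size=(n_subj, 1)))  # between-subject variability
    c = conc(t, ka, cl_i, v, dose)
    return c * np.exp(rng.normal(0, sigma, size=c.shape))          # residual error

observed = simulate_trial(40)                              # stand-in for the observed data
sims = np.stack([simulate_trial(40) for _ in range(500)])  # 500 replicate trials

sim_pct = np.percentile(sims, [5, 50, 95], axis=1)         # percentiles within each replicate
obs_pct = np.percentile(observed, [5, 50, 95], axis=0)

for q, y in zip((5, 50, 95), obs_pct):
    plt.plot(t, y, "o-", label=f"observed {q}th percentile")
for i in range(3):
    lo, hi = np.percentile(sim_pct[i], [2.5, 97.5], axis=0)  # 95% CI of each simulated percentile
    plt.fill_between(t, lo, hi, alpha=0.3)
plt.xlabel("Time (h)"); plt.ylabel("Concentration"); plt.legend(); plt.show()
```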

Reviewing models

  • Does my conclusion align with the authors'?
  • Questions NGN (eNGiNe)
    • Need-to-know: Will affect conclusion (Major objection)
    • Good-to-know: Could affect conclusion (Other concern)
    • Nice-to-know: Won’t affect conclusion (avoid asking this question)

Terminology

Parsimony: When comparing two models that describe the data equally well, the model with fewer parameters is preferable.
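
Parsimony is often operationalised by penalising the number of parameters, for example via AIC. Below is a minimal sketch, assuming numpy/scipy and entirely hypothetical data, comparing a mono- and a bi-exponential fit with the least-squares form of AIC, n*ln(RSS/n) + 2k.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.5, 24, 12)
y = 10 * np.exp(-0.2 * t) + rng.normal(0, 0.3, t.size)   # simulated mono-exponential data

def mono(t, a, k):
    return a * np.exp(-k * t)

def biexp(t, a, k1, b, k2):
    return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

def aic(model, p0):
    p, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    rss = np.sum((y - model(t, *p)) ** 2)
    return t.size * np.log(rss / t.size) + 2 * len(p)     # least-squares AIC

print("mono-exponential AIC:", round(aic(mono, [10, 0.2]), 2))
print("bi-exponential  AIC:", round(aic(biexp, [5, 0.5, 5, 0.1]), 2))
# With comparable fits, the penalty term 2k favours the simpler model.
```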

Shrinkage: Shrinkage quantifies how much individual estimates regress towards the population mean under the given sampling schedule [3]. Equivalently, it reflects how much information the current data contain about the individual parameter being estimated.
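
A minimal sketch of how η-shrinkage is commonly computed (1 - SD of the empirical Bayes estimates divided by the estimated ω, cf. [3]); the EBE vector and ω value below are placeholders.

```python
import numpy as np

# Placeholder empirical Bayes estimates (EBEs) of eta for one random effect,
# and the corresponding population SD (omega) from the model fit.
eta_ebe = np.array([0.05, -0.10, 0.02, 0.08, -0.03, 0.01, -0.06, 0.04])
omega = 0.30

# Eta-shrinkage: 1 - SD(EBE eta) / omega. High shrinkage means the individual
# estimates are pulled strongly towards the population mean, and EBE-based
# diagnostics become less reliable.
shrinkage = 1 - np.std(eta_ebe, ddof=1) / omega
print(f"eta-shrinkage: {shrinkage:.1%}")
```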

Robustness: Robustness describes a model’s ability to perform reliably under varying assumptions or when faced with unusual or extreme data points. Robust models produce stable parameter estimates even if underlying assumptions (normality, independence, linearity, etc.) are not perfectly met. Non-robust models may yield misleading results or parameter instability if assumptions are violated.

Misspecification: Misspecification occurs when a model’s mathematical structure or assumptions differ significantly from the underlying data-generating process.

Note

It is reasonable to assume that there will always be some model misspecification. We recall:

All models are wrong, but some are useful. -George E. P. Box, PhD

Overfitting: The model captures noise in the training data, so it fits the training data very well but generalizes poorly to new data.

Identifiability: Whether parameters can be uniquely estimated from the data. Lack of identifiability means multiple parameter combinations yield the same likelihood.
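
A small numeric illustration, assuming an oral one-compartment model without IV data: scaling F, CL, and V by the same factor leaves the concentration profile unchanged, so F, CL, and V are not separately identifiable (only CL/F and V/F are).

```python
import numpy as np

def conc_oral(t, dose, F, ka, cl, v):
    """One-compartment model with first-order absorption."""
    ke = cl / v
    return F * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.1, 24, 50)
c1 = conc_oral(t, dose=100, F=1.0, ka=1.0, cl=5.0, v=30.0)
c2 = conc_oral(t, dose=100, F=0.5, ka=1.0, cl=2.5, v=15.0)  # F, CL, V all halved

# Identical profiles: the data cannot distinguish these parameter sets.
print(np.allclose(c1, c2))  # True
```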

Bias-variance tradeoff: Balancing a model’s simplicity (high bias, low variance) versus complexity (low bias, high variance) to optimize predictive accuracy.
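
A minimal sketch with hypothetical data illustrating both overfitting and the bias-variance tradeoff: as model complexity (here, polynomial degree) increases, the training error keeps falling while the held-out error eventually rises.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # hypothetical noisy data

train = rng.random(x.size) < 0.7                         # random train/test split
for degree in (1, 3, 5, 7, 9):
    coef = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coef, x)
    mse_train = np.mean((y[train] - pred[train]) ** 2)
    mse_test = np.mean((y[~train] - pred[~train]) ** 2)
    print(f"degree {degree}   train MSE {mse_train:.3f}   test MSE {mse_test:.3f}")
# Low degrees underfit (high bias); high degrees drive the training error down
# but the test error up (overfitting, high variance).
```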

References

[1]
Musuamba FT, Skottheim Rusten I, Lesage R, Russo G, Bursi R, Emili L, et al. Scientific and regulatory evaluation of mechanistic in Silico drug and disease models in drug development: Building model credibility. CPT Pharmacometrics Syst Pharmacol 2021;10:804–25. https://doi.org/10.1002/psp4.12669.
[2]
Skottheim Rusten I, Musuamba FT. Scientific and regulatory evaluation of empirical pharmacometric models: An application of the risk informed credibility assessment framework. CPT Pharmacometrics Syst Pharmacol 2021;10:1281–96. https://doi.org/10.1002/psp4.12708.
[3]
Savic RM, Karlsson MO. Importance of Shrinkage in Empirical Bayes Estimates for Diagnostics: Problems and Solutions. AAPS J 2009;11:558–69. https://doi.org/10.1208/s12248-009-9133-0.

Footnotes

  1. Summary of Product Characteristics↩︎

  2. Relative standard error↩︎

  3. Visual Predictive Check↩︎