
COMPUTER SIMULATIONS IN PHARMACOKINETICS AND PHARMACODYNAMICS:

REDISCOVERING SYSTEMS PHYSIOLOGY IN THE 21ST CENTURY

PAOLO VICINI

Contents

21.1 Introduction

21.2 Level 1: Computer Simulation of the Whole Organism

21.3 Level 2: Computer Simulation of Isolated Tissues and Organs

21.4 Level 3: Computer Simulations of the Cell

21.5 Level 4: Proteins and Genes

21.6 Conclusion

Acknowledgments

References

21.1 INTRODUCTION

Perhaps no technology in human history has radically changed so many disciplines as the introduction of personal computing and the now-ubiquitous presence of the World Wide Web. What the joint application of these enabling technologies allows us to do is instantaneously exchange robust, verifiable, and consistent information. An area that has benefited from this is sometimes termed "biocomputation": the transformation, by computational means, of biological and biomedical research from a painstaking endeavor often reserved for the bench and the field (Fig. 21.1). However, clearly outlining exactly what is entailed by "biocomputation," or "biomedical simulation," is more often than not a challenge. Terms like "systems biology" and "bioinformatics" are increasingly used in multiple settings, but the multiple meanings behind them, and especially the expectations associated with these technologies, are not always clear. Some even draw a distinction between "biomedical informatics" and "bioinformatics," not unlike the one drawn between "bioengineering" and "biomedical engineering." The very fact that biomedical computation has become so pervasive has made it difficult to draw clear boundaries between areas and to unambiguously define areas of expertise and/or influence for practitioners who are now extending computer modeling to virtually every aspect of the biomedical enterprise "from bench to bedside" [1], all the way from clinical record management to computer-aided drug design, through clinical trial simulation, therapeutic drug monitoring, pharmacogenomics, and molecular engineering.

The information revolution in biology has been facilitated, and in a very real sense motivated, by the emphasis placed on "discovery science" [2]: projects such as the Human Genome Project and the various databasing efforts needed to somehow coordinate and manage the increasing amount of bioinformation being generated by thousands of laboratories worldwide. This has coincided with a scientific change of emphasis that is best tracked through the different interpretations and meanings associated with the phrase "systems biology" nowadays and a few decades ago. According to Guyton [3] and other holistic physiologists, a living homeostatic system was thought of as being comprised of a series of interacting parts, or subsystems, an understanding of which was deemed essential to comprehension of the complex dynamics of the whole. However, the starting point at that time was the intact system, as it was believed that only through information gathered on the macroscopic behavior of the whole could one understand the inner workings of the parts. Since, as Aristotle proposed, "the whole is more than the sum of the parts," direct investigation of the living system was essential. The approach was "top to bottom." This point of view shifted with the advent of molecular biology, which brought within reach the possibility of looking directly at the parts themselves at an unprecedented level of biophysical detail. A clear, unambiguous, and validated understanding of the parts would in time, it was argued, lead to an understanding of how they interact and how they conspire to shape the dynamic performance of the intact, living system. This in turn motivated a paradigm shift from clinical sciences to basic sciences, and in the pharmaceutical sciences from clinical pharmacology to molecular pharmacology. This is the "bottom to top" approach to biomedical research. Clearly, with so much information at their fingertips, modern biologists should have more than enough ammunition to build comprehensive, testable models of biosystems: Possunt quia posse videntur. However, what could not be anticipated is that unexpected complexity lurked in the modalities of interaction of the ingredients that make up a living system, so that mathematical and computer representations of comparatively simple subsystems almost invariably tend to be much more complex than the whole, living system of which they are a part [4]. This has been a somewhat unsatisfactory situation for modern biological research, where the need for refocus is periodically felt, for example, through initiatives such as the NIH Roadmap [5] and changes to NIH peer review criteria [6].

The drug development process was influenced by these changes as well, but from a slightly different perspective. Because drug development needs to remain focused on the clinical outcome, or, in other words, it has to generate drugs that are safe and effective, the shift to molecular pharmacology, at least in the private sector, was accompanied by a continued presence of the tenets of clinical pharmacology, in a beneficial synergy that includes the best of both worlds [7]. This has not been the case in the academy, where training programs in clinical pharmacology have become few and far between and the emphasis is on basic science, sometimes at the expense of traditional disciplines such as pharmacokinetics and pharmacodynamics.

What happens in drug development these days is a recasting of Guyton's all-encompassing, whole-system quantification approach, balanced by an increased awareness of the "parts list" that comes from molecular biology [9]. With the pragmatism that characterizes the drug developer, these two different emphases are both used to lead to the creation of better therapeutics. The FDA, for example, has been rather well positioned to take advantage of advances in biocomputation and has addressed recent developments in computational modeling in the development process through the issuance of guidances and consensus documents [10]. The same is happening at other federal agencies. The EPA is becoming increasingly aware [11] of the potential advantage [12] of aggressively using computational representations of complex systems to predict likely system behavior, or at least narrow down the field of possibilities. DARPA has started a project, termed Virtual Soldier, to achieve the rather ambitious goal of creating physiological, mathematical, and software representations of individual soldiers [13].

In this chapter, we describe how some of the advances in biocomputation have impacted, or potentially will impact, pharmaceutical research and development. We list them by "biological size," going from the most to the least organized, or from the most complex to the least complex. We focus on the clinical sciences in particular, because we feel that simplified, but useful, representations of pharmacological interventions have the greatest potential for shortening the development process and weeding out potentially unsatisfactory candidates. The discussion is articulated along four levels, roughly following the idea of "biological size," which will carry us from the whole organism to genetic networks through the analysis of biocomputation applications to isolated organs, cells, and molecules.

21.2 LEVEL 1: COMPUTER SIMULATION OF THE WHOLE ORGANISM


In a sense, being able to model the whole organism is the essential goal of biocomputing. In drug development, it provides the obligatory handle for linking exposure to response (Fig. 21.2). Provided the intact organism is mathematically represented, a whole series of possibilities can be brought into practice, such as the simulation of clinical trials and of the prospective behavior of entire populations. In drug development, whole-body systems are usually represented in one of two ways. The first approach is the formalization of a lumped-parameter PK-PD model [14], often coupled with a model of the disease process [15], whose parameters can be estimated from data. A relatively small number of differential equations, between one and ten, is used to predict the system's behavior over time [16]. Often, but not always, some variation of population PK-PD [17], predicated on nonlinear regression and nonlinear mixed-effects models [18], is used to estimate both the population parameter values and their statistical distribution. The same approach can be taken in reverse [19] by using models to generate synthetic data, ultimately performing a full clinical trial simulation from first principles [20]. The other approach to whole-organism models is based on physiological, or physiologically based pharmacokinetic (PBPK), modeling [21]; these models are still based on ordinary differential equations, but they attempt to describe the organism, and especially the interacting organs, in more detail, often by increasing the number of differential equations (from 10 to perhaps 30) and building appropriate interactions between the organs that resemble their physical arrangement in the organism being studied.

Figure 21.2 The exposure-response road map passes through pharmacokinetics and pharmacodynamics. This sequence of events is essentially the same as that of the computer simulation of clinical trials, with the addition of complicating, but important, factors such as protocol adherence and dropouts.
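As a concrete illustration of the lumped-parameter approach described above, the following is a minimal sketch of a one-compartment PK model with first-order absorption driving a direct Emax pharmacodynamic response. The structure, dose, and all parameter values are hypothetical choices made for illustration only, not a model drawn from the cited references.

# Minimal sketch of a lumped-parameter PK-PD simulation (Level 1).
# All parameter values are hypothetical and chosen only for illustration.
import numpy as np
from scipy.integrate import solve_ivp

ka, CL, V = 1.0, 5.0, 30.0          # absorption rate (1/h), clearance (L/h), volume (L)
dose = 100.0                        # oral dose (mg), placed in the gut compartment
E0, Emax, EC50 = 10.0, 50.0, 1.5    # baseline effect, maximal effect, potency (mg/L)

def pk_rhs(t, y):
    """Two ODEs: drug amount in the gut (A_g) and in the central compartment (A_c)."""
    A_g, A_c = y
    dA_g = -ka * A_g
    dA_c = ka * A_g - (CL / V) * A_c
    return [dA_g, dA_c]

t_grid = np.linspace(0.0, 24.0, 241)
sol = solve_ivp(pk_rhs, (0.0, 24.0), [dose, 0.0], t_eval=t_grid)

conc = sol.y[1] / V                            # plasma concentration over time (mg/L)
effect = E0 + Emax * conc / (EC50 + conc)      # direct Emax pharmacodynamic response

print(f"Cmax = {conc.max():.2f} mg/L, peak effect = {effect.max():.1f}")

The same few equations, with parameters redrawn from assumed population distributions at each iteration, are the building block of the clinical trial simulations mentioned above; a PBPK model simply multiplies the number of such compartments and connects them according to anatomy.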

Although the representation of the intact organism provided by PK-PD and PBPK models is simplified, it does pose nontraditional challenges. For PK-PD, the purpose is finding the best (simplest?) model that can explain the observations [23]. Formally speaking, the concept of "best" is difficult to define unambiguously. More often than not, model selection is driven by some kind of parsimony criterion that balances model complexity with the information provided by the measurements, together with a set of "good practices" that can serve as guidance for model development, selection, and application [24]. PBPK models come at the problem from a different angle [25]. Because they embed previous knowledge about the organism, the kinetics of its organs, their arrangement, and their specific parameter values, the process of tailoring the model to the specific measurements at hand is not as crucial. On the other hand, PBPK models can suffer greatly in their predictive power if their parameterization is inaccurate, poorly specified, or not well tailored to the particular drug. Many researchers split PBPK model parameters and structures into "drug specific" and "not drug specific," thus implying that the model can indeed capture some underlying dynamics that are general for all drugs, and that further specification can be limited to the exclusive characteristics of a certain molecule. It is also very important to specify parameter and structure uncertainty when dealing with model-based predictions [26]. More detail on how these parameters can be specified is provided below. The approach taken by PBPK modeling is not very dissimilar from that of the recently proposed Physiome Project [27], a parts list of the human organism. More often than not, the rate-limiting step for the development of PBPK models is the availability of information on single-organ parameters, such as flow rates and partition coefficients [28]; an exhaustive list of these, such as the one the Physiome Project may provide, could certainly help. The EPA is also showing interest in computer-based prediction of individual pharmacokinetics, and has recently put the relevant technology out for public comment. Finally, it is worthwhile to note that there have been recent advances in the understanding of the mechanistic underpinnings of whole-organism homeostasis [29] that have not yet been aggressively applied to drug development (where they would be most useful, one would expect, for between- and within-species scaling). It is also worth noting that the foremost challenges for the detailed modeling of the intact organism (computing time, complexity of interactions, model selection) are very similar to those entailed by the analysis of proteomic and genomic data. In the clinical case, complexity shifts from the richness of the data set to the model formulation, whereas in the proteomic-genomic case the main source of difficulty is the sheer size of the data set, even though, at least at present, the interpretive tools are rather uncomplicated.
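To make the idea of a parsimony criterion concrete, the sketch below fits two candidate disposition models (mono- and bi-exponential) to invented concentration-time data and compares them with the Akaike information criterion. The data, starting values, and candidate models are illustrative assumptions, not examples taken from the chapter's references.

# Minimal sketch of parsimony-driven model selection: the lower AIC wins.
# The concentration-time data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])   # h
c = np.array([9.1, 8.2, 6.9, 5.0, 3.1, 2.2, 1.7, 1.1, 0.29])     # mg/L

def mono(t, A, a):
    return A * np.exp(-a * t)

def bi(t, A, a, B, b):
    return A * np.exp(-a * t) + B * np.exp(-b * t)

def aic(y, yhat, n_par):
    """AIC from the residual sum of squares, assuming Gaussian errors."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_par

p_mono, _ = curve_fit(mono, t, c, p0=[9.0, 0.2])
p_bi, _ = curve_fit(bi, t, c, p0=[6.0, 0.5, 4.0, 0.1])

print("AIC, mono-exponential:", round(aic(c, mono(t, *p_mono), 2), 2))
print("AIC, bi-exponential:  ", round(aic(c, bi(t, *p_bi), 4), 2))
# The extra parameters of the richer model are accepted only if they
# improve the fit enough to offset the complexity penalty.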

21.3 LEVEL 2: COMPUTER SIMULATION OF ISOLATED TISSUES AND ORGANS

The behavior of molecules in isolated organs has been the subject of extensive investigation. The heart [30] and the liver [31] were historically the organs most extensively investigated [32], although the kidney [33] and the brain [34] have also been the subjects of mathematical modeling research. The liver in particular has been extensively researched in both the biomedical [35] and the pharmaceutical [36] literature. Many of the computer simulations for the heart and the liver were carried out with distributed blood-tissue exchange (BTEX) models [37], because the increased level of detail and temporal resolution certainly makes the good-mixing and uniformity hypotheses at the basis of lumped-parameter models less tenable [38]. The work of Goresky, Bassingthwaighte, and others has spearheaded this area of development for mathematical modeling, and in recent times drug development has rediscovered some of the analytical tools proposed by this research community [39]. It can be speculated that the integration of organ-specific modeling with whole-organism models would result in improvements for the PBPK approach through better (i.e., more physiologically sensitive and plausible) models of individual organs. The main challenge in doing so is the required shift from lumped to distributed models. The jump to partial differential equations is fraught with difficulties, especially because the average bench biologist often has a lot of trouble grasping the concepts behind ordinary differential equations as well. This motivates the question of who the audience for these technologies is, or who is expected to be a user of the software and a reader of the papers.
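To give a flavor of what the jump from lumped to distributed models involves, the sketch below discretizes an axially distributed capillary-tissue exchange model, in the spirit of the BTEX formulations cited above, with the method of lines and integrates the resulting ODE system. The two-region structure, parameter values, and input function are simplified, hypothetical choices, not a reproduction of any published BTEX implementation.

# Minimal sketch of an axially distributed blood-tissue exchange model:
# a capillary region with plug-flow convection exchanges solute with a
# surrounding tissue region. The two coupled PDEs are discretized along
# the (dimensionless) capillary axis and solved as a system of ODEs.
# All parameter values and the input function are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

N = 50                       # number of axial segments
dz = 1.0 / N                 # dimensionless axial step
F = 1.0                      # plasma flow (mL/min per g tissue)
Vc, Vt = 0.05, 0.15          # capillary and tissue distribution volumes (mL/g)
PS = 0.5                     # permeability-surface area product (mL/min/g)

def c_in(t):
    """Smooth, bolus-like inflow concentration (arbitrary units)."""
    return np.exp(-((t - 0.3) / 0.1) ** 2)

def rhs(t, y):
    Cc, Ct = y[:N], y[N:]                        # capillary and tissue concentrations
    Cup = np.concatenate(([c_in(t)], Cc[:-1]))   # upstream neighbor of each segment
    conv = (F / Vc) * (Cup - Cc) / dz            # upwind axial convection
    exch = PS * (Cc - Ct)                        # passive capillary-tissue exchange
    return np.concatenate((conv - exch / Vc, exch / Vt))

sol = solve_ivp(rhs, (0.0, 3.0), np.zeros(2 * N),
                t_eval=np.linspace(0.0, 3.0, 301), method="LSODA")

outflow = sol.y[N - 1]                           # concentration leaving the capillary
peak = outflow.argmax()
print(f"peak outflow = {outflow.max():.3f} at t = {sol.t[peak]:.2f} min")

Even this toy version makes the point of the preceding paragraph: the state vector grows with the spatial grid, the system becomes harder to integrate, and numerical choices (discretization scheme, solver) become part of the model itself.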
There is an enormous variety of software for pharmacokinetic and pharmacodynamic simulations, with a partial list available in Table 21.1 and more up-to-date lists available elsewhere [40]. As an example of infrastructure endeavors, a new project funded by the National Institute of General Medical Sciences at the NIH, the Center for Modeling Integrated Metabolic Systems (MIMS) [41], focuses on the development and integration of in vivo, organ-specific mathematical models that can successfully predict behavior over a range of parameters, including rest and exercise and various pathophysiological conditions. The Microcirculation Physiome [42] and the Cardiome [43] are other multicentered projects focused on particular aspects of the Physiome undertaking. One prevalent concept that seems to emerge from these large-scale projects is that of interdisciplinary collaboration, and especially of the need to bring together many areas of expertise for the solution of these problems. The development of integrated computational representations of biological systems has to borrow from many fields, if nothing else because of the multidisciplinary complexity that some of these endeavors imply.
