Simulated Clinical Trials
Abstract
Computer simulation is being increasingly used to aid and complement real-life experimentation, including clinical trials for drug development, since real trials are costly, frequently fail
and may lead to serious side effects. This paper is a survey of the statistical issues arising
in these simulated trials, with particular emphasis on the design of virtual experiments,
stressing similarities and differences with the design of real trials. We discuss the aims
and the peculiarities of the simulation models used in this context, including a brief
mention of the use of metamodels, and different validating techniques. We illustrate the
various issues brought about by this investigation tool by means of simulation projects recently published in the medical and pharmaceutical literature. We end the paper with some remarks on the challenges posed by simulation in clinical research, and the interesting research problem of how to plan trials that combine real and virtual experimentation.
Keywords
1. Introduction
Simulation run on a computer is a formidable tool to aid and complement real-life experimentation: once a model of the phenomenon of interest is implemented in computer code, virtual experiments can be run to imitate the behaviour of the system of interest. Simulators make it possible to explore complex relationships between input and output variables and can be used for prediction and assessment. They are also invaluable when only a few physical runs can be made due to
their high cost. For these reasons the practice of complementing laboratory experiments
or field observations by means of simulated ones has been steadily growing in recent
years. The books by Santner, Williams and Notz (2003) and by Fang, Li and Sudjianto (2005) are devoted to the design and analysis of computer experiments. Starting from applications, Steinberg (2009) reviewed some of the main ideas that have been proposed for the statistical analysis and design of studies that use computer simulation in place of real data.
Despite the understandable misgivings of the non-experts, the idea that the
functioning of the human body can be mathematically modelled and analyzed has been
widely accepted in the scientific community, at least since the second half of the last century. Mathematical models of physiological functioning, disease progression and drug behaviour in the human body are now commonplace. Clinical trials, on the other hand, are subject to the well-known "individual versus collective ethics" dilemma: potential harm to the subjects must be minimized, especially
when they are patients presently under care, and at the same time the trial must
maximize the experimental information for the sake of future patients. As well as the
ethical considerations, time and costs are also important. To bring down the costs,
prevent possible failures in future trials, reduce the trial time frame and avoid possible
side effects in humans, clinical trial simulation (CTS) is asserting itself as an emerging
technique to improve the efficiency of the drug development process, thanks also to the
advent of new powerful software tools. The excellent set of guidelines (Holford et al.,
1999) for correct CTS, suggested in 1999, covers topics ranging from the planning of a simulation project to the assessment of simulation results and reporting, but it is not clear whether or not these guidelines are followed in actual practice. A very recent substantial review is that by Holford, Ma and Ploeger (2010); see
also the collective volume edited by Kimko and Duffull (2003) which gives a general
overview of simulation for clinical trials presenting a large number of case studies (see
also Taylor and Bosch, 1990; Holford et al., 2002). A very useful introductory article is Krause (2010). In the present paper we deal with simulation in clinical trials, paying special attention to the design aspects. We aim at
making medical statisticians more aware of the statistical issues and problems arising in
this field. Section 2 presents some remarks about protocols for simulation studies. All
the potential aims of simulation in clinical research are overviewed in Section 3. Section
4 contains a short description of the models used in clinical contexts, which must be
implemented in a simulator, and in Section 5, the central part of this paper, we discuss
the ensuing experimental design problems: the design of a simulated experiment is not
necessarily the same as for a real one, due also to a possible difference in the endpoints,
the aims, etc. Section 6 explores existing software for CTS and Section 7 contains a brief account of the use of metamodels. Section 8 addresses the question of validating the simulator of a virtual trial, in which statistics should play a crucial role. All the above topics will be illustrated by studies recently published in the medical and pharmaceutical literature, on which we comment and occasionally express some criticisms. Given that the subject is vast, we have made no attempt at covering all the existing bibliography: we refer to Holford et al. (2010) and to some of the references quoted therein.

2. Protocols for simulation studies
In the Western world and the major developing countries, guidelines for the correct
conduct of a clinical study have been issued by authoritative regulatory agencies. In drug
development, an important role is played by the guidelines of the International Conference on Harmonization (ICH). As is well known, a protocol is demanded for every trial, namely
a written document setting out the rules and the steps to follow in the study, aimed at
assuring the safety and health of the trial subjects, and also adherence to the same
standards by all the study investigators when the trial is a multicentre one. Among the
statistical decisions to be made in advance of the trial, there is the description of the experimental design, which typically includes:
- the choice of the treatments, which often include one or more controls
- the sample size; when the design is carried out sequentially, this is replaced by a stopping rule
- the allocation rule of the subjects to the treatment arms; very often this rule has a random component
- the use of blinding or double blinding, i.e. masking the treatments to the subjects and/or the investigators,
and so on. In simulated trials one can safely assume that there are no ethical problems
involved, and the costs are often a minute fraction of those of a real trial, but even for a
virtual trial a protocol is still necessary, as clearly explained in the 1999 Guidelines
(Holford et al., 1999). The primary focus of the protocol is to identify the question(s) that the project team wants to answer by means of the simulation experiment, but the protocol should also state:
- assumptions
- extrapolation questions
and many more issues. The added value of a simulation protocol is discussed by Kimko
and Duffull (2003): among other things, an approved simulation plan increases the credibility of the simulation results.
3. The aims of simulation in clinical research
• Pre-trial purposes
Simulation is often run before a trial with one or more of the following purposes:
1. validating and selecting the simulation model
Abbas et al. (2006) develop five simulation models of a clinical trial for HIV patients treated with different drugs. The models are based on different modelling assumptions, and the primary aim of the paper is to validate and select the "best" model. Selection of the best model is based on the principle of parsimony and on specific validation criteria.
2. assessing the power of the intended statistical analysis and determining the sample size
This typically means running simulations to assess the power of the test that we intend to perform once we observe the data, when analytical calculations are not feasible, and to determine the number of patients who need to be recruited to achieve a desirable statistical power.
One example concerns a new drug called ivabradine, developed for the treatment of stable angina pectoris. The findings of the paper suggest that, in order to obtain a desired reduction of the outcome, it is necessary to include 239 patients per group (placebo control and treated group) with a twice-a-day low dose, or 196 patients with a higher dose.
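By way of illustration only, the following Python sketch estimates the power of a two-sample t-test by Monte Carlo simulation for a range of group sizes; the effect size, standard deviation and significance level are invented and bear no relation to the ivabradine study.

    import numpy as np
    from scipy import stats

    def simulated_power(n_per_arm, effect=0.3, sd=1.0, alpha=0.05, n_sim=5000, seed=1):
        """Monte Carlo estimate of the power of a two-sample t-test."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            control = rng.normal(0.0, sd, n_per_arm)      # placebo arm
            treated = rng.normal(effect, sd, n_per_arm)   # treated arm
            _, p_value = stats.ttest_ind(treated, control)
            rejections += p_value < alpha
        return rejections / n_sim

    # Increase the group size until the estimated power reaches the desired target.
    for n in (50, 100, 150, 200, 250):
        print(n, round(simulated_power(n), 3))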
3. finding robust designs, namely designs not too sensitive to some particular
experimental choice
Lockwood et al. (2006) use clinical trial simulations to find a robust design in
order to test the hypothesis that a novel treatment was effective for Alzheimer's disease.
The primary aim of the study was to compare the power of several experimental
designs to detect a treatment effect using several dose response models since the
true effect of the treatment taken into account was unknown. The simulation
results allowed the research team to compare the trial designs and one of those
proved to be more efficient than the traditional one, leading to savings in time
and costs.
4. predicting the outcome of real trials (this issue can also be viewed as a post-trial
purpose)
Chan et al. (2007) use CTS to predict the outcome of a failed real trial in
order to improve the understanding of its failure. The trial had been conducted on patients treated with levodopa.
• Extrapolation purposes
As stated by Sale (in Bonate and Howard, eds., 2004), there are several dimensions across which one may wish to extrapolate by means of simulations.
2. Phases (from a small number of strictly selected patients to a full clinical study).
De Ridder (2005) illustrates a case study where the aim was to predict the
outcome of a Phase III trial through data from two Phase II trials, in particular dose-ranging trials with patients treated for 4 weeks. Simulations were used in order to obtain the outcomes of the Phase III trial and to assess the robustness of an ongoing Phase III trial in the same context (patient variability, dose-response with a reduced dose as compared with those included in the trial).
Albers et al. (2007) use simulation to find a suitable carvedilol dosing strategy for paediatric patients with congestive heart failure, since the dose had so far been chosen on the basis of the dose for adults, but with dubious results.
5. Dose/dosing regimens
Ozawa et al. (2009) perform trial simulations in order to evaluate the dose
reduction strategy in patients with liver dysfunction of a clinically well-established anticancer drug, docetaxel, used to treat non-small cell lung and other types of cancer. Docetaxel clearance is reduced in patients with liver dysfunction, which makes it difficult to establish a safe dose for this kind of patient, and a reduction strategy linked to the severity of the liver dysfunction has been proposed (Minami et al., 2009). Since it is difficult to have a sufficiently large number of these patients for a real clinical trial, because of the typical exclusion criteria, the authors of this paper use a number of dose-reduction scenarios in simulated trials and compare the resulting drug exposure. The results of the clinical trial simulations suggested that it is possible to adjust the dose to the degree of liver dysfunction without an unacceptable loss of efficacy.
• Learning about the effects of a new drug, or a new dosage, a new dose scheduling, etc. Clearly, simulations cannot by themselves provide direct knowledge about the drug(s) under investigation: that is what real trials are for.
In the final Section of the paper we shall discuss running simulations interactively with a
real trial.
We end this section with two more examples relating to population studies rather than drug development.
Lee et al. (2010) have tried to gain a better understanding of the possible effects
of vaccinating employees with the new H1N1 influenza vaccine through an agent-based model whose computer agents, "like virtual people, moved among virtual households, workplaces, schools, and other locations every day and interacted with each other through simulated social networks" (Lee et al., 2010). The model outcomes were daily numbers of infections and deaths. The simulation shows how several actions regarding vaccination of employees could affect the course of the epidemic and absenteeism in the labour force.
Indeed, Urban et al. (1997) simulated the effects of offering screening to a given population of women, since a randomized controlled trial to assess the efficacy of screening for ovarian cancer is costly
(ovarian cancer is a rare disease and its diagnosis requires surgery). A stochastic
model was developed with the aim of evaluating the cost-effectiveness of several screening strategies, also considering CA 125 (a tumour marker for ovarian cancer).
4. Simulation models for clinical trials
The computer models that simulate real scenarios are generally developed from previous
data sets that may include preclinical data, as well as previous phases of real trials. As
clearly stated in the 1999 Guidelines (Holford et al., 1999), a model for fully simulating a clinical trial consists of three components:
• an input-output (IO) model
• a covariate distribution model
• an execution model
Input-output models: these are the models that describe the patient's response to the treatment in mathematical terms, and they would normally be used for an in vivo study as well. In this context they are typically pharmacokinetic/pharmacodynamic (PK/PD) models, which may be complex and therefore much slower to run. This will be discussed in Section 7. However, other types of models
can also be used, such as physiological models (Chabaud et al., 1999) or agent-based
models, for simulating the behaviour of individuals and the overall consequences of their
local interactions (Lee et al., 2010). A rich collection of PK/PD model equations is available in the literature; examples of the models actually used by the authors can be found in the papers of Pillai et al. (2004), Gruwez et al. (2007) and Zierhut et al. (2008); the paper by Post et al. (2005) includes a discussion of disease progression models.
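Purely as an illustration of what a simple IO model may look like (the parameter values are invented and this is not one of the models used in the papers quoted above), a one-compartment PK model with first-order absorption and log-normal between-subject variability can be simulated in Python as follows.

    import numpy as np

    def concentration(t, dose, ka, ke, v):
        """One-compartment model with first-order absorption (no lag time)."""
        return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    rng = np.random.default_rng(0)
    times = np.linspace(0, 24, 25)              # sampling times in hours
    for subject in range(5):                    # five virtual subjects
        ka = 1.0 * rng.lognormal(sigma=0.2)     # absorption rate constant (1/h)
        ke = 0.1 * rng.lognormal(sigma=0.2)     # elimination rate constant (1/h)
        v = 30.0 * rng.lognormal(sigma=0.2)     # volume of distribution (L)
        print(concentration(times, dose=100.0, ka=ka, ke=ke, v=v).round(2))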
Covariate distribution models: IO models usually include terms for covariate effects
(prognostic factors), as models used for simulation studies must deal with the variability
from individual to individual. Covariate distribution models describe in a probabilistic
way, on the basis of previous trials or clinical experience, the variability of patients’
characteristics in the study population, or the distribution of different characteristics in another population. Thus the impact of the different covariate distributions can be assessed, and it is also possible to explore conditions that have been ruled out by the inclusion/exclusion criteria of real trials.
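A minimal sketch of how virtual subjects might be generated from a covariate distribution model (the distributions and their parameters are invented; in practice they would be estimated from previous trials or registries):

    import numpy as np

    rng = np.random.default_rng(42)

    def virtual_subject():
        """Draw one virtual subject from an illustrative covariate distribution."""
        sex = rng.choice(["F", "M"])
        age = rng.uniform(40, 80)                       # years
        base_weight = 62.0 if sex == "F" else 75.0      # kg, sex-specific mean
        weight = rng.normal(base_weight + 0.1 * (age - 60), 8.0)
        return {"sex": sex, "age": round(age, 1), "weight": round(weight, 1)}

    cohort = [virtual_subject() for _ in range(1000)]   # a simulated study population
    print(cohort[:3])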
Execution models: Although the protocol of a trial is a binding document, it is well known that some deviations from the protocol are inevitable, due to patients dropping out, non-compliance, loss to follow-up, etc., but also due to acquiring subsequent information which was not available when the study protocol was written. In a simulated trial, the execution model describes such deviations from the protocol and therefore can be extensively used as a tool for anticipating weaknesses of the trial: for instance, insufficient statistical power and patients' discontinuation can be studied via modelling and simulation.
Girard et al. (1998) develop a Markov execution model for patients’ non-
compliance assuming that the probability of taking a wrong dose (or not taking
any dose at all) at a given time depends on the number of doses taken at the previous dosing time.
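The idea can be conveyed by a toy two-state Markov chain (this sketch is not the actual model of Girard et al., and the transition probabilities are invented): the probability of taking today's dose depends only on whether the previous dose was taken.

    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_compliance(n_days, p_after_taken=0.9, p_after_missed=0.6):
        """Two-state Markov chain: 1 = dose taken, 0 = dose missed."""
        history = [1]                             # assume the first dose is taken
        for _ in range(n_days - 1):
            p = p_after_taken if history[-1] == 1 else p_after_missed
            history.append(int(rng.random() < p))
        return history

    doses = simulate_compliance(28)
    print(doses, "observed compliance:", sum(doses) / len(doses))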
Wang, Husan and Chow (1996) propose statistical models in the case of
multiple dose regimen trials aimed at studying the impact of two different non-
compliance scenarios: patients who do not take the prescribed dosage, or patients who do not follow the prescribed dosing schedule.
A word of warning: features of a model that are not relevant to the questions that
have been posed by the simulation team should not be considered. For instance, even
though “weight” could be a covariate of primary importance for a real trial, if the virtual
experiment we want to conduct concerns patients in a single weight group, we should not include
“weight” in the model. This may seem a fairly obvious statement, but it is frequently
violated.

5. The design of simulated experiments
All the modern books on clinical trial methodologies, see for instance Piantadosi (2005),
Senn (2007), Friedman et al. (2010), devote at least one chapter to the experimental
design. Here we want to discuss the design of a virtual experiment, which will be
different from planning a real trial. However, the design still needs to be efficient, so as to extract as much information as possible from the simulation runs; the design of computer experiments now has a well-established literature (Santner et al., 2003; Fang et al., 2005). The design consists in choosing the settings of the input variables, with the proviso that a deterministic simulator provides identical outputs whenever it is run at the same settings, so that replication is uninformative. Space-filling designs such as Latin Hypercubes, Minimax and Maximin distance criteria, and Uniform designs are used in a non-model-based approach, and special analysis procedures such as the Kriging methodology are employed (Santner et al., 2003).
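As a minimal sketch (the input names and ranges are invented), a Latin Hypercube design for a simulator with three continuous inputs can be built by dividing each input range into as many strata as there are runs and combining the strata through independent random permutations.

    import numpy as np

    def latin_hypercube(n_runs, bounds, seed=0):
        """Return an n_runs x d Latin Hypercube sample; bounds is a list of (low, high) pairs."""
        rng = np.random.default_rng(seed)
        design = np.empty((n_runs, len(bounds)))
        for j, (low, high) in enumerate(bounds):
            # one point per stratum, randomly placed within it, then randomly permuted
            strata = (rng.permutation(n_runs) + rng.random(n_runs)) / n_runs
            design[:, j] = low + strata * (high - low)
        return design

    # e.g. dose (mg), infusion time (h), clearance multiplier -- illustrative inputs only
    print(latin_hypercube(8, [(50, 200), (0.5, 2.0), (0.7, 1.3)]))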
However, the simulator of a clinical trial – the IO model, as well as the covariate model and the execution model – will very likely include a stochastic component, and the rationale for using standard statistical design tools comes back into play: replication, randomization and blocking, and also the use of specific designs, for instance cross-over designs and
play-the-winner. It must be borne in mind that the choice of the experimental design
will depend on the statistical model, and a model-based theory of optimal experimental
design for clinical trials, including dose-finding ones, has come to a mature
development stage, as shown in statistical journals and conferences (see for instance
Giovagnoli et al., 2010). But how relevant is this literature to the simulated
experiments?
In a virtual experiment one can greatly increase the number of factors of interest and their levels that are simultaneously tried.
An important point is that the usual rules of factorial experiments apply, namely we
should not vary the factor levels one-at-a-time, to avoid masking possible interactions.
When simulating, we would normally not confine ourselves to fractional factorials but
instead use full factorials to evaluate all the interactions among the experimental factors
(e.g. dosage and dose timing of the drug). Fractional factorials would still be required,
however, when the number of combinations of factors and levels is too large, as pointed
out in the 1999 Guidelines (Holford et al., 1999).
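As a concrete illustration of the full factorial layout discussed above (the factors and their levels are invented), the full set of simulation scenarios can be enumerated in Python as follows; each scenario would then be passed to the simulator.

    from itertools import product

    # Illustrative factors and levels for a simulated trial
    factors = {
        "dose_mg": [50, 100, 200],
        "interval_h": [12, 24],
        "dropout_rate": [0.05, 0.15],
    }

    scenarios = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    print(len(scenarios), "runs")          # 3 x 2 x 2 = 12 scenarios of the full factorial
    for scenario in scenarios:
        print(scenario)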
In actual practice, often only a subset of factors proves to be responsible for most of the output variation, but not much use is made by clinical triallists of the literature on screening experiments, i.e. experiments for choosing a few relevant factors out of a potentially very large number (Dean and Lewis,
2006). Furthermore, since virtual experiments are often run for choosing among possible models, the theory of designs for model selection (to be found, for instance, in the optimal design literature) could also be put to use. Consider now a trial for the choice between two treatments. One may wonder about the role of randomization in a virtual experiment. Several allocation rules aimed at achieving balance exist, for instance the Biased Coin Design (Efron, 1971) or the Adjustable Biased Coin Design (Baldi Antognini and Giovagnoli, 2004), where at each step the allocation probability depends on the current difference between the two groups of allocations, so that the tendency towards balance is stronger the more we move away from it. These rules could easily be reproduced in a simulated trial.
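By way of illustration, Efron's rule with bias 2/3 can be simulated in a few lines of Python (this sketch reproduces the classical rule, not the adjustable version).

    import numpy as np

    rng = np.random.default_rng(3)

    def efron_biased_coin(n_subjects, p=2/3):
        """Sequentially allocate subjects to arms A/B, favouring the under-represented arm."""
        allocation = []
        for _ in range(n_subjects):
            imbalance = allocation.count("A") - allocation.count("B")
            if imbalance == 0:
                prob_a = 0.5          # toss a fair coin when the arms are balanced
            elif imbalance < 0:
                prob_a = p            # A is under-represented
            else:
                prob_a = 1 - p        # B is under-represented
            allocation.append("A" if rng.random() < prob_a else "B")
        return allocation

    arms = efron_biased_coin(40)
    print(arms.count("A"), arms.count("B"))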
In real life, trials are often conducted sequentially on groups of patients and interim analyses of the data are
performed. Adaptive designs have come into use: adaptation of the study protocol
involves changes in sample size, changing doses, dropping treatment arms, changing the
timing and number of interim analyses, etc. Clearly the crucial inferential problem is to
assess the impact of such changes on the statistical analysis (Posch et al., 2003; Cui et
al., 1999). Going from real to virtual, it makes sense to ask ourselves whether a
simulated trial in clinical research should or should not be carried out sequentially, since
frequently recurring issues of slow patient recruitment to the trial, side effects, ethical
demands for early stopping, etc., do not apply to computer experiments. One answer is, again, to achieve greater realism, but sometimes the sequential nature of the design is intrinsic to the statistical method, as when the design is adapted to current estimates of the parameters of the model in response-adaptive trials (Hu and Rosenberger, 2006) or in dose-finding experiments for Phase I (Baldi Antognini et al., 2008; O'Quigley, 2002). The severe practical constraints of such experiments have been addressed by several authors in the statistical and biomedical literature (for instance Fedorov et al., 2007; Ogungbenro et al., 2007; McGree et al., 2009). However, Holford, Ma and
Ploeger (2010) regret that the statistical theory of optimal design of experiments deals
mainly with parameter estimation rather than hypothesis testing, whereas the main
aim of a clinical trial is usually to demonstrate the superiority of one drug over another. It will be interesting to see if a combined approach of optimal design methods and simulation will bring useful results: optimal design theory deals more often than not with designs that are most efficient for asymptotic inference, but possibly not fully so for small sample sizes, so being able to simulate a large number of trials cheaply may help assess their finite-sample performance. Another important point is that in a virtual trial the prognostic factors are under the experimenter's control and this allows for exploring conditions that are ruled out in the
inclusion/exclusion procedures of the actual trial, exploring in depth all possible levels
of the concomitant variables, looking for possible interactions also between the
treatments and the prognostic factors, since in general one wishes to use simulation for
detecting also the possible side effects of a therapy. More generally, the full strength of
simulation lies in being able to treat prognostic factors as random noise in the virtual
experiment, and letting them vary according to a prescribed probability law, whereas in
an actual trial we would have to content ourselves with just a few set levels, either fixed in advance or simply observed. The statistical theory of experimental design does not seem to have caught up with this novelty. An IO model
including random covariates is a mixed effect one (linear or non-linear), and appropriate
experimental designs for these models are present in the literature, but they are all non-
stochastic.
Lastly, what are the appropriate designs that enable accounting for possible
protocol deviations? Again, this aspect has not been the object of statistical
investigation as yet.
As a final thought, we would like to add that often the choice of the simulator itself is
the output of a trial-and-error process that can be regarded as a virtual experiment. This
is, yet again, a different problem, since in this case the endpoint is a measure of the agreement between the simulator and reality, and one may think of an experimental design for choosing the simulator as well. Different techniques and different computer codes could then be compared within a formal design framework.
6. Software
Simulation for clinical trials includes different types of models and involves several
statistical issues. Therefore researchers often use more than one software package, each
software being targeted for specific purposes. In particular, sophisticated software
packages are employed for IO models, which are usually quite complicated. Programs
specifically designed for IO modeling of data in this context are the non-linear mixed-effects modelling packages, of which there are several to choose from. MathWorks provides a software tool, SimBiology, for the complete modelling and simulation workflow. As for covariate distribution models, since IO models usually include terms for covariate effects, the choice of methodology
for generating virtual subjects is often dependent on the software for IO modeling.
Mouksassi et al. (2009) use the R package GAMLSS, which facilitates the modelling and simulation of covariate distributions; other authors (Chabaud et al., 2002) prefer to resample patients from existing epidemiological databases.
To our knowledge, there is no software specifically designed for simulating execution models, but often a random number generator suffices. However, there also exists multi-purpose software for full clinical trial simulation that incorporates differential equations, such as the Pharsight Trial Simulator and another package, originally developed for Vertex Pharmaceuticals, which has recently become publicly available:
http://www.biopharmnet.com/doc/2010_02_13_cts_documentation.pdf .
In general, however, existing prepackaged software is, by definition, not flexible, and without a clear understanding of the statistical methods behind a specific clinical trial simulation package it is difficult to
interpret the results correctly. Thus, rather than accepting library models and their
assumptions, some scientists create models according to their own needs using freely available general-purpose software.
7. Metamodels
The requirement for the IO model to be accurate in describing the problem under
investigation means that the simulator may be rather complex. In some instances the simulator is so costly to run that a simpler surrogate model, an emulator, is built as a valid approximation of the original simulator. Since emulators imitate the original simulator, which is itself a model of reality, they are often called metamodels. One of the main motivations for building a metamodel is therefore the reduction of computing time.
Furthermore, the case where data cannot support estimating all of the parameters in a
complicated simulation model is not rare. Therefore, models with fewer parameters
should be fitted to the data. Particular optimal design problems for metamodels can be
found in the recent literature (see for instance Baldi Antognini and Zagoraiou, 2010)
but, in the clinical context, this aspect has not been the object of statistical investigation.
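As a toy illustration of the emulator idea (not of any of the models cited in this paper), a quadratic response surface can be fitted to a handful of runs of a "slow" simulator and then used to produce cheap approximate predictions.

    import numpy as np

    def slow_simulator(dose):
        """Stand-in for an expensive simulator: an Emax-type response plus noise."""
        return 100 * dose / (50 + dose) + np.random.default_rng(int(dose)).normal(0, 2)

    doses = np.array([10, 25, 50, 100, 200, 400], dtype=float)
    responses = np.array([slow_simulator(d) for d in doses])

    # Quadratic metamodel fitted on a log-dose scale
    emulator = np.poly1d(np.polyfit(np.log(doses), responses, deg=2))

    new_doses = np.array([75.0, 150.0, 300.0])
    print(emulator(np.log(new_doses)))     # fast approximate predictions at untried doses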
In a study by Pillai et al. (2004), the authors state that “although the complex
physiological PK/PD model described the data well, its major disadvantages
were the long computer run-times […] and the numerical difficulties associated
with solving a rather stiff problem”. In order to reduce the computer run-times
associated with the simulator, the authors have constructed a ‘kinetics of drug
action’ (K/PD model) and its performance was assessed by fitting data simulated
with the PK/PD model under various scenarios. The authors observe that the simpler model was preferred to the two-compartment one to cope with the problems arising from a sampling design that,
due to logistic reasons and clinical convenience, was inadequate for the more
complex model.

8. Validation
In the context of clinical trials there is special emphasis on the need for the simulator to be an adequate representation of the real system that it is trying to reproduce, and consequently the
question of its ability to accurately predict real situations. This concern is related to
model verification and validation (Sargent, 2008). Model verification deals with errors
that might have occurred in the computer program and its implementation, while model validation is concerned with whether the model possesses "a satisfactory range of accuracy consistent with the intended application" (Schlesinger et al., 1979). Thus, the primary aim of validation is to
make the model useful, in the sense that it addresses the right problem and provides
accurate information about the trial of interest. It goes without saying that to a certain
extent this question arises in real experiments as well, since real data too are subject to
random or systematic errors, but in most cases we are inclined to believe that a real
experiment has “empirical validity”, whereas a simulated one is fictitious and therefore
far away from reality. When real data provided by physical experiments are taken to be
the “gold standard” of the true relationship between factors and outputs, they should be
used to confirm the computer model and the results obtained by simulation. In some
cases, experimental data may not be available and data obtained from observational
studies or surrogate data (e.g. derived from experiments on animals or prototypes) may
be used.
Validation may be external (based on data other than those used to build the model) or internal. The so-called prospective external validation is the one that uses data from simultaneous or subsequent
clinical trials in the same context (e.g. same disease). Retrospective external validation
uses the data of earlier trials to validate the model and, if necessary, modify it in order to improve it. External validation is possible whenever it is feasible to collect a new dataset for validation. If not (e.g. studies of rare diseases), an internal
validation is used, which is based on “cheap” methods such as data-splitting, where data
utilized in order to build the simulator are compared with data generated by the model.
The validation problem can also be tackled with the aid of a family of resampling methods, at the cost of an increased computational burden.
Concordance of simulated with real data under the same study design can be checked
via:
• statistical tests of the agreement between the distributions of real and simulated data (e.g. Kolmogorov-Smirnov tests);
• the use of graphs (or descriptive statistics), e.g. visual comparison of observed versus predicted values;
• predictive checks (e.g. the proportion of observed data lying within given percentiles of the simulated values).
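A minimal sketch of the first and third kinds of check, with synthetic vectors standing in for the observed and simulated data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    observed = rng.normal(10.0, 2.0, 120)       # placeholder for real trial data
    simulated = rng.normal(10.3, 2.1, 5000)     # placeholder for simulator output

    # 1. Kolmogorov-Smirnov test comparing the two distributions
    ks_stat, p_value = stats.ks_2samp(observed, simulated)
    print("KS statistic", round(ks_stat, 3), "p-value", round(p_value, 3))

    # 2. Fraction of observed values inside the simulated 5th-95th percentile band
    low, high = np.percentile(simulated, [5, 95])
    print("coverage of the 90% simulation band:", np.mean((observed >= low) & (observed <= high)))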
Examples
In the carvedilol dosing strategy study described earlier (see §4), Albers et al.
(2007) make use of a visual predictive check in order to evaluate the proposed model: data from real patients were observed and compared with the simulation data. The authors observe that about 90% of the real data are within the 90th percentile of the simulated data.
In Ozawa et al. (2009) the model was validated with Phase II data provided by
Kunitoh et al. (1996), by comparing the predicted trial results obtained by the simulations with the observed ones.
Eddy and Schlessinger (2003) validate the so-called Archimedes diabetes model by comparing its predictions with the results of a number of real clinical trials. In particular, they examine whether the difference between the outcome of the actual trial and that of the model is statistically significant by using a corrected chi-square test.
Duffull et al. (2000) develop a pharmacokinetic model for ivabradine and they
use two different kinds of datasets in order to test its ability to describe the real data, examining prediction plots visually and comparing the cumulative density functions of the observed and simulated data.
Abbas et al. (2006) propose an innovative approach for the validation and selection of competing simulation models, based on standardized mean and variance criteria.
There may also be alternative ways of validation that have not been explored so far.
9. Some challenges
It is worth pointing out that although we have concentrated on research for drug
development, which is the aim of the majority of clinical trials, there is a wide variety of other clinical studies: trials of new vaccines, new medical devices and test kits, new diagnostic tools and procedures, new
methods of population screening, not to mention improving the quality of life: healthy
eating, lifestyle changes, comfort for chronic illnesses, old age, etc. In all of them the use of simulation is gaining momentum.
It goes without saying that clinical trial simulation poses several challenging
problems. First of all, some burning questions need an answer that is convincing for the
laymen too.
• Scientificity: Is this new discipline rigorous enough? Can results obtained by simulation be trusted?
• Efficacy: Is it true that simulated clinical trials can speed the drug development
process? After all, the model development procedure too is associated with time and costs.
• Ethics: Is it safe for the patients? Is it to their best advantage? Or do these efforts
only help the pharmaceutical companies to reduce costs, without any benefit for the patients?
Much work lies ahead for statisticians. The successful execution of a simulation project
requires, among other things, a better developed theory of experimental design for simulation. We stress that simulations are not aimed at
replacing real life trials; rather, physical and computer experiments are two
complementary sources of information with distinct roles and different degrees of cost,
speed, and reliability. Simulation is usually cheaper and faster, and, what is more
important, avoids the major ethical problems involved in clinical research, but in order to
be of use, simulation must be fairly close to the physical set-up. Thus a virtual trial and a real one could be combined into a single investigation, in which both play a part with alternating roles. The fundamental steps in designing such a mixed trial
would consist of
- designing the simulated ones, to be run in groups, one after another, to improve
our knowledge of the process;
To the best of our knowledge, the best strategy of integrating real and simulated trials to
build actual knowledge while dynamically modifying the computer code to get closer
and closer approximations to reality, has not yet been the object of theoretical investigation.

Acknowledgements
This research was partly supported by the Research Project of National Interest of the Italian Ministry for Research (MIUR) PRIN 2007 "Statistical Methods for Learning in Clinical Research". The second author was supported by a grant in the area of the health and biomedical sciences ("bio-sanitarie").
References
trial model using the standardized mean and variance criteria, “Journal of Biomedical
clinical trials with computer simulation based on results of earlier trials, illustrated
1053–1061.
and dose simulation of carvedilol in paediatric patients with congestive heart failure,
“British Journal of Clinical Pharmacology”, 65, pp 511-522.
45-59.
A. BALDI ANTOGNINI, A. GIOVAGNOLI (2004), A new 'biased coin design' for the
sequential allocation of two treatments, “J. Roy. Statist. Soc. C”, 53, pp 651-664.
P.L. CHAN, J.G. NUTT, N.H. HOLFORD (2007), Levodopa slows progression of
L. CUI, H.M.J. HUNG, S-J. WANG (1999), Modification of Sample Size in Group
A. DEAN, S. LEWIS (eds) (2006), Screening: Methods for Experimentation in Industry, Drug Discovery, and Genetics, Springer, New York.
F. DE RIDDER (2005), Predicting the Outcome of Phase III Trials using Phase II Data:
A Case Study of Clinical Trial Simulation in Late Stage Drug Development, “Basic &
Clinical Pharmacology & Toxicology", pp 403–417.
K.T. FANG, R. LI, A. SUDJIANTO (2005), Design and Modeling for Computer Experiments, Chapman & Hall/CRC.
D’Agostino (eds), Pharmaceutical Statistics using SAS. A Practical Guide, SAS Press,
L.M. FRIEDMAN, C.D. FURBERG, D.L. DEMETS (2010), Fundamentals of Clinical Trials, Springer, Heidelberg.
P. GIRARD (2005), Clinical Trial Simulation: a tool for understanding study failures
and preventing them, “Basic & Clinical Pharmacology & Toxicology”, 96, pp 228–234.
P. GIRARD ET AL. (1998), A Markov mixed effect regression model for drug compliance, "Statistics in Medicine", 17, pp
2313-2334.
276–287.
N.H.G. HOLFORD, M. HALE, H.C. KO, J-L. STEIMER, L.B. SHEINER, C.C. PECK,
N. HOLFORD, S.C. MA, B.A. PLOEGER (2010), Clinical Trial Simulation: A Review,
209-234.
F. HU, W.F. ROSENBERGER (2006), The Theory of Response-Adaptive Randomization in Clinical Trials, Wiley & Sons, New York.
H.C. KIMKO, S.B. DUFFULL (eds) (2003), Simulation for Designing Clinical Trials: A Pharmacokinetic-Pharmacodynamic Modeling Perspective, Marcel Dekker, New York.
A. KRAUSE (2010), The Virtual Patient: Developing Drugs Using Modeling and
1655.
B.Y. LEE, S.T. BROWN, P.C. COOLEY, R.K. ZIMMERMAN, W.D. WHEATON,
P.A. LOCKWOOD, J.A. COOK, W.E. EWY, J.W. MANDEMA (2003), The use of
clinical trial simulation to compare proof-of-concept study designs for drugs with a slow
2050-2059.
J.M. MCGREE, J.A. ECCLESTON, S.B. DUFFULL (2009), Simultaneous versus
101–123.
M.S. MOUKSASSI, J.F. MARIER, J. CYRAN, A.A. VINKS (2009), Clinical trial
New Jersey.
G. PILLAI, R. GIESCHKE, T. GOGGIN, P. JACQMIN, R.C. SCHIMMER, J-L.
M. SALE (2004), Clinical Trial Simulation, in P.L. Bonate and D.R. Howard (eds),
T.J. SANTNER, B.J. WILLIAMS, W.I. NOTZ (2003), The Design and Analysis of Computer Experiments, Springer, New York.
S. SENN (2007), Statistical issues in drug development, 2nd Edition, Wiley, Chichester.
Boca Raton.
emse2009/pdf/slides/D.%20Steinberg.pdf
D.W. TAYLOR, E.G. BOSCH (1990), CTS: a clinical trials simulator, "Statistics in
Medicine”, 9, pp 787-801.
W. WANG, F. HUSAN, S.C. CHOW (1996), The impact of patient compliance on drug
HOLLOWAY, P.T. LEESE, M.C. PETERSON (2008), Population PK–PD model for
Alessandra Giovagnoli
Department of Statistical Sciences, University of Bologna, Via delle Belle Arti 41 - 40126 Bologna, Italy.
email: [email protected]
Maroussa Zagoraiou
Department of Statistical Sciences, University of Bologna, Via delle Belle Arti 41 - 40126 Bologna, Italy.
email: [email protected]