Limitations of Mathematical Models in Transportation Policy Analysis
ABSTRACT:
Government agencies are using many kinds of mathematical models to forecast the effects of proposed government policies. Some models are useful; others are not; all have limitations. While modeling can contribute to effective policymaking, it can contribute to poor decision-making if policymakers cannot assess the quality of a given application. This paper describes models designed for use in policy analyses relating to the automotive transportation system, discusses limitations of these models, and poses questions policymakers should ask about a model to be sure its use is appropriate.
Significance
The consequences of relying on output data with such confidence limits are obvious. The confidence bands in this case are so large that there may not be true differences among the year-to-year values of the VMT estimates. A policy based on this type of information may be unsound. Yet, unless such a lack of precision were made explicit, policymakers could easily be misled.

Illustration 6
A second illustration deals with two applications of the Wharton EFA Automobile Demand Model by two different sets of analysts.

The National Highway Traffic Safety Administration (NHTSA) made prominent use of the Wharton EFA Auto Demand Model in the documentation supporting the fuel economy standards for automobiles for 1980-1984. NHTSA reported that the proposed 27 miles per gallon standard for 1984 would lead to only 210,000 fewer new car sales than the 1980 standard of 20 miles per gallon. The difference was 1.8 percent of the forecast 1984 sales. NHTSA labeled this difference as "insignificant, given the difficulties of projecting the sales initially" (National Highway Traffic Safety Administration 1977).

In contrast, analysts of the International Trade Commission (ITC) used the same model in conducting a study for the Senate Finance Committee of the proposed "gas guzzler" tax. This study projected a shift of 300,000 in sales from domestic to foreign producers in 1985 if the tax and rebate plan were enacted (U.S. International Trade Commission 1977). This represented a shift of slightly more than two percent of total sales. Senate Finance Committee staff members report this was viewed as significant, and it contributed to the delay in action on the gas guzzler tax proposal.

Significance
Both of the projected differences are of about the same size. One group determined the figure to be significant while the other labeled the amount insignificant. In both cases, the judgments of significance and insignificance were subjective. Both the magnitude of the estimate (number of cars) and the degree of precision of the estimate judged to be significant depend on the different perspectives of NHTSA and the ITC in the context of specific problems. In this case, however, it is not clear that these judgments were derived from an adequate understanding of the uncertainty of the forecasts, since confidence intervals were not associated with the predictions. While practical and theoretical difficulties may preclude the computation of statistical confidence intervals for the predictions of large models such as Wharton EFA's, less rigorous estimates of prediction precision would be a great aid to decision makers.

One can only speculate what the outcomes would have been if each "number" had been properly qualified. The differences in interpretation of similar numbers highlight the need to properly qualify results and to inform policymakers of the uncertainty of forecasted values.
Questions a Policymaker Should Ask
Before Using a Model
How well does the model perform?
Assuming that an analyst has chosen a particular model for use in a particular policy-related application, the policymaker
should check on the quality of its performance. There are
three ways of doing this if the model is based on historical
data: first, by examining the model's output over the sample
period and comparing it with observed data for that period;
second, by examining the model's output for the time period
starting just after the fit period of the model through the pre-
sent and comparing it with actual data for that period, if they
are available; and third, by examining the model's output for
future years and checking for its "reasonableness."
Of these three, the second alternative is probably the best
way of checking the model's "track record." It affords the
opportunity to test the model in a forecasting mode, yet it of-
fers the advantage of having historical data available to com-
pare with the output. Note, however, that this method will be
less useful if only short-term data are available when a long-
term model is being tested.
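The "track record" check described above can be sketched in code. The data and the linear-trend model below are hypothetical stand-ins; the point is the procedure: fit over the sample period, forecast the post-fit period, and compare against the actual values that have since become available.

```python
# Sketch of an out-of-sample "track record" check (hypothetical data).
# Fit a simple linear trend on the sample period, then compare its
# forecasts for the post-fit period against observed values.

def fit_linear_trend(years, values):
    """Ordinary least squares for value = a + b * year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
         / sum((x - mean_x) ** 2 for x in years))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical annual observations over the fit period
# (say, new-car sales in millions of units).
fit_years = [1970, 1971, 1972, 1973, 1974]
fit_values = [8.4, 8.7, 9.1, 9.2, 9.6]

# Post-fit period for which actual data later became available.
test_years = [1975, 1976]
test_actuals = [9.8, 10.3]

a, b = fit_linear_trend(fit_years, fit_values)

for year, actual in zip(test_years, test_actuals):
    forecast = a + b * year
    error = forecast - actual
    print(f"{year}: forecast {forecast:.2f}, actual {actual:.2f}, "
          f"error {error:+.2f}")
```

If only one or two post-fit observations exist, as here, the comparison says little about the long-run behavior of the model, which is the limitation noted above for long-term models.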
Testing over the historical period is sometimes neither
feasible nor appropriate because of the nature of the model,
its method of construction, or other factors. Testing the model
over the future may not yield adequate information to make
the decision about its accuracy, since sometimes it is impossi-
ble to judge whether the output is reasonable. There simply
may be no basis for comparisons.
Has the model been analyzed by someone other than the model authors?
Often in the course of building a model, the author will perform various tests in an attempt to validate the model. These test results, if they include model output and are objective, can
probably be viewed with some confidence. However,
modelers often do not take the time to rigorously analyze or
assess their models themselves, primarily because the time
and resources allocated to model building are limited. Con-
tracts requiring model construction usually do not include a
separate task for model analysis.
Model validation tests performed after a model has been
constructed give little insight into the theory and dynamics of
the model. For a user to have an understanding of the model,
he should have access not only to the model documentation
but to any assessments performed by people other than the
model builders. Such assessments can provide insight into
the strengths and weaknesses of a model and provide a more
objective view of the model than may be provided by a model
author. The results of such an assessment should be carefully
reviewed and taken into account before a model is chosen for
use in policy-related studies. It should be noted, however, that
model assessments are not often performed.
What assumptions and data were used in producing model output for specific applications?
The assumptions and data used in running a model for specific applications are generally different from those used in constructing the model. In running econometric models to
produce projections, a set of exogenous data consisting of
forecasts of several variables is generally required as input.
These input data are themselves forecasts of the unknown
future and should be used only with care and an under-
standing of their limitations.
If a model has already been run for a specific purpose, the
set of assumptions and data used to produce the output
should be known, so that their reasonableness and ap-
plicability can be determined.
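One practical way to expose this dependence on exogenous inputs is to rerun the model under alternative input forecasts and compare the outputs. Everything named below, the reduced-form demand equation and the income and fuel-price scenarios alike, is hypothetical; the sketch only illustrates how far a projection can move when the assumed inputs move.

```python
# Sketch: sensitivity of a model's projection to its exogenous inputs.
# The model form and the scenario numbers are hypothetical.

def demand_model(income_growth, fuel_price):
    """A toy reduced-form demand equation (millions of new-car sales)."""
    return 10.0 + 8.0 * income_growth - 0.5 * fuel_price

# Alternative forecasts of the exogenous variables for the target year.
scenarios = {
    "baseline":        {"income_growth": 0.03, "fuel_price": 1.50},
    "high fuel price": {"income_growth": 0.03, "fuel_price": 2.50},
    "recession":       {"income_growth": -0.01, "fuel_price": 1.50},
}

for name, inputs in scenarios.items():
    sales = demand_model(**inputs)
    print(f"{name:>15}: {sales:.2f} million units")
```

The spread across scenarios is itself informative: a projection quoted without the input assumptions behind it conceals exactly this source of variation.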
Why is the selected model appropriate to use in a given application?
There may be many models that are suitable, at least based on initial inspection, for use in a particular situation. It is up to the policymakers to satisfy themselves that the most appropriate model has been selected. Questions that should be
asked include: What is the stated purpose of the model
selected? What does it measure? What does it not measure?
Is its intended use compatible with the present need? Is this
the easiest model to run that is applicable to the study area of
interest? Are there other models equally suited to the job?
Finding the answers to these questions may be a very time-
consuming effort. However, it is advisable to have the answers
in hand so that resources may be most effectively used. Many
models may forecast the same variables, but some may do
more. If two models are of equal quality (which is difficult to
determine) and a user is interested only in the output of the
less complex model, clearly it would be wasteful to run the
more complex model.
It should also be determined that the model chosen for use
actually forecasts the variables of interest and that they are not
buried somewhere internally in the model, or worse, set ex-
ogenously. Often this distinction is not clear.
Was the model run directly and specifically for the present purpose?
A given model may be run by a number of users for a variety of purposes. It may be that for one of those past uses, the model input and output seem similar to those desired for a
present policy analysis. Extreme caution should be exercised
if output from other applications is used. Caution must also be
exercised when individuals in other agencies perform model
runs on request for a specific application. One can never be
sure of the exact circumstances under which a model was run.
Input may not coincide directly with current needs. Alternative
options in programs may sometimes be exercised. Biases in
interpretation of the meaning of output may exist. If a model is
not run by a user for a particular policy application, the
chances of errors appearing in the analysis are greatly in-
creased.
What is the accuracy of the model output?
Many models have output that is accurate only within some error band. The larger the error band at some level of confidence, the less accurate the output. It is relatively straightforward to determine confidence bands for small, single-equation models, but more difficult for large-scale models.
Nevertheless, it is imperative that the model user have some
idea of the accuracy of the model output before it is used in
specific applications. In comparing output of a model run that
uses two different sets of input data, the error bands on the
output may be so large that results that look different may not
be, in a statistical sense. Knowing the accuracy of the output
helps to put the usefulness of the model into perspective.
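The point about error bands can be made concrete with a small sketch. The two point forecasts and their standard errors below are invented; the calculation simply shows that two estimates which look different may carry interval estimates that overlap heavily, so the apparent difference may not be statistically meaningful.

```python
import math

# Hypothetical point forecasts (millions of units) and standard errors
# from two runs of the same model under different input data.
forecast_a, se_a = 11.5, 0.4
forecast_b, se_b = 11.2, 0.4

z = 1.96  # roughly a 95% band, under a normality assumption

for name, f, se in [("A", forecast_a, se_a), ("B", forecast_b, se_b)]:
    print(f"forecast {name}: {f:.1f} +/- {z * se:.2f}  "
          f"[{f - z * se:.2f}, {f + z * se:.2f}]")

# Standard error of the difference, treating the two forecasts
# as independent.
se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
z_stat = (forecast_a - forecast_b) / se_diff
print(f"difference {forecast_a - forecast_b:.1f}, z = {z_stat:.2f}")
```

Here the difference between the runs is well below the 1.96 threshold, so at the 95 percent level the two outputs are not distinguishable even though the point estimates differ.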
Does the structure of the model resemble the system being modeled?
A model is an abstraction of reality. In translating from reality to mathematical equations, some components of the real-world system are omitted. It is important to identify which,
if any, pivotal elements of the real system have not been in-
cluded in the model. Key items and relationships included and
the key ones omitted should be identified. In addition, while an
attempt may be made to include in the model some aspect of
the real-world system, its representation in the form of an
equation may be inappropriate or inadequate. The bases of
the mathematical representation should be clear to the model
user.
Mathematical models are in widespread use in policy analyses related to the transportation system. There are many kinds of mathematical models, with econometric models being the primary kind used in the motor vehicle transportation policy sector.

While mathematical models may provide policy analysts with strong tools to use in their studies, they may also provide very misleading results if not applied correctly. There are many limitations in the correct use of models. Some limitations are inherent in a model (e.g., models are incomplete, and model output is uncertain although it may appear precise). Other limitations arise when models are used (e.g., the accuracy of input data may be unknown, and the operational status of a model is often unclear).

To help ensure proper use of models in policy analyses, a policymaker should ask several questions relating to model use. These include queries concerning the model's performance record, results of model assessment, the purpose of the model, its appropriateness in specified applications, assumptions contained in the model, and availability of model documentation. Analysts who use models to formulate or analyze policies have an obligation to answer such questions. These answers should be public so that peers can review their reasonableness.

The proper use of models can add considerable insight to the policymaking process, but model output should be regarded only as approximations. Only if policymakers are aware of the limitations inherent in models can mathematical modeling enhance the policymaking process.
Acknowledgment