
Parametric Statistics

Parametric statistics assumes that data come from a known type of probability distribution and makes inferences about the distribution's parameters. It makes more assumptions than non-parametric methods; when those assumptions hold, it can produce more accurate and precise estimates, but when they are incorrect, its estimates can be very misleading. Parametric formulae are often simpler to write down and faster to compute than non-parametric ones, but they are not robust to violations of their assumptions. The term "parametric" was coined in 1942 by statistician Jacob Wolfowitz, in order to define the opposite case, where a distribution's functional form is unknown, known as the non-parametric case.
Copyright
© Attribution Non-Commercial (BY-NC)

Parametric statistics

Parametric statistics is a branch of statistics that assumes data has come from a type of
probability distribution and makes inferences about the parameters of the distribution.[1]
Most well-known elementary statistical methods are parametric.[2]

Generally speaking, parametric methods make more assumptions than non-parametric methods.[3] If those extra assumptions are correct, parametric methods can produce more accurate and precise estimates. They are said to have more statistical power. However, if those assumptions are incorrect, parametric methods can be very misleading. For that reason they are often not considered robust. On the other hand, parametric formulae are often simpler to write down and faster to compute. In some, but definitely not all, cases their simplicity makes up for their non-robustness, especially if care is taken to examine diagnostic statistics.[4]

Because parametric statistics require a probability distribution, they are not distribution-
free.[5]

Example

Suppose we have a sample of 99 test scores with a mean of 100 and a standard deviation of 10. If we assume that all 99 test scores are random samples from a normal distribution, and that the 100th test score comes from the same distribution as the others, we predict there is a 1% chance that the 100th test score will be higher than 123.65 (that is, the mean plus 2.365 standard deviations). The normal family of distributions all have the same shape and are parameterized by mean and standard deviation: if you know the mean and standard deviation, and that the distribution is normal, you know the probability of any future observation. Parametric statistical methods are used to compute the 2.365 value above, given 99 independent observations from the same normal distribution.
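The parametric prediction above can be sketched in a few lines. Note that the 2.365 multiplier matches the upper 1% quantile of a Student-t distribution with 98 degrees of freedom, which accounts for the standard deviation being estimated from the sample; the sketch below uses only the Python standard library, so it shows the plain standard-normal quantile (z ≈ 2.326), a close approximation. The variable names are illustrative, not from the source.

```python
from statistics import NormalDist

mean, sd = 100, 10  # sample mean and standard deviation of the 99 scores

# Upper 1% quantile of the standard normal distribution (z ~= 2.326).
z = NormalDist().inv_cdf(0.99)

# Parametric prediction: a new score from the same normal distribution
# exceeds this threshold with roughly 1% probability.
threshold = mean + z * sd
print(round(z, 3), round(threshold, 2))  # prints 2.326 123.26
```

With the t-quantile 2.365 in place of z, the threshold is the article's 123.65.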

A non-parametric estimate of the same thing is the maximum of the first 99 scores. We
don't need to assume anything about the distribution of test scores to reason that before
we gave the test it was equally likely that the highest score would be any of the first 100.
Thus there is a 1% chance that the 100th is higher than any of the 99 that preceded it.
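The 1% figure from this exchangeability argument can be checked by simulation. This is only an illustrative sketch: the argument is distribution-free, since it uses only the fact that each of the 100 scores is equally likely to be the largest, so the standard-normal draws below are purely for convenience.

```python
import random

random.seed(42)
trials = 100_000
hits = 0
for _ in range(trials):
    scores = [random.gauss(0, 1) for _ in range(100)]
    # Count how often the 100th score beats the maximum of the first 99.
    if scores[99] > max(scores[:99]):
        hits += 1
print(hits / trials)  # close to 0.01
```

Any continuous distribution could replace `random.gauss` here and the estimated frequency would still be close to 1%.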

History

Statistician Jacob Wolfowitz coined the statistical term "parametric" in 1942, in order to define its opposite:

"Most of these developments have this feature in common, that the distribution functions
of the various stochastic variables which enter into their problems are assumed to be of
known functional form, and the theories of estimation and of testing hypotheses are
theories of estimation of and of testing hypotheses about, one or more parameters. . ., the
knowledge of which would completely determine the various distribution functions
involved. We shall refer to this situation. . .as the parametric case, and denote the
opposite case, where the functional forms of the distributions are unknown, as the non-
parametric case."[6]
