
STATISTICS FOR MANAGEMENT

NAME – RAUNAK SINGH


ROLL NUMBER – 2314520076
PROGRAM – MASTER OF BUSINESS ADMINISTRATION (MBA)
SEMESTER – I
COURSE NAME – STATISTICS FOR MANAGEMENT
COURSE CODE – DMBA103
SET – I
1. In earlier times, statistics was used for collecting and gathering data or information about
the population of a state, incomes and the strength of the military, in order to frame fiscal
and military policies. With the advancement of time and technology, statistics has also come
to be used in many other fields such as science, business, mathematics, research and banking.
The scope of statistics has been greatly enlarged in modern times and it is now applied very
widely. A. L. Bowley defined statistics as "a science of averages".
Functions of statistics are –
. Collection of data – Statistics involves collecting data from different sources such as
experiments, observations and surveys, and then using this data to prepare conclusive
information.
. Quality control – Statistics is used for controlling and monitoring the quality of products
and services and for enhancing that quality.
. Decision making – Statistics is used for making effective and efficient decisions. The
gathered data is turned into useful information on the basis of which proper decisions can be
made.
. Presentation of data – Statistics uses tables, charts, graphs and numerical summaries to
represent data in a meaningful way, which makes the data easier to understand.
. Analysing data – Statistics helps in analysing data to identify trends and patterns, making
it easier to interpret (a small sketch of such a summary follows this list).
. Benchmarking and comparison – Statistics helps in comparing data between organizations
and benchmarking them against their standards.
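The collection, analysis and presentation functions above can be illustrated with a minimal sketch in Python. The monthly sales figures below are made-up values used only for illustration, and the summary relies on the standard library's statistics module.

import statistics

# Hypothetical monthly sales figures (illustrative only).
monthly_sales = [120, 135, 128, 150, 142, 160]

# Analysis: compute simple numerical summaries.
summary = {
    "count": len(monthly_sales),
    "mean": statistics.mean(monthly_sales),
    "median": statistics.median(monthly_sales),
    "std dev": statistics.stdev(monthly_sales),
    "minimum": min(monthly_sales),
    "maximum": max(monthly_sales),
}

# Presentation: print the summary as a small two-column table.
for name, value in summary.items():
    print(f"{name:<8} {round(value, 2)}")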
Limitations of statistics –
. Sensitivity to data quality – Statistics relies entirely on the data it collects, so any
inaccurate or false data can lead to wrong and flawed conclusions.
. Simplification of reality – Statistical summaries often oversimplify the data, which can
sometimes lead to important or more crucial details being missed.
. Limitation in scope – Statistics may not be able to capture every possible aspect or the
full complexity of a situation, and some data may be difficult to quantify.
. Ethical issues – The use of statistics may raise ethical concerns such as invasion of
privacy or biased information.
2. Measurement scales are the systems by which collected data are quantified and variables
are categorised in a study or experiment. These scales allow researchers to make meaningful
comparisons and draw effective conclusions by providing a framework for collecting and
analysing data. There are four main types of measurement scales: interval, ratio, nominal and
ordinal.
. Interval scale – An interval scale represents ordered categories with consistent, equal
intervals between them, but it has no true zero point. Examples are IQ scores or temperatures
in Fahrenheit and Celsius.
. Ratio scale – A ratio scale represents ordered categories with equal intervals and has a
true zero point, so it allows meaningful ratios between values. Examples are weight, height,
income and age.
. Nominal scale – A nominal scale defines categories and labels them without any specific
order. Examples are types of animals, colours and genders.
. Ordinal scale – An ordinal scale represents categories with a proper, organized order or
rank, but the differences between values are not consistent. Examples are customer
satisfaction ratings or education levels. (A small sketch comparing the four scales follows
this list.)
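As a rough illustration of the four scales, the sketch below uses invented example values and notes which comparisons are meaningful on each scale.

# Example variables for each measurement scale (hypothetical values).
nominal = ["dog", "cat", "bird"]       # labels only: equality checks make sense, order does not
ordinal = ["low", "medium", "high"]    # ordered categories, but the gaps are not equal
interval = [20.0, 25.0, 30.0]          # temperatures in Celsius: equal intervals, no true zero
ratio = [35.0, 52.5, 70.0]             # weights in kg: equal intervals and a true zero

# On a ratio scale, ratios are meaningful: 70 kg really is twice 35 kg.
print(ratio[2] / ratio[0])             # 2.0

# On an interval scale, the same arithmetic is possible but not meaningful:
# 30 degrees Celsius is not "1.5 times as hot" as 20 degrees Celsius.
print(interval[2] / interval[0])       # 1.5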

Qualitative data –
Qualitative data describes qualities or characteristics. It is usually not in numeric form and
is often categorical. Qualitative data cannot be counted, measured or expressed using numbers,
and is typically presented through data visualisation tools. Examples are shapes, colours,
genders, types of vegetables or types of fruits.
Quantitative data –
Quantitative data, on the other hand, is measurable data that can be classified, measured or
expressed in the form of numbers, and can be divided into discrete or continuous data. Examples
are height, weight, temperature, age, or the number of bikes in a parking lot.

3. The basic laws of sampling theory are –


. Law of unbiased estimation – The estimation should not be biased, which means that every
element in the population gets an equal chance of being in the sample. A sampling procedure is
said to be unbiased when the expected value of the estimate is equal to the true value of the
parameter.
. Law of independence – This law states that the selection of one element should not affect
the selection of any other element; each selection should be independent of the others.
. Law of large numbers – As the sample size increases, the characteristics of the sample
approach those of the population (a small simulation of this law follows after this list).
. Law of random selection – According to this law, every element in the selection process has
an equal chance of being selected, without any bias.
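The law of large numbers can be demonstrated with a small simulation, sketched below under the assumption of a fair six-sided die whose population mean is 3.5; the sample sizes are arbitrary.

import random

random.seed(0)

def sample_mean(n):
    """Mean of n simulated rolls of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# As n grows, the sample mean settles close to the population mean of 3.5.
for n in (10, 100, 1000, 10000, 100000):
    print(f"n = {n:>6}: sample mean = {sample_mean(n):.3f} (population mean = 3.5)")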
Different sampling techniques are as follows –
. Stratified sampling – Stratified sampling means dividing the elements into strata (groups or
sub-groups) based on certain characteristics and then randomly selecting samples from each
stratum. For example, when studying the performance of employees in an organization, strata
can be created based on the performance level of the employees and samples can be randomly
selected from each stratum (a small sketch of this technique follows this list).
. Cluster sampling – In cluster sampling the elements are divided into groups or clusters, and
entire clusters are then randomly selected for analysis. For example, when sampling
neighbourhoods, instead of sampling every individual, entire neighbourhoods can be selected.
. Multi-stage sampling – Multi-stage sampling is a combination of various sampling methods in
which sampling is done in different stages, often involving a hierarchy of sampling units. For
example, in a multi-stage sample for a country's census, the first stage might be selecting
states, then districts within states, followed by towns within districts, and finally
households within towns.
These are some of the sampling techniques.
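The sketch below illustrates stratified sampling along the lines of the employee-performance example above; the employee records, strata sizes and sampling fraction are all hypothetical.

import random

random.seed(1)

# Hypothetical employee records grouped by performance level.
employees = (
    [{"name": f"emp{i}", "level": "high"} for i in range(20)]
    + [{"name": f"emp{i}", "level": "medium"} for i in range(20, 70)]
    + [{"name": f"emp{i}", "level": "low"} for i in range(70, 100)]
)

def stratified_sample(records, key, fraction):
    """Randomly sample the same fraction of records from every stratum."""
    strata = {}
    for record in records:
        strata.setdefault(record[key], []).append(record)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(random.sample(group, k))
    return sample

picked = stratified_sample(employees, key="level", fraction=0.1)
print(len(picked), "employees sampled, covering strata:", {p["level"] for p in picked})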
SET – II
1. Business forecasting refers to the process of predicting the future condition of the market
by using business intelligence tools and different forecasting methods to gather and analyse
historical data. Business forecasting can be of two types: qualitative and quantitative.

. Qualitative forecasting – Qualitative forecasting is based on the opinions of consumers,
customers and experts. This method is used when the business does not have sufficient or
conclusive data to make an effective decision. In this method the expert uses all the available
tools to make a qualitative prediction of the future.
. Quantitative forecasting – A quantitative forecasting method, on the other hand, is used when
there is enough accurate past data to predict the probability of future outcomes.

Different methods of business forecasting include time series analysis, market research, the
average approach and the naïve approach.
. Time series analysis – This method is also known as the trend analysis method; in this
forecasting method, experts analyse historical data to identify trends.

. Market research – Various market research techniques study customer needs and wants and
their response to a particular product or service. Some of these techniques involve customer
reviews, interviews and product testing.

. The average approach – The average approach uses the mean of the historical data to predict
future values. It is a quantitative forecasting method.

. The naïve approach – The naïve approach is the most cost-effective and is often used as a
benchmark to compare against more sophisticated methods. It is used for time series data,
where the forecast is simply set equal to the last observed value. This approach is useful in
industries and sectors where past patterns are unlikely to be reproduced in the future; in
such cases, the most recent observed value may prove to be the most informative. (A small
sketch contrasting the average and naïve approaches follows below.)
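The average and naïve approaches can be contrasted with the short sketch below, which uses a hypothetical monthly demand series.

# Hypothetical historical demand for the last six months.
demand = [100, 104, 98, 110, 107, 115]

# Average approach: forecast the next value as the mean of all past values.
average_forecast = sum(demand) / len(demand)

# Naive approach: forecast the next value as the last observed value.
naive_forecast = demand[-1]

print(f"average approach forecast: {average_forecast:.1f}")  # 105.7
print(f"naive approach forecast:   {naive_forecast}")        # 115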

2. Index numbers measure the change in a variable, or in a group of variables, over a
determined period. They do not show a direct measurable figure; rather, they show a relative
change, usually expressed against a base period taken as 100. Index numbers are commonly used
in studying the economic status of a particular region.
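As an illustration, a simple aggregative price index (with the base period taken as 100) can be computed as below; the basket of goods and the prices are hypothetical.

# Hypothetical prices for a small basket of goods.
base_prices = {"rice": 40.0, "milk": 50.0, "fuel": 90.0}      # base-year prices
current_prices = {"rice": 44.0, "milk": 55.0, "fuel": 108.0}  # current-year prices

# Simple aggregative price index: total current prices over total base prices, times 100.
index_number = 100 * sum(current_prices.values()) / sum(base_prices.values())
print(f"price index = {index_number:.1f}")  # 115.0, i.e. prices rose 15% relative to the base period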
3. Types of Estimators
There are several types of estimators used in statistics. Here are some common ones:
1. Point Estimators: Point estimators provide a single value as an estimate of the
population parameter. Examples include the sample mean, sample variance, and
sample proportion.
2. Interval Estimators: Interval estimators provide a range of values within which the
population parameter is likely to fall. Confidence intervals are a common type of interval
estimator (a short sketch of a point and an interval estimate follows after this list).
3. Maximum Likelihood Estimators: Maximum likelihood estimators are obtained by
maximizing the likelihood function, which measures the probability of observing the
sample data given a specific value of the parameter. These estimators are widely used
in many statistical models.
4. Bayesian Estimators: Bayesian estimators incorporate prior knowledge or beliefs
about the parameter into the estimation process. They use Bayes' theorem to update
the prior beliefs based on the observed data.
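The difference between a point estimator and an interval estimator (items 1 and 2 above) can be sketched as follows. The data are hypothetical, and the interval uses the usual z-value of 1.96 for a normal approximation; a t-value would be more precise for a sample this small.

import statistics

# Hypothetical measurements.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]

# Point estimator: the sample mean as a single-value estimate of the population mean.
point_estimate = statistics.mean(sample)

# Interval estimator: an approximate 95% confidence interval around the sample mean.
std_error = statistics.stdev(sample) / len(sample) ** 0.5
lower = point_estimate - 1.96 * std_error
upper = point_estimate + 1.96 * std_error

print(f"point estimate: {point_estimate:.3f}")
print(f"approximate 95% interval estimate: ({lower:.3f}, {upper:.3f})")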
Criteria of a Good Estimator
A good estimator should possess the following criteria:
1. Unbiasedness: An estimator is unbiased if, on average, it produces estimates that are
equal to the true population parameter. In other words, the expected value of the estimator is
equal to the true parameter value. A small simulation of this idea is sketched at the end of
this answer.
2. Efficiency: An efficient estimator has a smaller variance compared to other
estimators. It provides estimates that are closer to the true parameter value, on
average, and has a smaller mean squared error.
3. Consistency: A consistent estimator converges to the true parameter value as the
sample size increases. As the sample size grows, the estimates become more accurate
and approach the true value.
4. Robustness: A robust estimator is not heavily influenced by outliers or violations of
assumptions. It provides reliable estimates even when the data deviate from the
underlying assumptions of the statistical model.
5. Sufficiency: A sufficient estimator contains all the relevant information about the
parameter in the data. It summarizes the data in a way that captures the essential
information needed for estimating the parameter.
6. Ease of computation: An estimator should be computationally feasible and easy to
calculate. It should not require complex calculations or excessive computational
resources.
Remember that different estimators may have different strengths and weaknesses depending
on the specific context and assumptions of the statistical problem at hand.
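As a closing illustration of the unbiasedness criterion (item 1 above), the sketch below simulates repeated samples from a made-up population and compares the variance estimator that divides by n with the one that divides by n - 1; the population, sample size and number of trials are arbitrary choices for illustration.

import random

random.seed(42)

# A made-up population of 100,000 values.
population = [random.gauss(50, 10) for _ in range(100000)]
pop_mean = sum(population) / len(population)
pop_var = sum((x - pop_mean) ** 2 for x in population) / len(population)

n, trials = 5, 20000
biased_total = unbiased_total = 0.0
for _ in range(trials):
    sample = random.sample(population, n)
    mean = sum(sample) / n
    squared_deviations = sum((x - mean) ** 2 for x in sample)
    biased_total += squared_deviations / n          # divides by n: biased downwards
    unbiased_total += squared_deviations / (n - 1)  # divides by n - 1: approximately unbiased

print(f"population variance:              {pop_var:.1f}")
print(f"average estimate dividing by n:   {biased_total / trials:.1f}")
print(f"average estimate dividing by n-1: {unbiased_total / trials:.1f}")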
