BA Explanation


1. T-Score (or T-Statistic)
The T-score measures how far a sample mean is from the population mean in terms of the
standard error. It is used in t-tests to determine whether there is a significant difference
between the means of two groups.
Example:
You test whether a new drug lowers blood pressure compared to a placebo. If the t-score is 2.5,
it shows the difference between drug and placebo means is 2.5 times the standard error.
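As a minimal sketch (with made-up readings, not data from the drug study above), a one-sample t-statistic can be computed by hand:

```python
import math

# Hypothetical blood-pressure readings and an assumed population mean
sample = [120, 125, 118, 122, 119, 121]
mu0 = 126

n = len(sample)
mean = sum(sample) / n
# Sample standard deviation (n - 1 in the denominator)
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)          # standard error of the mean
t_score = (mean - mu0) / se     # how many standard errors the mean is from mu0
print(round(t_score, 2))
```

Here the sample mean sits well below the assumed population mean, so the t-score comes out negative and large in magnitude.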

2. Variance
Variance tells how spread out the data is from the mean. It’s the average squared difference
from the mean.
Formula:
Variance = Σ(Xᵢ − X̄)² / n
Example:
For the data points 2, 4, 6:
 Mean = 4
 Variance = [(2-4)² + (4-4)² + (6-4)²] / 3 = 8/3 ≈ 2.67
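The same calculation as a short Python sketch:

```python
data = [2, 4, 6]
mean = sum(data) / len(data)                               # 4.0
# Average squared deviation from the mean (population variance)
variance = sum((x - mean) ** 2 for x in data) / len(data)
print(variance)  # 8/3 ≈ 2.67
```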

3. Pooled Variance
Pooled variance is the weighted average of the variances from two or more groups. It's used in independent-samples t-tests when equal variances across groups are assumed.
Formula:
Sp² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²] / (n₁ + n₂ − 2)
Example:
If group 1 (n = 10) has variance 4 and group 2 (n = 12) has variance 6, the pooled variance is the average of the two, weighted by sample size: (9 × 4 + 11 × 6) / (10 + 12 − 2) = 102 / 20 = 5.1.
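A minimal Python sketch of the pooled-variance formula, using the numbers above:

```python
# Group sizes and sample variances from the example
n1, s1_sq = 10, 4.0
n2, s2_sq = 12, 6.0

# Weighted average of the two variances, weights = degrees of freedom
pooled = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
print(pooled)  # (9*4 + 11*6) / 20 = 5.1
```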

4. Degrees of Freedom (DF)
Degrees of freedom represent the number of independent values that can vary in the data while
estimating a parameter.
Example:
In a t-test comparing two groups, DF = (n1 + n2 - 2). If each group has 10 participants, DF = 10
+ 10 - 2 = 18.
5. T-Critical
T-critical is the cutoff value from the t-distribution table. If the t-statistic exceeds this value, the
null hypothesis is rejected.
Example:
For α = 0.05 and DF = 18, the t-critical value might be 2.10 (from the table). If your t-score is
greater than 2.10, your result is significant.
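Assuming SciPy is available, the critical value can be computed instead of read from a printed table (two-tailed test, α = 0.05, DF = 18):

```python
from scipy import stats

alpha, df = 0.05, 18
# Two-tailed critical value: the upper alpha/2 quantile of the t-distribution
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(round(t_crit, 2))  # ≈ 2.10
```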

6. ANOVA (Analysis of Variance)
ANOVA tests whether the means of multiple groups are significantly different.
Example:
Testing whether students from three different cities perform differently in an exam. ANOVA
checks if the average scores differ significantly across the three groups.
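A sketch of a one-way ANOVA with SciPy's f_oneway, using hypothetical exam scores for the three cities:

```python
from scipy import stats

# Made-up exam scores for students from three cities
city_a = [78, 82, 85, 74, 80]
city_b = [88, 91, 84, 87, 90]
city_c = [69, 72, 75, 71, 68]

# One-way ANOVA: do the three group means differ significantly?
f_stat, p_value = stats.f_oneway(city_a, city_b, city_c)
print(f_stat, p_value)
```

With group means this far apart relative to the within-group spread, the p-value comes out well below 0.05.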

7. Sum of Squares (SS)
SS measures the total variation in data by summing the squared deviations from the mean.
Formula:
SS = Σ(Xᵢ − X̄)²
Example:
For data points 2, 4, 6:
 Mean = 4
 SS = (2-4)² + (4-4)² + (6-4)² = 8

8. Mean Square (MS)
MS is the average variation and is obtained by dividing the sum of squares (SS) by the
degrees of freedom (DF).
Formula:
MS = SS / DF
Example:
If SS = 20 and DF = 4, MS = 20 / 4 = 5.

9. F-Statistic (F)
The F-statistic compares two variances to see if they are significantly different. It's used in
ANOVA and regression analysis.
Formula:
F = MS_between / MS_within
Example:
In ANOVA, if the F-value is higher than the F-critical value, the group means differ significantly.
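The pieces from sections 7–9 (SS, DF, MS, F) can be assembled by hand; this sketch uses three small hypothetical groups:

```python
groups = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]   # hypothetical data

all_vals = [x for g in groups for x in g]
grand_mean = sum(all_vals) / len(all_vals)   # 4.0

# Between-group SS: each group mean vs. the grand mean, weighted by group size
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group SS: each value vs. its own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1                 # k - 1 = 2
df_within = len(all_vals) - len(groups)      # N - k = 6

ms_between = ss_between / df_between         # 42 / 2 = 21
ms_within = ss_within / df_within            # 6 / 6 = 1
f_stat = ms_between / ms_within
print(f_stat)  # 21.0
```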

10. P-Value
The P-value is the probability of observing a result at least as extreme as yours, assuming the null hypothesis is true. A P-value < 0.05 usually indicates statistical significance.
Example:
In a t-test, if the P-value is 0.03, you reject the null hypothesis, concluding that the difference is
statistically significant.
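Assuming SciPy, a two-tailed p-value can be derived from a t-score directly; here using the t = 2.5, DF = 18 figures from the earlier examples:

```python
from scipy import stats

t_score, df = 2.5, 18
# Two-tailed p-value: probability of |T| >= 2.5 under the null hypothesis
p_value = 2 * stats.t.sf(t_score, df)
print(round(p_value, 3))
```

Since 2.5 exceeds the t-critical value of about 2.10 for DF = 18, the p-value lands below 0.05.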

11. F-Critical
F-critical is the cutoff value from the F-distribution table. If the F-value exceeds this, the null
hypothesis is rejected.
Example:
In an ANOVA with α = 0.05 and DF1 = 2, DF2 = 27, the F-critical value might be 3.35.
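Assuming SciPy is available, the F-critical value can be computed rather than read from a table:

```python
from scipy import stats

alpha, df1, df2 = 0.05, 2, 27
# Upper-tail critical value of the F-distribution
f_crit = stats.f.ppf(1 - alpha, df1, df2)
print(round(f_crit, 2))  # ≈ 3.35
```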

12. Multiple R (Correlation Coefficient)
Multiple R is the correlation between the predicted and actual values in a regression model. It
measures the strength of the linear relationship.
Example:
If Multiple R = 0.9, it means there’s a strong positive linear relationship between the variables.
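A small sketch (hypothetical numbers) computing Multiple R as the correlation between actual and model-predicted values:

```python
import numpy as np

# Made-up actual vs. model-predicted values
actual    = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
predicted = np.array([11.0, 12.5, 14.0, 17.5, 21.0])

# Multiple R: Pearson correlation between predicted and actual
multiple_r = np.corrcoef(actual, predicted)[0, 1]
print(round(multiple_r, 3))
```

Because the predictions track the actual values closely, Multiple R comes out near 1.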

13. R-Square (Coefficient of Determination)
R-Square indicates the proportion of variance in the dependent variable explained by the
independent variables.
Example:
If R² = 0.85, 85% of the variation in the dependent variable is explained by the model.

14. Adjusted R-Square
Adjusted R² adjusts the R² value for the number of predictors in the model, penalizing added complexity that does not improve the fit.
Example:
If you add more predictors to the model, R² may increase, but adjusted R² accounts for the
increased complexity.
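The adjustment is a standard formula, adjusted R² = 1 − (1 − R²)(n − 1)/(n − k − 1); a sketch with hypothetical values:

```python
# Hypothetical fit: R² = 0.85 from n = 30 observations and k = 3 predictors
r_sq = 0.85
n, k = 30, 3

# Penalize R² for the number of predictors relative to the sample size
adj_r_sq = 1 - (1 - r_sq) * (n - 1) / (n - k - 1)
print(round(adj_r_sq, 3))  # ≈ 0.833
```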

15. Standard Error
The standard error measures the average distance that the observed values fall from the
regression line.
Example:
If the standard error is 2, the predicted values on average deviate by 2 units from the observed
values.

16. Regression
Regression analysis estimates the relationship between variables. It predicts the dependent
variable based on the independent variables.
Example:
A regression model predicts house prices based on area, location, and number of bedrooms.
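A minimal regression sketch using NumPy's polyfit, with made-up house data and a single predictor (area only, for simplicity):

```python
import numpy as np

# Hypothetical data: house area (sq m) vs. price (thousands)
area  = np.array([50, 70, 90, 110, 130], dtype=float)
price = np.array([150, 200, 260, 310, 360], dtype=float)

# Least-squares line: price = slope * area + intercept
slope, intercept = np.polyfit(area, price, 1)
predicted = slope * area + intercept
residuals = price - predicted     # observed minus predicted (section 18)
print(slope, intercept)
```

With an intercept in the model, the least-squares residuals sum to (essentially) zero, which is a quick sanity check on the fit.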

17. Hypothesis Testing
Hypothesis testing is used to check assumptions about the population by testing a null
hypothesis.
Example:
In a t-test, the null hypothesis might state that there’s no difference between the means of two
groups.

18. Residual
A residual is the difference between the observed value and the predicted value in
regression.
Example:
If the observed value is 10 and the predicted value is 8, the residual is 2.

19. Homoscedasticity of Residuals
Homoscedasticity means that the residuals have constant variance across all levels of the independent variables. It is a key assumption of linear regression.
Example:
In a well-fitted regression, the spread of residuals will remain the same regardless of the
predicted value.

20. Cumulative %
Cumulative percentage sums the percentages of multiple items up to a certain point.
Example:
In a Pareto chart, cumulative % helps visualize how much of the total effect is explained by the
top factors.
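A short sketch computing cumulative percentages for a Pareto-style list of hypothetical counts:

```python
# Made-up defect counts per cause, sorted descending (Pareto style)
counts = [45, 25, 15, 10, 5]
total = sum(counts)

cumulative_pct = []
running = 0
for c in counts:
    running += c                              # running total so far
    cumulative_pct.append(100 * running / total)

print(cumulative_pct)  # [45.0, 70.0, 85.0, 95.0, 100.0]
```

Reading the list, the top two causes already account for 70% of the total effect.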
