
Operations Management – II

Alok Raj ([email protected])


Associate Professor, PODS area
Office: Room number 14, 2nd Floor, Library Building
Course details
Course Instructor
➢ Alok Raj ([email protected])

➢ Rajiv Srivastava ([email protected])


Textbook
Operations Management by William J. Stevenson, McGraw-Hill (12th edition)

Course Pages
Classroom Link: https://classroom.google.com/c/NzMzODcwNjE4MjQ4?cjc=rna775p
Class Code: rna775p
Class Rules
Session Topics
❖ Forecasting
❖ Aggregate Requirement Planning
❖ Material Requirement Planning
❖ Scheduling
❖ Supply Chain Management
❖ Quality Management
❖ Lean and TPM
❖ Theory of Constraints
❖ Waiting Line Management
❖ Service Management
❖ Project Management
Resources
❖ Textbook

❖ Notes

❖ Case study

❖ Data Set

❖ Excel/ Python

❖ Group Exercise

❖ Industry Experts
Forecasting

Alok Raj
PODS Area
Office: Room No 14, 2nd Floor, Library Building, Tel. 3439
Email: [email protected]
Objectives
❖ Understand the fundamental principles of forecasting.

❖ Learn various quantitative forecasting methods.

❖ Develop skills to select and apply appropriate forecasting techniques.

❖ Analyze and interpret forecasting results.

❖ Cases: Forecasting Beer Demand; Forecasting at Food Mart
Examples

All the examples: predictions about the future.

Forecasting
[Diagram: demand for products/services carries uncertainty; a good forecasting model helps reduce it.]

✓ Forecasts help in reducing uncertainties in a supply chain. A good forecast helps in planning supply chain activities such as capacity investments, production plans, delivery plans and schedules, ordering cycles and inventory planning.

✓ Forecasting is needed because it enables better
❖ Planning
❖ Capacity Investment
❖ Delivery Plans
For which type of product category is forecasting more important?

Push vs Pull system

❖ Make-to-stock (Push): more relevance of forecasting?
❖ Make-to-order (Pull)
Types of Demand
❖ Independent demand (finished product)
❖ Dependent demand (component parts or subassemblies)

https://www1.grc.nasa.gov/wp-content/uploads/NASA-Glenn-Airplane-Parts-Image-2.jpg
What is the way to forecast?
First data set: Diaper sales

Year    Actual Demand (×100000)
2014    25
2015    26
2016    24
2017    28
2018    26
2019    27
2020    26
2021    28
2022    25
2023    ?

[Chart: Actual Demand (×100000), 2014–2023]
Second data set: Passenger vehicle sales

[Chart: Sales of automobiles in India (in millions), 2010–2026, for Two wheelers, Passenger vehicles, Commercial vehicles and Three wheelers]


Third data set: Electric vehicle sales

Year          E-2 Wheelers   E-3 Wheelers   E-4 Wheelers   E-Buses   Grand Total
17-18         2005           91970          2242           35        96252
18-19         28007          116031         2407           75        146520
19-20         26834          143051         2404           369       172658
20-21         44803          90898          5201           373       141275
21-22         252641         172543         19782          1198      446164
22-23         728054         401882         48105          1917      1179958
Grand Total   1082344        1016375        80141          3967      2182827

[Charts: EV Grand Total Sales and Electric Vehicle Sales by category, 17-18 to 22-23]


Fourth data set: Air passenger movements

Sr no   Month    Air passenger movements (in Lakhs)
13      Jan-16   242
14      Feb-16   233
15      Mar-16   267
16      Apr-16   269
17      May-16   270
18      Jun-16   315
19      Jul-16   364
20      Aug-16   347
21      Sep-16   312
22      Oct-16   274
23      Nov-16   237
24      Dec-16   278
25      Jan-17   284
26      Feb-17   277
27      Mar-17   317
28      Apr-17   313
29      May-17   318
30      Jun-17   374
31      Jul-17   413
32      Aug-17   405
33      Sep-17   355
34      Oct-17   306
35      Nov-17   271
36      Dec-17   306
37–48   Jan-18 to Dec-18   ? (2018 values to be forecast)

[Chart: Air passenger movements (in Lakhs), Jan-16 to Dec-17]
Quantitative Methods

[Charts: Actual Demand (×100000) for diapers — Level; Passenger Vehicle Sales — Level + Trend; Air passenger movements (in Lakhs) — Level + Trend + Seasonality]

Do you think the forecasting approach will be the same for all types of products?
Quantitative Methods of forecasting

Demand Forecasting (Independent demand)

❖ Time series
   ✓ Naive
   ✓ Averaging: average, simple moving average, weighted average, exponential smoothing
   ✓ Trend: regression analysis, Holt’s exponential smoothing
   ✓ Seasonality: seasonality index, regression with dummy coding
❖ Causal Analysis
Method 1 : The Naïve Method
● Ft = At-1 : the simplest approach to forecasting

Period   Demand   Forecast
1        130      -
2        155      130
3        145      155
4        160      145
5        151      160
6        143      151
7                 143

(The forecast for period 2 is made after seeing the demand in period 1; the forecast for period 5 is made after seeing the demand in period 4.)
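A minimal sketch of the naïve method in Python (the course resources list Excel/Python); the variable names are illustrative and the demand series is the one in the table above.

```python
# Naive method: F_t = A_(t-1)
demand = [130, 155, 145, 160, 151, 143]      # demand for periods 1..6

forecast = [None] + demand                    # forecasts for periods 1..7
for t, f in enumerate(forecast, start=1):
    print(f"Period {t}: forecast = {f}")
# Period 7 forecast = 143, matching the table above.
```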
Method 2 : The Simple Average
● Ft = (A1 + A2 + A3 + … + At-1)/(t-1)

Period   Demand   Forecast
1        130      -
2        155      130/1 = 130
3        145      (130+155)/2 = 142.50
4        160      (130+155+145)/3 = 143.33
5        151      (130+155+145+160)/4 = 147.50
6        143      (130+155+145+160+151)/5 = 148.20
7                 (130+155+145+160+151+143)/6 = 147.33
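The same table can be reproduced with a short Python sketch of the cumulative-average formula above (variable names are illustrative).

```python
# Simple average: F_t = (A_1 + ... + A_(t-1)) / (t-1)
demand = [130, 155, 145, 160, 151, 143]       # demand for periods 1..6

for t in range(2, len(demand) + 2):           # forecast periods 2..7
    history = demand[:t - 1]
    print(f"Period {t}: forecast = {sum(history) / len(history):.2f}")
# Period 7: forecast = 147.33, matching the table above.
```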
The Moving Average Forecast
“Recent history is more relevant.”
● The Simple Average forecast uses ALL THE HISTORY of demands to generate the forecast for the next period.

● The (Simple) Moving Average forecast (order n) uses ONLY THE n MOST RECENT period demands to generate the forecast for the next period.

Method 3 : The Simple Moving Average
● Forecast for period t = the average of demand in the past n periods (from period t-1 to t-n)

Ft = (At-1 + At-2 + At-3 + … + At-n) / n

● At-1 is the actual sales in period t-1
Method 3 : The Simple Moving Average (Cont’d)

Period   Demand   Forecast (n=3)                    Forecast (n=4)
1        130      -                                 -
2        155      - (not enough history)            -
3        145      - (not enough history)            -
4        160      (130+155+145)/3 = 143.33          - (not enough history)
5        151      (155+145+160)/3 = 153.33          (130+155+145+160)/4 = 147.50
6        143      (145+160+151)/3 = 152.00          (155+145+160+151)/4 = 152.75
7                 (160+151+143)/3 = 151.33          (145+160+151+143)/4 = 149.75
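A hedged Python sketch of the simple moving average; `sma_forecast` is an illustrative helper, not a library function.

```python
# Simple moving average of order n: average of the n most recent demands
demand = [130, 155, 145, 160, 151, 143]       # demand for periods 1..6

def sma_forecast(history, n):
    """Forecast for the next period, or None if there is not enough history."""
    if len(history) < n:
        return None
    return sum(history[-n:]) / n

for n in (3, 4):
    forecasts = [sma_forecast(demand[:t], n) for t in range(1, len(demand) + 1)]
    print(f"n={n}: forecasts for periods 2..7 ->",
          [round(f, 2) if f is not None else None for f in forecasts])
# n=3 gives 143.33, 153.33, 152.00, 151.33 for periods 4..7, as in the table above.
```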


Method 3 : The Simple Moving Average (Cont’d)

[Chart: demand for periods 1–7 together with the 2-period, 3-period and 4-period moving-average forecasts]

● The four-period moving average is smoother than the three-period moving average.
The Weighted Moving Average Forecast
❖ The (simple) Moving Average forecast (order n) treats each of the n most recent demands EQUALLY in generating the forecast for the next period.

❖ The Weighted Moving Average forecast (order n) weights each of the n most recent demands (possibly) DIFFERENTLY in generating the forecast for the next period.

Method 4: Weighted Moving Average (WMA)
● When a detectable trend or pattern is present, weights can be used to place more emphasis on recent values.

● Weights are based on intuition
❖ Weights are values between 0 and 1
❖ Weights sum to 1.0
❖ Weights impact the stability and responsiveness of the forecast

● Formula

Ft = w1·At-1 + w2·At-2 + w3·At-3 + … + wn·At-n
Method 4 : Weighted Moving Average (WMA) (Cont’d)

n = 3: wt-1 = 0.5, wt-2 = 0.3, wt-3 = 0.2

Period   Demand   Forecast
1        130      -
2        155      - (not enough history)
3        145      - (not enough history)
4        160      145.00   0.2(130) + 0.3(155) + 0.5(145) = 145.00
5        151      154.50   0.2(155) + 0.3(145) + 0.5(160) = 154.50
6        143      152.50   0.2(145) + 0.3(160) + 0.5(151) = 152.50
7                 148.80   0.2(160) + 0.3(151) + 0.5(143) = 148.80
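A Python sketch of the weighted moving average that reproduces the table above; the weight order (heaviest weight on the most recent period) follows the slide.

```python
# Weighted moving average (order 3) with weights 0.5 / 0.3 / 0.2 on the
# most recent to oldest of the last three demands.
demand = [130, 155, 145, 160, 151, 143]       # demand for periods 1..6
weights = [0.5, 0.3, 0.2]                      # w_(t-1), w_(t-2), w_(t-3)

def wma_forecast(history, weights):
    n = len(weights)
    if len(history) < n:
        return None
    recent = history[::-1][:n]                 # most recent demand first
    return sum(w * a for w, a in zip(weights, recent))

for t in range(1, len(demand) + 1):
    f = wma_forecast(demand[:t], weights)
    if f is not None:
        print(f"Period {t + 1}: forecast = {f:.2f}")
# Periods 4..7: 145.00, 154.50, 152.50, 148.80, matching the table above.
```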


Forecast Errors
Forecast error is the difference between the forecast value and what actually occurred.
❖ Forecast error_t = Actual_t − Forecast_t
❖ et = At − Ft

Measures of Error
❖ Mean absolute deviation (MAD)
❖ Mean absolute percent error (MAPE)
❖ Mean squared error (MSE)
❖ Root mean squared error (RMSE)
Measuring Forecast Errors: Mean Absolute Deviation (MAD)

MAD = ( Σt=1..n |At − Ft| ) / n

❖ The ideal Mean Absolute Deviation (MAD) is zero, which would mean there is no forecasting error at all.

❖ The larger the MAD, the less accurate the resulting model.
Measuring Forecast Errors: Mean Absolute Deviation (MAD)

Period   Demand   Forecast   Error    Absolute Error
1        130      -          -        -
2        155      130.00     25.00    25.00
3        145      155.00     -10.00   10.00
4        160      145.00     15.00    15.00
5        151      160.00     -9.00    9.00
6        143      151.00     -8.00    8.00
                                      MAD = 13.40
Measuring Forecast Errors: Mean Squared Error (MSE)

Period   Demand   Forecast   Error    Squared Error
1        130      -          -        -
2        155      130.00     25.00    625.00
3        145      155.00     -10.00   100.00
4        160      145.00     15.00    225.00
5        151      160.00     -9.00    81.00
6        143      151.00     -8.00    64.00
                                      MSE = 219.00
Measuring Forecast Errors: Mean Absolute Percentage Error (MAPE)

Period   Demand   Forecast   Error    % Error               Abs % Error
1        130      -          -        -                     -
2        155      130.00     25.00    25/155 = 16.13%       16.13%
3        145      155.00     -10.00   -10/145 = -6.90%      6.90%
4        160      145.00     15.00    15/160 = 9.38%        9.38%
5        151      160.00     -9.00    -9/151 = -5.96%       5.96%
6        143      151.00     -8.00    -8/143 = -5.59%       5.59%
                                      MAPE = 8.79%
Measures of Forecast Error
⦁ Mean absolute deviation (MAD): MAD = ( Σ |At − Ft| ) / n
⦁ Mean squared error (MSE): MSE = ( Σ (At − Ft)² ) / n
⦁ The MSE penalizes large errors more heavily than small errors because all errors are squared.
⦁ A lower MSE implies better prediction.

⦁ Root mean squared error (RMSE): RMSE = √MSE

⦁ Mean absolute percentage error (MAPE): MAPE = ( Σ |At − Ft| / At × 100 ) / n

⦁ Because MAPE is dimensionless, it can be used to compare models fitted to data with different scales.

⦁ It is a good measure of forecast error when the underlying demand has significant seasonality and varies considerably from one period to the next.

MAD, MAPE, and MSE (RMSE) are the popular forecasting accuracy indicators.
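A small Python sketch that computes all four error measures for the naïve-forecast example used in the tables above; the results should match MAD = 13.40, MSE = 219.00 and MAPE ≈ 8.79%.

```python
# Forecast-error measures (MAD, MSE, RMSE, MAPE) for the naive-forecast example
from math import sqrt

actual   = [155, 145, 160, 151, 143]          # demand for periods 2..6
forecast = [130, 155, 145, 160, 151]          # naive forecasts for periods 2..6

errors = [a - f for a, f in zip(actual, forecast)]
n = len(errors)

mad  = sum(abs(e) for e in errors) / n                              # 13.40
mse  = sum(e ** 2 for e in errors) / n                              # 219.00
rmse = sqrt(mse)                                                    # ~14.80
mape = sum(abs(e) / a for e, a in zip(errors, actual)) / n * 100    # ~8.79 %

print(f"MAD={mad:.2f}  MSE={mse:.2f}  RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```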
Problem 1

Period   Demand      Period   Demand      Period   Demand
1        20          11       21          21       22
2        24          12       16          22       18
3        24          13       18          23       15
4        16          14       17          24       22
5        20          15       16          25       23
6        20          16       19          26       18
7        17          17       23          27       16
8        22          18       20          28       22
9        22          19       23          29       20
10       20          20       17          30       24
                                          31       ?

a) If a three-period moving average had been used to forecast sales, what would the daily forecasts have been, starting with the forecast for Day 4?
b) If a five-period moving average had been used, determine what the forecasts would have been for each day, starting with Day 6.
c) Use a three-period weighted moving average with w1 = 0.2, w2 = 0.3, and w3 = 0.5 and forecast sales for the 4th day onwards.
d) Use a four-period weighted moving average with w1 = 0.1, w2 = 0.2, w3 = 0.3, and w4 = 0.4 and forecast sales for the 5th day onwards.
e) Plot the original data and each set of forecasts on the same graph. Which forecast approach is better?
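One possible way to work parts (a)–(d) in Python; pandas is an assumption of this sketch (the same logic works in Excel), and the weights are applied with the largest weight on the most recent day, as in the WMA slide above.

```python
# Sketch for Problem 1 using pandas rolling windows and lagged weighted sums
import pandas as pd

demand = pd.Series([20, 24, 24, 16, 20, 20, 17, 22, 22, 20, 21, 16, 18, 17, 16,
                    19, 23, 20, 23, 17, 22, 18, 15, 22, 23, 18, 16, 22, 20, 24],
                   index=range(1, 31))

# (a), (b): simple moving averages; shift(1) turns them into next-day forecasts
ma3 = demand.rolling(3).mean().shift(1)
ma5 = demand.rolling(5).mean().shift(1)

# (c): 3-period WMA, weights 0.5 / 0.3 / 0.2 (largest weight on the most recent day)
wma3 = 0.5 * demand.shift(1) + 0.3 * demand.shift(2) + 0.2 * demand.shift(3)

# (d): 4-period WMA, weights 0.4 / 0.3 / 0.2 / 0.1 (largest weight on the most recent day)
wma4 = (0.4 * demand.shift(1) + 0.3 * demand.shift(2)
        + 0.2 * demand.shift(3) + 0.1 * demand.shift(4))

print(pd.DataFrame({"demand": demand, "MA3": ma3, "MA5": ma5,
                    "WMA3": wma3, "WMA4": wma4}).round(2))
print("Day 31 forecasts -> MA3:", demand.tail(3).mean(), " MA5:", demand.tail(5).mean())
```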
Which approach is better?
❖ Sensitivity to Trends: WMAs are typically more sensitive to recent trends since they
assign more weight to the latest data points.

❖ Data Volatility: SMAs can smooth out short-term fluctuations and highlight longer-term
trends in data that is very volatile. This can be beneficial when you want to avoid reacting
to what might be considered "noise" in the data.

❖ Data Patterns: If the data has a seasonal pattern or other cyclical changes, neither SMA
nor WMA may be sufficient as they don't inherently account for such patterns. More
sophisticated methods like Exponential Smoothing or ARIMA may be more suitable in
such cases.
Practice Problem-1

Day   Number Sold
1     25
2     31
3     29
4     33
5     34
6     37
7     35
8     32
9     38
10    40
11    37
12    32

a) If a two-period moving average had been used to forecast sales, what would the daily forecasts have been, starting with the forecast for Day 3?
b) If a four-period moving average had been used, determine what the forecasts would have been for each day, starting with Day 5.
c) Use a three-period weighted moving average with w1 = 0.2, w2 = 0.3, and w3 = 0.5 and forecast sales for the 4th day onwards.
d) Use a four-period weighted moving average with w1 = 0.4, w2 = 0.3, w3 = 0.2, and w4 = 0.1 and forecast sales for the 5th day onwards.
e) Plot the original data and each set of forecasts on the same graph. Which forecast has the better ability to respond quickly to changes?
Method 5 : Exponential Smoothing
Smoothing constant

● Ft = Ft-1 + α(At-1 − Ft-1) = α·At-1 + (1 − α)·Ft-1

Forecast = Previous forecast + α (previous actual sales − previous forecast)


▪ Ft is the forecast for the period t
▪ At-1 is the actual sales in the previous period
▪ Ft-1 is the forecast for the previous period
▪ α: ranges from 0 to 1 and is subjectively chosen
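A minimal sketch of exponential smoothing in Python; α = 0.3 and seeding the first forecast with the first actual demand are illustrative choices, not prescribed by the slide.

```python
# Exponential smoothing: F_t = alpha * A_(t-1) + (1 - alpha) * F_(t-1)
demand = [130, 155, 145, 160, 151, 143]       # demand for periods 1..6
alpha = 0.3                                    # smoothing constant (illustrative)

forecast = demand[0]                           # common choice: seed F_2 with A_1
for t, actual in enumerate(demand[1:], start=2):
    print(f"Period {t}: forecast = {forecast:.2f}, actual = {actual}")
    forecast = alpha * actual + (1 - alpha) * forecast
print(f"Period {len(demand) + 1}: forecast = {forecast:.2f}")
```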
Problem 2

Data set 1 (as in Practice Problem-1) — Day 1–12, Number Sold:
25, 31, 29, 33, 34, 37, 35, 32, 38, 40, 37, 32

Data set 2 (as in Problem 1) — Period 1–30, Demand:
20, 24, 24, 16, 20, 20, 17, 22, 22, 20, 21, 16, 18, 17, 16, 19, 23, 20, 23, 17, 22, 18, 15, 22, 23, 18, 16, 22, 20, 24; Period 31: ?
Method 5 : Exponential Smoothing (Cont’d)
● Forecast effects of Smoothing Constant α

Weights on Actual Sales
             Prior Period   2 Periods Ago   3 Periods Ago   Sum of 3 weights
             α              α(1 − α)        α(1 − α)²
α = 0.10     0.100          0.090           0.081           0.271
α = 0.90     0.900          0.090           0.009           0.999
Summary: Exponential Smoothing
Most popular
● Because…
❖ Formulating an exponential model is relatively easy and
it is surprisingly accurate.
❖ Little computation is required so the computer
storage requirements are small

● It is a special form of weighted moving average


❖ Weights decline exponentially with most recent
data weighted most
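To see why the weights decline exponentially, the recursion from Method 5 can be expanded (straightforward algebra on the formula above):

Ft = α·At-1 + (1 − α)·Ft-1
   = α·At-1 + α(1 − α)·At-2 + (1 − α)²·Ft-2
   = α·At-1 + α(1 − α)·At-2 + α(1 − α)²·At-3 + …

so the weight on a demand observation that is k periods old is α(1 − α)^(k−1), which is exactly the pattern in the smoothing-constant table shown earlier (for α = 0.10: 0.10, 0.09, 0.081, …).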
Regression

Period   Year   Passenger Vehicle Sales
1        2004   1061572
2        2005   1143076
3        2006   1379979
4        2007   1549882
5        2008   1552703
6        2009   1951333
7        2010   2501542
8        2011   2629839
9        2012   2665015
10       2013   2503509
11       2014   2601111
12       2015   2789678
13       2016   3047582
14       2017   3288581
15       2018   3377389
16       2019   2773575
17       2020   3062280
18       2021   3650698
19       2022   4578639
20       2023   4901844
21       2024   4382797

[Chart: Passenger Vehicle Sales, 2004–2024]
To determine the significance levels (1%, 5%, and 10%) of a statistical result based on
the t-value and p-value, we follow these steps

Using the p-value


The p-value directly tells us the significance:
1% Level (α = 0.01):
• If p≤0.01: Result is significant at the 1% level.
5% Level (α = 0.05):
• If p≤0.05: Result is significant at the 5% level.
10% Level (α = 0.10):
• If p≤0.10: Result is significant at the 10% level.
If p>0.10: The result is not significant.

Critical t-values for common significance levels (two-tailed)


For large df, approximate critical t-values are:
• 1% level (α = 0.01): t-critical ≈ 2.576
• 5% level (α = 0.05): t-critical ≈ 1.960
• 10% level (α = 0.10): t-critical ≈ 1.645

             Coefficients   Standard Error   t Stat     P-value       Lower 95%     Upper 95%
Intercept    918185.8       159033.9         5.773523   1.45682E-05   585324.0457   1251047.488
Period       164981.5       12665.42         13.02613   6.40281E-11   138472.4499   191490.519

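The same trend regression can be sketched in Python with numpy's least-squares polynomial fit; the coefficients should come out close to the Excel output above (small differences are rounding).

```python
# Linear trend regression: Sales = intercept + slope * Period
import numpy as np

period = np.arange(1, 22)                      # periods 1..21 (2004..2024)
sales = np.array([1061572, 1143076, 1379979, 1549882, 1552703, 1951333,
                  2501542, 2629839, 2665015, 2503509, 2601111, 2789678,
                  3047582, 3288581, 3377389, 2773575, 3062280, 3650698,
                  4578639, 4901844, 4382797])

slope, intercept = np.polyfit(period, sales, deg=1)
print(f"intercept = {intercept:,.0f}, slope = {slope:,.0f} per period")
print(f"Forecast for period 22 (2025): {intercept + slope * 22:,.0f}")
```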
Time series data with seasonality
❖ Forecasting data with seasonality is an essential aspect in various fields, including sales,
marketing, and even academic research

❖ Seasonality refers to periodic fluctuations that regularly occur in data due to seasonal
factors. It's often seen in monthly or quarterly sales data, where certain times of the year
are consistently higher or lower than others.
Approach

❖Seasonal Factor

❖ Regression Approach with the help of dummy coding


Key Steps in Seasonal Forecasting
1. Identifying Seasonality: This involves analyzing your data to determine if there is a seasonal pattern. This can
be done through visual examination of time series plots or using statistical tests for seasonality.

2. Decomposition of Time Series: Time series data can be decomposed into three components: trend, seasonality,
and randomness. This helps in understanding the underlying patterns. Methods like STL (Seasonal and Trend
decomposition using Loess) are commonly used.

3. Choosing the Right Model: Depending on the nature of the seasonality, different models can be applied.
Common models include:
1. Seasonal Factor
2. Regression Approach
3. SARIMA (Seasonal ARIMA): An extension of ARIMA that specifically addresses seasonality.
4. Exponential Smoothing: Methods like Holt-Winters which are simple yet powerful for forecasting
seasonal data.

4. Parameter Selection: This involves choosing parameters that best fit the model to your data. For SARIMA,
these include seasonal order and non-seasonal order parameters.

5. Model Validation: Before using the model for forecasting, it’s crucial to validate it using historical data. This
involves checking the model’s performance against known data and adjusting as necessary.

6. Forecasting: Once the model is validated, it can be used to forecast future data points.
Time series data with seasonality

Sales   Year 1   Year 2   Year 3   Year 4   Year 5
JAN     2,000    3,000    2,000    5,000    5,000
FEB     3,000    4,000    5,000    4,000    2,000
MAR     3,000    3,000    5,000    4,000    3,000
APR     3,000    5,000    3,000    2,000    2,000
MAY     4,000    5,000    4,000    5,000    7,000
JUN     6,000    8,000    6,000    7,000    6,000
JUL     7,000    3,000    7,000    10,000   8,000
AUG     6,000    8,000    10,000   14,000   10,000
SEP     10,000   12,000   15,000   16,000   20,000
OCT     12,000   12,000   15,000   16,000   20,000
NOV     14,000   16,000   18,000   20,000   22,000
DEC     8,000    10,000   8,000    12,000   8,000

[Chart: Demand Dt over the 60 months]

❖ Assume that sales in Year 6 will follow similar seasonal patterns. Predict the sales for September and November in Year 6 if the growth rate remains consistent with previous years.

❖ Suggest two strategies the business can adopt to boost sales during off-season months like February and April.
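A sketch of the seasonal-factor approach for the table above: compute each month's seasonal index, project the Year 6 annual total with a simple linear trend (an illustrative growth assumption, not the only possible one), and redistribute it across the months.

```python
# Seasonal-factor approach (sketch): seasonal index = month average / overall average
import numpy as np

# rows = Jan..Dec, columns = Year 1..5 (sales table above)
sales = np.array([
    [2000, 3000, 2000, 5000, 5000], [3000, 4000, 5000, 4000, 2000],
    [3000, 3000, 5000, 4000, 3000], [3000, 5000, 3000, 2000, 2000],
    [4000, 5000, 4000, 5000, 7000], [6000, 8000, 6000, 7000, 6000],
    [7000, 3000, 7000, 10000, 8000], [6000, 8000, 10000, 14000, 10000],
    [10000, 12000, 15000, 16000, 20000], [12000, 12000, 15000, 16000, 20000],
    [14000, 16000, 18000, 20000, 22000], [8000, 10000, 8000, 12000, 8000],
])

monthly_avg = sales.mean(axis=1)                    # average sales per month
seasonal_index = monthly_avg / monthly_avg.mean()   # >1 in peak months, <1 off-season

annual = sales.sum(axis=0)                          # annual totals for Years 1..5
# Illustrative assumption: project the Year 6 total with a simple linear trend
slope, intercept = np.polyfit(np.arange(1, 6), annual, deg=1)
year6_total = intercept + slope * 6

year6_monthly = year6_total / 12 * seasonal_index
print("September, Year 6 forecast:", round(year6_monthly[8]))
print("November,  Year 6 forecast:", round(year6_monthly[10]))
```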
Regression Approach with the help of dummy coding

Sales data: monthly sales for Years 1–5 (same table as above).

❖ Solve this problem with the help of dummy coding and estimate month-wise sales for Year 6.
Dummy coding

• Dummy coding is a way to represent a categorical variable with


k categories using k−1 binary (0/1) columns.
• For example, if you have a variable with three categories: A, B,
and C, you can represent it as:
• Dummy 1: 1 if A, 0 otherwise.
• Dummy 2: 1 if B, 0 otherwise.
• If neither Dummy 1 nor Dummy 2 is 1, the observation is
implicitly in category C.

• The category not represented by a dummy column becomes the


reference or base category. Coefficients of the dummy variables
indicate the effect of that category relative to the base category.
Example

For a categorical variable with 3 categories: A, B, and C:


•k=3.
•Dummy coding uses k−1=2 columns:
• Dummy 1 (for A): 1 if A, 0 otherwise.
• Dummy 2 (for B): 1 if B, 0 otherwise.
• If both Dummy 1 and Dummy 2 are 0, the category is C.

Including all three columns would result in:


Dummy 1+Dummy 2+Dummy for C=1, which is a linear dependency.

By using one less column, the reference category (C) is implicitly


represented, and we avoid redundancy.
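A sketch of the dummy-coding regression applied to the monthly sales data above: a linear time trend plus 11 month dummies with December as the base category. The pandas/numpy tooling and the trend-plus-dummies specification are assumptions of this sketch, not the only way to set it up.

```python
# Regression with dummy coding (sketch): trend + 11 month dummies, December as base
import numpy as np
import pandas as pd

months = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"]
sales_by_month = {                                  # table above, Years 1..5
    "JAN": [2000, 3000, 2000, 5000, 5000], "FEB": [3000, 4000, 5000, 4000, 2000],
    "MAR": [3000, 3000, 5000, 4000, 3000], "APR": [3000, 5000, 3000, 2000, 2000],
    "MAY": [4000, 5000, 4000, 5000, 7000], "JUN": [6000, 8000, 6000, 7000, 6000],
    "JUL": [7000, 3000, 7000, 10000, 8000], "AUG": [6000, 8000, 10000, 14000, 10000],
    "SEP": [10000, 12000, 15000, 16000, 20000], "OCT": [12000, 12000, 15000, 16000, 20000],
    "NOV": [14000, 16000, 18000, 20000, 22000], "DEC": [8000, 10000, 8000, 12000, 8000],
}

# One row per month of history: period 1..60, month label, sales
df = pd.DataFrame([(12 * yr + m + 1, months[m], sales_by_month[months[m]][yr])
                   for yr in range(5) for m in range(12)],
                  columns=["period", "month", "sales"])

dummies = pd.get_dummies(df["month"])[months[:-1]]  # k-1 dummies, DEC = base category
X = np.column_stack([np.ones(len(df)), df["period"], dummies.astype(float)])
coef, *_ = np.linalg.lstsq(X, df["sales"].to_numpy(float), rcond=None)

# Forecast Year 6: periods 61..72 with the corresponding month dummy switched on
for m in range(12):
    x = np.zeros(X.shape[1]); x[0] = 1.0; x[1] = 60 + m + 1
    if m < 11:
        x[2 + m] = 1.0                              # month dummy (DEC stays all-zero)
    print(f"Year 6 {months[m]}: {x @ coef:,.0f}")
```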
Multiple Regression

A well-established burger chain in India was determined to optimize its operations and improve burger sales across its 16 stores in different neighbourhoods. With growing competition and rising customer expectations, the company decided to analyze data on three key variables: burger price, advertising budget, and units sold. The marketing team gathered detailed information from each store, including pricing strategies, local advertising expenses, and the actual number of burgers sold.

How can sales be predicted from Price and Advertising?

Store                Price ($)   Advertising ($ '000s)   Burger Sales (units '000s)
Baulkham Hills       7.5         3.0                     360
Bella Vista          4.5         3.0                     520
Blacktown            8.0         2.7                     240
Castle Hill          6.8         3.0                     350
Claremont Meadows    7.5         3.3                     350
Dee Why              5.1         4.2                     820
Forestville          5.0         3.5                     500
Frenchs Forest       7.0         2.7                     300
Glenwood             8.0         2.0                     230
Killara              7.2         3.5                     310
Macquarie Park       6.4         3.7                     430
North Ryde           7.0         3.5                     400
Penrith              5.5         3.3                     350
Rhodes               5.0         4.0                     630
West Pennant Hills   7.9         2.7                     250
Wynyard              5.9         4.0                     440
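A sketch of the multiple regression Sales = b0 + b1·Price + b2·Advertising for the 16 stores above (numpy least squares; with this data one would expect a negative price coefficient and a positive advertising coefficient).

```python
# Multiple regression: Burger Sales = b0 + b1 * Price + b2 * Advertising
import numpy as np

price = np.array([7.5, 4.5, 8.0, 6.8, 7.5, 5.1, 5.0, 7.0,
                  8.0, 7.2, 6.4, 7.0, 5.5, 5.0, 7.9, 5.9])
advertising = np.array([3.0, 3.0, 2.7, 3.0, 3.3, 4.2, 3.5, 2.7,
                        2.0, 3.5, 3.7, 3.5, 3.3, 4.0, 2.7, 4.0])   # $ '000s
sales = np.array([360, 520, 240, 350, 350, 820, 500, 300,
                  230, 310, 430, 400, 350, 630, 250, 440])          # units '000s

X = np.column_stack([np.ones(len(sales)), price, advertising])
(b0, b1, b2), *_ = np.linalg.lstsq(X, sales.astype(float), rcond=None)
print(f"Sales = {b0:.1f} + ({b1:.1f})*Price + ({b2:.1f})*Advertising")

# Example: predicted sales at a price of $6 and an advertising budget of $3,500
print("Prediction:", round(b0 + b1 * 6.0 + b2 * 3.5))
```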
Next Class

✓ Forecasting Beer Demand at Anadolu Efes

✓ Forecasting at FoodMart
Case Article: Forecasting Beer Demand at Anadolu Efes

This case won first prize in the 2007 Case Competition of the Institute for Operations Research and the Management Sciences (INFORMS).
Synopsis of the Case Study
❖ Efes Beverage Group is the beverage division of one of Turkey’s leading corporations
and it has a share of about 78% in the Turkish beer market.

❖ Efes forecasts the monthly demand for the coming year during the fall of the current
year.

❖ Historically, high-level sales managers, based on input from their sales personnel, have
done this forecasting mostly subjectively.

❖ They now want to formalize this process so that significant factors on beer demand can
be identified and used to predict the monthly demand
Questions
❖ Plot the monthly beer demand and discuss your observations.
❖ Run the multiple linear regression model to explain the monthly beer demand in terms
of the predictor variables, discuss the validity of the model, and interpret the results.
❖ Are there any problems with the validity of the original model? If so, make the
necessary modifications and repeat the process with new model(s). Employ variable
selection methods to eliminate irrelevant variables.
❖ Are there unusual observations? Interpret and discuss.
❖ Discuss the predictive capability of each predictor variable. Make predictions for the
monthly demands of the following year. Make necessary assumptions if you need data
on predictor variables. You may create scenarios by assuming some values for
“uncontrollable” variables and trying some values for “controllable” variables. Discuss
the results.
❖ Are there alternative models that include different sets of predictor variables with
approximately the same explanation power? How would you interpret such results?
Key Takeaways

• The model explains a substantial portion of the variation in the dependent


variable.

• Time period, beer price, Ramadan proportion, and Tou index are critical
factors with statistically significant impacts.

• There is evidence of seasonality, with some months (e.g., June, July,


August) having significant positive effects, while others (e.g., January,
February) have negative effects.

• Insignificant monthly variables could be dropped to simplify the model.


Quantitative Methods of forecasting

Demand Forecasting (Independent demand)

❖ Time series
   ✓ Naive
   ✓ Averaging: average, simple moving average, weighted average, exponential smoothing
   ✓ Trend: regression analysis
   ✓ Seasonality: seasonality index, regression with dummy coding
❖ Causal Analysis / Associative model
   ✓ Regression
