
Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Gopalganj-8100

Course Title: Simulation & Modelling
Course Code: STA 401

Submitted To:
Md. Murad Hossain
Assistant Professor
Dept. of Statistics, BSMRSTU

Submission Date: 5th November, 2024


Group Number: Six
Assignment Topic: Simulation Output & Data Analysis
Topic-wise Contribution Table

Name | Student ID | Contribution
Md. Murad Hossain | 19STA057 | Identifying the Distribution of Data
Nurul Alam | 18STA033 | Output Analysis for Steady-State Simulation
MD. Tuhin Imran Hero | 19STA060 | Output Analysis for Steady-State Simulation
Md. Mahfuz Hasan Leon | 18STA044 | Goodness-of-Fit Test and Parameter Estimation Simulation; Kolmogorov-Smirnov (K-S) Test and Anderson-Darling (A-D) Test; Parameter Estimation; Simulation Outline; Output Analysis for Terminating Simulation
Royhan Sheikh | 19STA058 | Goodness-of-Fit Test and Parameter Estimation Simulation; Kolmogorov-Smirnov (K-S) Test and Anderson-Darling (A-D) Test; Parameter Estimation; Simulation Outline; Output Analysis for Terminating Simulation
Abdullah Al Mamun | 18STA002 | Simulation Case Studies
Ashik Mazumder | 18STA038 | Simulation Case Studies

Identifying the Distribution of Data in
Simulation
Identifying the data distribution in a simulation is crucial for interpreting
results accurately, predicting outcomes, and ensuring that the simulation
model realistically reflects real-world dynamics. Distributions describe
how data is spread across possible outcomes and provide insights into the
underlying process, helping to draw meaningful conclusions. Below is an
in-depth guide to identifying distributions in simulation.

1. Purpose of Identifying Distributions

• Improves Model Accuracy: Knowing the data’s distribution lets you use the right parameters, making simulation outputs more representative.
• Guides Statistical Analysis: Many statistical tests and confidence
interval calculations rely on certain distribution assumptions.
• Informs Performance Metrics: Knowing the distribution of key
metrics (like queue lengths, wait times) is crucial for understanding
system performance and identifying areas for improvement.
2. Common Distributions in Simulation Studies

Different simulation models often follow specific distributions:


• Normal Distribution
• Exponential Distribution
• Poisson Distribution
• Uniform Distribution
• Log-normal Distribution
• Triangular and Beta Distributions

3. Steps to Identify Distribution of Simulation Data

Step 1: Collect Data

Run the simulation multiple times to gather sufficient data for analysis.
Track variables of interest, like inter-arrival times, wait times, or service
times.

Step 2: Visual Inspection

Use histograms and box plots to visually inspect the data distribution
shape and spread.

Step 3: Quantitative Goodness-of-Fit Tests

Apply tests like the Kolmogorov-Smirnov, Chi-Square, and Anderson-Darling tests to evaluate how well data matches specific distributions.

Step 4: Calculate Descriptive Statistics

Analyze sample mean, variance, skewness, and kurtosis to determine key distribution characteristics.

Step 5: Use Probability Plots

Use Q-Q and P-P plots to visually assess how data aligns with theoretical
distributions.

Step 6: Parameter Estimation

Estimate parameters (like mean, variance, rate) using methods like MLE
or method of moments.
Step 7: Model Selection and Validation

Use criteria like AIC or validation tests to confirm model fit and predictive performance.
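
To make these steps concrete, here is a minimal sketch of the pipeline in Python using scipy.stats. It computes descriptive statistics (Step 4), fits several candidate distributions by maximum likelihood (Step 6), and compares them with a K-S test and AIC (Steps 3 and 7). The exponential service-time data and the candidate set are illustrative assumptions; note also that K-S p-values are optimistic when parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: service times collected from simulation runs.
data = rng.exponential(scale=3.0, size=500)

# Step 4: descriptive statistics to suggest candidate distributions.
print("mean:", np.mean(data), "var:", np.var(data, ddof=1))
print("skewness:", stats.skew(data), "kurtosis:", stats.kurtosis(data))

# Step 6: fit candidate distributions by maximum likelihood.
candidates = {"expon": stats.expon, "lognorm": stats.lognorm, "gamma": stats.gamma}
for name, dist in candidates.items():
    params = dist.fit(data)
    # Step 3: Kolmogorov-Smirnov test against the fitted distribution.
    ks_stat, p_value = stats.kstest(data, name, args=params)
    # Step 7: likelihood-based comparison (AIC = 2k - 2 ln L; lower is better).
    log_lik = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * log_lik
    print(f"{name}: KS={ks_stat:.4f}, p={p_value:.4f}, AIC={aic:.1f}")
```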

Output Analysis for Steady-State Simulation


In steady-state simulations, the objective is to analyze the long-term
behavior of a system, assuming it reaches equilibrium over time. Unlike
terminating simulations, which are run for a finite time, steady-state
simulations focus on collecting data after the system has "warmed up" and
stabilized.

1. Understanding Steady-State Simulations

A steady-state simulation observes a system over an extended period to reach and analyze its equilibrium conditions. This type of simulation ignores initial transients (start-up effects) and focuses on behavior after the system stabilizes.

Examples include:
- Manufacturing processes that run continuously.
- Telecommunications systems where calls arrive continuously.
- Network traffic analysis for long-running data flows.
2. Objectives of Output Analysis

For steady-state simulations, the primary objectives of output analysis include:
1. Estimating Long-Term Performance Measures: Determine metrics
like average wait times, queue lengths, or throughput over an extended
period.
2. Reducing Bias from Initial Transients: Discard initial "warm-up"
data that may bias the results.
3. Constructing Confidence Intervals: Ensure reliability in performance
estimates by calculating intervals for metrics such as mean, variance, and
utilization rates.

3. Key Components of Output Analysis

a) Warm-Up Period

The warm-up period is the initial phase of a simulation where transient effects are prominent before the system reaches equilibrium. To avoid bias:
- Visual Inspection: Plot key metrics over time to identify when the
system appears to stabilize.
- Heuristic Rules: Use domain knowledge (e.g., after observing a system
for a set time, such as a day in an ongoing service system).
- Statistical Techniques: Use statistical methods, such as Welch’s
method, to systematically determine the end of the warm-up period.
After identifying the warm-up period, discard data collected before this
point in the simulation.
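
As an illustration, here is a minimal sketch of a Welch-style warm-up check in Python. The synthetic replications, smoothing window, and 5% stability tolerance are all assumptions chosen for the example; in practice the smoothed curve is usually inspected visually rather than thresholded automatically.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical replications of a metric (e.g., queue length) that start
# in a transient state and settle toward a long-run mean of 5.0.
n_reps, horizon = 10, 1000
t = np.arange(horizon)
reps = 5.0 * (1 - np.exp(-t / 100)) + rng.normal(0, 0.5, size=(n_reps, horizon))

# Welch's procedure: average across replications at each time index,
# then smooth with a moving average of window w.
w = 25
ensemble_avg = reps.mean(axis=0)
smoothed = np.convolve(ensemble_avg, np.ones(w) / w, mode="valid")

# Heuristic: take the warm-up end as the first index where the smoothed
# curve stays within 5% of its final value.
final = smoothed[-1]
warmup_end = int(np.argmax(np.abs(smoothed - final) < 0.05 * abs(final)))
print("suggested warm-up period:", warmup_end)

# Discard warm-up observations before computing steady-state statistics.
steady_state = reps[:, warmup_end:]
```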

b) Replications or Batch Means

Two primary approaches are used to ensure accurate results in steady-state simulations:

1. Independent Replications: Run multiple, independent simulation replications with different random seeds, and collect steady-state observations from each replication. Calculate average metrics across replications to estimate steady-state performance.
2. Batch Means Method: In long single-run simulations, divide the data
collected after the warm-up into equally sized batches and calculate the
average for each batch. Treat each batch mean as a separate data point to
calculate overall averages and variances.
The batch means method is efficient for long-run simulations where
running multiple independent replications may be costly.

c) Analyzing and Summarizing Results

After obtaining data either from replications or batch means:


1. Mean Or Expected Value: Calculate the mean of each performance
measure over the steady-state period.
2. Variance and Standard Deviation: Determine the variability to
assess consistency over the long run.
3. Confidence Intervals: Construct confidence intervals to indicate the
precision of performance measure estimates.
For a 95% confidence interval for the mean of a performance metric X:

CI = X̄ ± t * (S / √n)

where:
- X̄ is the sample mean,
- S is the sample standard deviation,
- n is the number of data points (batch means or replication averages),
- t is the critical value from the t-distribution for the chosen confidence level.
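
A short Python sketch of the batch means calculation and the t-based interval above, using hypothetical post-warm-up observations (the data, batch count, and target mean are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical post-warm-up observations from one long run
# (e.g., queue lengths sampled over time).
observations = rng.normal(loc=4.2, scale=1.5, size=10_000)

# Batch means: split into n equally sized batches and average each.
n_batches = 20
usable = observations[: len(observations) // n_batches * n_batches]
batch_means = usable.reshape(n_batches, -1).mean(axis=1)

# 95% confidence interval using the t-distribution with n - 1 df.
x_bar = batch_means.mean()
s = batch_means.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n_batches - 1)
half_width = t_crit * s / np.sqrt(n_batches)
print(f"mean = {x_bar:.3f}, "
      f"95% CI = ({x_bar - half_width:.3f}, {x_bar + half_width:.3f})")
```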

d) Assessing Convergence and Stability

When determining if a simulation has reached steady state, consider:


- Visual Convergence: Use time plots of key metrics to visually assess whether values fluctuate around a stable mean.
- Statistical Tests for Convergence: Some tests (e.g., the sequential test) can confirm whether further data collection is unnecessary because the observed averages have stabilized.
4. Common Pitfalls and Considerations

- Insufficient Warm-Up: If the warm-up period is underestimated, transient effects can bias results. Reevaluate the warm-up period if results seem inconsistent.
- Batch Size in Batch Means Method: Batch sizes that are too small
may yield correlated batches, while excessively large batches may reduce
the number of observations, decreasing reliability.
- Autocorrelation: In continuous simulations, successive data points may
be autocorrelated, which can lead to underestimating the variability in the
performance measure. The batch means method can help mitigate this.

5. Reporting Results

In steady-state simulations, clear and precise reporting helps stakeholders understand the system’s long-term behavior. Report details should include:
1. Descriptive Statistics: Provide the mean, variance, and confidence
intervals for each performance measure.
2. Warm-Up Period Justification: Explain how the warm-up period was
chosen and its length.
3. Convergence Evidence: Show time plots or statistical tests to support
the claim that steady-state conditions were reached.
4. Interpretations and Recommendations: Discuss implications of the
steady-state results and suggest any operational changes based on
findings.

Example Scenario: Analyzing Average Queue Length in a Steady-State Simulation

Imagine simulating a call center where calls arrive continuously. To find the average number of customers in the queue after reaching steady state:

1. Run the Simulation: Let the simulation reach steady state by allowing
enough time for the warm-up period.
2. Collect Data Post-Warm-Up: Gather queue length data after
discarding the initial warm-up period.
3. Apply the Batch Means Method: Divide the data into batches to
calculate mean queue length for each batch.
4. Calculate Confidence Interval: Use batch means to construct a 95%
confidence interval for the mean queue length.
5. Interpret Results: If the average queue length exceeds acceptable
levels, consider changes (e.g., adjusting staff levels or routing processes).

Goodness-of-Fit Test and Parameter Estimation Simulation

This section introduces the goodness-of-fit test and parameter estimation concepts, and then outlines steps for simulating both. These are key methods for statistical inference, where you test how well a model fits data and estimate model parameters from observed data.

1. Goodness-of-Fit Test

A goodness-of-fit test helps to determine whether sample data fit a particular distribution, which is useful for verifying assumptions in data analysis.
Here, we’ll explore the most common goodness-of-fit tests and then go
through a simulation outline.

Common Types of Goodness-of-Fit Tests

1. Chi-square goodness-of-fit test:
- Typically used for categorical data.
- Tests whether the observed counts in categories differ from expected counts based on a specific distribution (e.g., a uniform distribution).

2. Kolmogorov-Smirnov (K-S) test:
- Suitable for continuous data.
- Compares the cumulative distribution function (CDF) of the observed data to the CDF of the expected distribution.

3. Anderson-Darling test:
- A variation of the K-S test with extra sensitivity to discrepancies in
the tails of the distribution.
Steps for a Chi-square Goodness-of-Fit Test Simulation

1. Define the Hypothesis:


- Null hypothesis (H₀): The data follows the expected distribution.
- Alternative hypothesis (H₁): The data does not follow the expected
distribution.

2. Generate Data:
- Create a sample set that you will test. For example, if testing a
uniform distribution across 5 categories, generate categorical data with
counts for each category.

3. Calculate Expected Frequencies:


- Determine the expected frequency for each category if the null
hypothesis were true. For a uniform distribution, this would simply be the
total number of observations divided by the number of categories.

4. Compute the Chi-Square Statistic:
- Use the formula:
χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ
where Oᵢ is the observed count and Eᵢ is the expected count for category i.

5. Determine the Critical Value and P-value:


- Compare the chi-square statistic to a critical value from the chi-square
distribution table, based on the chosen significance level (e.g., 0.05) and
degrees of freedom.

6. Interpret Results:
- A high chi-square statistic or low p-value indicates a poor fit,
suggesting that the observed distribution differs significantly from the
expected distribution.
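
The following sketch carries out these steps with scipy.stats.chisquare on hypothetical data for 5 categories; because the data are generated uniformly here, the test should usually fail to reject H₀:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Step 2: hypothetical categorical data -- 500 observations assigned
# to 5 categories (drawn uniformly, so H0 should hold).
observed = np.bincount(rng.integers(0, 5, size=500), minlength=5)

# Step 3: expected frequencies under a uniform null hypothesis.
expected = np.full(5, observed.sum() / 5)

# Steps 4-5: chi-square statistic and p-value.
chi2_stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2_stat:.3f}, p = {p_value:.4f}")

# Step 6: reject H0 at the 0.05 significance level if p < 0.05.
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```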
Kolmogorov-Smirnov (K-S) Test and Anderson-Darling (A-D) Test

This section covers the Kolmogorov-Smirnov (K-S) test and the Anderson-Darling (A-D) test, focusing on their purpose in evaluating the goodness of fit for continuous data and outlining the steps for simulation using these tests.

Kolmogorov-Smirnov (K-S) Test


The K-S test is a nonparametric test that evaluates whether a sample comes
from a specified continuous distribution, such as a normal or exponential
distribution. It measures the largest difference between the cumulative
distribution function (CDF) of the sample and the CDF of the hypothesized
distribution.

Key Concepts:

1. Test Statistic:

The K-S test statistic D is defined as:

D = maxₓ | F_obs(x) − F_exp(x) |

where F_obs(x) is the empirical CDF of the observed data and F_exp(x) is the CDF of the expected distribution.

2. Hypotheses:
- Null Hypothesis (H₀): The sample follows the specified distribution.
- Alternative Hypothesis (H₁): The sample does not follow the specified
distribution.

3. Interpretation:

If D is large and the p-value is below a chosen significance level (e.g., 0.05), we reject the null hypothesis, meaning the data does not follow the expected distribution.
Simulation Steps:

1. Generate Data: Simulate or sample data from a known distribution (e.g., a normal distribution).
2. Calculate Empirical CDF: Compute the empirical CDF from the
generated data.
3. Compare to Expected CDF: For each value in the data, calculate the
difference between the empirical CDF and the CDF of the hypothesized
distribution.
4. Find Maximum Difference: Identify the largest difference, which
serves as the K-S statistic.
5. Determine Significance: Compare the test statistic to the critical value
for your sample size and desired significance level. A large statistic or a
low p-value indicates a poor fit.
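
A minimal Python sketch of these steps, relying on scipy.stats.kstest to perform the empirical-CDF comparison internally; the N(0, 1) sample and the deliberately wrong exponential hypothesis are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Step 1: sample data from a standard normal distribution.
sample = rng.normal(loc=0.0, scale=1.0, size=200)

# Steps 2-4: kstest computes D = max |F_obs(x) - F_exp(x)| against
# the hypothesized N(0, 1) CDF.
d_stat, p_value = stats.kstest(sample, "norm", args=(0.0, 1.0))
print(f"against N(0, 1): D = {d_stat:.4f}, p = {p_value:.4f}")

# Step 5: a large D (small p) means a poor fit. Testing the same
# sample against an exponential CDF should be clearly rejected:
d_bad, p_bad = stats.kstest(sample, "expon")
print(f"against expon:  D = {d_bad:.4f}, p = {p_bad:.4f}")
```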

Anderson-Darling (A-D) Test


The A-D test, similar to the K-S test, evaluates the fit of a sample to a
specified distribution but with more sensitivity to the tails of the
distribution. This makes it particularly useful when detecting deviations in
the tails.

Key Concepts:

1. Test Statistic:

The A-D test statistic A² is based on the squared differences between the empirical and expected cumulative distributions, weighted more heavily in the tails:

A² = −n − (1/n) Σᵢ (2i − 1) [ln F(xᵢ) + ln(1 − F(x(n+1−i)))]

where F(x) is the CDF of the hypothesized distribution and the xᵢ are the ordered sample points.
2. Hypotheses:
- Null Hypothesis (H₀): The sample comes from the specified
distribution.
- Alternative Hypothesis (H₁): The sample does not follow the specified
distribution.
3. Interpretation:

If A² is large, the sample does not fit the expected distribution, especially
in the tails.

Simulation Steps:

1. Generate Data: Sample data from a known distribution.


2. Order Data and Compute CDF: Sort the data and calculate the
empirical CDF.
3. Calculate Weighted Differences: For each value, calculate the
weighted squared difference between the empirical and theoretical CDFs.
4. Sum and Normalize: Aggregate these differences according to the
formula for A², obtaining the Anderson-Darling statistic.
5. Compare to Critical Value: Use a significance table specific to the A-
D test to interpret whether the sample matches the specified distribution.
Large values of A² indicate poor fit.
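
A short sketch using scipy.stats.anderson, which implements the ordering, weighting, and A² computation described above; note that scipy reports critical values at fixed significance levels rather than a p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Step 1: sample data from a known normal distribution.
sample = rng.normal(loc=0.0, scale=1.0, size=200)

# Steps 2-4: scipy orders the data and evaluates the A-D statistic A^2.
result = stats.anderson(sample, dist="norm")
print("A^2 =", result.statistic)

# Step 5: compare A^2 against the tabulated critical values; reject the
# null at any level where A^2 exceeds the corresponding critical value.
for crit, sig in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > crit else "fail to reject"
    print(f"at {sig}% significance: critical = {crit:.3f} -> {verdict}")
```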

Both tests compare sample data with theoretical models, allowing for reliable conclusions and the verification of distributional assumptions in data modeling.

Parameter Estimation
Parameter estimation is essential for characterizing distributions by finding
the values of parameters that best fit the observed data. Maximum
Likelihood Estimation (MLE) is a common technique for this, as it
provides parameter estimates that maximize the probability of observing
the sample data.

Maximum Likelihood Estimation (MLE) for a Normal Distribution
1. Define the Model and Parameters:
- For a normal distribution, two parameters characterize the
distribution: mean (μ) and variance (σ²).
2. Write the Likelihood Function:
- The likelihood function is the probability of observing the data given specific values for μ and σ². For a normal distribution with observed data points X₁, X₂, …, Xₙ, the likelihood function L(μ, σ²) is:
L(μ, σ²) = Π (1 / √(2πσ²)) · exp(−(Xᵢ − μ)² / (2σ²))
- This function is complex to maximize directly because of the product, so we often take the natural logarithm of the likelihood function (the log-likelihood) to simplify.

3. Maximize the Likelihood Function:


- Differentiate the log-likelihood with respect to μ and σ², set these
derivatives to zero, and solve for μ and σ².
- For the normal distribution, this yields:
μ̂ = (1/n) Σ Xᵢ
σ̂² = (1/n) Σ (Xᵢ − μ̂)²

4. Interpret the Estimates:


- These values are the MLE estimates, meaning they make the observed
data as likely as possible under the assumed normal distribution.
- With a large sample size, the MLE estimates should closely
approximate the true population parameters if the sample is
representative.
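
The sketch below computes these closed-form MLE estimates and cross-checks them by numerically minimizing the negative log-likelihood; the sample parameters and optimizer starting point are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
x = rng.normal(loc=10.0, scale=2.0, size=1000)
n = len(x)

# Closed-form MLE estimates (from setting the score equations to zero).
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)   # note: divides by n, not n - 1

# Cross-check: minimize the negative log-likelihood numerically.
def neg_log_lik(params):
    mu, log_sigma2 = params               # optimize log(sigma^2) so it stays positive
    sigma2 = np.exp(log_sigma2)
    return 0.5 * n * np.log(2 * np.pi * sigma2) + np.sum((x - mu) ** 2) / (2 * sigma2)

res = minimize(neg_log_lik, x0=[5.0, 1.0])
print("closed form:", mu_hat, sigma2_hat)
print("numerical:  ", res.x[0], np.exp(res.x[1]))
```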

Simulation Outline
Here’s an outline of how you could simulate these processes for a hands-on understanding.

Goodness-of-Fit Test Simulation

1. Generate Data: Simulate a dataset with categorical values. For example, randomly assign counts to each of 5 categories.
2. Define Expected Distribution: For uniform distribution, the expected
count for each category is total observations divided by the number of
categories.
3. Calculate Chi-square Statistic: Compute the chi-square statistic to
measure the discrepancy between observed and expected counts.
4. Interpret Results: Use a chi-square distribution table to assess
whether the observed data likely fits the expected distribution.
Parameter Estimation Simulation

1. Generate Data: Simulate data drawn from a normal distribution with known mean and variance. For example, set μ = 10 and σ = 2.
2. Estimate Parameters: Use the MLE formulas to calculate the sample mean and sample variance as estimates for μ and σ².

3. Compare with True Values: Since you generated the data with
known parameters, you can compare your estimated values with the true
values to gauge accuracy.
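
A compact sketch of this parameter-estimation simulation, using the μ = 10, σ = 2 example from Step 1 (the sample size is an assumption):

```python
import numpy as np

rng = np.random.default_rng(2024)

# Step 1: generate data with known parameters mu = 10, sigma = 2.
true_mu, true_sigma = 10.0, 2.0
sample = rng.normal(true_mu, true_sigma, size=5000)

# Step 2: MLE estimates (sample mean; variance with the 1/n divisor).
mu_hat = sample.mean()
sigma_hat = np.sqrt(np.mean((sample - mu_hat) ** 2))

# Step 3: compare estimates with the true values.
print(f"mu:    true = {true_mu},  estimate = {mu_hat:.4f}")
print(f"sigma: true = {true_sigma}, estimate = {sigma_hat:.4f}")
```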

Output Analysis for Terminating Simulation


Output analysis in terminating simulations is essential for assessing the accuracy, reliability, and interpretation of results upon completion of a simulation. Terminating simulations, also known as finite-horizon simulations, are those that run for a fixed time or until a specific event occurs (e.g., reaching a deadline or processing a set number of tasks). Here’s a comprehensive breakdown of output analysis for these simulations:

1. Understanding Terminating Simulations

In a terminating simulation, the goal is to observe and analyze the results within a well-defined period or set of conditions. Unlike steady-state simulations, which run indefinitely until a stable distribution is reached, terminating simulations often reflect real-world systems that operate within a specified duration. Examples include:
- Customer service operations from opening to closing time.
- Production batch runs that begin and end at set intervals.
- Simulating a queue system that terminates when a target number of
customers is served.

2. Objectives of Output Analysis

Output analysis in terminating simulations focuses on understanding key performance measures and ensuring the results accurately represent the system's performance. Primary objectives include:

1. Estimating Performance Measures: Calculate metrics like average wait times, number of entities processed, and resource utilization.
2. Determining Confidence Intervals: Use statistical techniques to
provide intervals that indicate the precision of estimates.
3. Testing Hypotheses: Conduct hypothesis testing to compare the
system's performance against expected standards or other systems.

3. Key Components of Output Analysis

a) Collecting Output Data

In terminating simulations, data collection is bounded by the end condition. Data might include:

- Event Counts: Number of entities processed or events completed within the time frame.
- Time Metrics: Waiting times, idle times, and processing times.
- Resource Utilization: Percentage of time resources were in use.

b) Replications and Sampling

Since output from a single simulation run may not be representative, multiple replications (independent runs with different random seeds) are usually conducted to provide more reliable results. Each replication can yield slightly different outputs due to the stochastic nature of simulations.
- Number of Replications: Increase the number of replications to reduce
variability and improve the accuracy of the estimated performance
measures.
- Sample Averages: Compute averages across all replications to estimate
central tendencies of performance metrics.

c) Analyzing and Summarizing Results

After completing multiple replications, use summary statistics to analyze the results:

1. Mean (Expected Value): Calculate the mean of performance measures across replications.
2. Variance and Standard Deviation: Measure variability to understand
consistency.
3. Confidence Intervals: Construct confidence intervals around
performance measures to capture the uncertainty of estimates.

For a 95% confidence interval for the mean of a performance metric X:

CI = X̄ ± z * (σ / √n)

where:
- X̄ = average value of X across replications,
- σ = standard deviation of X,
- n = number of replications,
- z = critical value from the z-distribution (typically 1.96 for 95%
confidence).

This interval provides a range where the true mean likely falls, helping
evaluate result reliability.
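
In code, the interval might be computed as below, using hypothetical replication averages; with a small number of replications, the t critical value is usually preferred over z = 1.96:

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical average wait times from 30 independent replications.
replication_means = rng.normal(loc=6.5, scale=1.2, size=30)

n = len(replication_means)
x_bar = replication_means.mean()
sigma = replication_means.std(ddof=1)

# 95% confidence interval with z = 1.96.
z = 1.96
half_width = z * sigma / np.sqrt(n)
print(f"mean = {x_bar:.3f}, "
      f"95% CI = ({x_bar - half_width:.3f}, {x_bar + half_width:.3f})")
```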

d) Hypothesis Testing

Hypothesis testing is useful when comparing the simulation’s performance against a baseline or different scenarios:
- Null Hypothesis (H₀): Assumes no significant difference between
observed results and expected values or between two different scenarios.
- Alternative Hypothesis (H₁): Assumes a significant difference exists.

A common test is the t-test for means to determine if observed differences are statistically significant. When comparing two scenarios (e.g., current vs. improved system):
1. Calculate the difference in means across replications.
2. Use the t-test to assess if the difference is statistically significant.
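
As an illustration, the sketch below compares hypothetical replication averages from two scenarios with a two-sample t-test via scipy.stats.ttest_ind (Welch's variant, which does not assume equal variances across scenarios):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)

# Hypothetical average wait times per replication for two scenarios:
# the current system and an improved one (e.g., with an extra server).
current = rng.normal(loc=6.5, scale=1.2, size=30)
improved = rng.normal(loc=5.8, scale=1.1, size=30)

# Welch's two-sample t-test on the replication means.
t_stat, p_value = stats.ttest_ind(current, improved, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant difference" if p_value < 0.05 else "no significant difference")
```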

4. Common Pitfalls and Considerations

- Warm-Up Periods: While warm-up is mainly a concern for steady-state simulations, it may still be relevant if the system requires time to stabilize before data collection.
- Output Variability: High variability across replications can indicate
the need for more replications to obtain precise estimates.
- Assumptions of Independence: Ensure replications are independent,
especially if random seeds aren’t changed between runs.

5. Reporting Results

In terminating simulations, reporting clear and comprehensive results is critical. Include:

1. Descriptive Statistics: Provide mean, standard deviation, and confidence intervals for key performance metrics.
2. Visualization: Use graphs like boxplots or histograms to illustrate data
distribution and variability.
3. Interpretations: Discuss the implications of the results, including any
significant trends or deviations from expected behavior.

6. Example Scenario: Analyzing Customer Wait Times in a Terminating Simulation

Let’s consider an example where you simulate customer wait times in a service center that operates from 9 AM to 5 PM. The goal is to analyze the average wait time by the end of the day.
1. Run the Simulation: Collect wait times for each customer who visited
during the day across multiple replications.
2. Calculate Key Metrics: Compute average wait time, total number of
customers served, and standard deviation for wait times.
3. Construct Confidence Intervals: Generate 95% confidence intervals
to understand the reliability of the mean wait time estimate.
4. Interpret Results: If the confidence interval for wait time is too wide
or the mean wait time exceeds an acceptable level, you might recommend
operational changes (e.g., adding staff).

Simulation Case Studies


Definition: A simulation case study is a detailed examination of a specific
real-world situation or system that is modeled and analyzed using computer
simulation techniques. It involves creating a virtual representation of the
system to study its behavior under various conditions and to predict future
outcomes.
Key Components of a Simulation Case Study:

1. Objective

• Each case study begins with a clear objective, outlining the problem
or situation that the simulation aims to address. This could range
from improving efficiency in a manufacturing process to training
medical personnel for emergency situations.

2. Methodology

• The case study describes the simulation techniques used, such as discrete event simulation, agent-based modeling, or system dynamics. It details how the simulation was set up, including the data inputs, assumptions made, and the modeling tools employed.

3. Implementation

• This section outlines how the simulation was executed, including the
scenarios tested. It often involves running multiple simulations to
explore different outcomes based on varying parameters.

4. Results

• The results section presents the findings from the simulation, highlighting key metrics or insights gained. This could include improvements in performance, cost savings, or enhanced understanding of complex systems.

5. Conclusions

• Finally, case studies summarize the implications of the findings, discussing how they can be applied in practice, potential limitations of the simulation, and recommendations for future work.
Difference between simulation and case study
The main difference between a simulation and a case study is that a
simulation is a process that can model and test out different scenarios,
while a case study is a finished product that provides information about a
specific situation:

Simulation vs. Case Study

I. A simulation is a process that models a situation or environment to mimic how it would behave. A case study is a finished product that provides in-depth information about a specific situation or problem.

II. Simulations can be used to test out different scenarios, evaluate the impact of new procedures, or predict how a system will perform. Case studies can be used to provide context and help understand a situation.

III. Simulations can also help develop credible intelligence for decision making. Case studies can be effective for building trust with potential customers by sharing real-life examples.

Examples of simulation case studies

Here are some specific examples of simulation case studies across various fields:

1. Healthcare Simulation
Case Study: Simulation Training for Surgical Teams
Objective: Improve teamwork and surgical skills in high-stakes environments.
Method: A hospital developed simulation scenarios using mannequins and virtual reality to mimic complex surgeries.
Results: Post-training assessments showed a 40% increase in team performance scores and improved patient safety metrics.

2. Manufacturing Simulation
Case Study: Lean Manufacturing Process Improvement
Objective: Streamline production to reduce waste.
Method: A manufacturing company used discrete event simulation to model its assembly line, testing various layouts and workflows.
Results: The simulation identified bottlenecks, leading to a redesigned workflow that increased throughput by 30% and reduced lead times.

3. Supply Chain Management
Case Study: Logistics Optimization for E-Commerce
Objective: Enhance delivery efficiency for a retail company.
Method: A simulation model analyzed different distribution strategies and inventory management practices.
Results: Implementing the recommended strategies led to a 25% reduction in delivery times and a 15% decrease in logistics costs.

4. Urban Planning
Case Study: Traffic Flow Simulation in a Major City
Objective: Assess the impact of new traffic signals on congestion.
Method: A city used traffic simulation software to model current and proposed traffic patterns under various conditions.
Results: The analysis revealed optimal signal timings, leading to a projected 20% reduction in congestion during peak hours.

5. Environmental Science
Case Study: Coastal Erosion Impact Assessment
Objective: Evaluate the effects of climate change on coastal areas.
Method: Researchers used a simulation model to predict erosion rates under various sea-level rise scenarios.
Results: Findings informed conservation strategies, identifying critical areas for intervention and helping guide policy decisions.

6. Financial Services
Case Study: Portfolio Risk Assessment
Objective: Assess the risk exposure of an investment portfolio.
Method: A financial institution employed Monte Carlo simulations to model potential market fluctuations and their impacts.
Results: The simulation revealed vulnerabilities in the portfolio, leading to adjustments that improved overall risk management.

7. Military Training
Case Study: Combat Readiness Simulation
Objective: Enhance soldier preparedness for urban operations.
Method: A military unit used virtual reality simulations to create realistic combat scenarios for training.
Results: Soldiers reported improved situational awareness and tactical decision-making, with performance in drills increasing by 35%.

8. Education
Case Study: Virtual Labs for Engineering Students
Objective: Improve practical skills and theoretical understanding.
Method: A university implemented virtual simulation labs where students could design and test engineering projects.
Results: Student engagement and performance in practical assessments increased by 25%, enhancing their overall learning experience.
