
B.M.S. COLLEGE OF ENGINEERING
Bengaluru-560019
Autonomous College, affiliated to Visvesvaraya Technological University, Belgaum

A Project Report on

“VOICE BASED VIRTUAL ASSISTANT”


Submitted in partial fulfilment of the requirements for the award of the degree of

Bachelor of Engineering in Electronics and Communication Engineering

By

SATHYABUSHAN M N 1BM20EC136

SUDEEP P 1BM20EC160

SUHAS H 1BM20EC163

SYED ASIF PASHA 1BM20EC170

Under the guidance of


TBD

(Assistant Professor, Dept. of EC)

Department of Electronics and Communication Engineering, BMSCE

Academic Year

2023-2024

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

BMS COLLEGE OF ENGINEERING


BULL TEMPLE ROAD, BASAVANAGUDI, BENGALURU-560019
BMS COLLEGE OF ENGINEERING
Autonomous college, affiliated to VTU
Bull Temple Road, Bengaluru – 560 019
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE

This is to certify that the project entitled “VOICE BASED VIRTUAL ASSISTANT” is a bonafide work carried out by SATHYABUSHAN M N (1BM20EC136), SUDEEP P (1BM20EC160), SUHAS H (1BM20EC163) and SYED ASIF PASHA (1BM20EC170), in partial fulfillment for the award of the degree of Bachelor of Engineering in Electronics and Communication Engineering during the academic year 2023-24. The project report has been approved as it satisfies the academic requirements in respect of project work prescribed for the said degree.

Guide
( TBD )
Assistant Professor
Department of ECE
BMS College of Engineering

Head of Department
( Dr. Siddappaji )
Professor and Head
Department of ECE
BMS College of Engineering

Principal
( Dr. Muralidhara S )
BMS College of Engineering

External Viva
Name of the Examiners Signature with Date

1.

2.
DECLARATION
We, SATHYABUSHAN M N (1BM20EC136), SUDEEP P (1BM20EC160), SUHAS H (1BM20EC163) and SYED ASIF PASHA (1BM20EC170), hereby declare that the project work entitled “VOICE BASED VIRTUAL ASSISTANT” is a bonafide work and has been carried out by us under the guidance of TBD, Assistant Professor, Department of Electronics and Communication Engineering, BMS College of Engineering, Bengaluru, in partial fulfillment of the requirements for the award of the degree of Bachelor of Engineering in Electronics and Communication Engineering, Visvesvaraya Technological University, Belagavi, during the academic year 2023-24. We further declare that, to the best of our knowledge and belief, this project work has not been submitted either in part or in full to any other university for the award of any degree.

Place: Bengaluru SATHYABUSHAN M N : 1BM20EC136


Date: SUDEEP P : 1BM20EC160
SUHAS H : 1BM20EC163
SYED ASIF PASHA : 1BM20EC170

ACKNOWLEDGEMENTS

We take this opportunity to express our profound gratitude to the respected Principal, Dr. S. Muralidhara, BMS College of Engineering, for providing a congenial environment to work in. Our sincere gratitude to Dr. Siddappaji, Head of the Department, Electronics and Communication Engineering, for encouraging us and providing the opportunity to carry out project work in the department. We heartily thank our guide TBD for the guidance and constant encouragement throughout the course of this project, without which this project would not have been successful.
A number of people, in their own capacities, have helped us in carrying out this project work. We take this opportunity to thank them all.

1) SATHYABUSHAN M N: 1BM20EC136

2) SUDEEP P: 1BM20EC160

3) SUHAS H: 1BM20EC163

4) SYED ASIF PASHA: 1BM20EC170



ABSTRACT

This project introduces a web-based Fire Detection Application built using Streamlit and the YOLO
deep learning model, designed for real-time fire detection in video files. Users can upload MP4
videos, which are processed frame by frame to identify the presence of fire. The application
leverages a fine-tuned YOLO model specifically trained for fire detection, ensuring accurate and
reliable results. Key features include adjustable parameters, allowing users to modify the confidence
threshold for detections and set frame skipping to enhance performance. During processing,
annotated frames with visualized detections are displayed dynamically on the interface. Once the
detection is complete, the processed video with fire annotations can be downloaded for further
analysis. OpenCV is used for efficient video handling, while Streamlit’s interactive features ensure a
user-friendly experience. This lightweight and intuitive application is well-suited for various
scenarios, such as monitoring surveillance footage or evaluating environmental risks, providing a
practical tool for fire detection tasks.

LIST OF FIGURES
FIGURE 3.1: Block diagram

FIGURE 3.2: Project Flow



CONTENTS

CHAPTER 1: INTRODUCTION
1.1 INTRODUCTION
1.2 AREA OF APPLICATION
1.3 PROBLEM DEFINITION
1.4 PROPOSED SOLUTION
1.5 OBJECTIVE OF THE PROJECT
CHAPTER 2: LITERATURE SURVEY
CHAPTER 3: METHODOLOGY AND IMPLEMENTATION
3.1 BLOCK DIAGRAM
3.1.1 Speech to Text Converter
3.1.2 AI Voice Assistant
3.1.3 Text to Speech Converter
3.2 PROJECT FLOW
3.3 COMPONENT REQUIREMENTS AND BUDGET
3.4.1 SYSTEM SPECIFICATIONS
Table 3.1: System Specifications
MIC SPECIFICATIONS
Table 3.2: Mic Specifications
3.5.1 SOFTWARE DESCRIPTION
3.5.1.1 ChatGPT
3.5.1.2 .NET
3.5.1.3 .NET MAUI
3.5.1.4 XAML
3.5.1.5 C#
3.5.1.6 Speech Recognizer
3.5.1.6.1 All features
3.5.1.7 SQLite
CHAPTER 4: RESULTS AND DISCUSSION
CHAPTER 5: CONCLUSION
REFERENCES
CHAPTER 1: INTRODUCTION

1.1 INTRODUCTION

Fire detection plays a vital role in ensuring safety across industrial, residential, and environmental settings, from homes to factories and forests. Traditional fire detection methods rely on smoke sensors, heat detectors, flame sensors, or manual monitoring, all of which are limited in accuracy and scope and often suffer from delays and false alarms. With advancements in artificial intelligence (AI) and computer vision, fire detection has become far more robust and reliable: deep learning models such as YOLO (You Only Look Once) can identify fire in real time directly from video footage. This project explores an AI-driven fire detection application built on YOLOv11, chosen for its high accuracy and quick response. By combining the model's capabilities with an intuitive user interface, the application provides a scalable and reliable solution for enhancing fire safety and prevention.


1.2 AREA OF APPLICATION

The fire detection application has a wide range of practical uses. In industrial settings, it can monitor factories and warehouses for potential fire hazards, reducing the risk of significant damage and helping ensure compliance with safety standards in manufacturing units. In residential spaces, the system can integrate with home surveillance systems to provide early warnings of fire incidents. Environmental applications include identifying wildfires or forest fires, which is crucial for minimizing environmental destruction, particularly in remote areas where traditional detection systems are unavailable or unreliable. Public infrastructure such as airports, shopping malls, and train stations, as well as cultural heritage sites, can also benefit from enhanced fire safety through real-time detection, and emergency response teams can use the tool to receive timely alerts, enabling faster and more effective rescue operations. The system's ability to analyze video feeds in real time makes it ideal for integration with surveillance systems for continuous monitoring.


1.3 PROBLEM DEFINITION

Fire incidents are among the most devastating hazards, leading to loss of life, significant property
damage, and environmental degradation. Traditional fire detection systems, such as smoke and heat
detectors, often exhibit several limitations. These include delayed detection times, especially in open or
high-ceiling environments, and vulnerability to false alarms caused by factors like steam, dust, or cooking
fumes. Moreover, these systems require costly hardware installations and frequent maintenance, making
them impractical for widespread deployment in certain settings, such as rural areas, forest reserves, or
heritage sites.

Another significant challenge lies in their inability to provide visual verification of fire incidents, which
is crucial for assessing the severity and planning an appropriate response. In environments where video
surveillance is already in place, leveraging existing infrastructure to enhance fire detection capabilities
remains underexplored.

This project addresses these gaps by introducing an AI-powered fire detection system that integrates
seamlessly with video surveillance systems. The proposed system aims to reduce dependency on
standalone fire detection hardware, minimize false positives, and enable faster and more reliable fire
detection. By utilizing real-time video analysis, this solution ensures prompt identification of potential
fire hazards, allowing for timely intervention and reducing the overall risk of catastrophic outcomes.

1.4 PROPOSED SOLUTION

To overcome the limitations of traditional fire detection systems, this project proposes a cutting-edge solution leveraging YOLOv11, an advanced object detection model renowned for its speed and accuracy.
The solution employs a pre-trained YOLO model, further fine-tuned on specialized fire detection datasets
to enhance its precision in identifying flames under diverse conditions, such as varying lighting,
occlusions, and dynamic backgrounds.

The system processes video feeds frame by frame, analyzing them for signs of fire in real-time. To
optimize computational efficiency, it employs a frame-skipping mechanism, ensuring that the system
balances detection accuracy with processing speed. This makes it suitable for deployment on devices with
limited computational resources, such as edge devices or low-power servers.

Key features of the proposed solution include:

1. Streamlit Interface: The application provides an easy-to-use web interface built with Streamlit.
Users can upload video files, view real-time detections, and adjust settings like confidence
thresholds and frame skip values to suit their specific requirements.
2. Dynamic Processing: The system is designed to adapt to different video resolutions and frame
rates, ensuring compatibility with a wide range of surveillance setups.
3. Annotated Outputs: Detected fire incidents are visually highlighted in the video output, enabling
clear and intuitive understanding for users.
4. Downloadable Results: The processed video, with fire detections annotated, can be downloaded
for documentation or further analysis.
5. Low Latency: By skipping unnecessary frames and focusing on key detections, the system
ensures minimal latency, making it ideal for real-time monitoring applications.
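The interaction of features 1 and 5 can be made concrete with a small back-of-the-envelope sketch. The function below is illustrative only (the name and numbers are not taken from the application's code); it estimates how many frames per second of footage actually reach the detector for a given frame-skip setting:

```python
def processed_per_second(fps: int, frame_skip: int) -> int:
    """Frames actually sent to the detector per second of footage,
    when `frame_skip` frames are dropped after each processed one."""
    step = frame_skip + 1
    return (fps + step - 1) // step  # ceiling division

# At 30 fps, a skip of 4 cuts the detector's workload roughly five-fold.
for skip in (0, 2, 4):
    print(skip, processed_per_second(30, skip))
```

Raising the skip value trades temporal coverage for throughput, which is why the interface exposes it as a user-adjustable parameter rather than a fixed constant.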

The proposed system goes beyond mere fire detection by providing actionable insights that facilitate
faster decision-making. Its flexibility, cost-effectiveness, and ease of integration with existing
infrastructure make it a promising tool for enhancing fire safety across various domains, including
residential, industrial, and environmental applications.

Potential future extensions of the solution include adding features like smoke detection, integrating with
IoT-enabled fire alarms, and deploying the system on drones for monitoring large or remote areas, such as
forests or industrial complexes.


1.5 OBJECTIVE OF THE PROJECT

The primary objective is to design a reliable, fast, and user-friendly fire detection system. Specific goals
include achieving high detection accuracy, minimizing false positives, and providing an interface for easy
use. Another objective is to ensure compatibility with various video formats and surveillance systems,
making the solution versatile and widely applicable.


CHAPTER 2: LITERATURE SURVEY


Fire detection systems have long relied on traditional techniques such as smoke and heat sensors.
However, these approaches often fail in outdoor environments or large indoor spaces, leading to delayed
responses or high false alarm rates. Recent advancements in computer vision and artificial intelligence
have paved the way for more robust and reliable fire detection systems. This chapter reviews the
progression of fire detection technologies and the role of deep learning in revolutionizing this domain.

2.1 Traditional Fire Detection Systems

Traditional fire detection systems, including smoke detectors, flame sensors, and infrared sensors, have
been widely adopted for decades. While effective in specific scenarios, these systems are prone to
limitations such as slow response times, high installation and maintenance costs, and susceptibility to
environmental factors like humidity or dust.

For example, G. Marbach et al. (2006) in "An Image Processing Technique for Fire Detection in Video
Images" demonstrated that traditional sensors struggle to identify fire in complex visual scenarios,
necessitating the use of visual-based detection methods.

2.2 Evolution of Computer Vision in Fire Detection

The adoption of image processing techniques marked a significant leap in fire detection. Early systems
utilized handcrafted features like color and motion analysis to identify flames. However, these methods
lacked robustness when applied to diverse environments or under varying lighting conditions.

With the advent of machine learning, researchers began training models on fire datasets to improve
detection accuracy. Celaya et al. (2012) in "A Bayesian Network Model for Video Fire Detection"
highlighted the use of probabilistic models to enhance decision-making. However, these approaches were
computationally intensive and lacked scalability for real-time applications.

2.3 Deep Learning and Real-Time Fire Detection

Deep learning has revolutionized fire detection, enabling systems to process large datasets and recognize
fire patterns with high accuracy. Among the various models, YOLO (You Only Look Once) has gained
significant attention for its ability to perform object detection in real-time.

Studies comparing YOLO with models like SSD (Single Shot MultiBox Detector) and Faster R-CNN
emphasize YOLO's superiority in speed without compromising accuracy. Redmon et al. (2016)
introduced YOLO as a groundbreaking object detection framework, and subsequent versions, such as
YOLOv4 and YOLOv5, have demonstrated improved performance for various applications, including fire
detection.

L. Akhloufi et al. (2019) in "Deep Learning for Fire Detection Using Unmanned Aerial Vehicles"
explored the use of YOLO for detecting forest fires using drone footage, showcasing the model's
adaptability to different scenarios. Similarly, Chen et al. (2020) in "Real-Time Fire Detection Using
Deep Learning Techniques" highlighted YOLO's ability to balance accuracy and computational
efficiency, making it suitable for deployment in surveillance systems.

2.4 Challenges in AI-Based Fire Detection

Despite its advantages, AI-based fire detection systems face challenges, particularly in complex
environments. False positives due to reflections, sunlight, or other visual disturbances remain a critical
issue. Basu et al. (2015) in "An Intelligent Decision Support System for Fire Detection Using Video
Processing" identified the need for robust preprocessing techniques to filter out noise and improve
detection reliability.

Moreover, scalability to different resolutions and real-time processing constraints require optimizing
models to run on edge devices or low-power systems. Recent advancements in lightweight architectures,
such as YOLO-Nano, aim to address these challenges, as discussed by Qin et al. (2021) in "Real-Time
Lightweight Fire Detection Framework Using YOLO-Nano."

2.5 Foundation for Methodology

The reviewed literature underscores the potential of AI-based systems to revolutionize fire detection. By
leveraging YOLOv11, the proposed project builds on the strengths of previous studies while addressing
their limitations. Fine-tuning YOLO on specialized fire datasets and optimizing frame skipping for real-time performance ensures a practical and efficient solution.

CHAPTER 3: METHODOLOGY AND IMPLEMENTATION

3.1 BLOCK DIAGRAM

Fig 3.1: Block diagram

The block diagram represents the workflow of an AI-based fire detection system using YOLO. This
system is designed to process video input, detect instances of fire in real-time, and produce an annotated
output video with highlighted detections. The diagram is structured into four major components, each
representing a crucial stage in the system’s functionality.

The first component, Input Video Feed, is where the process begins. This block represents the raw video
footage supplied to the system. The video can be sourced from surveillance cameras, drone feeds, or user-uploaded files. This flexibility allows the system to operate in various scenarios, including indoor
environments, forest monitoring, or industrial safety inspections.

The second block, Preprocessing (Frame Extraction & Resizing), performs critical preparatory
operations on the input video. Video files consist of multiple frames per second, and processing all frames
can be computationally expensive. To optimize performance, the system extracts specific frames at
regular intervals (based on user settings for frame skipping). Each extracted frame is resized to match the
input dimensions expected by the YOLO model, ensuring compatibility and reducing computational
overhead. This preprocessing stage also handles tasks like color normalization and format adjustments,
which are vital for improving detection accuracy.
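The two preparatory operations can be sketched with stdlib-only stand-ins. Here a frame is a plain 2-D list and resizing is nearest-neighbor; the actual application decodes real frames with OpenCV and resizes them to the YOLO input size, so everything below, names included, is illustrative:

```python
def sample_frames(frames, interval):
    """Keep one frame every `interval` frames (frame skipping)."""
    return frames[::interval]

def resize_nearest(frame, out_h, out_w):
    """Nearest-neighbor resize of a 2-D pixel grid to out_h x out_w."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

frame = [[0, 1], [2, 3]]              # a 2x2 "image"
big = resize_nearest(frame, 4, 4)     # upscale to 4x4
print(big[0])  # [0, 0, 1, 1]
```

In the real pipeline these two steps run on every sampled frame before it reaches the model, together with color normalization and format adjustments.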

The core of the system lies in the third block, YOLO Model (Fire Detection). YOLO, or "You Only
Look Once," is a state-of-the-art object detection framework known for its speed and accuracy. The
system uses a pre-trained YOLO model fine-tuned on fire datasets to identify flames within the frames.
The model takes each preprocessed frame as input and performs real-time object detection, producing
bounding boxes around fire regions. Alongside the bounding boxes, confidence scores are calculated,
indicating the probability of the detected object being fire. These scores are compared against a user-defined confidence threshold, ensuring only high-confidence detections are considered.
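The detection stage's output can be modeled as a list of boxes with confidence scores, with a frame flagged only when at least one box clears the user-defined threshold. This stdlib-only sketch is hypothetical; in the real system the boxes come from the fine-tuned YOLO model:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x1: int          # bounding-box corners in pixel coordinates
    y1: int
    x2: int
    y2: int
    confidence: float
    label: str = "fire"

def frame_has_fire(detections, threshold):
    """A frame is flagged only if at least one fire box clears the threshold."""
    return any(d.label == "fire" and d.confidence >= threshold
               for d in detections)

boxes = [Detection(40, 60, 120, 150, 0.78),
         Detection(300, 20, 340, 55, 0.22)]
print(frame_has_fire(boxes, 0.35))  # True
print(frame_has_fire(boxes, 0.80))  # False
```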

Finally, the last block, Output Processed Video with Detections, represents the output stage. Once fire
is detected in a frame, the system overlays annotations, such as bounding boxes and labels, onto the
original frame. These annotated frames are compiled into a new video file, preserving the original frame
rate and resolution. This processed video is then made available for review, enabling users to visualize the
detected fire occurrences. In addition to the video output, the system can generate alerts or logs for further
analysis, making it suitable for integration into larger safety frameworks.
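The output stage's bookkeeping, attaching annotations to flagged frames and emitting a detection log, can be sketched as follows (frames are stand-in dicts here; the actual system draws bounding boxes with OpenCV and writes a video file):

```python
def annotate_and_log(frames, scores, threshold):
    """Return (annotated_frames, log), where each log entry records the
    index and score of a frame with an above-threshold detection."""
    annotated, log = [], []
    for i, (frame, score) in enumerate(zip(frames, scores)):
        if score is not None and score >= threshold:
            frame = dict(frame, fire=True, score=score)  # annotate a copy
            log.append((i, score))
        annotated.append(frame)
    return annotated, log

frames = [{"id": i} for i in range(4)]
scores = [None, 0.8, 0.2, 0.5]   # per-frame best confidence, None = no box
out, log = annotate_and_log(frames, scores, 0.35)
print(log)  # [(1, 0.8), (3, 0.5)]
```

The log half of this sketch corresponds to the alerts or logs the system can generate alongside the annotated video.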

The overall design emphasizes efficiency and accuracy. By leveraging YOLO’s capabilities and
implementing preprocessing techniques, the system ensures timely detection while minimizing
computational costs. The modular nature of the workflow allows for customization, such as adjusting
frame skip rates or confidence thresholds, making the system adaptable to various environments and user
needs.

This block diagram and its explanation provide a clear understanding of the system’s architecture,
illustrating the seamless integration of AI and video processing for fire detection.

3.5.1 SOFTWARE DESCRIPTION

This chapter describes the technologies and tools used in the project. The YOLOv11 model, built using
PyTorch, forms the backbone of the detection system. OpenCV handles video processing, while Streamlit
provides a web-based interface for user interaction. Temporary files and video processing pipelines are
managed using Python's standard libraries. The app's functionality is enhanced with features like progress
bars, real-time previews, and video download options. The modular design ensures scalability and ease of
integration with other systems.
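The modular design described above can be sketched as a chain of small, swappable stages. Each stage below is a plain function standing in for OpenCV decoding, YOLO inference, and Streamlit display; the names and the toy detector are illustrative, not the application's actual code:

```python
def run_pipeline(frames, stages):
    """Thread a frame list through each processing stage in order."""
    for stage in stages:
        frames = stage(frames)
    return frames

skip_half   = lambda frames: frames[::2]                        # frame skipping
fake_detect = lambda frames: [(f, f % 3 == 0) for f in frames]  # stand-in detector
only_fire   = lambda pairs: [f for f, hit in pairs if hit]      # keep flagged frames

print(run_pipeline(list(range(10)), [skip_half, fake_detect, only_fire]))
```

Because each stage only consumes and produces a list, any one of them can be replaced, for instance swapping the detector, without touching the others, which is the scalability property the modular design aims for.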


CHAPTER 4: RESULTS AND DISCUSSION

The results demonstrate the system's capability to detect fire in various video scenarios, including low-light and cluttered environments. With a confidence threshold of 0.35, the model achieves a balance between detection accuracy and false positives. Frame skipping optimizes processing time, making real-time detection feasible. The discussion highlights challenges encountered, such as handling occlusions
and optimizing frame processing for different video resolutions. Future improvements could include
expanding the training dataset and integrating additional features like smoke detection.

The system's performance was evaluated on multiple test videos, showcasing its robustness and
adaptability in diverse settings. It successfully identified fire instances with minimal false alarms, even in
dynamic backgrounds and high-motion scenarios. The integration of YOLOv11 enabled precise
localization of fire regions, which was visually represented in the output video frames. However, certain
limitations were observed, such as occasional misdetections in scenarios involving bright lights or
reflective surfaces. Additionally, the system's processing speed varied with video resolution and hardware
specifications, indicating a need for further optimization. These insights provide a foundation for refining
the model and expanding its applicability to more complex environments.

CHAPTER 5: CONCLUSION

This project has successfully designed and implemented an AI-driven fire detection system utilizing the
YOLOv11 object detection model. The system addresses the challenges posed by traditional fire detection
methods, such as delayed response times, high false-positive rates, and dependency on specialized
hardware. By leveraging advancements in deep learning and real-time video analysis, this project
provides a reliable, efficient, and scalable solution for detecting fire in various environments.

The developed system offers several key benefits. It integrates seamlessly with existing video
surveillance infrastructure, eliminating the need for additional hardware installations. The system
achieves high accuracy by utilizing a YOLOv11 model fine-tuned on fire datasets, enabling it to
differentiate fire from other visual anomalies in diverse and complex scenes. Furthermore, the real-time
processing capability ensures timely detection, which is critical in reducing the risk of property damage,
injuries, or loss of life during fire incidents.

A standout feature of this project is the user-friendly interface built using Streamlit. This interface allows
users to upload videos, adjust settings such as confidence thresholds and frame skipping, and visualize the
detection results in an intuitive manner. The processed video output, annotated with bounding boxes
around detected fire regions, makes the results easy to interpret and review. This simplicity in design
ensures accessibility for users with varying levels of technical expertise.

The system also demonstrates computational efficiency by incorporating preprocessing steps like frame
extraction and resizing, which optimize the performance without compromising detection accuracy. The
flexibility of adjusting parameters like frame skip rates and confidence thresholds further enhances the
adaptability of the system, making it suitable for diverse scenarios such as forest monitoring, industrial
safety, and residential fire prevention.

While the current implementation is highly effective, it also highlights areas for future development.
Deploying the system on edge devices, such as IoT-enabled cameras or drones, could significantly
expand its applicability, particularly in remote or resource-constrained areas. Edge deployment would
also reduce latency, as the detection process could occur closer to the source of the video feed.
Additionally, integrating multi-class detection capabilities to identify not just fire but other hazards like
smoke or sparks could enhance the system’s functionality.

Another avenue for improvement lies in expanding the dataset used for fine-tuning the YOLOv11 model.
Incorporating data from various lighting conditions, weather scenarios, and cultural settings would further
increase the robustness of the system. Moreover, combining this model with predictive analytics or early-warning mechanisms could enable proactive responses, such as triggering alarms or activating fire
suppression systems automatically.

In conclusion, this project marks a significant step forward in leveraging AI for fire safety applications.
The system provides an effective alternative to traditional methods, with superior accuracy, faster
detection times, and minimal false positives. Its ability to operate with existing infrastructure and its
flexibility in user-defined configurations make it a highly practical solution for real-world deployment.
With further enhancements, the AI-driven fire detection system has the potential to revolutionize fire
safety protocols globally, contributing to safer living and working environments. This success
underscores the transformative impact of integrating AI with practical applications, paving the way for
future advancements in emergency response systems.


REFERENCES
1) C. Yuan, F. Zhang, and H. Liu, "A Survey on Fire Detection Using Computer Vision
Techniques," IEEE Transactions on Industrial Informatics, vol. 16, no. 5, pp. 3239–3250, May 2020,
doi: 10.1109/TII.2019.2945121.

2) L. D. Viegas et al., "Real-Time Flame Detection Using YOLOv3 and UAV Imagery," 2020
International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France,
2020, pp. 1-6, doi: 10.1109/IPTA50016.2020.9286634.

3) Z. Wei, Y. Wei, and W. Tian, "A Novel Fire Detection Algorithm Based on Deep Learning and
Image Processing," IEEE Access, vol. 8, pp. 127888–127899, 2020, doi:
10.1109/ACCESS.2020.3007654.

4) H. Yu, Y. Liu, and J. Zhang, "Deep Convolutional Neural Networks for Fire Detection in
Surveillance Videos," 2019 IEEE/CVF International Conference on Computer Vision Workshop
(ICCVW), Seoul, Korea (South), 2019, pp. 3508-3514, doi: 10.1109/ICCVW.2019.00430.

5) S. Muhammad, S. Hussain, and M. Sajid, "Fire Detection in Urban Areas Using YOLOv4 and
Deep Learning," 2021 IEEE International Symposium on Multimedia (ISM), Naples, Italy, 2021, pp.
99-104, doi: 10.1109/ISM52913.2021.00025.

6) T. Celik and H. Demirel, "Fire Detection in Video Sequences Using Statistical Color Model," IEEE Transactions on Image Processing, vol. 28, no. 7, pp. 3556–3566, July 2019, doi: 10.1109/TIP.2019.2905881.

7) F. Long, Y. Cao, and R. Lu, "A Study on Real-Time Fire Detection and Localization Using Drone Technology," IEEE Access, vol. 9, pp. 2987–2999, 2021, doi: 10.1109/ACCESS.2021.3049359.

8) M. Cheng et al., "Multi-Modal Fire Detection Based on Infrared and Visible Video Streams," IEEE Sensors Journal, vol. 20, no. 21, pp. 12851–12863, Nov. 2020, doi: 10.1109/JSEN.2020.3019302.
