IoT-Based Suspicious Activity Recognition Using Computer Vision

Rubel & Durjoy
Registration No: 17508006604 & 17508006634

Institute of Science and Technology, National University, Bangladesh

March 19, 2023



Motivation
• Providing a smarter security system
• Preventing crimes and recording data as evidence
• Ensuring people's data privacy
• Identifying human behaviors from observations of subjects
• Providing a safer city
• Machine learning (ML) still poses challenges for security-focused research

Technical Challenges
Challenge 1: Improving the accuracy of human behavior recognition

Challenge 2: Developing a 3D-CNN approach to computer vision with the YOLO algorithm

Challenge 3: Overcoming the sparse annotation problem in 2D image-based supervised learning

Challenge 4: Resolving the motion detection problem in an automatic and non-invasive way

Challenge 5: Designing a powerful and discriminative TCP/IP frame with real-time response to deliver detections as notifications to observers

Challenge 6: Investigating whether such a powerful and discriminative TCP/IP frame can be designed

Data
The project is designed based on two databases:

Database 1:
• Intended to compile a database of suspicious objects associated with suspicious activities
• Images from the University of Central Florida (UCF) Crime dataset [1]

Database 2:
• The Common Objects in Context (COCO) dataset [2] (see the filtering sketch below)

[1] The Cornell University archive (arXiv): W. Sultani et al., 2018

[2] The Cornell University archive (arXiv): T.-Y. Lin et al., 2014
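
As an illustration only, a few lines like the following could pull candidate suspicious-object categories out of the COCO annotations [2] (pycocotools assumed; the annotation path and the category names are placeholders, not the project's actual selection):

    # Hedged sketch: list COCO images containing object classes that could matter
    # for suspicious-object detection. Paths and category choices are illustrative.
    from pycocotools.coco import COCO

    coco = COCO("annotations/instances_train2017.json")   # assumed local path
    cat_ids = coco.getCatIds(catNms=["knife", "scissors", "baseball bat", "backpack"])
    img_ids = set()
    for cid in cat_ids:
        img_ids.update(coco.getImgIds(catIds=[cid]))       # images containing that object
    print(len(cat_ids), "categories,", len(img_ids), "candidate images")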

Project Outline

• Introduction
• Background
• Methodology
• Expected Experimental Result & Analysis

Introduction
• Automated human activity recognition has become important with advances in technology and has many applications for indoor and outdoor safety.
• Manual monitoring is impractical given the vast amount of video data to analyze, so intelligent video surveillance is required to detect irregularities in human behavior and issue alerts.
• Sensors such as cameras, radar, and mobile phones are used for human activity recognition; a system that can detect an aberrant event in advance and alert the authorities would be advantageous.
• Computer vision technology is used for the data processing and analysis.
• Deep learning models can extract visual patterns directly from pixels, LSTM models can retain information over longer periods, and IoT protocols are employed to link multiple cameras to a centralized system for prediction (a minimal CNN + LSTM sketch follows).
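
A minimal sketch of the CNN + LSTM idea mentioned in the last bullet (PyTorch assumed; layer sizes and the two-class head are illustrative, not the proposed model):

    # A 2D CNN extracts per-frame visual features; an LSTM retains them over time.
    import torch
    import torch.nn as nn

    class CnnLstmClassifier(nn.Module):
        def __init__(self, num_classes=2, feat_dim=128, hidden_dim=64):
            super().__init__()
            self.cnn = nn.Sequential(                       # per-frame feature extractor
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, num_classes)

        def forward(self, clip):                            # clip: (batch, time, 3, H, W)
            b, t = clip.shape[:2]
            feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)  # features per frame
            _, (h, _) = self.lstm(feats)                    # last hidden state summarizes the clip
            return self.head(h[-1])                         # activity logits

    logits = CnnLstmClassifier()(torch.randn(1, 16, 3, 112, 112))  # e.g. a 16-frame clip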

Background
• Depth sensors boosted human activity recognition research as early as the 1980s
• Human behavior tracking is important in computer vision research for a range of
applications, including surveillance, video processing, robotics, and human-computer
interaction.
• Previous research has focused on understanding and detecting activities from visible
light video feeds.
• Researchers have used various deep learning techniques, including LSTM networks and
CNN classifiers, to detect and classify human activities.
• Some studies have focused on detecting anomalies or suspicious behavior, while others
have focused on recognizing completed job assignments in an industrial context.
• The proposed research aims to improve the efficiency and accuracy of detecting and
classifying human behaviors in live recordings by first detecting the region of interest
before passing it to the classification network.

Methodology

The approach is intended in two stages (a minimal pipeline sketch follows):

1. YOLO-v4 detects the Region of Interest (ROI)

2. A 16-frame sequence is formed, and the ROI is carried across the sequence of frames into the 3D-CNN for classification
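
The Modified-YOLOv4 weights and the trained 3D-CNN are not part of this deck, so the following is only a hedged sketch of the two-stage flow (OpenCV assumed for frame capture; detect_roi and classify_clip are hypothetical placeholders for the stage-1 detector and the stage-2 classifier):

    import cv2
    import numpy as np

    CLIP_LEN = 16                                   # frames per classified sequence

    def run_pipeline(video_path, detect_roi, classify_clip):
        cap = cv2.VideoCapture(video_path)
        clip = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            box = detect_roi(frame)                 # stage 1: YOLO-v4 region of interest
            if box is None:
                continue
            x, y, w, h = box
            crop = cv2.resize(frame[y:y + h, x:x + w], (112, 112))
            clip.append(crop)
            if len(clip) == CLIP_LEN:               # stage 2: 16-frame sequence to the 3D-CNN
                label = classify_clip(np.stack(clip))
                print("predicted activity:", label)
                clip = []
        cap.release()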

Methodology: 1st Stage

Figure 1: Intended backbone network
Figure 2: Intended auxiliary network

Methodology: 2nd Stage

Figure 3: 3D-CNN architecture
Figure 4: Flow of video recognition
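
Since the architecture is only shown as a figure, here is a minimal, illustrative 3D-CNN clip classifier (PyTorch assumed; the channel counts and two-class head are placeholders, not the proposed design):

    import torch
    import torch.nn as nn

    class Clip3DCNN(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d((1, 2, 2)),            # pool space first, keep all 16 frames
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),                    # then pool time and space together
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, clip):                    # clip: (batch, 3, 16, 112, 112)
            return self.classifier(self.features(clip))

    scores = Clip3DCNN()(torch.randn(1, 3, 16, 112, 112))  # one 16-frame ROI clip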

Methodology: Smart Surveillance
• IoT and Ethernet will be combined for real-time decision-making
• A LAN over physical media (CAT-6) will be used for data transfer
• An NVR and BNC cabling will be used to organize the CCTV streams, with the signal transmitted to the DVR via BNC cable
• An IoT-based architecture will provide efficient, real-time access and encrypted transmission of video/sound, with a GPU-based server analyzing the live recordings using YOLO-v4 and the 3D-CNN
• Anomaly predictions and warnings/notifications will be sent in case of emergency or suspicious behavior (a minimal alert sketch follows)
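
The deck does not name a messaging protocol for the warnings, so the sketch below assumes MQTT via paho-mqtt; the broker address and topic are made-up placeholders:

    # Hedged sketch of pushing an anomaly alert from the GPU server to observers.
    import json
    import time
    import paho.mqtt.client as mqtt

    def send_alert(camera_id, activity, confidence,
                   broker="surveillance-broker.local", topic="alerts/suspicious"):
        payload = json.dumps({
            "camera": camera_id,
            "activity": activity,               # e.g. the label predicted by the 3D-CNN
            "confidence": round(confidence, 3),
            "timestamp": time.time(),
        })
        client = mqtt.Client()                  # paho-mqtt 1.x style constructor; 2.x also
                                                # expects a CallbackAPIVersion argument
        client.connect(broker, 1883)            # standard MQTT port
        client.publish(topic, payload, qos=1)   # at-least-once delivery for alerts
        client.disconnect()

    # send_alert("cam-03", "fighting", 0.91)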

Methodology: Smart Surveillance

Figure 5: Internet of Things-based architecture for decision making

Experimental Result & Analysis: Expected
• The Modified-YOLOv4 will be trained for 1500 epochs on 2000 frames of each activity, while the 3D-CNN will be trained for 2000 epochs, with 80% of the data allocated to training and 20% to validation (a split sketch follows)
• The Modified-YOLOv4 is expected to reach 94.21% validation accuracy and 96.2% training accuracy, with training loss reduced from 8.6 to 0.19 and validation loss reduced from 8.7 to 0.25
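
A minimal sketch of the stated 80/20 split (scikit-learn assumed; the directory layout and file listing are invented for illustration):

    from pathlib import Path
    from sklearn.model_selection import train_test_split

    frames = sorted(Path("dataset/frames").glob("*/*.jpg"))   # assumed layout: <activity>/<frame>.jpg
    labels = [p.parent.name for p in frames]                  # activity name taken from the folder
    train_x, val_x, train_y, val_y = train_test_split(
        frames, labels, test_size=0.20, stratify=labels, random_state=42)
    print(len(train_x), "training frames,", len(val_x), "validation frames")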

Figure 6: Expected accuracy graph of Modified-YOLOv4
Figure 7: Expected loss graph of Modified-YOLOv4

Experimental Result & Analysis: Expected
• For the 3D-CNN, training accuracy would be 94.8% and validation accuracy would be 89.0%.
• Training loss would be reduced from approximately 9.2 to 0.11 in the last epoch, while validation loss
would start at 9.8 and finish at 0.22.

Figure 8: Expected training and validation accuracy graph of the 3D-CNN
Figure 9: Expected training and validation loss graph of the 3D-CNN

Summary
• Human activity recognition is a common research subject due to technological advances
• Deep learning is the most exemplary architecture for automated activity recognition
• The proposed work implemented YOLOv4 and 3D-CNN for surveillance
• Previous research has focused on understanding and detecting activities from visible light video feeds
• Researchers have used various deep learning techniques, including LSTM networks and CNN classifiers, to detect and classify human activities
• Modified-YOLOv4 is expected to reach 94.21% validation accuracy and 96.2% training accuracy
• The 3D-CNN training accuracy would be 94.8% and validation accuracy would be 89.0%
Future Works
• Monitoring Jail, Hospital, City, Malls, Important Places Using Computer Vision
and Deep Learning
• Crime Rate Analysis Through Human Behavior and Activity Detection
• Development of a Human Behavior Database Analytics Software for
Psychology Research

Acknowledgements
My deepest gratitude to:
• My supervisor, Md. Shawkot Hossain
• All my dearest teachers
• All my classmates

Thank You
