Synopsis Final

This document provides a synopsis for a project on weapon detection in public spaces using YOLOv5. It aims to enhance public safety and prevent threats by detecting weapons and analyzing human behavior using deep learning. The project will use YOLOv5, a state-of-the-art object detection model, to identify weapons in video surveillance footage and detect abnormal behaviors. A literature review covers previous work using YOLOv3 and deep learning models for weapon detection in surveillance videos and human detection while preserving privacy. The project scope, methodology, requirements and expected outcomes are outlined in the synopsis.



A PROJECT SYNOPSIS ON

A PROJECT GROUP NO:

Weapon Detection in Public Spaces Using YOLOv5


SUBMITTED TO SAVITRIBAI PHULE PUNE UNIVERSITY,
PUNE, IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE AWARD OF THE DEGREE

BACHELOR OF ENGINEERING (Computer Engineering)


SUBMITTED BY

Nirnay Patil PRN No: 72145755L


Shantanu Badwe PRN No: 72145436E
Swapnil Phand PRN No: 72145802F
Rohan Kshirsagar PRN No: 72145676G

Under The Guidance of


Prof. Prashant Sadaphule

DEPARTMENT OF COMPUTER ENGINEERING


AISSMS Institute of Information Technology
Kennedy Road, Near R.T.O., Pune – 411 001, Maharashtra (INDIA).

SAVITRIBAI PHULE PUNE UNIVERSITY, PUNE


2023 - 24
Abstract

Weapon detection is a serious concern for public safety and security, and it is a difficult task, all the more so when it must be performed automatically by an AI model. Video surveillance plays an important role in many aspects of daily life, such as theft detection, spotting unusual events in crowded places, and monitoring suspicious individual activity to provide a secure, hassle-free environment. Closed-circuit television (CCTV) footage is commonly used as evidence to trace suspicious acts, but it is impractical for human operators to watch surveillance cameras continuously to detect abnormal activities. Fully automating surveillance with smart video-analysis capabilities based on deep learning is one of the most advanced means of remotely monitoring unusual activity, recording the exact location and time of an event along with facial recognition of the suspect. Identifying misdemeanor activity in a public place is difficult to observe, as many objects are involved in a real-time scene; capturing uncommon or doubtful incidents on CCTV enables the police to reach the spot in time and protect people before any mishap happens. This project aims to achieve these goals using YOLO (You Only Look Once) object detection models and their variants, from YOLO v1 through the latest YOLO v5. The proposed system identifies weapons held by a person and applies face recognition to identify the suspicious user. With YOLOv5 it is straightforward to track objects such as weapons in a crowd, and even low-resolution, distant, or out-of-focus objects in the scene can be captured and identified accurately.
Contents

1.0 Project Title . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4


2.0 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.0 Technical Keywords (As per ACM Keywords) . . . . . . . . . 5
4.0 Domain of Project . . . . . . . . . . . . . . . . . . . . . . . . 6
5.0 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . 6
6.0 Internal Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
7.0 Type of Project . . . . . . . . . . . . . . . . . . . . . . . . . . 6
8.0 Sponsorship Details and External Guide Details . . . . . . . . 6
9.0 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . 7
10.0 List of Features . . . . . . . . . . . . . . . . . . . . . . . . . . 10
11.0 System Architecture . . . . . . . . . . . . . . . . . . . . . . . 10
12.0 List of Modules and Functionality . . . . . . . . . . . . . . . . 11
13.0 Goals and Objectives . . . . . . . . . . . . . . . . . . . . . . . 12
14.0 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
15.0 Scope of the Project . . . . . . . . . . . . . . . . . . . . . . . 15
16.0 Software and Hardware Requirements . . . . . . . . . . . . . . 15
17.0 Input to the Project . . . . . . . . . . . . . . . . . . . . . . . 16
18.0 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
19.0 Expected Outcomes . . . . . . . . . . . . . . . . . . . . . . . . 17
20.0 Plan of Project Execution . . . . . . . . . . . . . . . . . . . . 18
21.0 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.0 Project Title
WEAPON DETECTION IN PUBLIC SPACES USING YOLOv5

2.0 Introduction
At present, weapon detection at public places relies on sensors for detecting suspicious objects. Sensors are expensive, insecure, and inefficient, and they cannot cover a large area under surveillance. To overcome the drawbacks of such conventional systems, we focus on machine learning algorithms for object detection, whose efficiency is better than using sensors alone. Classical detectors apply a classifier to every region of the input image and take the highest-scoring region as the detection, which is a time-consuming task when processing large numbers of images. With the deep-learning-based You Only Look Once (YOLO) algorithm, object detection is simplified: the network processes the entire input image in one pass and highlights each region of interest with a bounding box, detecting different real-time objects with higher accuracy. Using the YOLOv5 algorithm, misdemeanor activities are detected easily and precisely in a crowd. Through this algorithm, both high- and low-level objects, such as weapons and unusual items irrelevant to the situation, are recognized and identified, which also improves localization. In addition, facial recognition is implemented, which increases speed by eliminating different object categories and replacing them with facial features; irregularities seen through a webcam are thus monitored and prevented before they cause harm. YOLO is very popular for real-time object detection; this project uses YOLOv5, an advanced version that is faster and more accurate than earlier versions of YOLO. One difference between YOLOv5 and YOLOv4 is size: YOLOv5 is very small, weighing around 27 megabytes, whereas YOLOv4 with the DarkNet architecture is about 244 megabytes. Performance can be measured using the following metrics: (i) mAP (mean average precision), (ii) precision (P), and (iii) recall (R). This work focuses on three things: (i) weapon detection, such as a person holding a knife, gun, pistol, or rifle in a public place; (ii) face detection, where a person holding a weapon is flagged as suspicious by the surveillance camera by extracting features from each segment; and (iii) monitoring suspicious activities, such as suddenly raising the arms, bending down, or other abnormal behavior performed by a person, making this a multiclass classification problem. Identifying a specific object among several real-time objects is very tough using video surveillance that covers multiple real-time objects.
Figure 1: DETECTION OF OBJECTS USING YOLO
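The evaluation metrics named in the introduction (precision, recall, mAP) can be illustrated with a short, self-contained sketch. The class names and counts below are hypothetical example values, not project results:

```python
# Illustrative sketch of the evaluation metrics named in the introduction:
# precision (P), recall (R), and mAP as the mean of per-class average
# precisions. All counts and AP values here are made-up examples.

def precision(tp: int, fp: int) -> float:
    """Fraction of predicted detections that were correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth objects that were detected."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def mean_over_classes(per_class_ap: dict) -> float:
    """mAP is the mean of the per-class average precisions."""
    return sum(per_class_ap.values()) / len(per_class_ap)

# Hypothetical counts for one class: 80 true positives,
# 20 false positives, 10 missed ground-truth objects.
p = precision(tp=80, fp=20)   # 0.8
r = recall(tp=80, fn=10)      # about 0.889
m = mean_over_classes({"gun": 0.82, "pistol": 0.75, "knife": 0.68})
```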

3.0 Technical Keywords (As per ACM Keywords)

• Weapon Detection

• Weapon Classification

• Surveillance Systems

• Threat Level Analysis

• Social Interaction Analysis

• Behavior Monitoring

• Alert Generation
4.0 Domain of Project
To prevent untoward activities in society, the detection of weapons, grouping of people, and threatening activities is achieved using the YOLOv5 algorithm.

5.0 Problem Statement


To enhance public safety and prevent threats in public spaces by detecting weapons and analyzing human behavior using deep learning, thereby enhancing security measures and ensuring the safety of the public.

6.0 Internal Guide


Prof. Prashant Sadaphule

7.0 Type of Project


Non-sponsored project

8.0 Sponsorship Details and External Guide Details
-
9.0 Literature Survey
1. Weapon Detection in Surveillance System
Author: Dr. N. Geetha, Akash Kumar K. S., Akshita B. P., Arjun M. (Coimbatore Institute of Technology)
Publish year: May 2021
Methodology Used: The YOLOv3 (You Only Look Once) algorithm is used for the detection of weapons in real-time video. YOLO models are end-to-end deep learning models and are well liked for their detection speed and accuracy.
Features:

• Detection of weapons from video frames.

• Classification of weapons (handguns, knives, and heavy guns) with the accuracy and type of the detected weapon.

2. Deep Learning based Human Detection in Privacy-Preserved Surveillance Videos
Author: Nadia Kanwal (Athlone Institute of Technology), Samar Ansari (University of Chester), Mamoona Asghar (University of Galway), Brian Lee (Athlone Institute of Technology)
Publish year: July 2022
Methodology Used: The proposed method utilizes state-of-the-art object detection deep learning models (viz. YOLOv4 and YOLOv5) to perform human/object detection in masked videos. The data in these videos is masked using a Gaussian Mixture Model (GMM) and selective encryption. High-performance object detection models are then trained on the generated dataset.
Features:

• Deep learning models can automatically learn relevant features from raw data.

• Deep learning models are highly scalable.

3. Smart Video Surveillance Based Weapon Identification Using YOLOv5
Author: Nikkath Bushra (St. Joseph’s Institute of Technology), K. Uma Maheswari (Bharathi Women's College (Autonomous))
Publish year: April 2022
Methodology Used: YOLO (You Only Look Once) object detection models and variants such as YOLOv5 are used to identify weapons in surveillance video.
Features:

• Provides early detection of potentially violent situations.

4. Human pose estimation for mitigating false negatives in weapon detection in video-surveillance
Author: Alberto Castillo Lamas, Siham Tabik, Francisco Pérez (University of Granada)
Publish year: June 2022
Methodology Used: The Weapon Detection over Pose Estimation (WeDePE) methodology first determines the hand regions guided by human pose estimation and then analyzes those regions using a weapon detection model.
Features:

• Exploits human pose estimation to mitigate false negatives in the detection of weapons (firearms and knives) held by a person in video surveillance.

5. Gun Detection in Video Frames with YOLOv3
Author: Alberto Castillo Lamas, Siham Tabik, Francisco Pérez (University of Granada)
Publish year: June 2022
Methodology Used: A new dataset is created from real-life video recordings and a few existing datasets, with data augmentation, to implement the YOLOv3 algorithm. The performance of YOLOv3 over these datasets is evaluated with mAP and IoU metrics.
Features:

• Helps prevent dangerous situations by detecting the presence of dangerous objects, such as handguns and knives, in surveillance videos.

6. Application of Deep Learning for Weapons Detection in Surveillance Videos
Author: Tufail Sajjad Shah Hashmi, Nazeef Ul Haq, Muhammad Moazam Fraz, Muhammad Shahzad (School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad, Pakistan)
Methodology Used: The paper shows that YOLOv4 performs clearly better than YOLOv3 in terms of processing time and sensitivity, while the two are comparable on the precision metric.
Features:

• Object Detection: The primary focus is on using deep learning models to detect weapons, such as guns, knives, or other potentially dangerous objects, within surveillance videos.

• Real-Time Monitoring: The goal is often to perform real-time monitoring and detection in surveillance videos, making it crucial for security applications in public spaces, airports, border control, and more.

7. TYOLOv5: A Temporal YOLOv5 detector based on Quasi-Recurrent Neural Networks for real-time handgun detection in video
Author: Mario Alberto Duran-Vega (School of Engineering and Science, Tecnologico de Monterrey)
Methodology Used: Temporal YOLOv5, an architecture based on Quasi-Recurrent Neural Networks, extracts temporal information from video to improve the results of handgun detection.
Features:

• Temporal YOLOv5 extends the YOLOv5 model to include temporal (time-based) information, making it suitable for video analysis and tracking over time.

• Quasi-Recurrent Neural Networks are designed to efficiently model sequential data while maintaining parallel processing capabilities, enabling the model to capture temporal dependencies in videos.

10.0 List of Features
1. Weapon Detection
2. Tracking
3. Suspicious activity identification

11.0 System Architecture

Figure 2: System architecture

As shown in Figure 2, the architecture has three phases: object detection, analysis, and action. Each phase has its own steps, including training and analysis of datasets, detecting and identifying objects from the video, and finally alerting the system to any suspicious or abnormal activity or any detected deadly weapons.
12.0 List of Modules and Functionality
1. Data Acquisition and Preprocessing:
Collect data from various sensors (visual cameras, thermal cameras, au-
dio sensors). Preprocess data to ensure uniform quality, correct format,
and remove noise.
2. Weapon Detection Module:
Detect potential weapons in captured images or video streams. Utilize
machine learning models (such as CNNs) trained to identify weapon
signatures. Output the location and type of detected weapons.
3. Feature Extraction:
Extract relevant features from sensor data for both weapon detection
and behavior analysis. For weapon detection, features might include
shape, color, and thermal signatures. For behavior analysis, features
might include motion vectors, body keypoints, and social interactions.
4. Integration and Fusion:
Combine outputs from weapon detection and behavior analysis mod-
ules. Create a comprehensive situational awareness by considering both
potential threats and context.
5. Anomaly Detection:
Compare detected behaviors against expected patterns. Flag behaviors
that are unusual or potentially threatening.
6. Alert Generation and Prioritization:
Generate alerts based on the outputs of weapon detection and behavior
analysis. Assign priorities to alerts based on the perceived threat level.
7. Human Verification and Intervention:
Allow human operators to review alerts and verify potential threats.
Provide real-time visual feeds and summaries of detected behaviors.
Enable operators to take appropriate actions, such as alerting security
personnel.
8. Feedback Loop and Learning:
Continuously improve the system’s accuracy and performance based
on operator feedback. Update machine learning models and behavior
patterns to adapt to evolving scenarios.
9. Continuous Improvement and Adaptation:
Regularly update and refine machine learning models to stay effective
against evolving threats. Adapt the system to changes in the environ-
ment and user needs.
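The alert generation and prioritization module described above can be sketched as follows. The weapon weights, the 0.5 anomaly coefficient, and the threat thresholds are hypothetical illustration values, not parameters specified by the project:

```python
# Minimal sketch of the alert generation and prioritization module.
# WEAPON_WEIGHT values and thresholds are assumed example numbers.

WEAPON_WEIGHT = {"gun": 1.0, "pistol": 0.9, "knife": 0.6}

def alert_priority(weapon_type: str, confidence: float, anomaly_score: float) -> str:
    """Combine detector confidence with a behavior anomaly score
    and map the result to a coarse priority level."""
    threat = WEAPON_WEIGHT.get(weapon_type, 0.3) * confidence + 0.5 * anomaly_score
    if threat >= 1.0:
        return "HIGH"
    if threat >= 0.5:
        return "MEDIUM"
    return "LOW"

def make_alert(weapon_type, confidence, anomaly_score, location, timestamp):
    """Output of the module: an alert record passed on for human verification."""
    return {
        "priority": alert_priority(weapon_type, confidence, anomaly_score),
        "weapon": weapon_type,
        "confidence": confidence,
        "location": location,
        "time": timestamp,
    }
```

In this design the priority grows with both the detector's confidence and the abnormality of the observed behavior, so a confidently detected gun near abnormal movement outranks a low-confidence knife in a calm scene.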

13.0 Goals and Objectives


• Goals:
Data collection: Collect a diverse dataset of images and videos of weapons in a variety of contexts, including different backgrounds, lighting conditions, and poses.
Data preprocessing: Clean and normalize the data to ensure that it is in a consistent format.
Model training: Train a deep learning model to detect weapons in the preprocessed data.
Video capture and processing: Capture video frames from a camera or other video source.
Weapon detection and alerts: Use the trained model to detect weapons in the video frames and generate alerts if a weapon is detected.

• Objectives:

– To train a YOLO model to detect a variety of weapons, including guns, knives, and other dangerous objects.

– To optimize the YOLO model for speed and accuracy, so that it can be deployed in real-time applications.

– To develop a user-friendly interface for the weapons detection system, so that it can be easily used by non-experts.

14.0 Methodology
1. Dataset
Raw images are not appropriate for analysis and need to be converted into a processed format such as JPEG or TIFF. Each image is reconstructed into a square and resized to 416 px x 416 px resolution to reduce computational time, and the images are then retained in RGB format. The dataset is created by collecting good-quality weapon images and preparing them for dataset creation.

Figure 3: Dataset of images containing the weapons – handgun, knife and heavy guns (416 px x 416 px)
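The resize step above can be sketched as follows. In practice this is typically done with OpenCV's cv2.resize; the nearest-neighbour version below is a dependency-free illustration of the same idea:

```python
import numpy as np

# Sketch of the preprocessing step: resizing an RGB image to 416 x 416.
# Nearest-neighbour resampling stands in here for cv2.resize.

def resize_to_square(img: np.ndarray, size: int = 416) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W, 3) RGB image to (size, size, 3)."""
    h, w = img.shape[:2]
    rows = (np.arange(size) * h // size).clip(0, h - 1)  # source row per output row
    cols = (np.arange(size) * w // size).clip(0, w - 1)  # source col per output col
    return img[rows][:, cols]

# Example: a dummy 300 x 500 "image" becomes 416 x 416.
dummy = np.zeros((300, 500, 3), dtype=np.uint8)
resized = resize_to_square(dummy)   # shape (416, 416, 3)
```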

2. Separation of the dataset into train and test data

Once the image labeling process is completed, the complete dataset is compressed into a zip file and uploaded to Google Drive. The uploaded dataset is then divided into 70% training data and 30% test data in different folders, which can be used for training and evaluating the model.
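The 70/30 split can be sketched as below; the image file names are hypothetical stand-ins for the labeled dataset:

```python
import random

# Sketch of a reproducible 70/30 train/test split of labeled image files.

def split_dataset(files, train_frac=0.7, seed=42):
    """Shuffle the file list reproducibly and split it into train and test."""
    files = list(files)
    random.Random(seed).shuffle(files)   # fixed seed keeps the split repeatable
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]

images = [f"weapon_{i:04d}.jpg" for i in range(100)]  # hypothetical file names
train, test = split_dataset(images)                   # 70 and 30 files
```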
3. YOLOv5 Algorithm
YOLOv5 (You Only Look Once, version 5) is a real-time object detection algorithm that identifies specific objects in videos, live feeds, or images. Previous methods, like region-based convolutional neural networks (R-CNN), require thousands of network evaluations to make predictions for one image, which can be time-consuming and painful to optimize. In YOLOv5, feature extraction and object localization are unified into a single monolithic block. This single-stage architecture, named YOLO (You Only Look Once), results in a very fast inference time. It takes the entire image in a single instance and predicts the bounding box coordinates and class probabilities for these boxes. The biggest advantage of YOLO is its superb speed: it can process about 45 frames per second. Unlike methods where images are scanned with a sliding window, in YOLO the whole image is passed into a convolutional neural network, which predicts the output in one pass.

4. General Working Principle of YOLO

The YOLO algorithm splits the input image or video frame into an s x s grid, and each grid cell is responsible for detecting objects whose midpoint falls inside it. Bounding boxes are used for object detection; each box is described by the parameters bx and by (the center of the bounding box), bw (its width), and bh (its height), while c is the object class and pc is the probability that an object is present in the box. The YOLO output vector is

y = (pc, bx, by, bw, bh, c)

where pc indicates the presence of an object in the grid cell (1 if present, else 0), bx and by give the center point of the bounding box, bw and bh give its width and height, and c is the object class: c1, c2, or c3 for the three classes (gun, pistol, and knife).
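Decoding one such output vector y = (pc, bx, by, bw, bh, c) into a pixel-space box can be sketched as follows. The coordinates are assumed to be normalized to [0, 1] relative to the image, as in the usual YOLO convention:

```python
# Sketch of decoding a YOLO prediction vector into a labeled pixel box.
# Assumes bx, by, bw, bh are normalized to [0, 1] relative to the image.

CLASSES = ["gun", "pistol", "knife"]

def decode(y, img_w, img_h, threshold=0.5):
    """Convert one prediction vector into (label, box) or None if no object."""
    pc, bx, by, bw, bh, c = y
    if pc < threshold:                 # no confident object in this grid cell
        return None
    x1 = (bx - bw / 2) * img_w         # left edge in pixels
    y1 = (by - bh / 2) * img_h         # top edge in pixels
    x2 = (bx + bw / 2) * img_w         # right edge in pixels
    y2 = (by + bh / 2) * img_h         # bottom edge in pixels
    return CLASSES[int(c)], (x1, y1, x2, y2)

# A vector with pc = 0.9, a box centered mid-image, class 0 ("gun"):
label, box = decode((0.9, 0.5, 0.5, 0.2, 0.4, 0), img_w=416, img_h=416)
```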

5. Training the Model

The darknet repository is cloned from GitHub and, using the darknet53.conv.74 file, pre-trained weights are used for transfer learning; the neural network is then trained on our weapon data. Training runs for 6000 iterations. Once training is completed, the training.weights file and the yolov5.cfg file are generated, which can be used for weapon detection.

6. Implementation Using OpenCV

After training is completed, the training.weights file and yolov3.cfg file are generated, which can be used for weapon detection. We use OpenCV for detecting the presence of a weapon in live video. After the weight files are loaded successfully in the code, the input video is obtained through a web camera or an internal file. Any weapon detected in the video is displayed along with its confidence score. Once a weapon is detected in a frame, the program plays an alert sound and shows a message indicating the presence of a weapon in that frame. This can be useful to police officers who are constantly on patrol, making them aware of a weapon in the video stream.
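The detect-and-alert loop above can be sketched as follows. In the real system, frames come from cv2.VideoCapture and the detector is the trained network loaded with OpenCV; here detect_weapons is a hypothetical stand-in so the alert logic can be shown on its own, and the 0.5 confidence threshold is an assumed value:

```python
# Sketch of the detect-and-alert loop. `detect_weapons` is a stand-in for
# the trained YOLO network; frames would really come from cv2.VideoCapture.

ALERT_THRESHOLD = 0.5   # assumed confidence cut-off, not from the synopsis

def detect_weapons(frame):
    """Stand-in detector: returns (label, confidence) pairs for a frame."""
    return frame.get("detections", [])

def process_stream(frames):
    """Scan frames; emit an alert message whenever a weapon is confident enough."""
    alerts = []
    for i, frame in enumerate(frames):
        for label, conf in detect_weapons(frame):
            if conf >= ALERT_THRESHOLD:
                # In the real system this would also play the alert sound.
                alerts.append(f"frame {i}: {label} detected ({conf:.0%} confidence)")
    return alerts

# Toy stream: only the second frame contains a confident detection.
stream = [{"detections": []},
          {"detections": [("gun", 0.87)]},
          {"detections": [("knife", 0.30)]}]
alerts = process_stream(stream)   # one alert, for frame 1
```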

15.0 Scope of the Project


Removing rain streaks from images is crucial in computer vision applications like surveillance and object recognition, and single-image deraining is a challenging problem. The proposed solution involves bilateral LSTMs to decompose the image into rain-streak and background layers. Bilateral LSTMs efficiently propagate deep features across stages, forming a bilateral recurrent network (BRN) that allows interaction between the rain-streak and background layers. The proposed method demonstrates better generalization performance on real-world rainy datasets. A simpler yet effective single recurrent network (SRN) is also introduced for image deraining.

16.0 Software and Hardware Requirements


• Software requirements:

– Python 3, with the following libraries:
– OpenCV
– NumPy
– TensorFlow or PyTorch
– Scikit-learn
– Dlib
– Imutils

• Hardware requirements:

– A 64-bit Windows/Linux machine with at least:
– Quad-core processor
– 4 GB DDR4 RAM
– 2 GB NVIDIA GPU

• Web application development:

– Angular and Flask
17.0 Input to the Project
1. Video Streams: Real-time video feeds from surveillance cameras posi-
tioned in various locations within the public space are essential. These
video streams capture the activities and behaviors of individuals in the
area of interest.
2. Image Analysis: Single images or image sequences (frames) from a video feed can be input for analysis. These images can be captured from various sources, including security cameras, smartphones, and other devices.

18.0 Algorithms
In behavior analysis in public spaces using OpenCV, several algorithms and
techniques are commonly employed to detect and analyze human behaviors.
Some of the key algorithms used are:
• Object Detection Algorithms:
Object detection algorithms, such as YOLO (You Only Look Once),
Faster R-CNN, and SSD (Single Shot Multi-Box Detector), are com-
monly used to locate and identify objects in image or video frames,
including weapons.
• Convolutional Neural Networks (CNNs):
CNNs are crucial for feature extraction in object detection. They rec-
ognize patterns and features in image data.
• Deep Learning for Image Classification:
Deep learning models are trained to classify objects, including weapons,
based on the visual features extracted from images using TensorFlow,
PyTorch These models may include CNNs and fully connected layers.
• Real-time Video Processing Algorithms:
Algorithms for real-time video processing, such as optical flow, frame differencing, and background subtraction, are used together with RTSP streams to efficiently analyze video with OpenCV.
• Preprocessing Algorithms:
Image and video preprocessing techniques are used to enhance the quality of the input data. These can include noise reduction, contrast adjustment, and resizing (e.g., to the input sizes expected by backbones such as ResNet or MobileNet).
• Alerting Algorithm:
An algorithm is needed to generate alerts when a weapon is detected.
This typically involves sending notifications to security personnel or
relevant authorities through email, SMS, or other communication chan-
nels.

• Data Annotation Algorithms:
Algorithms for data annotation are used to label images and videos with information about the presence of weapons. This is essential for supervised machine learning.

• Privacy Enhancement Algorithms:
To address ethical and privacy concerns, privacy-enhancing algorithms and techniques may be employed to obscure or anonymize non-relevant parts of the video feed.

• Stress Testing Algorithms:
Stress testing algorithms and tools are used to assess the system's performance under high load, ensuring it can handle multiple camera feeds simultaneously without performance degradation.
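The frame-differencing / background-subtraction idea listed above can be sketched with a few lines of NumPy: pixels whose intensity differs from the background model by more than a threshold are marked as motion. The threshold value here is an illustrative assumption:

```python
import numpy as np

# Sketch of background subtraction by frame differencing: compare the
# current frame against a background model and threshold the difference.

def motion_mask(background: np.ndarray, frame: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Boolean mask of pixels that differ from the background model."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

# Toy grayscale frames: a 5 x 5 static scene with one bright "moving" pixel.
bg = np.zeros((5, 5), dtype=np.uint8)
frame = bg.copy()
frame[2, 2] = 200                      # the moving object
mask = motion_mask(bg, frame)          # True only at (2, 2)
```

OpenCV provides more robust versions of this idea (e.g., its MOG2 background subtractor), which additionally adapt the background model over time.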

19.0 Expected Outcomes


In this project we will be able to perform:

• Weapon detection

• Weapon classification

• Object detection

• Alert generation

• Confidence score indication


20.0 Plan of Project Execution
Sr.No Month Tasks
1 July Topic Finalization and Topic Presentation
2 August Initiation and Requirement Gathering
3 September Analysis
4 October UML Designing and Task distribution
5 November Layer 1: Analysis
6 December Layer 2 and 3: Implementation
7 January Model Evaluation + Further layers
8 February Layer 4: Implementation + Future Scope Analysis
9 March Testing + Report Writing
10 April Final Presentation

21.0 References
[1] T. O'Shea and J. Hoydis, "An introduction to deep learning for the physical layer," IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 563–575, Dec. 2017, doi: 10.1109/TCCN.2017.2758370.
[2] G. Aceto, D. Ciuonzo, A. Montieri, and A. Pescapé, "Mobile encrypted traffic classification using deep learning: Experimental evaluation, lessons learned, and challenges," IEEE Trans. Netw. Service Manag., vol. 16, no. 2, pp. 445–458, Feb. 2019, doi: 10.1109/TNSM.2019.2899085.
[3] A. Taha, H. H. Zayed, M. E. Khalifa, and E.-S. M. El-Horbaty, "Exploring behavior analysis in video surveillance applications," Int. J. Comput. Appl., vol. 93, no. 14, pp. 22–32, May 2014, doi: 10.5120/16283-6045.
[4] Dong-Gyu Lee, Heung-Il Suk, Sung-Kee Park, and Seong-Whan Lee, "Motion Influence Map for Unusual Human Activity Detection," 2015.
[5] Arun Kumar Jhapate, Sunil Malviya, and Monika, "Unusual Crowd Activity Detection using OpenCV and Motion Influence Map," 2020.
[6] P. Bhagya Divya, S. Shalini, R. Deepa, and Baddeli Sravya Reddy, "Inspection of suspicious human activity in the crowdsourced areas captured in surveillance cameras," International Research Journal of Engineering and Technology (IRJET), December 2017.
[7] Jitendra Musale, Akshata Gavhane, Liyakat Shaikh, Pournima Hagwane, and Snehalata Tadge, "Suspicious Movement Detection and Tracking of Human Behavior and Object with Fire Detection using Closed Circuit TV (CCTV) Cameras," International Journal for Research in Applied Science & Engineering Technology (IJRASET), vol. 5, issue XII, December 2017.
[8] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: A large video database for human motion recognition," in Proc. Int. Conf. Comput. Vis., Nov. 2011, pp. 2556–2563.
[9] E. Bermejo, O. Deniz, G. Bueno, and R. Sukthankar, "Violence detection in video using computer vision techniques," in Computer Analysis of Images and Patterns. Berlin, Germany: Springer, 2011, pp. 332–339.
[10] D. Freire-Obregón, M. Castrillón-Santana, P. Barra, C. Bisogni, and M. Nappi, "An attention recurrent model for human cooperation detection," Comput. Vis. Image Understand., vols. 197–198, Aug. 2020, Art. no. 102991.
[11] T. Simon, H. Joo, I. Matthews, and Y. Sheikh, "Hand keypoint detection in single images using multiview bootstrapping," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 1145–1153.
[12] Jianyu Xiao, Shancang Li, and Qingliang Xu (School of Computer Science and Engineering, Central South University, China), "Video-based Evidence Analysis and Extraction in Digital Forensic Investigation," IEEE Access, 2020.
[13] Ji-hun Won, Dong-hyun Lee, Kyung-min Lee, and Chi-ho Lin (School of Computer, Semyung University, Chungcheongbuk-do, Korea), "An Improved YOLOv3-based Neural Network for De-identification Technology," IEEE, 2018.
[14] Francisco Luque Sánchez, Isabella Hupont, Siham Tabik, and Francisco Herrera, "Revisiting crowd behaviour analysis through deep learning: Taxonomy, anomaly detection, crowd emotions, datasets, opportunities and prospects," Elsevier, 2019.
[15] Elizabeth Scaria, Aby Abahai T., and Elizabeth Isaac, "Suspicious Activity Detection in Surveillance Video using Discriminative Deep Belief Network," International Journal of Control Theory and Applications, vol. 10, no. 29, 2017.

Name and Signature of Guide Signature of HOD
