A Comprehensive Approach to Multiple Threat Missile Detection Using Real-Time Computing and Advanced Object Detection

M Hamza Bin Amjad
Department of Computer Science, COMSATS University Islamabad, Attock, Pakistan
[email protected]

Dr. Khalid Awan
Department of Computer Science, COMSATS University Islamabad, Attock, Pakistan
[email protected]

Muhammad Mudassir
Department of Computer Science, COMSATS University Islamabad, Attock, Pakistan
[email protected]

Abstract—Passive missiles, including short-range air-to-air missiles and man-portable air defence systems (MANPADS), pose a significant threat to both civilian and military aircraft. This study aims to explore methods that can be utilized in a passive missile approach warning system. Initially, the research focuses on analysing the missile plume's spectrum to determine the most suitable range for detecting passive missiles. By comparing the benefits and drawbacks of various ranges and considering their technological challenges, the study concludes that the solar blind UV (SBUV) range is the most appropriate for missile detection.

Following this foundational analysis, the study's main objectives are to detect missiles by removing background clutter and to classify them as either threatening or approaching based on a series of images. The detection process relies on convolutional neural networks (CNN), moving object tracking algorithms, and post-processing methods. A significant challenge in using CNN models is the availability of training data, which this research addresses by creating synthetic data through 3D simulations in the chosen spectrum.

Additionally, the study estimates the direction and speed of detected threats using moving object tracking techniques and the extended Kalman filter (EKF). This information is crucial for classifying missiles as approaching or non-approaching and distinguishing them from other flying objects like jets. While the EKF can also be used for position and range estimation, some essential parameters for these calculations were not available without actual sensors, leaving this aspect for future research.

Keywords—UAV, Object Detection, YOLO, Real Time Computing, Missile Plume, Dataset Generation

I. INTRODUCTION

A. Background:
Aircraft operating in combat zones and other dangerous regions are perpetually at risk of missile attacks. Analysis of aircraft losses due to enemy action since the 1960s reveals that approximately 70% of all losses are attributed to passive heat-seeking missiles. This is surprising given that radar-guided Surface-to-Air Missile (SAM) systems have longer engagement ranges, faster response times, better manoeuvrability, larger warhead capacities, and greater precision. However, the primary reason for the effectiveness of passive missile attacks is their difficulty to detect. These missiles lock onto the heat signature of aircraft engines and silently strike their targets, as depicted in Figure 1.1.

Figure 1.1: IR guided missile seeking aircraft tail

In recent conflicts, such as those in Iraq and Afghanistan, 80% of successful missile strikes were unobserved. A significant lesson from these conflicts is the urgent need for systems that can alert aircraft when they are being targeted. Capt. Paul Overstreet, program manager for USN Electronic Warfare, emphasized that these unobserved missile kills caused substantial damage, and there were limited precautions available to mitigate such threats.

XXX-X-XXXX-XXXX-X/XX/$XX.00 ©2024 IEEE


B. Infrared & Ultraviolet Signatures of Missiles & MANPADS:
A guided missile consists of three main systems: the control system, the warhead, and the propulsion system, all housed within an airframe. Solid propellant rockets provide propulsion with burn times of just a few seconds, sufficient to accelerate the missile to speeds of Mach 3.5 or higher for modern rockets and missiles, as illustrated in Figure 1.2.

Missile plumes also emit UV radiation. The UV region ranges from 100 to 400 nm, with the atmosphere being transparent to UV radiation. Visible radiation peaks between 400 and 700 nm. The UV band is divided into UVA (315-400 nm), UVB (280-315 nm), and UVC (100-280 nm, including SBUV). The SBUV region, typically defined from 240 to 280 nm, is primarily free of solar radiation due to absorption by the Earth's ozone layer. Below the ozone layer, the atmosphere is transparent to wavelengths as short as 200 nm, where oxygen absorption limits transmission. All these spectral components can be used for detecting passive missiles, but the false alarm rates vary with the background emissions in each spectral range.
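The band edges above can be captured in a small helper. The following is a minimal sketch of ours, not part of the paper's pipeline, using exactly the limits quoted in the text:

```python
# Illustrative sketch (not from the study): classify a wavelength into the
# UV sub-bands defined above. Band edges follow the text: UVC 100-280 nm,
# UVB 280-315 nm, UVA 315-400 nm, with SBUV taken as 240-280 nm.

def uv_band(wavelength_nm: float) -> str:
    """Return the UV sub-band for a wavelength in nanometres."""
    if not 100 <= wavelength_nm < 400:
        return "outside UV"
    if 240 <= wavelength_nm < 280:
        return "UVC (SBUV)"  # solar blind: ozone absorbs solar radiation here
    if wavelength_nm < 280:
        return "UVC"
    if wavelength_nm < 315:
        return "UVB"
    return "UVA"

print(uv_band(260))  # UVC (SBUV) -> low solar background, good for detection
print(uv_band(350))  # UVA -> strong solar background
```

A plume emission at 260 nm falls inside the solar blind window, while 350 nm competes with scattered sunlight, which is the argument the paper makes for SBUV sensing.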

Figure 1.2: Constituents of Guided Missile

The primary reason passive missile attacks are so effective is their stealth. The hot plume of a missile has distinct spectral signatures, emitting in the ultraviolet (up to 400 nm), visible (450-750 nm), near-infrared (NIR) (750-1100 nm), and mid-infrared (2-16 μm) spectra. The infrared region lies between the microwave and visible portions of the electromagnetic spectrum and is further divided into near-IR and short-wave IR (0.7 to 2.5 μm), mid-wave IR (3 to 5 μm), and long-wave IR (8 to 12 μm) wavelengths, as shown in Figure 1.3. NIR is particularly useful for missile plume detection due to its high energy content.

Figure 1.3: Constituents of UV spectral band UVC, UVB and UVA

For detecting heat-seeking missiles, IR radiation from all sources except rockets is considered background clutter, which can reduce the effectiveness of missile detection systems. Fortunately, most IR energy emitted by the Earth's surface falls within the 10 μm window, while the Sun's radiation peaks in the visible band and reflects off the Earth's surface, dominating the region above 3 μm and leaving a window around 4 μm for missile detection. The sky also reflects and scatters a certain amount of IR, but its intensity is lower than that of the Earth's surface.

The relative amounts of IR energy emitted at various wavelengths depend on temperature. The prominent emission wavelengths for CO, CO2, and H2O and their intensities are observable in missile plumes.

II. LITERATURE REVIEW

A. Missile Detection Technologies:
Infrared (IR) guided missiles pose a significant threat to aircraft. Various technologies are used to detect missile signatures, including Pulse-Doppler radar, active detection, and passive detection using infrared (IR) and ultraviolet (UV) sensors. An advanced form of UV detection utilizes SBUV sensors. Each technology has its own pros and cons.

B. Active Detection: Pulse-Doppler Radar:
Similar to a Radar Warning Receiver (RWR), which identifies hostile radar and guided missile threats, a radar-based detection system can be installed on aircraft. This system identifies passive missiles and Man-Portable Air-Defense Systems (MANPADS) by detecting their cross-sectional area using the pulse-Doppler technique.

C. Passive Detection:
Passive missile warning systems detect the heat emitted by the missile's flame rather than emitting their own signals and detecting reflections. The challenge is to distinguish missile heat signatures from background noise. Passive detection mainly uses two spectral regions: the infrared (IR) band and the ultraviolet (UV) band.

D. Infrared Sensing:
All objects above absolute zero (0 K) emit IR energy. Hotter objects emit more energy, and the peak wavelength decreases as the temperature increases. IR energy behaves similarly to visible light, traveling in straight lines and being reflected or absorbed upon hitting surfaces. IR-based missile warning systems must filter out numerous detected objects before issuing a warning.

E. Ultraviolet Sensing:
UV detection technology emerged in the early 1950s. It is a dual-use photoelectric detection technology similar to infrared and laser detection. UV radiation from the sun is absorbed by the atmosphere, creating a relatively uniform UV background. This uniformity allows for easy discrimination of aircraft and other objects against the UV background. The UV spectrum ranges from 100 nm to 400 nm, with specific regions useful for detection.

F. Previous Research and Development:
After an IR-guided missile is launched, it autonomously follows its target until impact or fuel exhaustion. This "fire-and-forget" capability means the missile continuously adjusts its trajectory toward the target. However, the missile's direction change does not happen instantaneously.

The threat of MANPADS has led to the development of unguided missile warning sensors. These sensors use wide field-of-view UV or infrared detectors to sense missile plume emissions.

G. UV Atmospheric Scattering:
Solar radiation is absorbed by ozone in the stratosphere and troposphere, creating a dark mid-UV region in the atmosphere. This allows missile plume UV emissions to be detected against a dark background, providing high contrast. However, atmospheric scattering limits detection range, spreading the plume image and creating a radiance field around the point source.

H. UV Plume Signature Modeling:
Detection models estimate missile emissions, atmospheric propagation, and optical signal detection to analyze UV detection performance. Solid propellant rockets use aluminum particles in fuel, producing UV emissions through chemiluminescence reactions.

I. Passive IR Airborne Threat Warning:
IR sensor systems face challenges in detecting airborne threats due to low signal-to-clutter ratios. Continuous surveillance and efficient signal processing algorithms are required. Modern IR arrays must cover wide areas with high spatial resolution to resolve potential threats. Single-frame, single-band detection algorithms are near their performance limits, and improvements will likely come from multi-frame, multi-band algorithms.

J. Time To Impact Estimation:
Active systems like pulse-Doppler radar can determine the speed and range of threats. Passive systems lack this capability and require additional parameters to estimate distance. Algorithms use object area, intensity, and position change to determine distance and warn of approaching missiles.

K. Detection Of Fast-Moving Objects:
Fast-moving objects (FMOs) appear as blurred streaks in single frames due to their high speed. Detection methods include algorithms for localization, de-blurring, and motion analysis. A combination of stereo vision and motion analysis can robustly detect moving objects, using Kalman filters for continuous tracking.

III. METHODOLOGY

This section describes a synthetic model to simulate Solar Blind UV (SBUV) characteristics of missile plumes, aimed at training an AI detection model. This method is necessary because an SBUV camera is not available, and it is difficult to use actual missiles for training.

A. Collecting Data:
Many online datasets are available for building a CNN image classifier. However, the dataset required for this study is unique and classified, so it is not publicly accessible. Ideally, a Solar Blind UV camera would record UV videos of missiles, but since this camera is unavailable, missile videos from public sources like YouTube were used. These videos are in the visible spectrum and include background clutter, lacking UV plume details, as shown in Figure 3.1. Extensive processing is needed to isolate the plume from the background in these videos.

Figure 3.1: Plume Firing in Visual Spectrum

B. Creating SBUV Scenarios:
To generate the necessary scenarios, specialized 3D rendering software was used to create animations. Once a few UV videos of missiles were simulated, synthetic 2D images resembling the video were generated using GAN models. Aircraft carry multiple missile approach warning system sensors with different perspectives, so camera positions were varied in simulations to capture multiple views.

The dataset consists of approximately 1500 images/frames, split into 60% for training and 40% for validation.

C. Scenario 1 – Missile Moving Away:
If a missile is fired by the aircraft or a friendly jet flying in formation, it will move away from the aircraft. As seen by the Missile Approaching Warning System sensors, the missile plume will appear to decrease in size in subsequent frames, as shown in Figure 3.2.

Figure 3.2: Decreasing plume size (frame 1 showing a large plume size (left) and frame 15 showing a small plume size (right))

D. Scenario 2 – Missile Approaching:
In this case, a missile targets the aircraft from behind. The tail sensor sees the missile plume enlarging as it approaches. Figure 3.3 shows the SBUV image of an approaching missile, where the plume appears smaller initially and larger in later frames, forming a ring due to the missile body blocking the center.

Figure 3.3: Missile Approaching

E. Scenario 3 – Aircraft Flying Over Power Lines:
Figure 3.4 shows a jet flying over high-power transmission lines. These lines produce a corona discharge with a spectral emission band similar to the missile plume (240-280 nm). This discharge, invisible to the naked eye but visible to UV and SBUV sensors, creates background clutter for our application. The relative motion between the aircraft and wires makes the discharge appear as a moving object. Simulating this helps train the AI to recognize and filter out this clutter.

Figure 3.4: Aircraft Flying Over High Power Transmission (High Transmission Wires Emitting Corona)

F. Scenario 4 – Formation Flying:
Figure 3.5 illustrates a scenario where two aircraft are flying in formation, and a missile chases both initially. The camera on one aircraft's tail captures the missile changing its trajectory to follow the other aircraft. This scenario helps train the system to identify when a detected missile is not a threat to the subject aircraft, despite the increasing intensity and size of the plume.

Figure 3.5: Missile Chasing a Parallel Aircraft

G. Scenario 5 – Missile Chasing Another Aircraft:
In this scenario (Figure 3.6), the missile approaching warning system sensor on the subject aircraft detects a missile fired at another aircraft flying in a different direction. This helps train the system to differentiate between approaching threats and non-threats. It also helps filter out the tail plumes of both aircraft as clutter.

Figure 3.6: Missile Chasing Another Aircraft Flying in a Different Direction

These scenarios are crucial for training the detection system to handle various real-time encounters with passive missiles. The simulation approach is adopted due to the lack of SBUV hardware and the difficulty of firing actual missiles, with efforts made to keep the models as realistic as possible.

H. Data Labeling:
Supervised learning models need a "labelled dataset" for their training. In deep learning, data labelling (or data annotation) is the process of identifying and tagging data samples. This is typically done manually, often with the help of software.

For instance, if you want to create a system that can detect airplanes in images, you need to train the deep learning model with a dataset that includes video frames and the coordinates of a bounding box around each airplane, as shown in Figure 3.7. If there are different types of objects to identify, each object needs a class label.

In object detection, the dataset includes the original image and five values for each object: the class label and the coordinates of the bounding box where the object is located. These values can be formatted as (class xmin ymin xmax ymax) or (class xmin ymin width height).

We convert all the scenario videos into frames using the Labelling Master (open-source) software. After converting them into frames, we then draw a bounding box around each target and define classes such as plume and missile approaching.
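The two label layouts differ only in how the second pair of numbers is read. As a minimal illustration (our sketch, with made-up box values, not data from the study):

```python
# Hedged sketch: converting between the two annotation layouts described
# above. The class name and coordinates are illustrative only.

def corners_to_wh(xmin, ymin, xmax, ymax):
    """(xmin, ymin, xmax, ymax) -> (xmin, ymin, width, height)."""
    return xmin, ymin, xmax - xmin, ymax - ymin

def wh_to_corners(xmin, ymin, w, h):
    """(xmin, ymin, width, height) -> (xmin, ymin, xmax, ymax)."""
    return xmin, ymin, xmin + w, ymin + h

# A label line such as "plume 120 80 200 140" in corner format:
cls, *box = "plume 120 80 200 140".split()
xmin, ymin, xmax, ymax = map(int, box)
print(cls, *corners_to_wh(xmin, ymin, xmax, ymax))  # plume 120 80 80 60
```

Whichever layout the labelling tool emits, converting between the two is lossless, so the choice mainly depends on what the training code expects.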


Figure 3.7: Labelled Dataset Required by Supervised Learning

I. Labelling Tools:
1. Labelling Master (Open Source)
2. DarkLabel (For Windows Only)
3. Super Annotate (Paid)

J. Frames / Images (Dataset):

Figure 3.8: Multiple Training Frames

K. Yolov8 Architecture:
YOLOv8 is a state-of-the-art object detection model developed by Ultralytics. It builds upon the success of its predecessors (YOLOv3, YOLOv4, YOLOv5) with several improvements in accuracy, speed, and flexibility. Here is a detailed explanation of the architecture and key features of YOLOv8:

1. Backbone:
   o Feature Extraction: YOLOv8 utilizes an advanced convolutional neural network (CNN) as its backbone for feature extraction. This backbone is designed to efficiently capture features at multiple scales, which is crucial for detecting objects of varying sizes.
   o Focus on Speed and Accuracy: The backbone has been optimized for both speed and accuracy, leveraging techniques like CSP (Cross Stage Partial connections) and PAN (Path Aggregation Network).
2. Neck:
   o Feature Pyramid Network (FPN): The neck of YOLOv8 incorporates an FPN to enhance feature maps from different layers of the backbone. This helps in detecting small objects and improving localization accuracy.
   o PAN: The Path Aggregation Network further refines the feature maps by aggregating features from different layers, improving the robustness of the model.
3. Head:
   o Detection Head: The detection head of YOLOv8 uses anchor-free mechanisms, which simplifies the architecture and improves detection accuracy. This head predicts bounding boxes, objectness scores, and class probabilities directly.
   o Anchor-Free Detection: YOLOv8's head does not rely on predefined anchor boxes, which helps in reducing the number of hyperparameters and simplifies the training process.
4. Decoupled Head:
   o Classification and Regression: YOLOv8 features a decoupled head where the tasks of classification and regression are handled separately. This decoupling allows for more specialized learning, enhancing the model's performance.
5. Enhanced Training Techniques:
   o Data Augmentation: YOLOv8 employs advanced data augmentation techniques like mosaic augmentation, which combines four training images into one, improving the model's robustness to various image conditions.
   o Auto-Anchoring: YOLOv8 uses auto-anchoring to automatically generate optimal anchor boxes based on the training data, further improving detection performance.

L. Dataset Formation:
Data is labeled in the modified image-labelling software with three classes [plume, Missile Approaching, Corona]. After that, two folders are generated with the names [train, val]. Make a folder named dataset (or any other suitable name). In the dataset folder, make two folders named [images, labels]. In the images folder, make two folders named [train, val]. Copy the images generated in step 1 into the newly created images/train folder; do the same for the val folder. In the labels folder, make two folders named [train, val]. Copy the label text files of the images generated in step 1 into the newly created labels/train folder; do the same for the val folder. After this, the dataset formation is complete.

Write the following command to train the model:

python train.py --img 1280 --batch 10 --epochs 100 --data Dataset.yaml --weights yolov5s.pt --cache

a) train.py: Python file containing the training code.
b) img: image size.
c) batch: batch size, i.e. the number of images that undergo training at one time. This depends on your CPU or GPU memory, so you can change it accordingly.
d) epochs: the number of complete passes over the training dataset.
e) data: the path of the YAML file you created.
f) weights: the yolov5s.pt weights.

The following command is used for detection:

python detect.py --source data/images/image name --weights runs\train\exp\weights\best.pt --conf 0.25

a) detect.py: Python file containing the detection code.
b) source: the image/video for detection, saved at the data/images location. Give 0 for webcam.
c) weights: the location of the folder where the weights file is saved.
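The folder layout and the 60/40 split described above can also be scripted rather than built by hand. The following is our sketch, not the paper's tooling; the frame file names are hypothetical:

```python
# Sketch of the dataset tree described above (images/ and labels/, each with
# train/ and val/ subfolders) plus the 60/40 split used for the ~1500 frames.
import os
import random

def make_dataset_tree(root="dataset"):
    """Create dataset/{images,labels}/{train,val}."""
    for sub in ("images", "labels"):
        for split in ("train", "val"):
            os.makedirs(os.path.join(root, sub, split), exist_ok=True)

def split_frames(frames, train_frac=0.6, seed=0):
    """Shuffle frame names and split into train/val lists."""
    frames = list(frames)
    random.Random(seed).shuffle(frames)  # fixed seed for a reproducible split
    cut = int(len(frames) * train_frac)
    return frames[:cut], frames[cut:]

make_dataset_tree("dataset")
train, val = split_frames([f"frame_{i:04d}.jpg" for i in range(1500)])
print(len(train), len(val))  # -> 900 600
```

Each image in images/train then needs its matching label text file copied into labels/train (and likewise for val), which is the pairing convention the training command above relies on.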

IV. RESULTS

Here are the results of our algorithm, on which we tested different scenarios (plume, corona, approaching missile, etc.). The approach velocity of a threat can be estimated using certain calculations, though this does not give the exact speed. The estimate is useful for distinguishing between a missile and another aircraft, since the two typically have different approach speeds; missiles tend to approach much faster than other aircraft. This differentiation helps in classifying and identifying the type of threat more accurately.

Figure 4.1: Detection Of Plume

Figure 4.3: Detection Of Decreasing Plume

V. CONCLUSION

Passive missile warning systems are vital components of the Electronic Support (ES) systems in an Electronic Warfare (EW) suite on aircraft. These systems alert pilots and guide Electronic Countermeasure (ECM) systems, such as Directed Infrared Countermeasures (DIRCM), to defend against incoming guided missiles.

Using Solar Blind UV (SBUV) sensors for passive missile detection offers significant advantages over traditional detection methods. UV detection is particularly effective against surface-to-air missiles and man-portable air-defense systems (MANPADS), which are common threats to aircraft.

This study combined deep learning, machine learning, and conventional 2D tracking algorithms for moving object detection to achieve its goals. The detection and classification methods developed in this study proved to be efficient and reliable. In the future, the study could be expanded to estimate other crucial parameters like the approach velocity and time to impact of the threat, making the developed system a comprehensive part of any modern aircraft's EW suite.

A. Future Work

Future work could involve tracking multiple targets simultaneously and prioritizing threats in complex combat scenarios. Enhancements could include predicting the missile's position and exact velocity in 3D using the Extended Kalman Filter (EKF). This would require detailed missile parameters obtained through real-time physics-based modeling, including combustion models, particle scattering models, and environmental modeling of the terrain.

Calculating the time to impact (TTI) and range will also be possible once the position and velocity are determined.
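As a concrete starting point for this tracking element, the predict/update cycle can be sketched with a one-dimensional constant-velocity Kalman filter. This is a deliberate simplification of the 3D EKF proposed above, and every noise value in it is illustrative:

```python
# Simplified sketch: a 1-D constant-velocity Kalman filter tracking one
# image coordinate of a detected plume. The paper proposes an Extended
# Kalman Filter in 3D; this linear toy version only illustrates the
# predict/update cycle. Noise values q (process) and r (measurement)
# are made up.

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Filter a sequence of position measurements; return (position, velocity)."""
    x, v = measurements[0], 0.0           # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    for z in measurements[1:]:
        # Predict with the constant-velocity model x' = x + v*dt.
        x += v * dt
        P = [[P[0][0] + dt*(P[1][0] + P[0][1]) + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[1][0] + dt*P[1][1],
              P[1][1] + q]]
        # Update with the new position measurement z.
        S = P[0][0] + r                   # innovation covariance
        k0, k1 = P[0][0] / S, P[1][0] / S # Kalman gain
        y = z - x                         # innovation
        x += k0 * y
        v += k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v

# A plume drifting ~2 px/frame should yield a velocity estimate near 2;
# the sign of the estimate separates approaching from receding motion.
pos, vel = kalman_track([10 + 2*i for i in range(30)])
```

In the proposed system, the same recursion would run on plume centroid (and possibly size) measurements, with the state, models, and noise terms replaced by the physics-based quantities the authors describe.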
Figure 4.2: Receding Missile

Another challenge is the reduced detection range of missile warning sensors against advanced missiles with low infrared (IR) and UV emissions. Research into alternative detection methods could address this issue. Advancements in SBUV sensor technology hold promise for developing more accurate missile detection systems. However, further research is necessary to enhance the reliability and reproducibility of these sensors.

…to divide the workload among different devices). By adjusting these parameters, we can achieve the desired balance between latency and accuracy.

The main contribution of our study is the identification of these optimal parameter combinations, which can be tailored to meet the specific needs of users. By distributing the workload across multiple edge devices and adjusting compression values, we demonstrated that our system can perform effectively in real-time scenarios without relying on central servers or cloud resources.
Caballero, “A survey of video datasets for human
