
AI Mapping for Rapid Disaster Assessment

Dr. L. Meenachi¹, Vijayaragavan K T S², Deepak A³, Roshan Karthick T⁴
Department of Information Technology,
Dr. Mahalingam College of Engineering and Technology, Pollachi, India.
¹[email protected]  ²[email protected]  ³[email protected]  ⁴[email protected]

Abstract: Disasters pose significant threats to human life, infrastructure, and socioeconomic stability. In this research we present a novel strategy to improve disaster management using deep learning-based image analysis techniques. Our methodology comprises three major components: determining the disaster occurrence, identifying the type of disaster, and assessing the building damage degree. Our approach, which makes use of Convolutional Neural Networks (CNNs), can reliably identify catastrophic events from images, allowing for quick mitigation and response actions. MobileNet V2's ability to precisely classify different sorts of disasters further supports customised response plans and resource allocation. We also use the U-Net architecture for building damage level assessment in order to prioritise rescue efforts and infrastructure restoration. Through the integration of these models into a cohesive system, we provide a thorough approach to disaster management, equipping stakeholders with useful information for effective coordination of responses. By highlighting the effectiveness of deep learning in tackling difficult social issues and boosting resilience in the event of calamities, our research advances plans for disaster planning and response.

Keywords – Disaster management, Deep learning, Convolutional Neural Networks (CNNs), MobileNet V2, U-Net, Image analysis, Disaster occurrence determination, Disaster type identification, Building damage assessment, Rapid response.

1. Introduction
Disasters, natural or man-made, pose serious problems for communities all over the world, affecting infrastructure, businesses, and lives. For the purpose of minimizing casualties and property damage, as well as expediting the process of recovery and rehabilitation, prompt and efficient disaster management is essential. Recent developments in computer vision and deep learning have presented viable ways to improve disaster management procedures by using automated image analysis methods. The objective of this project is to create a complete disaster management system by utilising deep learning techniques. In particular, we concentrate on three crucial facets of disaster response: determining the disaster occurrence, identifying the disaster type, and assessing the building damage degree. By utilising Convolutional Neural Networks (CNNs), MobileNet V2, and U-Net, we hope to offer practical insights to stakeholders engaged in disaster response and recovery operations.

This project's primary goal is to create a CNN-based model that can reliably identify disasters using satellite and aerial data. By analyzing visual cues indicative of disaster events, such as smoke, debris, or water, this methodology facilitates quick and accurate identification of disaster-affected areas, enabling prompt response operations. We then use MobileNet V2, a compact and effective image classification model, to classify the kind of disaster shown in the images. Since different types of disasters require different mitigation and relief methods, this information is essential for properly allocating resources and customizing response tactics. Finally, we employ the U-Net architecture to evaluate the degree of damage incurred by structures in areas impacted by disasters. By separating building structures from aerial photos and evaluating structural soundness, this step helps prioritise rescue operations and focus infrastructure repair efforts. Our ultimate goal is to lessen the effects of catastrophes on people and livelihoods by optimising resource allocation and improving situational awareness through the integration of these models into a single disaster management system.

In summary, a Convolutional Neural Network (CNN) model is developed for disaster occurrence determination, analysing aerial and satellite imagery to detect symptoms of disasters like fires, floods, or earthquakes. To help with customized response plans and resource allocation, a MobileNet V2 model is then used to classify the kind of disaster shown in the photos. Finally, a U-Net architecture is applied to precisely evaluate the degree of building damage in places affected by disasters, helping prioritize rescue attempts and guide infrastructure repair work by separating and analysing building structures from imagery. By integrating these components, the proposed system seeks to enhance situational awareness, optimize resource allocation, and ultimately improve the efficacy of disaster response and recovery activities.

2. Literature Survey
Recent years have seen promising results in disaster management applications from merging deep learning methods with remote sensing data. Following the earthquake in Turkey, Xia et al. [1] presented a deep learning application for assessing building damage using ultra-high-resolution remote sensing data. Their research demonstrates how deep learning models are useful for precisely recognising and evaluating building damage, which is important for setting priorities for rescue missions and managing recovery activities. In a similar vein, Xu et al. [2] employed convolutional neural networks (CNNs) to detect building deterioration in satellite imagery. Their research shows how CNNs can be used for automatic damage identification, which can speed up reaction times and decision-making in emergency situations. Computer vision and satellite photography have been investigated by Kim et al. [3] for disaster assessment, with a focus on water-related building damage identification. Their study emphasises how crucial it is to use cutting-edge technologies to improve disaster response capacities, particularly in situations where conventional evaluation techniques might not be as effective.

Furthermore, the problem of building damage identification from satellite photos on highly unbalanced datasets was tackled by Wang et al. [4]. Their research emphasises how important it is to have robust procedures that can manage the wide range of complex scenarios that arise in post-disaster settings. A combination of machine learning and deep learning methods based on images was presented by Vinod et al. (2022) for the prediction of natural disasters. A smart flood disaster prediction system that combines neural networks and the Internet of Things was presented by Bande and Shete [5]. Furthermore, a systematic study of prediction techniques in emergency management was carried out by Huang et al. [6]. These studies emphasise how crucial it is to use cutting-edge technology like IoT, machine learning, and deep learning to improve the precision and effectiveness of disaster management and prediction systems.

3. Proposed System
The proposed system integrates cutting-edge deep learning algorithms to enhance disaster management capabilities. Its three main parts are determining the disaster's occurrence, determining the sort of disaster, and assessing the extent of building damage.

Figure 1: Framework of the Swiftsat.

3.1 Dataset
The dataset used in this research effort is the xBD dataset, a comprehensive collection of satellite photos capturing 19 different natural disasters. The xBD collection includes 22,068 photos in total, each of which shows a different disaster scenario. With a combined size of 45,361.79 square kilometres, these photos cover a large portion of the areas devastated by disasters. Moreover, the xBD dataset has thorough annotations for the buildings that are visible in the images: across the dataset, 850,736 building polygons have been annotated, providing important ground truth data for training and assessing models aimed at disaster management tasks like assessing the amount of building damage. For deep learning and disaster management academics and practitioners, this large and varied dataset is an invaluable resource. By utilising the xBD dataset, we can create and validate reliable models for determining the occurrence of disasters, determining the type of catastrophe, and assessing building damage. This helps to enable efficient disaster response plans.

Disaster Level   Structure Description
No Damage        Undisturbed. No sign of water, structural or shingle damage, or burn marks.
Minor Damage     Building partially burnt, water surrounding structure, volcanic flow nearby, roof elements missing, or visible cracks.
Major Damage     Partial wall or roof collapse, encroaching volcanic flow, or surrounded by water/mud.
Destroyed        Scorched, completely collapsed, partially/completely covered with water/mud, or otherwise no longer present.

Table 1

3.2 Disaster Occurrence Determination
Disaster occurrence determination plays a pivotal role in effective disaster management, facilitating prompt response and mitigation efforts. This study adopts Convolutional Neural Network (CNN) models to analyze aerial and satellite imagery, aiming to detect potential signs of disasters. The CNN model employed for this purpose comprises multiple convolutional layers followed by pooling layers, strategically designed to extract pertinent features from input images. These features undergo processing through fully connected layers to generate predictions regarding the presence or absence of disasters.

Figure 2
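The occurrence-determination pipeline described above (convolution, pooling, then fully connected prediction) can be illustrated with a minimal numpy sketch. The layer sizes, the single 3×3 kernel, and the lone sigmoid output are illustrative assumptions for a toy single-channel tile, not the actual configuration trained in this study.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2d(x, s=2):
    """Non-overlapping s x s max pooling."""
    H, W = x.shape
    x = x[:H - H % s, :W - W % s]
    return x.reshape(H // s, s, W // s, s).max(axis=(1, 3))

def predict_occurrence(img, kernel, w, b):
    """conv -> ReLU -> maxpool -> flatten -> dense -> sigmoid."""
    feat = np.maximum(conv2d(img, kernel), 0.0)   # convolution + ReLU
    feat = maxpool2d(feat)                        # spatial downsampling
    z = feat.ravel() @ w + b                      # fully connected layer
    return 1.0 / (1.0 + np.exp(-z))               # P(disaster present)

rng = np.random.default_rng(0)
img = rng.random((8, 8))                  # toy "satellite tile"
kernel = rng.standard_normal((3, 3))
w = rng.standard_normal(9)                # conv gives 6x6, pool gives 3x3 = 9 features
b = 0.0
p = predict_occurrence(img, kernel, w, b)
print(round(float(p), 3))                 # a probability in (0, 1)
```

A real model stacks several such conv/pool blocks and learns the kernel and dense weights from labeled imagery rather than drawing them at random.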

The training dataset for the CNN model encompasses a diverse collection of annotated images illustrating various types of disasters, ranging from fires and floods to earthquakes and storms. Supervised learning techniques are utilized for model training, wherein input images are associated with corresponding labels signifying the presence or absence of disasters. To bolster the model's robustness and generalization capability, data augmentation methods like rotation, scaling, and flipping are integrated into the training process. Furthermore, transfer learning is leveraged by initializing the CNN model with weights pre-trained on extensive image datasets, allowing the model to harness knowledge acquired from generic image features.

Gradient-based optimization algorithms such as Stochastic Gradient Descent (SGD) or the Adam optimizer are employed for model training, with careful adjustment of learning rate scheduling to ensure stable convergence. Performance evaluation during training involves validation data, with hyperparameters fine-tuned to optimize key metrics including accuracy, precision, recall, and F1 score. Upon completion of training, the CNN model is equipped to efficiently analyze new aerial and satellite imagery in real time, accurately pinpointing regions potentially affected by disasters. This capability empowers emergency responders and disaster management authorities to swiftly prioritize and allocate resources for effective disaster response and mitigation efforts.
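The augmentation step can be sketched with plain numpy. The transforms below (90° rotations and horizontal/vertical flips) correspond to the rotation and flipping mentioned above, and a nearest-neighbour resample stands in for scaling; the probability and scale values are arbitrary illustrative choices, not the parameters used in the study.

```python
import numpy as np

def augment(img, rng):
    """Return a randomly rotated/flipped/rescaled copy of a 2-D image."""
    img = np.rot90(img, k=rng.integers(0, 4))   # rotate by 0/90/180/270 degrees
    if rng.random() < 0.5:
        img = np.fliplr(img)                    # horizontal flip
    if rng.random() < 0.5:
        img = np.flipud(img)                    # vertical flip
    scale = rng.uniform(0.8, 1.2)               # mild zoom in/out
    h, w = img.shape
    rows = np.clip((np.arange(h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(w) / scale).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]              # nearest-neighbour resample

rng = np.random.default_rng(42)
tile = np.arange(16, dtype=float).reshape(4, 4)
batch = [augment(tile, rng) for _ in range(8)]  # 8 augmented variants of one tile
print([b.shape for b in batch])                 # all variants keep the 4x4 shape
```

Because these transforms preserve whether a disaster is visible, each augmented copy reuses the original image's presence/absence label.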
3.3 Disaster Type Identification
Disaster type identification plays a pivotal role in effective disaster management, necessitating tailored response strategies and resource allocations. In our proposed disaster management system, we incorporate MobileNet V2, a cutting-edge image classification model, to address this critical challenge. MobileNet V2, renowned for its lightweight architecture optimized for efficient image classification tasks, strikes a balance between model complexity and computational efficiency, rendering it suitable for deployment on resource-constrained platforms like mobile phones or edge devices.

The MobileNet V2 architecture is characterized by depthwise separable convolutions, a design choice that markedly reduces the number of parameters compared to traditional convolutional layers, thereby achieving high accuracy while minimizing computational overhead. During the training phase, the MobileNet V2 model is trained on labeled image datasets encompassing various disaster types, including wildfires, floods, earthquakes, and hurricanes. Through this process, the model learns to extract pertinent features from input images and categorize them into predefined disaster categories.

To enhance generalization and robustness, we employ data augmentation techniques such as rotation, scaling, and flipping. Moreover, transfer learning is employed by fine-tuning the pre-trained MobileNet V2 model on a localized dataset of disaster images, tailored to the specific region or context. Once trained, the MobileNet V2 model proficiently classifies unseen images depicting different disaster types. This classification capability empowers emergency responders and policymakers to prioritize response efforts and allocate resources effectively, contingent upon the nature and severity of the disaster. In summary, the integration of MobileNet V2 augments the situational awareness of our disaster management system, facilitating informed decision-making and proactive response measures to mitigate the adverse impacts of disasters on affected communities.
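The parameter savings from depthwise separable convolutions can be made concrete with a small calculation. The layer shape below (3×3 kernel, 32 input channels, 64 output channels) is an arbitrary example chosen for illustration, not a specific layer of MobileNet V2.

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k filter spanning all input channels, per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise stage: one k x k filter per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)        # 3*3*32*64 = 18432 weights
sep = depthwise_separable_params(3, 32, 64)  # 3*3*32 + 32*64 = 2336 weights
print(std, sep, round(std / sep, 1))         # roughly 7.9x fewer parameters
```

The ratio grows with the number of output channels, which is why the savings compound across a deep network.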

Figure 3
3.4 Building Damage Level Assessment
Building damage level assessment is a critical aspect of disaster management, enabling responders to prioritize rescue efforts and allocate resources effectively. In this project, we employ the U-Net architecture, a convolutional neural network commonly used for semantic segmentation tasks, to assess the damage levels of buildings within disaster-affected areas. The U-Net architecture consists of an encoder-decoder network with skip connections, enabling precise segmentation of objects in images while preserving spatial information. For building damage level assessment, the U-Net model is trained on a dataset comprising aerial or satellite images of disaster-affected areas, along with corresponding ground truth labels indicating the extent of building damage.
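Training and evaluating such a segmentation model relies on mask-overlap scores. A minimal numpy sketch of the Dice coefficient and Intersection over Union (IoU) for binary masks is given below; the 3×3 masks are toy examples, and the multi-class damage setting applies the same formulas per damage category.

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, truth):
    """Intersection over Union: |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]], dtype=bool)   # ground-truth building pixels
pred = np.array([[1, 1, 1],
                 [1, 0, 0],
                 [0, 0, 0]], dtype=bool)    # model's predicted mask
print(round(dice(pred, truth), 3), round(iou(pred, truth), 3))  # 0.75 0.6
```

Dice is commonly turned into a differentiable loss (1 minus a soft Dice over predicted probabilities), which is the "dice coefficient" loss term mentioned later for U-Net training.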
The encoder part of the U-Net model extracts features from input images, gradually reducing spatial resolution through a series of convolutional and pooling layers. These extracted features are then propagated to the decoder part of the network through skip connections, allowing for precise localization of damage within buildings. During training, the model learns to map input images to corresponding damage level labels using techniques such as stochastic gradient descent with backpropagation. The loss function used for training typically includes terms such as categorical cross-entropy or the Dice coefficient, which measure the similarity between predicted and ground truth segmentation masks. Once trained, the U-Net model can accurately segment building structures and classify the severity of damage within them, providing valuable insights for disaster response and recovery efforts. This enables responders to prioritize areas with significant structural damage for immediate attention, facilitating efficient allocation of resources and aiding in the timely restoration of infrastructure.

4. Result and Discussion
The evaluation of the U-Net model for building damage level assessment was conducted using an independent test dataset comprised of aerial and satellite imagery depicting disaster-affected regions. The findings demonstrate the efficacy of the model in accurately segmenting building structures and classifying the extent of damage. Quantitative metrics such as Intersection over Union (IoU) and the Dice coefficient were employed to measure the segmentation accuracy of the U-Net model. Additionally, qualitative analysis of the model's predictions showcased precise localization of damage, distinguishing between undamaged, partially damaged, and severely damaged regions within buildings. Visual inspection of the segmentation masks corroborated these findings, with the model's predictions closely aligning with ground truth annotations.

Categories      Precision   Recall   F1 Score
No Damage       0.89        0.87     0.88
Minor Damage    0.64        0.51     0.57
Major Damage    0.71        0.43     0.53
Destroyed       0.59        0.52     0.55
Average         0.71        0.58     0.63

Table 2

The evaluation measures show the model's effectiveness in determining the degrees of building damage across several categories. The precision score of 0.89 for "No Damage" indicates a strong ability to categorise intact buildings. However, lower precision, recall, and F1 scores indicate that the model has difficulty detecting the minor, major, and destroyed damage categories. This indicates that more work is needed to improve the model's capacity to distinguish between different degrees of building damage, for example by expanding the dataset or adjusting the model's parameters; if successful, this would increase the model's usefulness in disaster response and recovery operations. Nonetheless, the success of the U-Net model underscores the potential of deep learning techniques in building damage assessment for disaster management applications. By leveraging the U-Net architecture's ability to preserve spatial information and capture fine-grained details, the model demonstrates strong accuracy in quantifying the severity of building damage. The high level of agreement between the model's predictions and ground truth annotations highlights its reliability and utility in informing decision-making processes during disaster response efforts. By providing timely and accurate insights into the spatial distribution and severity of building damage, the U-Net model facilitates effective resource allocation and response coordination, ultimately aiding in the mitigation of disaster impacts and the restoration of affected communities.

5. Conclusion
In summary, this study highlights the effectiveness of utilizing deep learning methodologies, particularly CNNs and architectures such as U-Net, to bolster various facets of disaster management. By developing and implementing models focused on disaster occurrence identification, disaster type categorization, and assessment of building damage levels, we have underscored the potential of these technological solutions in elevating situational awareness and response efficiency. Through the accurate identification of disasters, classification of their types, and evaluation of building damage severity, our proposed system empowers responders to make well-informed decisions, optimize resource allocation, and streamline recovery endeavors. The incorporation of sophisticated deep learning algorithms into disaster management frameworks presents a promising avenue for strengthening resilience and mitigating the adverse impacts of calamities on both communities and infrastructure. Further exploration and collaborative efforts are essential for refining and scaling these methodologies for practical deployment, thus advancing overall preparedness and response capabilities in the face of impending disasters.
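The "Average" row of Table 2 can be reproduced directly from the per-class scores as an unweighted macro-average; a quick check in Python:

```python
# Per-class (precision, recall, F1) scores taken from Table 2.
scores = {
    "No Damage":    (0.89, 0.87, 0.88),
    "Minor Damage": (0.64, 0.51, 0.57),
    "Major Damage": (0.71, 0.43, 0.53),
    "Destroyed":    (0.59, 0.52, 0.55),
}

# Macro-average: unweighted mean over the four damage categories.
n = len(scores)
avg = tuple(round(sum(s[i] for s in scores.values()) / n, 2) for i in range(3))
print(avg)  # (0.71, 0.58, 0.63), matching the "Average" row of Table 2
```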
References

[1] Xia, Haobin, Jianjun Wu, Jiaqi Yao, Hong Zhu, Adu Gong, Jianhua Yang, Liuru Hu, and Fan Mo. "A Deep Learning Application for Building Damage Assessment Using Ultra-High-Resolution Remote Sensing Imagery in Turkey Earthquake." International Journal of Disaster Risk Science 14, no. 6 (2023): 947-962.

[2] Xu, Joseph Z., Wenhan Lu, Zebo Li, Pranav Khaitan, and Valeriya Zaytseva. "Building damage detection in satellite imagery using convolutional neural networks." arXiv preprint arXiv:1910.06444 (2019).

[3] Kim, Danu, Jeongkyung Won, Eunji Lee, Kyung Ryul Park, Jihee Kim, Sangyoon Park, Hyunjoo Yang, Sungwon Park, Donghyun Ahn, and Meeyoung Cha. "Disaster assessment using computer vision and satellite imagery: Applications in water-related building damage detection." (2023).

[4] Wang, Ying, Alvin Wei Ze Chew, and Limao Zhang. "Building damage detection from satellite images after natural disasters on extremely imbalanced datasets." Automation in Construction 140 (2022): 104328.

[5] Bande, Swapnil, and Virendra V. Shete. "Smart flood disaster prediction system using IoT & neural networks." In 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon), pp. 189-194. IEEE, 2017.

[6] Huang, Di, Shuaian Wang, and Zhiyuan Liu. "A systematic review of prediction methods for emergency management." International Journal of Disaster Risk Reduction 62 (2021): 102412.

[7] Gupta, Ritwik, Bryce Goodman, Nirav Patel, Ricky Hosfelt, Sandra Sajeev, Eric Heim, Jigar Doshi, Keane Lucas, Howie Choset, and Matthew Gaston. "Creating xBD: A dataset for assessing building damage from satellite imagery." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 10-17. 2019.

[8] Gupta, Ritwik, Richard Hosfelt, Sandra Sajeev, Nirav Patel, Bryce Goodman, Jigar Doshi, Eric Heim, Howie Choset, and Matthew Gaston. "xBD: A dataset for assessing building damage from satellite imagery." arXiv preprint arXiv:1911.09296 (2019).

[9] Bai, Yanbing, Junjie Hu, Jinhua Su, Xing Liu, Haoyu Liu, Xianwen He, Shengwang Meng, Erick Mas, and Shunichi Koshimura. "Pyramid pooling module-based semi-siamese network: A benchmark model for assessing building damage from xBD satellite imagery datasets." Remote Sensing 12, no. 24 (2020): 4055.

[10] Wajid, Mohd Anas, Aasim Zafar, Hugo Terashima-Marín, and Mohammad Saif Wajid. "Neutrosophic-CNN-based image and text fusion for multimodal classification." Journal of Intelligent & Fuzzy Systems 45, no. 1 (2023): 1039-1055.

[11] Song, Huaxiang, and Yong Zhou. "Simple is best: A single-CNN method for classifying remote sensing images." Networks & Heterogeneous Media 18, no. 4 (2023).

[12] Sambandam, Shreeram Gopal, Raja Purushothaman, Rahmath Ulla Baig, Syed Javed, Vinh Truong Hoang, and Kiet Tran-Trung. "Intelligent surface defect detection for submersible pump impeller using MobileNet V2 architecture." The International Journal of Advanced Manufacturing Technology 124, no. 10 (2023): 3519-3532.

[13] Li, Yanyu, Ju Hu, Yang Wen, Georgios Evangelidis, Kamyar Salahi, Yanzhi Wang, Sergey Tulyakov, and Jian Ren. "Rethinking vision transformers for mobilenet size and speed." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16889-16900. 2023.

[14] Jin, Ge, Yanghe Liu, Peiliang Qin, Rongjing Hong, Tingting Xu, and Guoyu Lu. "An End-to-End Steel Surface Classification Approach Based on EDCGAN and MobileNet V2." Sensors 23, no. 4 (2023): 1953.

[15] Sun, Haixia, Shujuan Zhang, Rui Ren, and Liyang Su. "Maturity classification of “Hupingzao” jujubes with an imbalanced dataset based on improved MobileNet V2." Agriculture 12, no. 9 (2022): 1305.

[16] Sutaji, Deni, and Oktay Yıldız. "LEMOXINET: Lite ensemble MobileNetV2 and Xception models to predict plant disease." Ecological Informatics 70 (2022): 101698.

[17] Williams, Christopher, Fabian Falck, George Deligiannidis, Chris C. Holmes, Arnaud Doucet, and Saifuddin Syed. "A Unified Framework for U-Net Design and Analysis." Advances in Neural Information Processing Systems 36 (2024).

[18] Anand, Vatsala, Sheifali Gupta, Deepika Koundal, and Karamjeet Singh. "Fusion of U-Net and CNN model for segmentation and classification of skin lesion from dermoscopy images." Expert Systems with Applications 213 (2023): 119230.

[19] Singla, Danush, Furkan Cimen, and Chandrakala Aluganti Narasimhulu. "Novel artificial intelligent transformer U-NET for better identification and management of prostate cancer." Molecular and Cellular Biochemistry 478, no. 7 (2023): 1439-1445.

[20] Wang, Xiuhua, Guangcai Feng, Lijia He, Qi An, Zhiqiang Xiong, Hao Lu, Wenxin Wang et al. "Evaluating urban building damage of 2023 Kahramanmaras, Turkey earthquake sequence using SAR change detection." Sensors 23, no. 14 (2023): 6342.

[21] Hong, Zhonghua, Hongyang Zhang, Xiaohua Tong, Shijie Liu, Ruyan Zhou, Haiyan Pan, Yun Zhang, Yanling Han, Jing Wang, and Shuhu Yang. "Rapid fine-grained Damage Assessment of Buildings on a Large Scale: A Case Study of the February 2023 Earthquake in Turkey." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2024).

[22] Irwansyah, Edy, Hansen Young, and Alexander AS Gunawan. "Multi Disaster Building Damage Assessment with Deep Learning using Satellite Imagery Data." International Journal of Intelligent Systems and Applications in Engineering 11, no. 1 (2023): 122-131.
