
VELLORE INSTITUTE OF TECHNOLOGY, VELLORE

CSE3999 – Technical Answers To Real-World Problems
FALL SEMESTER 2021-22

School of Computer Science and Engineering, VIT University

J-Component

Disaster Prioritization

Submitted By
Akshat Singh, 18BCE0936, [email protected] (official VIT mail)
Rishi Kakkar, 18BCE0875, [email protected] (official VIT mail)
Rahul Pandey, 18BCE0729, [email protected] (official VIT mail)
Nitesh Jha, 18BCE2496, [email protected] (official VIT mail)
Shubham Shah, 18BCE2484, [email protected] (official VIT mail)
ACKNOWLEDGEMENTS
We would like to express our special thanks to our course faculty, Dr. Senthil Kumar K,
for his guidance, support, and all the facilities required to complete our project.
Through this project we gained valuable knowledge and experience that will help us in
the future. We also place on record our sense of gratitude to everyone who, directly or
indirectly, lent a helping hand in this project.

Signature of Student(s)

Rishi Kakkar    Shubham Shah

Akshat Singh Rahul Pandey

Nitesh Jha

Date: 26-11-2021

CERTIFICATE
This is to certify that Akshat Singh (18BCE0936), Rishi Kakkar (18BCE0875), Rahul
Pandey (18BCE0729), Nitesh Jha (18BCE2496), and Shubham Shah (18BCE2484) from Vellore
Institute of Technology (VIT) have successfully completed their project work in the
field of Social and Information Networks on the topic “Disaster Prioritization”. This
is a record of their own work carried out during the Fall Semester of the Academic
Year 2021-22 under the guidance of Dr. Senthil Kumar K. They have presented their
project in the presence of faculty.

Signature of Faculty

Dr. Senthil Kumar K

Signature of Student(s)

Rishi Kakkar    Shubham Shah

Akshat Singh Rahul Pandey

Nitesh Jha

Date: 26-11-2021

Table of Contents
1. Abstract
2. Introduction
3. Literature Survey
4. Methodology
5. Architecture Diagram
6. Implementation
7. Results and Analysis
8. Conclusion and Future Work
9. References
10. Appendix (plagiarism report, paper communication, sample codes, etc.)

Disaster Prioritization

ABSTRACT
We aim to channel the power of computers to process information easily and
produce fast outputs. This is why we chose a computer, or essentially an
algorithmic, approach to rank disasters rather than computing the statistics
by hand for some of the values. The system provides a short description of
each incident, and each incident is assessed with the stated parameters in
mind. We take the basic parameters from the user so that the user receives an
assessment document of his or her disasters, properly and mathematically
ranked, which makes it easier to develop further ideas, loss-prevention
techniques, or preventive measures.

Secondly, the drafted document is a robust response to the user's queries and
addresses the solution in a well-mannered, professional way, so that the
user's self-analysis is faster and better. Rather than spending time writing
a report, the user only needs to focus on the ideas presented for each of the
loss values and build an extensive, modular report. Such a report is clearer
and provides a guide towards reducing the losses, making sure the
countermeasures are properly weighted in decision making to prevent loss of
life or monetary damage.

Lastly, we make sure not to miss any parameter that could be useful for
producing the final output and taking part in the analysis. It is human
nature to make mistakes, which is dangerous when life-threatening decisions
must rest on robust logic and fast mathematics. But considering the depth of
thought humans bring to even a simple idea, merely suggesting basic ideas to
them can boost their efficiency.

INTRODUCTION

● Objective

We can see that disasters occur abundantly, even during an ongoing pandemic,
which is itself a challenge; if they are not prioritized, we may end up with
losses of life, land, or money. We therefore aim to provide software (a
website) that ranks the disasters and provides an assessment document for
each of them. In this project we aim to develop a process through which
disasters can be prioritized appropriately and in very little time.

● Motivation

We took up this project after seeing the emergence of various disasters
around the world, and we observed that something like a prioritization
algorithm is an essential requirement. We thought it would be highly
beneficial for a state, or even a country, to foresee what measures it can
take ahead of time and what kind of preparations and resources would be
required if any such disaster occurs unexpectedly. We are making this project
to deal with the problem of prioritizing disasters to optimize their
management: we define a formula to calculate a ranking and then suggest
countermeasures to reduce or eliminate the impact of those disasters.

● Proposal

Thus, having explored the Objective and Motivation, we propose a model that
ranks disasters, prepares an analysis document, and suggests measures
according to the parameters provided by the user, helping them easily apply
these measures without having to explore them further. The proposed model has
also been analyzed to show that the data is sufficient to specify and rank
the disasters in the collection, thereby validating the product/software.

LITERATURE SURVEY

Koks, E.E. and Thissen, M., 2016. A multiregional impact assessment model
for disaster analysis. Economic Systems Research, 28(4), pp.429-449.

This paper presents a recursive dynamic multiregional supply-use model,
combining linear programming and input–output (I–O) modeling, to assess the
economy-wide consequences of a natural disaster on a pan-European scale. It
is a supply-use model that considers production technologies and allows for
supply-side constraints.

Moe, T.L. and Pathranarakul, P., 2006. An integrated approach to natural
disaster management: public project management and its critical success
factors. Disaster Prevention and Management: An International Journal.

The main aim of this project was to propose an unprecedented integrated approach to
effectively managing natural disasters. It focuses on three main objectives, namely, the
framework for disaster management from a public project management perspective, an
integrated new approach for successful and effective management of a disaster crisis
and a set of critical factors for managing such disasters.

Fan, C. and Mostafavi, A., 2018, April. Establishing a framework for


disaster management system-of-systems. In 2018 Annual IEEE International
Systems Conference (SysCon) (pp. 1-7). IEEE. (2018)

The objective of this paper is to propose a System-of-Systems (SoS) framework for


disaster management systems and processes to better analyze, design and operate the
heterogeneous, interconnected, and distributed systems involved in disasters. With
increasing frequency and severity of disasters, improvement of efficiency and
effectiveness of disaster management systems and processes is critical.

Armenakis, C. and Nirupama, N., 2013. Prioritization of disaster risk in a


community using GIS. Natural Hazards, 66(1), pp.15-29.

Prioritization of disaster risk was carried out for a community in Toronto, Canada.
Geographic information systems (GIS) were used for spatial analysis, including spatial
overlays and clipping for extracting spatial and attribute information related to people’s
vulnerability, critical infrastructure and land use. In order to determine disaster risk, the
overall community vulnerability was evaluated by combining social, economic,
physical and environmental vulnerabilities. This paper uses the propane explosion
incident as the case in point to demonstrate the methodology and procedure used to
evaluate risk using GIS techniques.

Proposed Work (Methodology):

This project aims to prioritize disasters by mapping the correct and valuable
assets during a disaster, as well as identifying the disasters themselves.
The methodology is described below:

● Collecting the data

● Review of the data

● EDA, Statistical Analysis

● Feature Mapping

● Parameter Identification

● Dashboard with all the required attributes

● After analysis, Website to categorize user disasters

● Output will be a document of risk assessment.



Impact factor:

This is the combination of three important losses during a disaster:
population loss, property loss, and economic loss. Population loss counts how
much of the population was affected according to the survey (0-49% (less),
50-99%, 100%). Property loss includes acquisitions, remodels, and safety
measures. Economic loss is, in dollars, the whole payment made by the
government towards either of the other two losses. Each of these has a weight
factor.

Probability factor :

This is the second important factor for deriving the final rankings. It is
obtained by a frequency analysis of each disaster over its years of
occurrence (the graphs of this factor are plotted in the EDA).

This yields 3 ratings -> Frequent, Occasional, and Rare, which are derived
from the percentage of occurrences of each disaster across all the available
information in the dataset.
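A minimal sketch of this rating rule, reusing the share-of-occurrence thresholds (4/68 and 1/68) from the analysis code in the appendix; the toy counts below are hypothetical, not from the real dataset:

```python
import pandas as pd

# Rate each incident type by its share of all recorded occurrences,
# using the same thresholds as the appendix analysis code.
def rate_disasters(incident_types: pd.Series) -> dict:
    share = incident_types.value_counts() / len(incident_types)
    rating = {}
    for disaster, p in share.items():
        if p > 4 / 68:
            rating[disaster] = "Frequent"      # Rating value 3
        elif p > 1 / 68:
            rating[disaster] = "Occasional"    # Rating value 2
        else:
            rating[disaster] = "Rare"          # Rating value 1
    return rating

# Toy example (hypothetical counts):
types = pd.Series(["Flood"] * 96 + ["Tornado"] * 3 + ["Volcano"] * 1)
print(rate_disasters(types))
# → {'Flood': 'Frequent', 'Tornado': 'Occasional', 'Volcano': 'Rare'}
```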

Deriving Ranking:

After discovering new factors from the existing dataset, we compute the final
factor, i.e. “the ranking”, using a specific formula over the features
constructed from the dataset.

Hazard_Ranking = Probability_Impact × Impact_Factor

Impact_Factor = Σ (i = 1 to 3) weight_factor_i × impact_loss_i

where i is the S.No. from the Impact_Factor table.

This hazard ranking decides which risk should be dealt with first. For
example, if a flood and COVID occur at the same time, we must first protect
against the flood and then deal with COVID. The system also suggests parallel
mitigation if the rankings are equal.
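As a worked example of the formula, using the weight factors from the appendix analysis code (economic = 1, property = 2, population = 3) and hypothetical loss levels on the 1-3 scale:

```python
# Weight factors as used in the appendix analysis code.
WEIGHTS = {"economic": 1, "property": 2, "population": 3}

def hazard_ranking(rating: int, losses: dict) -> int:
    # Impact_Factor = sum of weight_factor_i * impact_loss_i over the three losses
    impact_factor = sum(WEIGHTS[k] * v for k, v in losses.items())
    # Hazard_Ranking = probability rating (1=Rare, 2=Occasional, 3=Frequent) * Impact_Factor
    return rating * impact_factor

# Hypothetical flood (Frequent) vs. COVID (Occasional) loss levels:
flood = hazard_ranking(3, {"economic": 2, "property": 3, "population": 3})
covid = hazard_ranking(2, {"economic": 3, "property": 1, "population": 3})
print(flood, covid)  # → 51 28, so the flood is dealt with first
```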

Analysis Methodology:

The United States experiences a large variety of natural disasters each year:
devastating hurricanes, seasonal tornadoes, and scorching wildfires are among
the events that endanger many lives and cause billions of dollars in damage.
On the basis of the first dataset we concluded that disasters have been
tabulated and recorded with correct and apt information from 2000 to the
present; before 2000, most of the data is missing due to haphazard record
collection.

Thus we tried to explain the frequency of the dataset per year in our initial
analysis, and internally examined statistics to come up with better graphs
that explain how disasters have been increasing at a constant rate from 2004
to the present. From the EDA we were thus able to build a foundational
understanding of how disasters can be mapped based on frequency, probability,
and impact.

The above flowchart shows that the analysis done on our dataset is extensive
and deep, which helps us retain the right features and take a deeper dive
into how to proceed with ranking these disasters. By considering the correct
features and plotting them, we see how the final output varies accordingly.

Logical pattern categorization and asset mapping can strengthen the method
for prioritizing any disaster given to us. Users who face new disasters and
have an approximate sense of the expected losses can broadly classify
multiple disasters and obtain an assessment report. This report can be the
basic guide to a broader and more extensive disaster-prevention plan.

ARCHITECTURE DIAGRAM

System Flow

IMPLEMENTATION

1. Constructing a new dataset



We needed to construct a new dataset with the specific information required,
obtained by merging two datasets on their specific disaster value. This new
dataset also had 6 new columns to keep track of each of the variables,
namely:

a. Rating (i.e. Frequency)
b. Population_impact
c. Property_impact
d. Economic_impact
e. Impact_factor
f. Hazard_ranking

From these new values, together with an indexer and some other parameters, we
derive a new dataset that later helps us analyze them and showcase the graphs
in the results.
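A sketch of this construction step on toy data; the appendix code actually aligns the two CSVs row by row, and the column values here are placeholders that the later analysis steps fill in:

```python
import pandas as pd

# Start from (toy) declaration records and add the six new tracking columns.
declarations = pd.DataFrame({
    "incident_type": ["Flood", "Tornado"],
    "fy_declared": [2005, 2011],
})

new_columns = ["Rating", "Population_impact", "Property_impact",
               "Economic_impact", "Impact_factor", "Hazard_ranking"]
for col in new_columns:
    declarations[col] = pd.NA   # computed by the later analysis steps

print(list(declarations.columns))
```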

Dataset-1

Dataset-2

Final Dataset

2. Analyzing new values



Analysis of the new values against the important parameters shows how close
they are to each other, which explains the model and brings the developers
closer to it. The approximation of the new model becomes clear, enabling the
right analysis of the presented dataset with the new values.

The results can be seen in the Analysis and Results section, which clearly
explains each figure and its contribution to the further development needed
to produce the correct results. Taken together, the graphs made clear that we
need to provide a comparison and documentation for the users' disasters, so
that we can provide a basic structural guide towards resolving those
disasters and ordering them by risk and vulnerability.

3. Final Webpage Output



This is the penultimate step towards completing the product: the UI, where
users can register disasters to interact with our algorithm. The webpage has
two sides, client and server; the client side serves the user, while the
server side produces the results they require.

The screenshots of the webpage below show the actual interface and results of
the product we are building. The fourth and last step is to produce the
assessment documentation for the disasters specified by the user.

Simplistic design

For two disasters.

The download dialog box.

4. Document Generation

The final output to the user, after entering the details on the webpage, is a
document providing the assessment, with values and rankings for those
disasters. It also tries to guess the type of each disaster; this guess can
be ignored if it is not correct, and it does not affect the final assessment
output.

The document follows a standard template that also suggests countermeasures
for each of the three impacts, which can be explored further to obtain better
preventive measures.
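The disaster-type guess comes from a nearest-ranking lookup in the Hazard_Ranked dataset; below is a simplified sketch of that logic from the website code in the appendix, on toy data:

```python
import pandas as pd

# Simplified sketch of the type guess: pick the dataset row whose
# Hazard_ranking is closest to the user's computed ranking, preferring
# rows with the same frequency Rating. Toy data, not the real dataset.
df = pd.DataFrame({
    "incident_type": ["Flood", "Tornado", "Drought"],
    "Rating": [3, 3, 1],
    "Hazard_ranking": [51, 36, 10],
})

def guess_type(ranking: int, rating: int) -> str:
    nearest = df.iloc[(df["Hazard_ranking"] - ranking).abs().argsort()[:2]]
    same_rating = nearest[nearest["Rating"] == rating]
    pool = same_rating if same_rating.shape[0] else df[df["Rating"] == rating]
    order = (pool["Hazard_ranking"] - ranking).abs().argsort()
    return pool.iloc[order]["incident_type"].values[0]

print(guess_type(48, 3))  # → Flood (51 is nearest to 48 among Rating-3 rows)
```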

Generated-Document-Information

Document-View.

Note: each disaster gets its own page explaining and assessing that
particular disaster individually.

Procedure

1. First, generate the 6 values described in Step 1 and concatenate them with
the other values to get the dataset called “Hazard_Ranked.csv”, which is the
baseline for Step 2.

2. Step 2 is the analysis phase, which explores the “Hazard_Ranked” dataset
further and builds the platform for moving to Steps 3 and 4, the top-level
architectural designs.

3. Step 2 also sets the stage for making the ranking more accessible to the
developers, so they understand what needs to be developed to attain the
results of Step 3.

4. Step 4 uses pre-generated templates to produce a custom write-up for the
disaster values entered by the user and suggests measures. This document acts
as the foundation for handling the disasters, and exploring them further
could lead to better results that precisely evade the losses.

ANALYSIS AND RESULTS

With the above information, we can show the analysis results below.

The above image represents the ranking of each of the disasters; Tornadoes
and Hurricanes show the largest ranking differentials due to their frequency.

The stacked bar graph shows the hazard rankings of the different disasters:
how many there are and what the rankings of the individual disasters are.

The above figure shows each type of incident and its frequency, colored by
the Rating assigned to it. This justifies categorizing things properly.

Example link

This link shows the output document presented in the figure; it is currently
not active on the internet due to domain issues.

CONCLUSION

We were able to analyse and visualize the data to accurately portray the
desired hazard rankings of various incidents, taking multiple factors into
consideration. The data supports and functions well with our chosen formula
for calculating the impact factor of events. Future work may involve making
the data more intuitive, so that new and untrained users can accurately draw
insights from the graphs and hazard rankings and take preventive measures
accordingly. To obtain an accurate location for each incident, the data was
added to the database from past references and the data that was available.

Furthermore, a short description of each incident must be given, and each
incident must be classified under a specific category such as control time in
days, death rate per day, financing required, or other items on the available
category list. Moreover, it is necessary to check whether the locations of
incidents are verified and whether these incidents have already been
responded to. This checked information is important for responders because
managers can then see the tasks already done on those incidents and avoid
sending more volunteers where they are not needed.

Conclusively, we have provided a documented report that can be found on the
website and downloaded to the local computer, thus providing an assessment of
the disasters and suggesting countermeasures for each impact in the document.

FUTURE WORK

● Include machine-learning-based prioritization.
● Provide a graphical user interface to the users.
● Try to implement the Travelling Salesman Problem.
● Link the program to Google Maps or any GPS system, which would also help in
considering factors like the terrain and its distance from the management team.

REFERENCES

1. A Multiregional Impact Assessment Model for disaster analysis


2. An integrated approach to natural disaster management
3. Establishing a Framework for Disaster Management
System-of-Systems
4. Prioritization of disaster risk in a community using GIS

APPENDIX

Analysis Code:

import dash
import dash_core_components as dcc
import dash_html_components as html
import numpy as np
import pandas as pd
import plotly.graph_objs as go
import plotly.express as px

app = dash.Dash()

df = pd.read_csv("Datasets/us_disaster_declarations.csv")
df2 = pd.read_csv('Datasets/Final_dataset.csv')

ans = df[['fy_declared', 'incident_type']].value_counts().sort_index(level=0)
ans = ans.reset_index().set_index(['incident_type'])

vals = df['incident_type'].unique()
years = df['fy_declared'].unique()
val = df['state'].value_counts()
imp = df['incident_type'].value_counts()

# Classify incident types by their share of all declarations.
frequent = list(imp[imp/df.shape[0] > (4/68)].index)
occasional = list(imp[(imp/df.shape[0] <= (4/68)) & (imp/df.shape[0] > (1/68))].index)
rare = list(imp[imp/df.shape[0] <= (1/68)].index)

efd = {j: (i + 1) for i, j in enumerate(df2['damageCategory'].unique())}

df['Rating'] = 0
df.loc[df['incident_type'].isin(frequent), 'Rating'] = 3
df.loc[df['incident_type'].isin(occasional), 'Rating'] = 2
df.loc[df['incident_type'].isin(rare), 'Rating'] = 1

df['Economic_impact'] = np.nan
df['Population_impact'] = np.nan
df2['Property_impact'] = 0
df['Property_impact'] = np.nan

# Bucket property actions into impact levels 1-3 by their frequency.
ifs = df2['propertyAction'].value_counts()
high_imp = ifs[ifs/df2.shape[0] > (30/21 - 1)].index
med_imp = ifs[(ifs/df2.shape[0] <= (30/21 - 1)) & (ifs/df2.shape[0] > ((30/21 - 1)/50))].index
low_imp = ifs[(ifs/df2.shape[0] <= ((30/21 - 1)/50))].index

df2.loc[df2['propertyAction'].isin(high_imp), 'Property_impact'] = 3
df2.loc[df2['propertyAction'].isin(med_imp), 'Property_impact'] = 2
df2.loc[df2['propertyAction'].isin(low_imp), 'Property_impact'] = 1

# Align the two datasets row-by-row for the overlapping years.
j = 0
su = 0
su2 = 0
for i in df.loc[(df['fy_declared'] >= 1997) & (df['fy_declared'] <= 2018)].index:
    if j == df2.shape[0]:
        break
    df.loc[i, 'Economic_impact'] = df2.loc[j, 'actualAmountPaid']
    df.loc[i, 'Population_impact'] = efd[df2.loc[j, 'damageCategory']]
    df.loc[i, 'Property_impact'] = df2.loc[j, 'Property_impact']
    su += efd[df2.loc[j, 'damageCategory']]
    su2 += df2.loc[j, 'Property_impact']
    j += 1

df['Economic_impact'] = df['Economic_impact'].fillna(df2['actualAmountPaid'].mean())
df['Population_impact'] = df['Population_impact'].fillna(su/df2.shape[0])
df['Property_impact'] = df['Property_impact'].fillna(su2/df2.shape[0])

# Bucket the raw economic impact into levels 1-3 around the mean.
std = df['Economic_impact'].std(ddof=0)
mn = df['Economic_impact'].mean()
high_imp = df[df['Economic_impact'] >= (std + mn)].index
mid_imp = df[(df['Economic_impact'] < (std + mn)) & (df['Economic_impact'] >= mn)].index
low_imp = df[df['Economic_impact'] < mn].index

df['Economic_impact_modified'] = 0
df.loc[high_imp, 'Economic_impact_modified'] = 3
df.loc[mid_imp, 'Economic_impact_modified'] = 2
df.loc[low_imp, 'Economic_impact_modified'] = 1

df['Economic_impact'] = df['Economic_impact_modified']
del df['Economic_impact_modified']

# Weighted impact factor and final hazard ranking.
df['Impact_factor'] = df['Economic_impact']*1 + df['Property_impact']*2 + df['Population_impact']*3
df['Hazard_ranking'] = round(df['Impact_factor']*df['Rating'])

finals = df.groupby(['incident_type', 'fy_declared']).size().reset_index(
    name='Counts').sort_values(by=['Counts', 'incident_type'], ascending=False)
fed = df['Rating'].value_counts()
dfed = df.groupby(['incident_type', 'Hazard_ranking']).size().reset_index(
    name="Counts").sort_values(by=['Counts', 'Hazard_ranking'], ascending=False)

print(df['Hazard_ranking'].mean())

fig = px.scatter(y=df['Hazard_ranking'].values, x=df['incident_type'].values)
fig2 = px.bar(dfed, x='incident_type', y='Counts', color='Hazard_ranking', title='Stacked')
fig3 = px.bar(finals, x='fy_declared', y='Counts', color='incident_type')

app.layout = html.Div([
    html.Div([
        dcc.Graph(id='plot-bars', figure=fig3),
        dcc.Graph(id='plot-pie', figure=go.Figure(
            data=[go.Pie(labels=val.index.values[:20], values=val.values[:20])])),
        dcc.Graph(id='plot-bar2', figure=fig2),
    ]),
    html.Div([
        dcc.Graph(id='plot-bar', figure=go.Figure(
            data=[go.Bar(y=fed.values, x=["Frequent", 'Occasional', 'Rare'], name='Rating')])),
    ]),
])

if __name__ == "__main__":
    app.run_server(debug=True)

Website Code:

from flask import Flask
from flask import render_template, request, send_file
import pandas as pd

from conv_pdf import PDF

df = pd.read_csv("../Hazard_Ranked.csv")

# 1 -> "1st", 2 -> "2nd", 3 -> "3rd", 4 -> "4th", ...
ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n//10 % 10 != 1)*(n % 10 < 4)*n % 10::4])

app = Flask(__name__)


@app.route('/', methods=['GET', 'POST'])
def router():
    if request.method == 'GET':
        return render_template('index.html', answer=False)
    else:
        try:
            val = int(request.form['dis'])
        except Exception:
            val = 0
        print(val)
        return render_template('index.html', answer=True, num=val)


@app.route('/submit', methods=['POST', 'GET'])
def subs():
    # Drop the unnamed index column left over from the CSV export.
    del df[df.columns[0]]
    if request.method == 'POST':
        rankings = []
        paths = "Files/"
        number = 1
        basic = "Files/Template_diss.txt"
        modif = open(basic, 'r').read().split(' ')
        modif.append('\n')
        modif.append('\n')
        text_fill = {}
        pdf = PDF()
        pdf.set_title('Lol')
        pdf.set_author('Rishi Kakkar, Akshat Singh, Rahul Pandey, Subham Shah, Nitesh Jha')
        for ans in request.json:
            print(ans)
            filler = []
            filler.append("Disaster_" + str(number))
            l = list(map(int, list(ans.values())))
            print(l)
            # Impact factor: economic*1 + property*2 + population*3
            impact = l[1]*2 + l[2]*1 + l[3]*3
            txt = ("1. " + open(paths + "Population_" + str(l[3]) + ".txt", 'r').read() + "\n\n")
            txt += ("2. " + open(paths + "Economic_" + str(l[1]) + ".txt", 'r').read() + "\n\n")
            txt += ("3. " + open(paths + "Property_" + str(l[2]) + ".txt", 'r').read() + '\n')
            text_fill[filler[0]] = txt
            Ranking = l[0]*impact
            # Code to choose the nearest values to the ranking
            nearest_ranking = df.iloc[(df["Hazard_ranking"] - Ranking).abs().argsort()[:2]]
            ans = nearest_ranking[nearest_ranking['Rating'] == l[0]]
            disas = None
            if ans.shape[0] == 0:
                find_from = df[df['Rating'] == l[0]]
                nearest_ranking = find_from.iloc[
                    (find_from["Hazard_ranking"] - Ranking).abs().argsort()]
                ans = nearest_ranking.sort_values(
                    by=['Population_impact', 'Economic_impact', 'Property_impact'],
                    ascending=False)
                disas = ans['incident_type'].values[0]
            else:
                ans = ans.sort_values(
                    by=['Population_impact', 'Economic_impact', 'Property_impact'],
                    ascending=False)
                disas = ans['incident_type'].values[0]
            if disas is not None:
                filler.append(disas)
            cond = ((df["Hazard_ranking"] - Ranking).abs().sort_values() == 0.0)
            if df[cond].shape[0] > 0:
                filler.append(100)
            else:
                filler.append(((2*Ranking - ans['Hazard_ranking']).abs()/Ranking).values[0]*100)
            filler += [l[3], l[1], l[2]]
            print(filler)
            rankings.append([Ranking, filler])
            number += 1
        # Sort disasters by ranking, breaking ties on the individual impacts.
        rankings = sorted(rankings, key=lambda x: (x[0], x[1][3], x[1][4], x[1][5]), reverse=True)
        num = 1
        for rank in rankings:
            rank[1].insert(3, ordinal(num))
            num += 1
        chp = 1
        for rank in rankings:
            num = 0
            page = ""
            done = False
            print(rank)
            # Fill each ';' placeholder in the template with the next value.
            for w in modif:
                if w.find(';') != -1:
                    if not done and num == 4:
                        w = w.replace(';', str(rank[0]))
                        done = True
                    else:
                        w = w.replace(';', str(rank[1][num]))
                        num += 1
                page = page + " " + w
            page += text_fill[rank[1][0]]
            pdf.print_chapter(chp, 'Assessment for ' + rank[1][0], page)
            chp += 1
        pdf.output('Document.pdf', 'F')
        return send_file('Document.pdf', as_attachment=True)


if __name__ == '__main__':
    app.run('0.0.0.0', debug=True)

Document-Generator Code:

from fpdf import FPDF

title = 'Disaster Prioritization and Assessment Document (Ordered with priority)'


class PDF(FPDF):

    def header(self):
        # Arial bold 15
        self.set_font('Arial', 'B', 15)
        # Calculate width of title and position
        w = self.get_string_width(title) + 6
        self.set_x((210 - w) / 2)
        # Colors of frame, background and text
        self.set_draw_color(0, 80, 180)
        self.set_fill_color(230, 230, 0)
        self.set_text_color(220, 50, 50)
        # Thickness of frame (1.5 mm)
        self.set_line_width(1.5)
        # Title
        self.cell(w, 9, title, 1, 1, 'C', 1)
        # Line break
        self.ln(10)

    def footer(self):
        # Position at 1.5 cm from bottom
        self.set_y(-15)
        # Arial italic 8
        self.set_font('Arial', 'I', 8)
        # Text color in gray
        self.set_text_color(128)
        # Page number
        self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')

    def chapter_title(self, num, label):
        # Arial 12
        self.set_font('Arial', '', 12)
        # Background color
        self.set_fill_color(200, 220, 255)
        # Title
        self.cell(0, 6, 'Chapter %d : %s' % (num, label), 0, 1, 'L', 1)
        # Line break
        self.ln(4)

    def chapter_body(self, txt):
        # Times 12
        self.set_font('Times', '', 12)
        # Output justified text
        self.multi_cell(0, 5, txt)
        # Line break
        self.ln()
        # Mention in italics
        self.set_font('', 'I')
        self.cell(0, 5, '(end of excerpt)')

    def print_chapter(self, num, title, name):
        self.add_page()
        self.chapter_title(num, title)
        self.chapter_body(name)


# Example usage:
# pdf = PDF()
# pdf.set_title(title)
# pdf.set_author('Jules Verne')
# pdf.print_chapter(1, 'A RUNAWAY REEF', '20k_c1.txt')
# pdf.print_chapter(2, 'THE PROS AND CONS', '20k_c2.txt')
# pdf.output('tuto3.pdf', 'F')

Google drive for codes:

GitHub Code

Plagiarism Report

Link to Report
