Flood Evaluation, Hazard Determination and Risk Management (2020), pp. 31-60


C + E = D + E, hence C = D

Figure 2.11 Equivalence of the flood volumes

An example of the application of this approach is shown in Figure 2.12. The 1976 flood of the Mistassini River has been simulated by four successive floods, all with an identical recession pattern. For each flood, the river flow and the flood volume have been adjusted to fit the observations. The blue area represents the simulated volume of the flood series. The various peaks can be exactly reproduced, in magnitude and timing; the close fit of the recession curves confirms that they belong to the same family. The periods of rising river flow show some small discrepancies due to short rainfall events that were not considered; in addition, a last small rainfall event in early July was not taken into account.

To within half a percent, the integration of the blue area indicates a volume of 4,000 × 10⁶ m³, which corresponds to the official estimates.
Figure 2.12 Observed and FIM-simulated Mistassini 1976 flood
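The superposition logic of the FIM can be sketched in code. The following Python fragment (our illustration, not from the bulletin; the recession constant and the four flood parameters are hypothetical) builds a composite hydrograph from successive component floods sharing one exponential recession, then integrates it to check the total volume, mirroring the check made above.

```python
import numpy as np

DT = 86400.0          # one day in seconds
K_RECESSION = 12.0    # shared recession time constant in days (hypothetical)

def component_flood(t_days, t_peak, q_peak, t_rise):
    """One component flood: linear rise to q_peak, then exponential recession.
    All components share K_RECESSION, i.e. the same recession family."""
    q = np.zeros_like(t_days)
    rising = (t_days >= t_peak - t_rise) & (t_days <= t_peak)
    q[rising] = q_peak * (t_days[rising] - (t_peak - t_rise)) / t_rise
    falling = t_days > t_peak
    q[falling] = q_peak * np.exp(-(t_days[falling] - t_peak) / K_RECESSION)
    return q

t = np.arange(0.0, 120.0, 1.0)   # 120 days of simulation, daily step
# Four successive component floods: (peak day, peak flow m3/s, rise time days)
events = [(20, 900, 6), (35, 1400, 5), (50, 1100, 5), (62, 600, 4)]
simulated = sum(component_flood(t, tp, qp, tr) for tp, qp, tr in events)

volume = np.trapz(simulated, dx=DT)   # integrate the composite hydrograph
print(f"simulated flood volume: {volume / 1e6:.0f} x 10^6 m3")
```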

2.8. HYDROGRAPH RECONSTITUTION – COMPLEX EVENTS

The reconstitution of a flood hydrograph for a specific frequency, taking into account the results of the statistical analysis of the peak discharge and the flood volume (for specific durations), can lead to some discrepancies.

An intuitive approach consists in modifying an observed hydrograph, considering the results of the statistical analysis, to reproduce the peak discharge and the volume of the flood for different durations. Figure 2.13 illustrates this approach based on the largest spring flood observed on the Mistassibi River (1976). These hydrographs respect the peak discharge and the volume estimated by statistical analysis for different durations; however, the recession pattern does not respect the pattern observed in historical floods, which is a characteristic of the drainage area.
Figure 2.13 Mistassibi River – Spring Flood Hydrograph –
Without consideration of recession characteristics

As shown previously in this chapter, the flood recession follows a specific pattern which is independent of the flood duration. Under this assumption, a large flood should take more time to return to normal conditions than a smaller flood event; this has an impact on the reconstitution of the flood hydrograph. Figure 2.14 illustrates the reconstitution of the same flood hydrographs, but this time considering the recession pattern of the drainage area. The flood peak and volume are respected and the results appear more realistic.
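A minimal sketch of this behaviour (our illustration; the recession constant and base flow are hypothetical basin values): when every flood leaves its peak along the same master recession curve, a larger peak automatically implies a longer return to normal conditions.

```python
import numpy as np

K_DAYS = 12.0    # master recession time constant (hypothetical basin property)
Q_BASE = 150.0   # base discharge (m3/s), hypothetical

def days_to_baseflow(q_peak, tolerance=1.05):
    """Days for the excess discharge (Qp - Qbase) * exp(-t / K) to decay
    to 5% of the base flow; the constant K is the same for every flood."""
    excess = q_peak - Q_BASE
    return K_DAYS * np.log(excess / (Q_BASE * (tolerance - 1.0)))

for q_peak in (500.0, 1000.0, 2000.0):
    print(f"peak {q_peak:6.0f} m3/s -> ~{days_to_baseflow(q_peak):.0f} days of recession")
```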

It should be noted that such a hydrograph is only representative of one possible pattern for a specific period of recurrence. Such analyses should be performed with different patterns (for the same period of recurrence) to evaluate the consequences of floods. Intuitively, for systems with relatively large reservoirs, the later the flood peak discharge occurs, the more critical the consequences will be, since the reservoir will be at a higher level at the occurrence of the peak discharge.
Figure 2.14 Mistassibi River – Spring Flood Hydrograph –
With consideration of recession characteristics

2.9. RELATION BETWEEN FLOOD PEAK AND VOLUME

For drainage areas where major floods are caused by a single event, the flood peak discharge and the flood volume could have similar periods of recurrence. However, for larger drainage areas with more complex flood conditions, the situation is different: the longer the flood duration, the lower the correlation between the peak discharge and the volume should be.

As an example, an analysis was performed on the Mistassibi River data. As shown in Figure 2.15, the coefficient of determination (R²) between the maximum daily volume of the flood (daily peak discharge) and the five-day maximum volume is about 98.2%. It decreases significantly for longer comparison periods.
Figure 2.15 - Mistassibi River –
Comparison of the spring flood volume for different durations
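A check of this kind is easy to reproduce. The sketch below (our illustration; the 40-year daily record is synthetic and merely stands in for the observed series) extracts annual maximum flood volumes for several durations and computes the coefficient of determination against the one-day maximum volume.

```python
import numpy as np

rng = np.random.default_rng(7)

def annual_max_volume(daily_q, duration_days):
    """Annual maximum d-day volume from one year of daily flows (m3/s -> m3)."""
    kernel = np.ones(duration_days)
    rolling = np.convolve(daily_q, kernel, mode="valid")  # d-day flow sums
    return rolling.max() * 86400.0

# Synthetic 40-year daily record standing in for an observed series
years = [rng.gamma(2.0, 300.0, size=365) for _ in range(40)]

one_day = np.array([annual_max_volume(y, 1) for y in years])
for d in (5, 10, 20, 40):
    vol_d = np.array([annual_max_volume(y, d) for y in years])
    r2 = np.corrcoef(one_day, vol_d)[0, 1] ** 2
    print(f"{d:2d}-day volume vs 1-day volume: R^2 = {r2:.3f}")
```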

It is not possible to generalize conclusions based only on this specific case; however, it shows that precautions must be taken in the reconstitution of the flood hydrograph, since the relation between the peak daily volume and the overall flood volume is not straightforward. It also illustrates the risk of performing an analysis with only one hydrograph, since numerous combinations are possible.

2.10. DETERMINISTIC APPROACH

The flood evaluation for different return periods can be performed using a deterministic approach, particularly for locations where observed discharge data are not available or cover only a short period. Results of the statistical and deterministic analyses of the flood volume for rainfall event(s) should lead to results in a similar range; this is particularly the case for small drainage areas, for which a single rainfall event is usually considered.
A deterministic approach is also mainly used to evaluate the Probable Maximum Flood, considering various possible scenarios maximizing the consequences on the system².

² The PMF scenario with the maximum peak discharge is not automatically the worst scenario (maximum water level) for a dam with regulation capability. It can happen that a spring PMF (with a lower peak discharge but a larger volume) reaches a higher water level in a reservoir than a summer PMF (rainfall event).
The following elements must be considered when applying such approaches:
a. Main rainfall event
The main rainfall event causing the flood normally corresponds to the expected flood return period; the more probable such an event is, the higher the number of combinations that can generate a similar flood discharge or volume.
b. Antecedent events
Antecedent events are particularly important to establish the conditions prevailing before the occurrence of the main rainfall event. They have an impact on the river discharge before the event and, more importantly, on the soil moisture. The more saturated the soil, the faster the response time of the system (increasing the peak but also the volume for a specific duration). A similar situation can be observed if a major rainfall event occurs when the soil is frozen.

Similarly, the base flow does not simply represent a reference river discharge on which the peak flow is added, but is an integral part of the build-up of the flood structure. The net peak (additional discharge over the base flow) generated by an incoming flood volume is not a constant independent of the inflow pattern.

To evaluate the PMF, a large rainfall event should be considered shortly before the PMP to saturate the soil and ensure maximum runoff.
c. Snow
The snow cover and the snowmelt period will have a direct impact on the flood
volume and the peak discharge. Both factors are important, since a rapid
snowmelt of a large snow cover will most likely generate large floods. When the
snow cover is an important part of the flood volume, the spring flood is very often
the largest one of the year, triggered by the combination of snowmelt and rainfall.
Deep snow covers increase the likelihood of large floods.

Before melting, the snow cover must be primed by warm temperatures bringing
it close to the melting point. A realistic temperature sequence must be developed
based on the observed conditions in the drainage area.
d. Subsequent events
Events following the main event can have a significant impact on the flood volume
and its duration. The impact of the subsequent events is particularly significant
until the reservoir returns to its maximum operation level (MOL). The longer the duration to return to the MOL, the more vulnerable the system is in case of a new large flood.
The subsequent events often occur in period(s) when large discharges have been
observed. This is particularly true for the evaluation of the PMF. For floods with
lower return periods, however, it is not obvious how to determine a sequence of subsequent events, since there is in general no direct relation between the main rainfall event and the subsequent events.
e. Initial conditions
Since the objective of such studies often consists in determining the maximum water level in a reservoir corresponding to a specific period of recurrence, the initial conditions of the system are an important factor. The expected volume of storage available before the flood will depend on the period of the year. Normally the MOL is considered for a rainfall flood event.

However, for a spring flood (with a large percentage of the flood volume generated by the snow cover), the reservoir level and the mode of operation during the first part of the flood (until it reaches the MOL) will depend on the expected conditions at this time of the year. The volume available for flood routing will be larger; it is therefore unlikely that the spillway will be operated at full capacity at the beginning of the flood. It may not even reach this discharge at all, because of the uncertainty related to the final flood volume.
f. Comments
The evaluation of the flood volume for rainfall event(s) on large drainage areas or for a spring flood is complex, since it involves different events or conditions as discussed above. While it is possible to identify the most likely scenario(s) to generate a PMF, the number of scenarios to determine the 1:100, 1:1,000 or 1:10,000-year flood is almost infinite, since the combination of events leading to lower return periods depends on too many combinations of parameters.

Comparison of the results from statistical analyses of the spring flood volume and deterministic analyses of the PMF volume can lead to inconsistencies. For example, the extrapolation of the flood volume for a 1:10,000-year return period can be higher than the volume of the PMF. Some explanations can be proposed:
- The statistical analyses overestimate the flood volume. The number of recorded floods (usually a few tens) considered in a flood extrapolation to the range of 1:1,000 years or more does not always guarantee a high-quality estimation, since the trends are not always well defined;
- Some of the parameters used in the deterministic analyses were underestimated (for instance subsequent events). Since a period of analysis of several weeks can follow the PMP, it is difficult to consider realistic rainfall events during this period;
- Most probably, a combination of both factors.
2.11. STOCHASTIC MODELING

Stochastic modelling can be seen as an answer to the evaluation of the flood volume; its results can be compared with those of a statistical analysis of the flood peak and flood volume or with the deterministic evaluation of the PMF. Stochastic modelling appears particularly interesting for floods caused by a set of events (such as the combination of snowmelt and rainfall events) and for systems of reservoirs in cascade. Such modelling has to reproduce the succession of precipitation and air temperature along the year (to determine whether the precipitation falls as rain or snow), the flood routing through the drainage area including soil moisture variations, infiltration, losses and snowmelt, and the reservoir management (i.e. the operation of the control structures), in order to determine the maximum water level at the site(s). Thousands of years have to be stochastically generated and simulated to estimate the probability of very large floods.
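The core of such a simulation can be sketched as follows. This is a deliberately minimal illustration under invented assumptions (gamma-distributed daily rainfall, a sinusoidal temperature cycle, degree-day snowmelt and a crude linear reservoir), not an operational model; it only shows how the elements named above chain together over thousands of synthetic years.

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 10_000       # number of synthetic years to generate and simulate
T_SNOW = 0.0           # assumed rain/snow threshold temperature (deg C)

annual_max = np.empty(N_YEARS)
for year in range(N_YEARS):
    doy = np.arange(365)
    # Sinusoidal annual temperature cycle with random day-to-day noise (deg C)
    temp = -5.0 + 20.0 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 3, 365)
    wet = rng.random(365) < 0.35                              # wet-day occurrence
    precip = np.where(wet, rng.gamma(0.7, 12.0, 365), 0.0)    # mm/day

    snowpack = level = peak = 0.0
    for p, t in zip(precip, temp):
        snow = p if t < T_SNOW else 0.0          # precipitation falls as snow
        melt = min(snowpack, max(0.0, 2.5 * t))  # degree-day snowmelt (mm/day)
        snowpack += snow - melt
        inflow = (p - snow) + melt               # rain plus melt reaches the river
        level = max(0.0, level + inflow - 0.2 * level)  # crude linear reservoir
        peak = max(peak, level)
    annual_max[year] = peak

# Empirical 1:1,000-year quantile of the simulated annual maxima
print(np.quantile(annual_max, 1.0 - 1.0 / 1000.0))
```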
One of the main challenges in this process consists in representing adequately
the statistical distribution of each parameter and the temporal correlations
between them, such as:
- The relation and duration between precipitation events;
- The distribution of the precipitation on large drainage areas;
- The relation between air temperature and precipitation (rainfall or
snowfall);
- The variation of the air temperature and the snowmelt process (if
applicable).

The situation becomes even more complex for large drainage areas, since spatial
correlations and sometimes orographic effects at different locations must also be
considered. At the same time, the probability of deficiencies in the system and
“human” actions can play a significant role in the spatial and temporal evolution
of the flood.

The accuracy of the physical model(s) used to represent large flood events must also be considered, because of the limitations of the information available to calibrate the model for such large floods. Usually, a large number of assumptions (explicit or implicit) underlie such a model; this of course generates larger uncertainties.

It is well known that the extrapolation of large floods may be subject to a significant degree of uncertainty³, because of the limitations of the sample available to fit the statistical distribution. Similar concerns also apply to the parameters considered in stochastic modelling and to the final results obtained.

³ CDA Guidelines: "Flood statistics are subject to a wide margin of uncertainty, which should be taken into account in decision-making."
The use of stochastic modelling to evaluate flood characteristics and the corresponding flood level is discussed in more detail in Bulletin 170 and in the present bulletin (Chapter 3).

2.12. FLOOD VOLUME – EXTREME VALUES

Data on maximum floods observed in several countries around the world date back to 1984, when the International Association of Hydrological Sciences (IAHS) published the "World Catalogue of Maximum Observed Floods". In addition, the ICOLD Committee on "Dams and Floods" published, in 2003, Bulletin 125 on Dams and Floods, which contributed more significant data related to maximum floods, mainly for dams and reservoirs. More recently, in 2014, a new and more extensive review was carried out on the flow and volume data of maximum floods.
For the analysis of the peak discharge, the envelope curves method with the
Francou-Rodier (F-R) equation can be used. The F-R equation is the relationship
between the peak flow and the catchment area:
$$\frac{Q}{Q_0} = \left(\frac{A}{A_0}\right)^{1 - \frac{K}{10}}$$

where:
- Q = Peak flow (m³/s)
- A = Catchment area (km²)
- Q0 = 10⁶ m³/s
- A0 = 10⁸ km²
- K = Francou-Rodier coefficient

For each peak discharge the coefficient K is calculated by:

$$K = 10 \cdot \left(1 - \frac{\log Q - \log Q_0}{\log A - \log A_0}\right)$$

The database on flood volumes comes from the surveys carried out by ICOLD; it consists of 187 records on the volume of maximum floods in dams and reservoirs from the 15 most significant countries in this field.
The methodology used is similar to that used for the analysis of the peak flows,
assessing the relationship between the flood volumes and the catchment area,
through the equation:
$$\frac{V}{V_0} = \left(\frac{A}{A_0}\right)^{2 - \frac{K_V}{10}}$$

where:
- V = Flood volume (hm³)
- V0 = 50 × 10⁶ hm³
- A0 = 10⁸ km²
- Kv = Coefficient of flood volume

Therefore, for each flood the coefficient Kv is calculated by:

$$K_V = 10 \cdot \left(2 - \frac{\log V - \log V_0}{\log A - \log A_0}\right)$$
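As an illustration, both coefficients are easy to evaluate once a flood record is available. The Python sketch below (ours, not from the bulletin; the flood values are hypothetical) applies the two formulas above. Note that the logarithm base cancels in the ratios, so base 10 is used throughout.

```python
import math

Q0 = 1.0e6    # reference peak flow (m3/s)
A0 = 1.0e8    # reference catchment area (km2)
V0 = 50.0e6   # reference flood volume (hm3)

def francou_rodier_k(q_peak, area):
    """Francou-Rodier coefficient K from a peak flow (m3/s) and area (km2)."""
    return 10.0 * (1.0 - (math.log10(q_peak) - math.log10(Q0))
                   / (math.log10(area) - math.log10(A0)))

def volume_coefficient_kv(volume, area):
    """Flood-volume coefficient Kv from a flood volume (hm3) and area (km2)."""
    return 10.0 * (2.0 - (math.log10(volume) - math.log10(V0))
                   / (math.log10(area) - math.log10(A0)))

# Hypothetical record: 10,000 m3/s peak and 15,000 hm3 volume on 50,000 km2
print(f"K  = {francou_rodier_k(1.0e4, 5.0e4):.2f}")
print(f"Kv = {volume_coefficient_kv(1.5e4, 5.0e4):.2f}")
```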

Figure 2.16 shows the relationship between the flood volume and the catchment area for the floods analysed with the available data. It defines an envelope curve of the extreme flood volumes with a value of Kv = 10.5.

Figure 2.16 – Flood Volumes – Envelope curve of extreme flood volumes – Kv = 10.5

The highest value, Kv = 10.5, was recorded in Brazil (Tocantins reservoir) during a 1980 flood.

Figure 2.17 shows the relationship between the specific volume and the catchment area. The specific volume, a measure of the volume generated per unit of catchment area, is expressed by:

$$V_s = \frac{V}{A}$$

where:
- Vs = Specific volume (mm)

Figure 2.17 – Relationship between specific volume (Vs) and catchment area

It should be noted that there is no systematized database on flood volumes around the world. The results presented here are a preliminary analysis, which should lead to the preparation of a more detailed database of flood volumes. Such a database can be used to perform initial evaluations of flood volumes on drainage areas presenting similar conditions, mainly for validation purposes. It should be further expanded in the future.

2.13. IMPACT OF CLIMATE CHANGE ON FLOOD VOLUME

It is widely recognized that climate change will increase the variability of extreme events. The increase in air temperature will have an impact on the maximum rainfall that can be observed in several regions of the world; this will in turn have a direct impact on the flood peak discharge and the flood volume.
In northern areas, the impact of climate change on the volume of major spring floods will generally be less important than its impact on the peak flows, since, for a given watershed, projected reductions in the snowpack volume may partially offset the expected increases in rainfall (Ouranos 2015). In this case, the volume of the flood could be similar but it may occur over a shorter period, since the snowmelt season will possibly be shorter (which could lead to a higher peak discharge). However, this conclusion cannot be generalized because regional conditions vary significantly around the world. Some recent meteorological events will probably have some impact on our understanding of their characteristics and of their consequences⁴.

2.14. RECOMMENDATIONS

- Representing adequately the volume of the design flood and considering this volume in the design of the dam and its hydraulic structures is essential to adequately consider the storage effect of the reservoir and to optimize the size of the structures. Whatever the approach selected to evaluate the flood hydrograph, an estimate should in any case check the volume of the resulting flood and validate it against the precipitation/snowfall volume.
- The recession of the flood does not depend on the peak discharge, but on the characteristics of the drainage area and the river; it is important to respect the recession pattern in the hydrograph reconstitution.
- For the same peak discharge and the same flood volume, the shape of the hydrograph can have an impact on the maximum level in a reservoir (depending on the operation rules of the reservoir). It is important to perform a sensitivity analysis on the shape of the hydrograph. Intuitively, a hydrograph with a late peak discharge could have more impact than a hydrograph with an early peak discharge, since the reservoir could then be at a higher level.
- For precipitation events covering an entire catchment, an estimate based solely on the precipitation volume and the watershed signature may give worthwhile indications to cross-check the results of traditional estimation methods.
- There is no guarantee that a direct relation can be found between the peak discharge and the flood volume for floods deriving from a combination of events (such as a flood from snowmelt). However, without further information, a conservative assumption is to consider that the 1:N-year peak discharge corresponds to the 1:N-year flood volume for any duration.
- Reconstitution of a flood hydrograph must respect the physical characteristics of the drainage area; the recession period of the hydrograph follows a pattern independent of the flood magnitude.

⁴ For example, Hurricane Harvey released about 1,300 mm of rain in the Houston area (USA) in 2017. The hurricane remained stationary for a few days, moved away from the area and came back a few days later.
2.15. REFERENCES

Alberta Transportation, Transportation and Civil Engineering Division, Civil Projects Branch, "Guidelines on Extreme Flood Analysis", November 2004.

Bacchi, B., Brath, A., Kottegoda, N.T., "Analysis of the Relationships Between Flood Peaks and Flood Volumes Based on Crossing Properties of River Flow Processes", Water Resources Research, Vol. 28, No. 10, pp 2773-2782, October 1992.

Berga, L., Personal Communication, 2018.

Carter, R.W., Godfrey, R.G., "Storage and Flood Routing", Manual of Hydrology: Part 3, Flood-Flow Techniques, Geological Survey Water-Supply Paper 1543-B, Methods and Practices of the Geological Survey, 1960.

Gaál, L., Szolgay, J., Kohnová, S., Hlavčová, K., Parajka, J., Viglione, A., Merz, R., and Blöschl, G., "Dependence Between Flood Peaks and Volumes: A Case Study on Climate and Hydrological Controls", Hydrological Sciences Journal, 60 (6), 2015.

Guillaud, C., "A Review of the Reliability of Extreme Flood Estimates", Proceedings of the Canadian Dam Safety Conference, Victoria, B.C., 2002.

International Commission on Large Dams, "Dams and Floods – Guidelines and Case Histories", Bulletin 125, ICOLD, Paris, 2003.

International Commission on Large Dams, "Flood Evaluation and Dam Safety", Bulletin 170, ICOLD, Paris, 2016.

Joos, B., "Flood Integration Method (FIM)", ICOLD Proceedings, Stavanger, 2015.

Louie, P.Y.T. and Hogg, W.D., "Extreme Value Estimates of Snowmelt", Canadian Hydrology Symposium, pp 64-76, 1980.

Micovic, Z., "An Overview of Three Hydrologic Flood Hazard Estimation Methods Used by BC Hydro", ICOLD 2013 International Symposium, Seattle, USA, 2013.

Molini, A., Katul, G.G., and Porporato, A., "Maximum Discharge from Snowmelt in a Changing Climate", Geophysical Research Letters, Vol. 38, L05402, 2011.

Newton, D.W., "Realistic Assessment of Maximum Flood Potentials", Journal of the Hydraulics Division, American Society of Civil Engineers, Vol. 109, No. 6, pp 905-918, 1983.

Ouranos, "Probable Maximum Floods and Dam Safety in the 21st Century Climate", Report submitted to Climate Change Impacts and Adaptation Division, Natural Resources Canada, 39 p., 2015.

Pramanik, N., Panda, R.K., Sen, D., "Development of Design Flood Hydrographs Using Probability Density Functions", Hydrological Processes, Vol. 24, pp 415-428, 2010.

SNC-Lavalin Inc., "Gestion du réservoir Gouin – Étude complémentaire", June 2001.

Wang, C., "A Joint Probability Approach for the Confluence Flood Frequency Analysis", Retrospective Theses and Dissertations, Paper 14865, Iowa State University, 2007.

Yue, S., Ouarda, T.B.M.J., Bobée, B., Legendre, P., Bruneau, P., "The Gumbel Mixed Model for Flood Frequency Analysis", Journal of Hydrology, Vol. 226, Issues 1-2, pp 88-100, 1999.

Yue, S., Ouarda, T.B.M.J., Bobée, B., Legendre, P., Bruneau, P., "Approach for Describing Statistical Properties of Flood Hydrograph", Journal of Hydrologic Engineering, Vol. 7, No. 2, 2002. http://ascelibrary.org/doi/abs/10.1061/(ASCE)1084-0699(2002)7:2(147)
3. STOCHASTIC APPROACH TO FLOOD HAZARD
DETERMINATION

3.1. INTRODUCTION

The traditional concept of the Inflow Design Flood (IDF) has been and is still being used to size the dam and its designated flood discharge facilities (i.e. spillway, low-level outlets) so that the dam can safely pass either a flood of pre-determined probability of exceedance or the Probable Maximum Flood (PMF). The IDF standard is directly linked to the dam hazard classification, so that low-hazard dams are designed using a smaller IDF than high-hazard dams. For high (or extreme) hazard dams, two general world trends have developed (ICOLD, 2003):

1. USA, UK, Canada, Australia and countries under their economic and
technological influence use the PMF methodology. The PMF is defined as the
most severe “reasonably possible” combination of rainfall, snow accumulation,
air temperatures, and initial watershed conditions. The PMF is a deterministic
concept and its probability of occurrence cannot be determined. Theoretically, it
represents the upper physical flood limit for a given watershed at a given season.
In reality, PMF estimates are typically lower than the theoretical upper limit by
some variable amount that depends on the available data, the chosen
methodology and the analyst’s approach to deriving the estimate (Micovic et al.,
2015).
2. Most European countries use probabilistic methods to derive an inflow
flood characteristic (typically peak flow of certain duration) with return periods
ranging from 1,000 to 10,000 years.
For lower-hazard dams, the IDF selection criteria vary but typically include either a percentage of the PMF or return periods shorter than 1,000 years (ICOLD, 2003).

In recent years, an increasing number of dam owners have started to apply various forms of risk-informed decision-making processes in their dam safety assessments regarding flood hazard. For example, the IDF selection guidelines published by the US Federal Emergency Management Agency (FEMA, 2013) suggest that, besides the traditional prescriptive approach to IDF selection, a risk-informed hydrologic hazard analysis should be carried out at the discretion and judgment of dam safety regulators and owners "for dams for which there are significant trade-offs between the potential consequences of failure and the cost of designing to the recommended prescriptive standard". The guidelines suggest that an integral part of risk-informed hydrologic hazard analysis is the development of hydrologic loads that can consist of peak flows, hydrographs, or reservoir levels and their annual exceedance probabilities (AEP). Another example is the latest guidelines by the Australian National Committee on Large Dams on the selection of acceptable flood capacity for dams (ANCOLD, 2017). While these guidelines retained the PMF concept as part of a simplified risk procedure for extreme hazard dams, there is a clear emphasis on risk assessment. Even for the cases when the PMF needs to be derived, it is recommended that its "reasonableness" be considered and assessed using the procedure outlined in Nathan et al. (2011), so that the degree of conservatism implicit in the PMF is justified and properly aligned with dam safety decisions regarding potential dam upgrade costs.
Note that a risk-informed approach to flood hazard for dam safety implies knowing the probability of dam overtopping due to flood. In practice, this means that the full probability distribution of reservoir levels needs to be derived, so that the exceedance probability of the reservoir level corresponding to the dam crest can be quantified. The IDF standard is necessary for sizing the surcharge storage, the height of a dam and the outlet works, and could be useful in assessing the safety of dams with a fairly steady reservoir level (where a full pool assumption is not unreasonable) and without active discharge control systems such as gated spillways or low-level outlets. However, for individual dams or dam systems with fluctuating reservoir elevation and active discharge control systems, the IDF concept is inadequate for use in flood hazard risk assessment analyses.

For those types of dams and dam systems, the IDF, by characterizing the inflow to the reservoir, does not provide the necessary information (i.e. magnitude and probability) on the flood hazard in terms of the hydraulic forces acting on the dam itself (peak reservoir level). The commonly used solution to this problem is to route the IDF through the reservoir and determine the resulting peak reservoir level, thereby obtaining at least some information (magnitude but not probability) on the flood hazard acting on the dam. In addition, the IDF concept typically assumes that, during an extreme flood, everything operates according to plan, i.e. accurate reservoir level measurements, spillway gates opening as required, necessary personnel available on site, communication lines fully functioning. In other words, the IDF concept does not address the possibility of an "operational flood", in which a dam could fail due to a combination of a flood that is much smaller than the IDF and one or more operational faults.
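Routing the IDF through the reservoir is typically done with some form of level-pool (storage-indication) routing. The sketch below (our illustration; the reservoir geometry, spillway rating and inflow hydrograph are all hypothetical) steps through the storage equation dS/dt = I - O and shows that the quantity acting on the dam is the peak reservoir level, which differs in both magnitude and timing from the peak inflow.

```python
import numpy as np

DT = 3600.0                      # time step (s)
AREA = 5.0e7                     # reservoir surface area (m2), assumed constant
CREST = 10.0                     # spillway crest level above datum (m)
C_WEIR = 1.7 * 100.0             # weir coefficient times crest length

def outflow(level):
    """Free discharge over an uncontrolled weir (hypothetical rating curve)."""
    head = max(0.0, level - CREST)
    return C_WEIR * head ** 1.5

# Hypothetical triangular inflow hydrograph peaking at 3,000 m3/s
t = np.arange(0, 96) * DT
inflow = np.interp(t, [0.0, 36 * DT, 95 * DT], [100.0, 3000.0, 100.0])

level, peak_level, peak_out = 9.5, 0.0, 0.0
for q_in in inflow:
    q_out = outflow(level)
    level += (q_in - q_out) * DT / AREA    # mass balance: dS = (I - O) dt
    peak_level = max(peak_level, level)
    peak_out = max(peak_out, q_out)

print(f"peak inflow {inflow.max():.0f} m3/s, peak outflow {peak_out:.0f} m3/s, "
      f"peak reservoir level {peak_level:.2f} m")
```

The routed peak outflow is smaller than the peak inflow (reservoir attenuation), which is precisely why the inflow probability alone does not describe the hazard acting on the dam.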

The number of possible combinations of unfavourable events causing such a failure is very large and increases with the complexity of the dam or system of dams. Consequently, the probability of dam failure due to an unusual combination of relatively usual unfavourable events, which individually are not safety-critical, is larger than the probability of dam failure solely due to an extremely rare flood. Baecher et al. (2013) stated that, for a complex system such as flow control at a dam, the number of possible combinations of unfavourable events is correspondingly large, even as the probability of any one combination occurring is small. As a result, the chance of at least one pernicious combination occurring can be large. There are many examples of "operational flood" failures; two North American examples are given below:
- Canyon Lake Dam on Rapid Creek in South Dakota failed on June 9th, 1972, resulting in 238 fatalities. The reason for the dam failure was not a lack of flood-passing capacity but the inability to use the spillway, which was clogged by debris.
- Taum Sauk Dam in Missouri overtopped and failed on December 14th, 2005. The reason for the overtopping was not high inflow but an error in reservoir level measurement (the pressure transducers that monitored reservoir levels became unattached from their supports, causing erroneous water level readings, i.e. reported reservoir levels that were lower than actual levels). In addition, the emergency backup reservoir level sensors were installed too high, thereby enabling overtopping to occur before the sensors could register a high reservoir level.

Clearly, risk-informed decision making for dam safety requires more than the IDF concept. In order to have any scientifically based idea of the probability of dam overtopping due to floods, it is necessary to focus on estimating the probabilities of the peak reservoir level. The process can be described as follows:

- The reservoir inflow of a certain probability of exceedance is only the starting value, which gets modified by a complex interplay of starting reservoir level, reservoir operating rules and decisions, and the on-demand reliability of discharge facilities, personnel and measuring equipment.

- At the end of the process, the reservoir outflow and the associated peak reservoir level have a different exceedance probability from the reservoir inflow that started the process.

- The exceedance probability of the peak reservoir level is what determines the probability of dam failure due to flood hazard; the probability of the reservoir inflow is no longer the major driving parameter, but only one of the inputs needed to calculate the probability of the peak reservoir level.

Note that the peak reservoir level, unlike the reservoir inflow, is not a natural, random phenomenon, and its probability distribution cannot be computed analytically (e.g. by using statistical frequency analysis methods). The probability of the peak reservoir level is the combination of the probabilities of all factors that influence it, including reservoir inflows, initial reservoir level, reservoir operating rules, system component failures, human error, measurement error, as well as unforeseen circumstances. Thus, the approach to estimating the full probability distribution of the peak reservoir level consists of some kind of stochastic simulation that includes as many of these factors and scenarios as possible. It is a complex multi-disciplinary analysis which is currently beyond the technical capabilities of some dam owners. However, without it, proper risk-informed dam safety management is not possible. The main goal of the stochastic simulation approach to flood hazard is to carry out a probabilistic analysis of various flood characteristics (inflow, outflow, peak reservoir level) resulting from floods on a dam system and to derive the continuous probability distributions which could then be used to evaluate the exceedance probabilities of various reservoir levels, including the level corresponding to the dam crest (dam overtopping level) as well as the level resulting from the PMF. That way, different design criteria could be considered and evaluated at various flood frequency levels, thereby departing from the widely used strict "pass/fail" deterministic design criteria.

3.2. BASIC PRINCIPLES OF STOCHASTIC APPROACH TO FLOOD HAZARD

In deterministic approaches, a particular flood characteristic (e.g. inflow, outflow, routed reservoir level) is the result of a fixed combination of meteorological, hydrological and reservoir routing-related inputs. For instance, the peak reservoir level resulting from the PMF is derived using a fixed combination of the following inputs:

- Rainfall magnitude and its spatial and temporal distributions over the
watershed (typically provided in form of Probable Maximum Precipitation)

- Initial snowpack accumulation within the watershed

- Air temperature sequence during the PMF event

- Initial soil moisture content of the watershed

- Initial reservoir level

- Availability and operating sequence of discharge facilities


On the other hand, stochastic approaches treat those inputs as variables instead of fixed values, considering the fact that any flood characteristic could be caused by an infinite number of different combinations of inputs. The variation of the flood-producing input parameters is achieved by stochastic sampling, either from empirical distributions or from theoretical probability distributions fitted to observed data. Note that most stochastic flood analyses assume process stationarity, i.e. that the probability distributions of the input parameters do not change over time. Consequently, they do not capture potential changes in hydrometeorological parameters and their inter-relationships that could result from long-term climate change.

Another thing that all stochastic approaches to flood hazard for dam safety have
in common is the use of a deterministic watershed model (i.e. rainfall‐runoff
model) to convert rainfall and snow/glacier melt into watershed runoff, which
ultimately becomes the reservoir inflow. In terms of watershed model simulation,
the stochastic flood hazard methods could employ:

- Event-based simulation, where a watershed model is used to convert a rainfall storm event of a certain probability into a flood hydrograph (typically 3 to 7 days in duration). Initial watershed conditions such as soil moisture content and snowpack accumulation have to be assumed and described stochastically.
- Continuous simulation, where a watershed model is used to convert historical or synthetic rainfall time series into a continuous reservoir inflow record from which flood events of interest can be directly extracted. In this case, initial watershed conditions are continuously accounted for by the watershed model, which is an obvious advantage over event-based simulation approaches.

Boughton and Droop (2003) presented a review of continuous simulation for design flood estimation. Despite their theoretical advantages, it should be noted that continuous simulation models face the challenge of the model complexity needed to accurately represent the full range of watershed runoff, from droughts and low flows to the very large floods typically needed for dam safety applications. Nathan (2017) provided an excellent discussion of some particular issues that should be carefully considered prior to using continuous simulation models to derive flood frequency curves over a probability range of relevance to dam safety (i.e. return periods of 1,000 years and beyond). An example of continuous simulation used for design flood estimation is the GRADE method (Hegnauer et al., 2014) developed in the Netherlands and used to derive design discharges for the rivers Rhine and Meuse, with drainage areas of 165,000 and 21,000 km² respectively. A stochastic weather generator based on nearest-neighbour resampling is used to produce rainfall and temperature series that preserve the statistical properties of the original, historically observed data. Synthetic rainfall and temperature data are generated at multiple locations simultaneously in order to preserve their spatial distribution over the watershed, without making assumptions about the underlying joint distributions. A continuous record of 50,000 years of daily weather data is simulated using a simple nonparametric resampling technique where daily rainfall amounts are resampled from the historical record (56 years for the Rhine basin; 73 years for the Meuse basin) with replacement.

Note that this approach does not generate daily rainfall amounts greater than
those observed in the historical record; however, the technique of resampling with
replacement creates different temporal patterns resulting in multi‐day rainfall
amounts higher than those observed in the historical record. A watershed model
is used to calculate runoff from this synthetic rainfall and temperature series, and
the runoff is then routed using a hydrodynamic model to account for complexities
associated with retention and flooding along particular river stretches. This
procedure yields the continuous record of 50,000 years of daily discharges at or
near the points where rivers Rhine and Meuse enter the Netherlands. Finally, the
flood values for the various return periods are obtained by ranking the annual
maximum discharges in the generated 50,000‐year sequence in the ascending
order, where the rank in this ordered set determines the return period. The main
source of uncertainty in the GRADE method is the relatively short length of the
historical precipitation and temperature series used in the stochastic weather
generator. Using less than 100 years of historical data to generate 50,000 years
of synthetic data affects the ability to accurately capture year‐to‐year variability
over the long periods of time. For instance, resampling from a relatively wet
baseline series will result in relatively wet long‐duration synthetic series, which in
turn will increase uncertainty associated with derived flood discharge values,
especially for higher return periods. This large uncertainty is reflected in the
GRADE flood frequency results for the Meuse River, where the best estimate of
10,000‐yr flood of 4,400 m3/s is given with the 95% uncertainty range of 3,250 to
5,550 m3/s.
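The resampling idea behind GRADE can be illustrated compactly. In the sketch below (ours; the 73-year daily record is synthetic), no resampled day ever exceeds the historical daily maximum, yet resampling with replacement produces new multi-day totals, and hence annual maxima, beyond anything in the base record, and return periods are read directly from the ranked annual maxima.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a ~73-year historical daily rainfall record (mm)
historical = rng.gamma(0.6, 9.0, size=73 * 365)

N_YEARS = 50_000
annual_max_3day = np.empty(N_YEARS)
for y in range(N_YEARS):
    # One synthetic year: 365 daily amounts resampled with replacement
    year = rng.choice(historical, size=365, replace=True)
    three_day = year[:-2] + year[1:-1] + year[2:]   # running 3-day totals
    annual_max_3day[y] = three_day.max()

# Rank the annual maxima; empirical return period T = (N + 1) / rank
ranked = np.sort(annual_max_3day)[::-1]             # largest first
for T in (100, 1_000, 10_000):
    rank = round((N_YEARS + 1) / T)
    print(f"~1:{T}-year 3-day rainfall: {ranked[rank - 1]:.0f} mm")

print(f"historical 1-day maximum: {historical.max():.0f} mm "
      f"(no single resampled day can exceed it)")
```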
Finally, there are stochastic approaches that fit somewhere in between event‐
based simulation and continuous simulation – they could be called semi‐
continuous or hybrid approaches. They utilize a watershed model that has been
calibrated to satisfactorily represent hydrological behaviour of the watershed over
a long continuous period for which historical record of climate input data is
available. This creates a continuous database of watershed initial conditions
which can be stochastically sampled at any time/season of the year and
combined with a rainfall event of a certain duration and probability, sampled from
rainfall magnitude‐frequency curve. The end result is thousands of flood
hydrographs ranging in magnitudes from common to extreme. The advantage
over event‐based simulation approach is that statistical distributions of initial (pre‐
storm) watershed conditions are likely more realistic since they do not have to be
arbitrarily assumed. The advantage over continuous simulation approach is that
there is no need to carry out the difficult task of generating thousands of years of
continuous synthetic rainfall and temperature sequences of questionable
accuracy. Examples of semi‐continuous or hybrid stochastic flood models are
SEFM (Schaefer and Barker, 2002) and SCHADEX (Paquet et al., 2013).

3.3. MAIN ASPECTS OF STOCHASTIC FLOOD HAZARD MODELLING FOR DAM SAFETY

There are three distinct aspects of stochastic flood simulation for a hydroelectric
system consisting of a single or multiple dams and reservoirs.

1. Simulation of natural runoff from the local watershed and inflow into the
reservoir.

2. Simulation of reservoir operating rules (if any), i.e. flood routing for a
single reservoir or a system of multiple dams and reservoirs.
3. Simulation of on‐demand availability of various system components such
as failure of different discharge facilities, telemetry errors, human operator errors,
or some combination of those.

Ideally, all three aspects are combined within the stochastic simulation framework, and multi-thousand-year series of extreme storm and flood annual maxima are generated by computer simulation. The simulation for each year contains a set of climatic and storm parameters that are sampled through Monte Carlo procedures based on the historical record, with the dependencies among the different hydrometeorological inputs collectively preserved. Execution of a rainfall-runoff model, combined with reservoir routing of the inflow floods through the system and stochastically modelled failure/availability of various system components, provides the computation of a corresponding multi-thousand-year series of annual flood maxima. Simulated flood characteristics such as peak inflow, maximum reservoir release, inflow volume, and maximum reservoir level are the parameters of interest.
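Put together, one simulated year reduces to a chain of sampled inputs and deterministic transformations. The following simplified Python sketch (ours; every number and model component is an invented placeholder, including the gate failure probability and the toy routing rule) shows the structure of such a framework, including an on-demand gate availability draw of the kind examined by Micovic et al. (2016).

```python
import numpy as np

rng = np.random.default_rng(0)

N_YEARS = 20_000        # simulated years of annual flood maxima
N_GATES = 4             # number of spillway gates (hypothetical system)
P_GATE_FAILS = 0.01     # assumed on-demand failure probability per gate
GATE_CAPACITY = 500.0   # discharge capacity per available gate (m3/s)
AREA = 4.0e7            # reservoir surface area (m2), taken as constant
DAM_CREST = 12.0        # dam crest level (m above local datum)

def route(inflow, level0, n_gates_ok):
    """Toy level-pool routing: available gates discharge fully above 9 m."""
    level = peak = level0
    for q_in in inflow:
        q_out = n_gates_ok * GATE_CAPACITY if level > 9.0 else 0.0
        level = max(0.0, level + (q_in - q_out) * 86400.0 / AREA)
        peak = max(peak, level)
    return peak

peaks = np.empty(N_YEARS)
for y in range(N_YEARS):
    # 1. Natural inflow: sampled annual flood shaped into a triangular hydrograph
    q_peak = max(rng.gumbel(800.0, 250.0), 50.0)              # m3/s
    inflow = np.interp(np.arange(15.0), [0.0, 5.0, 14.0], [50.0, q_peak, 50.0])
    # 2. Operations: sampled initial reservoir level
    level0 = rng.uniform(7.0, 9.5)
    # 3. Components: each gate may independently fail to open on demand
    n_gates_ok = int(np.sum(rng.random(N_GATES) > P_GATE_FAILS))
    peaks[y] = route(inflow, level0, n_gates_ok)

print("P(annual peak level > dam crest) =", np.mean(peaks > DAM_CREST))
```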
However, it is extremely difficult, if not impossible, to accurately cover all three aspects of stochastic flood simulation due to the enormous complexity of a dam/reservoir system and all possible interactions among its components. That is why, in practical applications, not all flood-producing factors are modelled to the same extent: some are treated as stochastic variables, and some are fixed or not modelled at all. For instance, the third aspect of flood hazard simulation mentioned above (stochastic simulation of the on-demand availability of various system components) is rarely carried out due to the complexities and difficulties in describing probability distributions of variables such as spillway gate failure or human error. A recent study by Micovic et al. (2016) attempted to cover all three aspects of stochastic flood hazard simulation on a system of three dams and reservoirs with seasonally fluctuating reservoir levels and active discharge control systems. The aim of the study was to examine how the inclusion of spillway gate failure likelihood functions in the stochastic flood modelling framework affects the probability of dam overtopping. The results indicated that dams are much more likely to be overtopped due to an unusual combination of relatively common individual events than due to a single extreme flood event. All three aspects of the stochastic simulation framework are discussed in the following sections.

3.3.1. Stochastic simulation of reservoir inflows

An example of the hydrometeorological inputs to a stochastic flood model, and of the dependencies that exist in the stochastic simulation of each input, is listed in Table 3.1. Note that natural dependencies are prevalent throughout the collection of hydrometeorological variables. The natural dependencies/correlations are preserved in the sampling procedures, with a particular emphasis on seasonal dependencies. For instance, the sampling of freezing levels is conditioned on both the month of occurrence and the 24-hour precipitation magnitude. The watershed conditions for soil moisture, snowpack, and initial reservoir level are all inter-related and inherently correlated with the magnitude and sequencing of daily, weekly and monthly precipitation. These inter-relationships are established through calibration to historical streamflow records in long-term continuous watershed modelling, and the state variables are stored for each day of the calibration period (typically 25 years or more). The inter-dependencies are preserved for each flood simulation through a resampling procedure.
Table 3.1. Hydrometeorological Inputs to SEFM for BC Hydro's Campbell River System

| Model input | Dependencies | Probability model | Comments |
| --- | --- | --- | --- |
| Storm seasonality | Independent | Normal distribution | End-of-month storm occurrences |
| 72-hour precipitation magnitude | Independent | 4-parameter Kappa distribution | Developed from regional precipitation analyses and isopercental spatial storm analyses |
| Temporal/spatial distribution of storms | Independent | Resampling from equally-likely historical storms | 15 prototype storms, 72-hour to 144-hour long time-series |
| Air temperature and freezing level temporal patterns | Temporal patterns are matched one-to-one to prototype storms | Resampling from historical storms | Pattern indexed to 1,000 mb temperature and freezing level for day of max. 24-hour precipitation |
| Air temperature at 1,000 mb | Storm magnitude | Physically-based stochastic model | For day of maximum 24-hour precipitation in storm |
| Air temperature lapse-rate | Independent | Normal distribution | For day of maximum 24-hour precipitation in storm |
| Freezing level | 1,000-mb temperature, temperature lapse-rate and storm magnitude | Physically-based stochastic model | For day of maximum 24-hour precipitation in storm |
| Watershed model antecedent conditions (snowpack, soil moisture) | Seasonality of storm | Resampling of historical conditions, Oct 1983 – present | Sampled from antecedent condition files. Sampled year is independent; sampled month corresponds to the month sampled from the seasonality of storm occurrence. |
| Initial reservoir level | Seasonality of storm and watershed model antecedent conditions | Resampling of historical conditions with the same reservoir operating rules as the current ones (1998 – now) | Sampled from recorded reservoir level data. Sampled year has similar antecedent precipitation to the year sampled for watershed model antecedent conditions. |

3.3.1.1. Storm seasonality

The seasonality of storm occurrence is defined by the monthly distribution of the historical occurrences of storms with widespread areal coverage that have occurred over the studied area. This information is used to select the date of occurrence of the storm for a given stochastic simulation. The basic concept is that the seasonality characteristics of the extraordinary storms used in stochastic flood simulations should be the same as the seasonality of all significant storms in the historical record. The term "significant" is somewhat subjective, but it usually refers to storm events where the precipitation maxima for a given storm duration exceed a 10-year return period at three or more precipitation gauges within the studied area. This criterion assures that only storms with both unusual precipitation amounts and broad areal coverage are considered in the analysis.

Figure 3.1 shows an example of this procedure applied to the 1,463 km² Campbell River watershed on Vancouver Island, BC, Canada, using the 72-hr storm duration. The procedure resulted in the identification of 69 significant storms within the 1896-2009 period. A probability plot was developed using numeric storm dates (9.0 is September 1st, 9.5 is September 15th, 10.0 is October 1st, etc.) and it was determined that the seasonality data could be well described by a Normal distribution. A frequency histogram was then constructed based on the fitted Normal distribution to depict the twice-monthly distribution of the dates of significant storms for input into the stochastic simulation framework.

Figure 3.1 Probability plot and frequency histogram of storm seasonality for the
Campbell R. watershed in Canada

Figure 3.1 shows that significant historical storms have occurred in the period from early October through about mid-March, with a mean date of December 21st. The probability of occurrence of a storm for any given mid-month or end-of-month can be determined from the incremental bi-monthly frequencies depicted in the Figure 3.1 histogram (e.g. zero probability for September mid-month, and a probability of 0.0228 for September end-of-month).
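The fitting step can be reproduced along the following lines (a sketch; the storm dates below are hypothetical stand-ins for the 69 Campbell River storms, and scipy is assumed for the Normal fit).

```python
import numpy as np
from scipy.stats import norm

# Numeric storm dates: 9.0 = Sep 1, 9.5 = Sep 15, 10.0 = Oct 1, ...
# Dates past New Year continue the scale (13.0 = Jan 1, 15.5 = Mar 15).
# Hypothetical sample standing in for the identified significant storms.
dates = np.array([10.1, 10.5, 10.9, 11.2, 11.6, 11.9, 12.1, 12.4,
                  12.6, 12.8, 13.0, 13.3, 13.6, 14.0, 14.5, 15.2])

mu, sigma = norm.fit(dates)          # Normal distribution fit to storm dates
print(f"mean numeric date: {mu:.2f}, std dev: {sigma:.2f}")

# Probability of a storm occurring in each half-month bin (simulator input)
bins = np.arange(9.0, 16.01, 0.5)
probs = np.diff(norm.cdf(bins, mu, sigma))
for lo, p in zip(bins[:-1], probs):
    print(f"bin starting {lo:4.1f}: probability {p:.4f}")
```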

3.3.1.2. Precipitation magnitude-frequency relationship

Generally speaking, floods can result from storms of various durations. This is especially true for very large watersheds (e.g. > 10,000 km²), where significant floods could originate from intense short-duration storms covering only a part of the watershed, or from a widespread general synoptic storm of longer duration covering the entire watershed area.

The direction of the storm and its speed of movement over the watershed are also important factors. Consequently, a proper stochastic flood modelling process should sample rainfall storms of different durations from their respective frequency distributions and weight the modelled floods according to the observed frequency of the different-duration rainfall storms used to produce them.

This approach implies the existence of a separate rainfall-frequency relationship for each considered storm duration. However, the stochastic modelling is often simplified by choosing a so-called "critical storm duration" for a particular watershed and deriving a rainfall-frequency relationship only for that duration. This simplification is particularly effective in watersheds with drainage areas under approximately 5,000 km², where it is relatively easy to determine the typical storm duration from precipitation gauges in the area, and where the exclusion of other storm durations in the stochastic flood simulation has a relatively minor effect on overall accuracy.
The vast majority of precipitation stations record on a daily basis, which results in logical choices of the 1-day, 2-day, 3-day or 4-day duration for the precipitation-frequency analysis. For instance, BC Hydro's experience shows that for watersheds in the Pacific Coastal zone of British Columbia the 72-hr (3-day) duration is the most representative of the typical storm duration, whereas for some watersheds in Interior British Columbia it is the 48-hr duration. Similarly, Électricité de France generally uses the 3-day duration in its SCHADEX stochastic model for most French watersheds, with some small and flash-flood-prone watersheds being treated with shorter storm durations.
The magnitude of precipitation relevant to dam safety analyses typically corresponds to return periods several orders of magnitude beyond what has been observed in the historical record. As such, the estimation of this range of rainfall presents special difficulties and requires the extrapolation of relatively short historical data records. This extrapolation is challenging, especially considering that rainfall input is generally the most significant contributor to the resulting stochastically derived flood hydrographs. There are different ways to perform this extrapolation and obtain a precipitation magnitude-frequency relationship over the entire probability domain of interest, including return periods of 10,000 years and beyond. Two examples are described here:

1. In the SEFM model (Schaefer and Barker, 2002), the precipitation magnitude-frequency analysis is done by utilizing regional L-moment analysis (Hosking and Wallis, 1997). Employing regional precipitation-frequency analysis compensates for the short length of the available hydrometeorological record by considering a larger study area. This approach takes advantage of the fact that the size of the study area is much larger than the typical areal coverage of storms for the duration of interest, so there will be many storms in the regional dataset with return periods higher than indicated by the chronological length of the historical record.

Applying this approach to the Campbell River watershed above Strathcona Dam on Vancouver Island, BC, Canada, included assembling storm data from all locations that are climatologically similar to the Campbell River region. Precipitation annual maxima series data were assembled for the critical duration (72-hour in this case) from all stations on Vancouver Island and stations between latitudes 47° and 52° N from the Pacific Coast eastward to the crest of the Coastal Mountains (Canada) and Cascade Mountains (USA). This totaled 143 stations and 6,609 station-years of record for stations with 25 years or more of record. The precipitation-frequency relationship (Figure 3.2) was developed through regional L-moment analyses of point precipitation and spatial analyses of historical storms to develop point-to-area relationships and determine the basin-average precipitation for the watershed, using the 4-parameter Kappa distribution. The uncertainty bounds were developed through the Latin hypercube sampling method (McKay et al., 1979; Wyss and Jorgenson, 1998), where regional L-moment ratios and Kappa distribution parameters were varied to assemble 150 parameter sets and perform Monte Carlo simulation using different probability distributions for the individual parameters.
Figure 3.2 Computed 72-hour precipitation-frequency curve and 90% uncertainty bounds for the 1,193 km² Strathcona Dam watershed
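The quantile function of the 4-parameter Kappa distribution makes such a curve straightforward to evaluate once its parameters are fixed. The sketch below (ours; the parameter values are purely illustrative and are not the fitted Strathcona values) computes precipitation quantiles for selected return periods and uses a simple Latin hypercube draw of non-exceedance probabilities to sample the curve evenly (the study itself applied Latin hypercube sampling to the distribution parameters rather than to the probabilities).

```python
import numpy as np

def kappa_quantile(F, xi, alpha, k, h):
    """Quantile function of the 4-parameter Kappa distribution (Hosking):
    x(F) = xi + (alpha / k) * (1 - ((1 - F**h) / h)**k)."""
    return xi + (alpha / k) * (1.0 - ((1.0 - F**h) / h) ** k)

# Purely illustrative parameters (not the fitted Strathcona Dam values)
xi, alpha, k, h = 120.0, 25.0, -0.25, -0.30

for T in (100, 1_000, 10_000):
    F = 1.0 - 1.0 / T
    q = kappa_quantile(F, xi, alpha, k, h)
    print(f"1:{T}-year 72-hr basin-average precipitation: {q:.0f} mm")

# Latin hypercube draw of probabilities: one stratified sample per stratum
rng = np.random.default_rng(3)
n = 150
F_lhs = (np.arange(n) + rng.random(n)) / n       # stratified U(0,1) sample
sample = kappa_quantile(F_lhs, xi, alpha, k, h)  # precipitation sample (mm)
print(f"LHS sample mean: {sample.mean():.1f} mm")
```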

2. In the SCHADEX model (Paquet et al., 2013), the precipitation magnitude-frequency analysis is catchment-specific (a regional approach is used if local data are lacking), utilizing a weather pattern classification and a multi-exponential distribution linked to specific weather patterns (Garavaglia et al., 2010). The underlying hypothesis is that a rainfall sampling based on days having similar atmospheric circulation patterns will provide more homogeneous sub-samples, which will in turn reduce the uncertainty associated with extrapolation from a short sample. The rainfall stochastic generator of SCHADEX is based on the concept of a 3-day event, the so-called "centered rainfall event". SCHADEX therefore only develops the precipitation magnitude-frequency relationship for the central daily rainfall (dark blue in Figure 3.3 below); the precipitation quantiles of the day before and the day after (adjacent rainfalls, light blue in Figure 3.3) are estimated using the probabilities of the ratios Pa-/Pc and Pa+/Pc, computed from rainfall events identified in the historical record. Note that Pa-, Pc and Pa+ represent the daily rainfall amounts for the day before the central day, the central day and the day after the central day, respectively.
Figure 3.3 SCHADEX concept of centered rainfall event
(with central and adjacent rainfalls)
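The centered-event construction can be sketched as follows (our illustration; the exponential tail and the ratio samples are invented stand-ins for the weather-pattern-based fits used in SCHADEX).

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_central_rainfall():
    """Hypothetical exponential tail for the central daily rainfall Pc (mm)."""
    return 20.0 + rng.exponential(35.0)

# Empirical ratios Pa-/Pc and Pa+/Pc, as would be extracted from rainfall
# events in the historical record (values below are invented for the sketch)
ratios_before = np.array([0.15, 0.30, 0.45, 0.60, 0.25, 0.50, 0.35, 0.20])
ratios_after = np.array([0.10, 0.40, 0.55, 0.30, 0.20, 0.45, 0.25, 0.65])

def sample_centered_event():
    """One 3-day centered rainfall event: [Pa-, Pc, Pa+]."""
    pc = sample_central_rainfall()
    pa_minus = pc * rng.choice(ratios_before)   # day before the central day
    pa_plus = pc * rng.choice(ratios_after)     # day after the central day
    return np.array([pa_minus, pc, pa_plus])

for _ in range(5):
    pa_m, pc, pa_p = sample_centered_event()
    print(f"Pa-: {pa_m:5.1f} mm  Pc: {pc:5.1f} mm  Pa+: {pa_p:5.1f} mm")
```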

3.3.1.3. Temporal and spatial distribution of storms

The process of stochastic storm generation requires both spatial and temporal storm templates that are scalable. The spatial and temporal storm templates are linearly scaled by the ratio of the desired basin-average precipitation of a certain duration to the basin-average precipitation of the same duration observed in a selected storm template (i.e. prototype storm). These storm templates should be prepared from as many historically observed storms as possible in order to capture the diversity among storms in terms of the spatial and temporal distribution of precipitation. Typically, 10 to 20 storm templates should be enough to capture storm diversity over a given watershed, for watershed sizes up to about 5,000 km². Larger watersheds should be divided into zones of a size suitable for describing the spatial and temporal variability of the storm types that may affect the watershed on a given day, with separate storm analyses carried out for each zone within the watershed.
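The linear scaling itself is a one-line operation once the template's basin-average total is known. The sketch below (ours, with a hypothetical 72-hour hourly template) scales a prototype storm to a sampled target total while preserving its temporal pattern.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical prototype storm: 72 hourly basin-average rainfall values (mm)
template = rng.gamma(1.2, 1.8, size=72)

def scale_storm(template, target_total):
    """Linearly scale a prototype storm so its basin-average total equals
    the stochastically sampled target, keeping the temporal pattern."""
    return template * (target_total / template.sum())

target = 280.0                      # sampled 72-hr basin-average total (mm)
scaled = scale_storm(template, target)

print(f"template total {template.sum():.1f} mm -> scaled total {scaled.sum():.1f} mm")
print(f"pattern preserved: peak at hour {template.argmax()} in both "
      f"({template.argmax() == scaled.argmax()})")
```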

These kinds of decisions are typically site-specific and depend on detailed meteorological analysis of the given area, including the use of measurable parameters such as geopotential height contour maps for different pressure heights, precipitable water, convective available potential energy and the scale of the precipitation footprint (synoptic, mesoscale, or local convective storms) determined by the percentage of gauges in the area exceeding a specified daily precipitation threshold. For instance, there could be large areas with fairly simple climatology, whereas some relatively small watersheds could have complex climatology with multiple storm types of different spatial, temporal and seasonal characteristics, resulting in a mixed population of storms and floods.
In the presented example of the 1,463 km² Campbell River watershed located in the Pacific Coastal zone of British Columbia, as mentioned earlier in Section 3.3.1.2, it was fairly straightforward to determine the typical storm duration from precipitation gauges in the area, meaning that the exclusion of other storm durations/types in the stochastic flood simulation would have a relatively minor effect on overall accuracy.
Spatial storm templates for a given storm are developed by analyzing rainfall data and using GIS analyses to compute the basin-average precipitation of a certain duration (e.g. 24, 48, 72-hr). The analyzed rainfall data could be hourly or daily, and from point measurements or from radar, depending on what type of rainfall data is required as input into the rainfall-runoff model used in the stochastic flood simulation. An example of a spatial storm template (the 72-hour precipitation for the October 1984 storm over the Strathcona Dam basin) is shown in Figure 3.4.

Temporal storm templates could be developed as a 3-day template with the centered main rainfall and adjacent rainfalls, as assumed in the SCHADEX methodology and described in the previous section (Figure 3.3). A more elaborate way to develop temporal storm templates is utilized by the SEFM method, where hourly rainfall data from many point-measurement stations within a given watershed are used to obtain the basin-average rainfall of a specified duration (e.g. max. 72-hr). This is followed by the examination of the 10-day period of precipitation encompassing the max. 72-hour precipitation, using daily synoptic weather maps, radiosonde data and air temperature temporal patterns.

This procedure leads to the identification of the time span during which there was
a continuous influx of atmospheric moisture from the same air mass where
precipitation was produced under similar synoptic conditions. The identified time
span provides the starting and ending times for the precipitation segment that is
independent of surrounding precipitation and scalable for stochastic storm
generation. An example of this type of temporal storm template is shown in
Figure 3.5 which depicts the observed 10‐day period of basin‐average
precipitation for the storm of October 14‐23, 2003 for the Strathcona Dam basin,
with the portion of the hyetograph (in blue) that was identified as the independent
scalable segment of the storm and therefore adopted for use as a prototype storm
for stochastic storm generation.
Figure 3.4 Spatial storm template (October 1984 storm)

Figure 3.5 Temporal storm template (October 2003 storm)

3.3.1.4. Air temperature and freezing level temporal patterns

The usual approach is to first stochastically simulate the 1,000 mb air temperature (i.e. the temperature at or near sea level), followed by stochastic simulation of the air temperature lapse-rates that are required for computing both freezing levels and air temperature values within the full elevation range of a given watershed.

Within the stochastic flood simulation framework, 1,000 mb air temperatures during extreme storms could be simulated using a variety of approaches. One example is a physically based probability model for 1,000 mb dewpoint temperatures derived from monthly maximum dewpoint data (Hansen et al., 1994). This probability model utilizes monthly upper-limit dewpoint data and the magnitude of the maximum 24-hour precipitation within the storm relative to the 24-hour PMP. The 1,000 mb dewpoint temperatures are drawn from a symmetrical Beta distribution bounded by lower and upper bounds, as shown in Figure 3.6 for December in the Vancouver Island region, BC, Canada. A separate relationship, similar to Figure 3.6, is used for each month because the 1,000 mb dewpoint climatology changes with season. Higher maximum 1,000 mb dewpoints are possible in the fall months of October and November than in the colder winter months of January and February, which implies that freezing levels tend to be somewhat lower for storms in the colder winter months.

Figure 3.6 Range of 12-hour persisting 1,000 mb dewpoint temperatures utilized by the dewpoint temperature probability model (December example)
The next step is to stochastically generate air temperature lapse‐rates. For
example, analyses of upper air sounding data from Northwestern Washington
and Central California stations reveal that air temperature lapse‐rates on the day
of maximum 24‐hour precipitation for noteworthy storms are well described by
the Normal Distribution (Figure 3.7). The mean value was found to be 5.1°C/1000
m, which is near the saturated pseudo‐adiabatic lapse‐rate. Similar results were
