Part Six – The Light at the End of the Tunnel is a Thunderstorm

[The entire report can be downloaded as PDF, flood_years-r0-20241201.pdf]
[The start: https://www.scienceisjunk.com/the-100-year-flood-a-skeptical-inquiry-part-1/]

Decisions need to be made. A building must be declared inside, or outside, of a flood plain. A dam must be built to specific dimensions. Decision makers want clear-cut answers to their questions. Diligent scientists can work long and hard to provide those answers. But science is hard on a good day – and there aren’t very many good days. People providing answers need to understand and to be clear about the remaining uncertainty. People making decisions need to learn how to make decisions under uncertainty.

Conclusion and Recommended Further Work

In this article we mostly discussed ways to get hints about the future from past data, and the limits of those methods. We also touched on deterministic ways to work out how a drainage basin responds to rain events, and how difficult that can be in the details.

On applying conventional statistical tools to historical precipitation data, we saw that:
 – The available data is usually poor: short in duration and full of gaps or erroneous entries,
 – Rare events do not show up often enough to give a good representation in small data sets,
 – The standard assumptions of statistics (independence, stationarity, identical distributions, thin tails) are false for real-world data, especially weather,
 – Sophisticated analysis techniques (e.g. GEV) may under-predict extremes for those reasons,
 – Simple statistical techniques (e.g. waterfall plots) may be a better choice, eschewing the precision of other methods when precision was never there to begin with.
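The gap between a fitted tail and what a short record can honestly support can be shown in a few lines. This is a minimal sketch on synthetic data (a lognormal stand-in for daily rain, an assumption, not real records): it fits a GEV to 50 annual maxima with scipy and contrasts the 100-year estimate with rank-based (Weibull) plotting positions, which run out at roughly a 51-year return period.

```python
# Sketch: GEV fit on annual maxima vs. a simple plotting-position view.
# The lognormal "daily rain" below is synthetic, standing in for real records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

years, days = 50, 365
daily = rng.lognormal(mean=0.0, sigma=1.2, size=(years, days))  # synthetic daily totals
annual_max = daily.max(axis=1)

# Conventional approach: fit a GEV to the 50 annual maxima,
# then read off the 100-year return level (the 0.99 annual quantile).
c, loc, scale = stats.genextreme.fit(annual_max)
gev_100yr = stats.genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)

# Simple alternative: Weibull plotting positions on the sorted maxima.
# With 50 years of data the empirical curve stops near a 51-year return
# period; it cannot honestly speak about the 100-year event at all.
ranked = np.sort(annual_max)[::-1]
return_period = (years + 1) / np.arange(1, years + 1)

print(f"GEV 100-year estimate: {gev_100yr:.2f}")
print(f"Largest observed value (about {return_period[0]:.0f}-yr by rank): {ranked[0]:.2f}")
```

The point is not which number is right; it is that the GEV extrapolates past the edge of the record, while the plotting positions stop where the evidence stops.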

This article is the result of two months of part-time work by a diligent amateur (with a few graduate math credits). It only superficially covers the problem of local weather modeling, and the issues of water basin mapping and functional modeling are barely introduced. But it has enough firm results that it should inspire further related work.

We would suggest starting with any of the following.

Data Cleanup and Expansion

This might be one good job for an AI tool. We saw one instance where archive rain data was clearly wrong, as revealed by a contemporary news report. Comparing billions of archive records against other sources is just the kind of laborious job that computers are built for. In addition to correcting data, more detail might be recovered, finer than daily data, where it is not already on hand. We don’t need hourly observations for every day going back centuries, just more detail during storms, as might be found in written accounts.
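As a toy illustration of that cross-checking job, the sketch below flags days where two sources of daily totals disagree beyond a tolerance. The dates and values are invented for the example; a real pipeline would pull from digitized archives and news accounts.

```python
# Sketch: flag days where two sources of daily rainfall disagree by more
# than a tolerance. All dates and amounts here are made up.
daily_archive = {"1916-07-15": 51.0, "1916-07-16": 22.1, "1916-07-17": 0.0}
other_source  = {"1916-07-15": 51.2, "1916-07-16": 2.21, "1916-07-17": 0.0}

def flag_disagreements(a, b, tol=1.0):
    """Return dates present in both sources whose values differ by more than tol."""
    return sorted(d for d in a.keys() & b.keys() if abs(a[d] - b[d]) > tol)

suspect = flag_disagreements(daily_archive, other_source)
print(suspect)  # prints ['1916-07-16'], which looks like a misplaced decimal point
```

The interesting engineering is in matching records across formats and dates, not in the comparison itself, which is why the job suits automation.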

Checking Limits of the GEV/GPD Methods

This author is not well versed in the literature on the derivations and known limits of the formal extreme value theorems, but it has not been hard to find data sets which strain the credibility of their results. In particular, I suspect that time-series rain data will never fit a simple distribution well, and I think it might be best modeled with a compound probability distribution. Such a distribution works in stages: first a discrete (stepwise) distribution gives the odds of a few different component distributions corresponding to different broad weather patterns, and their possible combinations; then the rainfall amount is drawn from the component selected.

Does this compound probability resolve to a stable distribution? Does that distribution obey a form of the central limit theorem (which we need to start a GEV derivation)? Someone more motivated than I am might attempt an exact solution using a small model (perhaps three different exponentials). What I might be tempted to do instead is a large Monte Carlo attack on the problem, where the model can in practice be much more complicated.
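A Monte Carlo version of that compound model takes only a few lines. The regime weights and mean rain amounts below are invented for illustration: each simulated day first draws a weather regime from a discrete distribution, then draws its rainfall from that regime's exponential. Fitting a GEV to a short simulated record and comparing against a much longer simulation shows how the experiment might be set up.

```python
# Monte Carlo sketch of the compound-distribution idea. Each "day" first
# draws a weather regime from a discrete distribution, then a rain amount
# from that regime's exponential. Weights and means are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

weights = np.array([0.85, 0.12, 0.03])  # calm / frontal / tropical (invented)
means   = np.array([2.0, 10.0, 40.0])   # mean daily rain per regime (arbitrary units)

def simulate_annual_maxima(n_years, days=365):
    regimes = rng.choice(3, size=(n_years, days), p=weights)
    daily = rng.exponential(scale=means[regimes])
    return daily.max(axis=1)

# Fit a GEV to a "short record", then compare its 100-year level with the
# empirical 0.99 quantile of annual maxima from a long simulation.
short = simulate_annual_maxima(50)
c, loc, scale = stats.genextreme.fit(short)
gev_100yr = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)

long_run = simulate_annual_maxima(20_000)
mc_100yr = np.quantile(long_run, 0.99)

print(f"GEV estimate from 50-year record: {gev_100yr:.1f}")
print(f"Monte Carlo long-run estimate:    {mc_100yr:.1f}")
```

Rerunning with different seeds, record lengths, and regime mixes would show how often, and by how much, the short-record GEV fit misses the long-run answer.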

Fix the Flood Maps around Asheville!

The most direct and certain result from the work presented here is that the official flood maps for southern Asheville, NC, are on the dangerous side of wrong. The frequency of property-damaging floods is underestimated by a factor of at least two, but possibly ten. To narrow down that range, to get precise but properly conservative flood maps, will require diligent, honest, earnest effort. That effort is demanded.

Previous part: https://www.scienceisjunk.com/the-100-year-flood-a-skeptical-inquiry-part-5/
Back to the beginning: https://www.scienceisjunk.com/the-100-year-flood-a-skeptical-inquiry-part-1/

 
