Editor’s Note: The more accurately a flood model can determine where floodwater will go and how deep it will be at each location, the better it will perform. While the resolution of a flood model is important, the quality of the data underlying its digital terrain model is more so.
Flood is a hazard to which a vast area of the United States is vulnerable. Losses resulting from floods are rising as development continues apace, perhaps particularly in urban areas where previously permeable surfaces are paved over. In addition, precipitation patterns may be evolving with climate change; “wetter” storms and more flooding are expected. Yet insurance take-up for the flood peril remains low.
The scale and complexity of this hazard are reflected in the scale and complexity of the catastrophe models created to help manage the risk. For example, the domain of the Verisk Inland Flood Model for the United States (Figure 1) comprises 18 separate hydrological regions covering an area of approximately 8.2 million square kilometers (3.2 million square miles). It covers all areas that contribute to flooding in the entire contiguous United States (excluding the Great Lakes) in addition to some streams and catchments outside the country’s borders that drain to rivers in the United States. The river network modeled is more than 2.2 million km—about 350 times the radius of the Earth (6,371 km), or the equivalent of about three trips to the moon and back. More than 6,000 river gauges and 500 tidal stations are used for calibration. Within this enormous and complex hydrological domain, losses are estimated for both on-floodplain and off-floodplain locations.1
To develop an accurate view of flood risk, many factors must be taken into account, including an understanding of the weather systems that produce precipitation and the building characteristics of insured properties that experience flooding. While knowing the volume of water involved in a flood event is fundamental, the other key to calculating potential damage is knowing where floodwater will go and how deep it will be at each location. Understanding that requires detailed information about the topography of the entire modeled domain.
Water flows and takes on the shape of its container—which in the case of a flood is the landscape into which it pours or drains. Therefore, the better the topographical data used in a catastrophe model, the more accurately flood inundation can be modeled.
In Verisk’s flood model, a digital terrain model (DTM) is used in the hydraulic component for estimating flood depths. Topography (i.e., ground elevation) can change within a few meters, making one location prone to flooding while another nearby might not be. High-resolution data is required to accurately estimate location-level losses, so the quality of the DTM is of paramount importance.
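At its simplest, the hydraulic step turns a modeled water surface and the DTM into a depth at every grid cell. The short sketch below (in Python, with entirely hypothetical elevation values; it is not the Verisk implementation) illustrates the idea: depth is the water surface elevation minus the ground elevation, floored at zero.

```python
import numpy as np

# Hypothetical 1D cross-section: ground elevation from a DTM (meters)
# and a modeled water surface elevation (WSE) for the same cells.
ground_elev = np.array([102.0, 101.2, 100.4, 100.1, 100.6, 101.5])
water_surface = np.array([101.0, 101.0, 101.0, 101.0, 101.0, 101.0])

# Flood depth at each cell is the water surface minus the ground,
# floored at zero where the ground sits above the water.
depth = np.clip(water_surface - ground_elev, 0.0, None)
print(depth)  # cells with ground below 101.0 m are inundated
```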
30-Meter Resolution
The basis of the Verisk DTM is high-precision ground surface elevation data contained in the National Map, which is maintained by the United States Geological Survey (USGS) and in the public domain. The National Map provides data at both 30- and 10-meter resolution for the conterminous United States, and at 3-meter resolution for a limited number of locations. For a comparison of 30- and 10-meter resolution, see Figure 2.
The National Map provides both a digital surface model, which includes trees and other surface features, and a DTM with these features removed. Most data incorporated into the National Map is still based on digitized topographical maps of varying age and accuracy, but more and more LiDAR survey data is becoming available and being incorporated. Some of this data is lower in resolution, and some higher, but even LiDAR does not guarantee a great data set; all of these sources require quality control to create bare-earth elevations free of artifacts (spurious features that arise from small errors in the data collection method). A good-quality 30-meter DTM can prove superior in practice to a poor-quality 10-meter one. For example, a 10-meter DTM based on an interpolated topographical map will typically not have the level of detail and accuracy available in a 30-meter DTM based on a native 10-meter LiDAR data set.
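One common screen for such artifacts, sketched below with hypothetical values rather than any production workflow, is to compare each cell with the median of its neighborhood and flag cells that deviate by more than a tolerance; flagged cells can then be reviewed, removed, or interpolated over.

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_spikes(dtm: np.ndarray, window: int = 5, tolerance_m: float = 5.0) -> np.ndarray:
    """Flag cells that deviate from their neighborhood median by more than
    tolerance_m meters -- a crude screen for spikes and pits left by
    collection errors or imperfect bare-earth filtering."""
    local_median = median_filter(dtm, size=window)
    return np.abs(dtm - local_median) > tolerance_m

# Hypothetical 10 m tile with one spurious spike.
dtm = np.full((50, 50), 100.0)
dtm[20, 20] = 130.0  # artifact, e.g., an unfiltered structure or sensor error
suspect = flag_spikes(dtm)
print(suspect.sum(), "suspect cell(s) to review or interpolate over")
```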
A digital terrain model represents the landscape using a grid, each cell of which is set at an averaged elevation for its location. The size of the cells determines the resolution of the model—the smaller the cell, the more accurately topography can be represented and the higher the resolution of the model.
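The sketch below (hypothetical values, not production code) shows the averaging idea: aggregating 3 x 3 blocks of 10-meter cells yields 30-meter cells, and the small-scale relief within each block is smoothed away.

```python
import numpy as np

def block_average(elev: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate a fine elevation grid to a coarser one by averaging
    factor x factor blocks of cells (e.g., factor=3 turns a 10 m grid
    into a 30 m grid). Assumes dimensions are divisible by factor."""
    rows, cols = elev.shape
    return elev.reshape(rows // factor, factor, cols // factor, factor).mean(axis=(1, 3))

# Hypothetical 6 x 6 grid of 10 m cells; averaging 3 x 3 blocks yields
# a 2 x 2 grid of 30 m cells, smoothing away small-scale relief.
fine = np.arange(36, dtype=float).reshape(6, 6)
coarse = block_average(fine, factor=3)
print(coarse.shape)  # (2, 2)
```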
When Verisk launched the industry’s first probabilistic inland flood model for the United States in 2014, we employed a DTM at a resolution of 30 meters, which corresponded to one of the National Map resolution offerings. When a few inches of elevation can make the difference between damage occurring and a property being spared, 30 meters might seem a coarse resolution for a flood model. So, why was it adopted as the basis for our flood model and hazard layers in 2014?
The answer lies in the prodigious amounts of data and compute power required to model such a large domain, and in the limitations of the elevation and exposure data available. Verisk conducted extensive testing to ensure optimal accuracy and concluded that, at that time, a 30-meter resolution was optimal in terms of accuracy, operational requirements, and availability over the full extent of the model’s domain.
In the few years since the Verisk model was first launched, several advances have taken place, including:
- Computing technology can handle significantly larger data sets
- The quality of the surveys on which the 30-meter and 10-meter DTMs were based has improved, and LiDAR coverage has significantly expanded
- We have become more sensitive to various DTM issues and have incorporated ways to account for them in our modeling efforts
- Data at 10-meter resolution provides finer detail, enabling hydrographic corrections and making more accurate hydraulic modeling possible
10-Meter Resolution
Using fine resolution data for a flood application can introduce challenges; the finer the resolution, the greater the demands on the data and the more important data quality becomes. Whatever the resolution, imperfections and artifacts are inevitably encountered within the data used to build a DTM. Furthermore, the finer the resolution, the more work it takes to identify and process artifacts and structures to produce a usable DTM. Rivers are often misrepresented and need accurate conditioning and revision, which may add further complexity and uncertainty to fine resolution data (Figure 3).
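A common conditioning technique, shown here only as an illustrative sketch rather than a description of Verisk’s workflow, is “stream burning”: lowering the DTM along a surveyed channel so that modeled flow follows the known river course instead of an artifact in the terrain.

```python
import numpy as np

def burn_stream(dtm: np.ndarray, channel_mask: np.ndarray, drop_m: float = 2.0) -> np.ndarray:
    """Illustrative 'stream burning': lower DTM cells along a mapped channel
    so that modeled flow follows the known river course rather than an
    artifact in the terrain. channel_mask is True where the channel runs."""
    conditioned = dtm.copy()
    conditioned[channel_mask] -= drop_m
    return conditioned

# Hypothetical tile where the surveyed channel crosses row 10.
dtm = np.random.default_rng(0).normal(100.0, 0.5, size=(20, 20))
channel = np.zeros_like(dtm, dtype=bool)
channel[10, :] = True
conditioned = burn_stream(dtm, channel)
```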
Creating a large DTM at high resolution is an extremely labor-intensive task that contributed substantially to the more than 70 person-years invested in creating Verisk’s U.S. flood model; there are no shortcuts. Fortunately, with the much more capable computing resources now available, far more data can be processed, and many more artifacts can be identified and resolved. Although it still requires a great deal of work, creating a DTM at a 10-meter resolution has become a practical proposition. While a flood model using a 30-meter DTM might flood a wide area, at 10-meter resolution the spread of floodwater can be more accurately confined (Figure 4). This greatly improves the modeling of flood extents for more frequent, less severe events (return periods of 2, 5, or 10 years).
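A deliberately simplified one-dimensional example (hypothetical elevations; real models route flow rather than simply thresholding elevations) shows why: a levee only one 10-meter cell wide stays above the water level at 10-meter resolution but is averaged below it at 30 meters, so the coarse grid loses the barrier that would confine the water.

```python
import numpy as np

# Hypothetical 1D transect of 10 m cells: low ground at ~9.5 m protected by
# a levee one cell (10 m) wide with a crest at 14 m.
fine = np.array([9.5, 9.5, 9.5, 14.0, 9.5, 9.5])   # 10 m cells
coarse = fine.reshape(2, 3).mean(axis=1)            # 30 m cells: [9.5, 11.0]

water_level = 12.0
print(fine < water_level)    # levee cell stays dry: the barrier is resolved
print(coarse < water_level)  # crest averaged down to 11.0 m: the barrier is lost
```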
3-Meter Resolution
If 10-meter resolution is better than 30-meter, wouldn’t an even finer resolution be better still? Yes and no. To give an extreme example, a model that could resolve the water elevation at sub-meter resolution would increase the detail (and complexity) of the model, but it would not yield a better evaluation of replacement costs for an entire building. At scales that fine, the water elevation within a building is effectively constant, and the footprint of most buildings is much larger than a 3-meter square; large buildings and campuses cover a significantly greater area.
Tools for flood risk assessment need to enable evaluation of risk across the entire property, not just at a single geocode such as a rooftop centroid. At a resolution this fine, the requirement for better exposure data quality, including accurate geocodes, is significantly greater. For a limited but growing number of locations, 3-meter data exists; where it is of better quality, Verisk has resampled it into our 10-meter view (Figure 5), but a resolution this fine is not yet practical for the full extent of the model’s domain.
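Assessing risk across a footprint rather than at a single point can be as simple as summarizing the modeled hazard over every cell the footprint covers, as in the hypothetical sketch below (the depth grid and footprint cells are illustrative only, not drawn from the Verisk model).

```python
import numpy as np

# Hypothetical modeled flood depths (m) on a fine grid, and the set of grid
# cells covered by one large building footprint (row, col indices).
depth_grid = np.array([
    [0.0, 0.0, 0.1, 0.3],
    [0.0, 0.2, 0.4, 0.6],
    [0.0, 0.1, 0.3, 0.5],
])
footprint_cells = [(0, 1), (0, 2), (1, 1), (1, 2), (2, 1), (2, 2)]

depths = np.array([depth_grid[r, c] for r, c in footprint_cells])
centroid_depth = depth_grid[1, 2]  # depth sampled at a single geocoded point only

# Summarizing across the footprint avoids under- or over-stating the hazard
# that a single rooftop-centroid sample can produce.
print(f"centroid: {centroid_depth:.2f} m, footprint mean: {depths.mean():.2f} m, "
      f"footprint max: {depths.max():.2f} m")
```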
The Sum of Its Parts
By combining advances in technology and computational resources with what we have learned about the nuances and quality of DTM surveys, Verisk has been able to take full advantage of increased DTM resolution. The enhanced Inland Flood Model for the United States currently being developed by Verisk will include a nationwide 10-meter digital elevation model and river channel corrections for increased accuracy in hydraulic modeling.
The quality of a DTM and its resolution, however, are not the only determinants of a flood model’s efficacy. When modeling flood hazard, many components are required, and each has its own level of accuracy and uncertainty. It is essential, in particular, to have the exact location of a property accurately recorded and to ensure that the detailed exposure data explicitly includes not only primary property-specific risk characteristics but secondary ones as well, such as foundation type and first floor height, to capture the true nature of the risk. To obtain a reasonable loss assessment for a given building, its ground elevation and geocoded location must both be accurately represented.
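As a hypothetical illustration of why such secondary characteristics matter (the depths, foundation heights, and damage curve below are toy values, not Verisk’s damage functions), the same flood depth can translate into very different damage depending on first floor height.

```python
def effective_depth_m(flood_depth_m: float, first_floor_height_m: float) -> float:
    """Depth of water above the first finished floor; negative values mean
    the water did not reach the living space (a raised foundation helps)."""
    return flood_depth_m - first_floor_height_m

# Two otherwise identical buildings in 0.6 m of floodwater: one slab-on-grade,
# one on a 0.9 m crawlspace. The damage ratios below come from a toy curve.
for foundation, ffh in [("slab-on-grade", 0.1), ("crawlspace", 0.9)]:
    d = effective_depth_m(0.6, ffh)
    damage_ratio = 0.0 if d <= 0 else min(0.25 + 0.5 * d, 1.0)  # illustrative only
    print(f"{foundation}: water {d:+.1f} m relative to first floor, "
          f"damage ratio {damage_ratio:.2f}")
```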
Ultimately, whether the DTM is at 30- or 10-meter resolution matters less than its quality, and any flood model is only as good as the sum of all its parts and the quality of the data used to develop it.
References
1 This article focuses on the modeling of on-floodplain risk.