An evaluation of debris-flow runout model accuracy and complexity in Montecito, CA: Towards a framework for regional inundation-hazard forecasting

June 17, 2019

Numerous debris-flow inundation models have been applied retroactively to noteworthy events around the world. While such studies can be useful for identifying controlling factors, calibrating model parameters, and assessing future hazards in specific study areas, model parameters tailored to individual events can be difficult to apply regionally. The fatalities and extensive infrastructure damage caused by the debris flows that inundated a developed fan in Montecito, CA, following heavy rain on 9 January 2018 underscore the need to advance debris-flow modeling from post-event validation of individual case studies toward pre-event forecasting that can be implemented rapidly and at regional scales.

In this study, we evaluated the tradeoffs between model accuracy and simplicity in the context of a framework that could be used in conjunction with initiation models and storm predictions for rapid, large-scale inundation-hazard mapping as a component of post-fire debris-flow hazard assessments. We used numerical (FLO-2D) and empirical (LAHARZ) models to simulate debris flows from one of the drainages upstream of Montecito that burned in the Thomas Fire in December 2017, and we compared model results with field observations and building-damage assessments collected immediately after the event.

Initial testing demonstrated that LAHARZ can simulate channelized flow but cannot replicate flow bifurcations or avulsions, which are critical aspects of flows traveling over populated fans. FLO-2D simulations matched the observed inundation area well but variably under- and over-predicted inundation height, deposit depth, and velocity. We found that FLO-2D and LAHARZ had true positive rates of 0.84 and 0.6, respectively, and that both models had similar false positive rates (0.3 and 0.35, respectively).
Our model evaluation framework allowed us to compare model results with detailed field observations and will serve as a platform for more extensive model testing in the future.
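The true and false positive rates reported above follow from overlaying observed and predicted inundation extents on a common grid and tallying cell-by-cell agreement. A minimal sketch of that comparison, using NumPy and a small made-up binary grid (not the Montecito data or the paper's actual workflow):

```python
import numpy as np

# Hypothetical binary masks on a shared grid (1 = inundated, 0 = dry).
# In practice "observed" would come from field mapping and "predicted"
# from a rasterized FLO-2D or LAHARZ run.
observed = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 1, 1, 0]])
predicted = np.array([[1, 0, 0, 1],
                      [1, 1, 0, 0],
                      [0, 1, 0, 0]])

tp = np.sum((predicted == 1) & (observed == 1))  # correctly predicted inundation
fn = np.sum((predicted == 0) & (observed == 1))  # missed inundation
fp = np.sum((predicted == 1) & (observed == 0))  # false alarms
tn = np.sum((predicted == 0) & (observed == 0))  # correctly predicted dry cells

tpr = tp / (tp + fn)  # true positive rate (hit rate)
fpr = fp / (fp + tn)  # false positive rate (false alarm rate)
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")  # → TPR = 0.67, FPR = 0.17
```

With real model output, the same counts can be accumulated over every grid cell of the mapped fan, giving a single (TPR, FPR) pair per model run for direct comparison.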