Evaluating the predictions associated with climate change just became easier with the development of new statistical methods designed to assess the performance of models producing many of these predictions.

The new U.S. Geological Survey research, conducted in collaboration with the University of Queensland and the National Snow and Ice Data Center, will help ecologists, managers, and policy makers examine the quality of predictions produced by individual climate models or by sets of models.

Management agencies and policy makers often use predictive models to help develop and choose intervention strategies, whether those are wildlife management plans, climate adaptation strategies, or even energy policies. Increasingly, sets of models are being used concurrently to represent the scientific uncertainty in the predictions.

“Are our model sets working?” asks Michael Runge, research ecologist at the USGS Patuxent Wildlife Research Center, who led the study. “If observations are falling within the bounds suggested by the model, that’s great. But we needed a way to detect when a whole model set might be failing.”

Runge and his colleagues developed statistical methods for evaluating predictions from a single model or a set of models. These methods provide a way to detect failures of a model set. They applied these methods to two data sets, one involving predictions of the breeding distribution of northern pintail ducks, and one involving predictions of Arctic sea ice.
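The release does not describe the statistics themselves, but the underlying idea, checking whether observations keep landing inside the bounds spanned by a set of models, can be illustrated with a rough sketch. The Python code below is purely hypothetical and is not the authors' published method: it uses made-up numbers and a simple percentile-based coverage check to flag observations that fall outside a model set's bounds.

```python
import numpy as np

# Illustrative sketch only: NOT the published USGS method.
# A simple coverage check against an ensemble's predictive bounds,
# using invented example numbers.

rng = np.random.default_rng(0)

# Hypothetical ensemble: 10 models, each predicting a quantity over 30 years.
n_models, n_years = 10, 30
ensemble = rng.normal(loc=10.0, scale=1.0, size=(n_models, n_years))

# Hypothetical observations that slowly drift below the ensemble over time.
observations = 10.0 - 0.1 * np.arange(n_years) + rng.normal(scale=0.5, size=n_years)

# Bounds of the model set: here, the 5th and 95th percentiles across models.
lower = np.percentile(ensemble, 5, axis=0)
upper = np.percentile(ensemble, 95, axis=0)

# Flag years where the observation falls outside the model-set bounds.
outside = (observations < lower) | (observations > upper)

# A persistent run of out-of-bounds observations is a warning sign that the
# whole model set, not just one model, may be failing.
for year, flag in enumerate(outside):
    if flag:
        print(f"Year {year}: observation {observations[year]:.2f} "
              f"outside bounds [{lower[year]:.2f}, {upper[year]:.2f}]")
```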

The methods suggest that observations of summer Arctic sea-ice extent are falling within the bounds of the current set of climate models, but increasingly favor those models that predict an ice-free Arctic summer around 2055.

For northern pintail ducks, the methods, had they been in use, would have detected a change in the birds' breeding distribution in 1985, 20 years before the change was actually detected and incorporated into hunting regulations.

Early detection of failure of a model set can trigger the work needed to diagnose the failure, build better models, and ultimately, improve the predictions used as the basis of decisions. In the practice of adaptive management, this process is sometimes called “double-loop learning.”

The article, “Detecting failure of climate predictions,” by M.C. Runge, J.C. Stroeve, A.P. Barrett, and E. McDonald-Madden, is available in Nature Climate Change online.

Photo credit: USFWS
