When climate models are developed, researchers test how well they replicate the climate system by using them to simulate past climate. Ideally, the model output will match the climate conditions that were actually recorded, indicating that the model correctly characterizes how the climate system works and can be used to reliably project future conditions. However, this approach assumes that a model that reproduces past climate will also accurately predict future climate, even though the relationships within the climate system may change over time.
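As an illustration, this kind of hindcast evaluation amounts to computing simple skill scores between simulated and observed past climate. The minimal Python sketch below uses synthetic placeholder arrays rather than any real model output or observational record; the variable names and values are assumptions for illustration only.

```python
import numpy as np

# Synthetic placeholders: an "observed" past record and a model hindcast
# of the same period (here given a small artificial warm bias).
rng = np.random.default_rng(1)
observed = rng.normal(14.0, 2.0, 600)            # e.g., monthly mean temperature
hindcast = observed + rng.normal(0.3, 0.8, 600)  # model with a slight warm bias

# Basic skill scores comparing the hindcast with the observed record.
bias = hindcast.mean() - observed.mean()
rmse = np.sqrt(np.mean((hindcast - observed) ** 2))
corr = np.corrcoef(hindcast, observed)[0, 1]
print(f"bias: {bias:+.2f} degC, RMSE: {rmse:.2f} degC, correlation: {corr:.2f}")
```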
This research contributes to generating more reliable local-scale climate projections by testing the assumption that the climatological relationships that held in the past will continue to hold in the future. To do this, researchers developed a novel approach in which very high-resolution climate model data were used as a surrogate for historical and future "observations", making it possible to test how well the more commonly used coarse-scale global climate models project future climate conditions.
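To make the setup concrete, below is a minimal, hypothetical sketch of such a "perfect model" test in Python. All data here are synthetic stand-ins, and simple quantile mapping stands in for whatever downscaling method is actually used; the project's specific methods and datasets are not described in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (all names and values are illustrative):
# high-resolution model output plays the role of "observed" truth,
# and coarsened fields stand in for a global climate model.
n_hist, n_fut = 1000, 1000
truth_hist  = rng.normal(15.0, 3.0, n_hist)             # high-res "observed" past
truth_fut   = rng.normal(17.0, 3.5, n_fut)              # high-res "observed" future
coarse_hist = truth_hist + rng.normal(0, 1.0, n_hist)   # coarse past
coarse_fut  = truth_fut  + rng.normal(0, 1.2, n_fut)    # coarse future

# Step 1: train a simple statistical downscaling method (quantile mapping)
# on the historical period, as if only past data were available.
quantiles = np.linspace(0.01, 0.99, 99)
coarse_q = np.quantile(coarse_hist, quantiles)
truth_q  = np.quantile(truth_hist, quantiles)

def downscale(coarse_values):
    """Map coarse values onto the high-res distribution using the
    historical quantile relationship (i.e., assuming stationarity).
    Values outside the training range are clamped by np.interp."""
    return np.interp(coarse_values, coarse_q, truth_q)

# Step 2: apply the historically trained method to future coarse fields.
downscaled_fut = downscale(coarse_fut)

# Step 3: score against the high-res future "truth" -- a comparison that is
# impossible with real observations but trivial in a perfect-model test.
bias = downscaled_fut.mean() - truth_fut.mean()
rmse = np.sqrt(np.mean((np.sort(downscaled_fut) - np.sort(truth_fut)) ** 2))
print(f"future bias: {bias:+.2f} degC, distributional RMSE: {rmse:.2f} degC")
```

Because the high-resolution future fields play the role of future observations, the final scoring step is possible here even though it never is with the real climate record; any breakdown of the stationarity assumption shows up directly as degraded future skill.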
Findings suggest that the assumption holds reasonably well in many cases, but it is less robust in some instances, for example in particular geographic locations (such as coastal regions) and at certain times of year (especially summer). This research also explores the conditions under which the assumption breaks down and develops ways to make the methods used to generate local information about climate change more reliable. The results can improve the reliability of the climate projections that resource managers use to inform vulnerability assessments, adaptation planning, and other important climate-related decisions.