Developing Effective Drought Monitoring Tools for Farmers and Ranchers

Video Transcript

Detailed Description

This webinar was conducted on August 7, 2017, as part of the USGS National Climate Change and Wildlife Science Center’s Climate Change Science and Management Webinar Series, held in partnership with the USFWS National Conservation Training Center.

Webinar Summary: The South Central U.S. is one of the main agricultural regions in North America: annual agricultural production is valued at more than $44 billion. However, as climate conditions change, the region is experiencing more frequent and severe droughts, with significant impacts on agriculture and broader consequences for land management. This project investigated the information needs of farmers, ranchers, and local land managers in the South Central region to develop drought monitoring tools that are effective and responsive to their needs. Several drought indicators were evaluated for their effectiveness and compared to responses from a regional survey on commonly used drought indicators, impacts, and management strategies. A new indicator based on soil moisture was explored as an option for drought management. All indicators were compared to crop yields to assess variability among indicators and types of applications, recognizing that a single drought indicator may not be most appropriate for all applications.

Details

Date Taken: August 7, 2017

Length: 00:57:01

Location Taken: Reston, VA, US

Transcript

Emily Fort:  Welcome, everyone. We're very excited to have our speakers today. I'd like to introduce first, Steven Quiring. Dr. Steven Quiring is a professor in the Department of Geography at Ohio State University. He received a BA in geography from the University of Winnipeg in 1999, an MA in geography from the University of Manitoba in 2001, and a PhD in climatology from the University of Delaware in 2005.

His research focuses on climate change, climate variability, floods and droughts, as well as the impact of hurricane activity on the built environment.

Also with us is Mark Shafer, Director of the Southern Climate Impacts Planning Program at the University of Oklahoma. Mark Shafer established and is the University of Oklahoma lead for the Southern Climate Impacts Planning Program (SCIPP), a NOAA RISA team for the South Central US.

He's also associate state climatologist with the Oklahoma Climatological Survey and an assistant professor in the University of Oklahoma's Department of Geography and Environmental Sustainability.

His research interests focus on natural hazards, particularly on planning for and managing societal response to extreme events and climate change. Mark holds a PhD in Political Science and an MS in Meteorology from the University of Oklahoma and a BS in Atmospheric Sciences from the University of Illinois, Urbana.

And with that, I’ll turn it over to Steven and Mark.

Mark Shafer:  Well, thank you, Emily. This is Mark Shafer.

I'm going to start the first part of this, and then Steven will pick up from there. Our project was funded by the South Central Climate Science Center.

It looked at how people use drought information and drought indices, how those indices perform in the South Central US, and how they might be improved.

There we go. We had four project objectives on this. The first two of these we'll discuss in this webinar. They involved assessing what people already knew about drought and how they were connected to monitoring efforts.

The second part was looking at the existing indices’ performance. We also did explore other new drought tools and how those might be used. But given the time, we'll just focus on those first two parts of this project.

First, drought indices. There are so many different drought indices around that it's hard to figure out what works best and what we should be evaluating.

Most hazards have a single measure that determines their intensity, but drought has multiple measures. They're going to depend on whether you have a short‑term response or a long‑term response.

There's going to be regional variation, seasonal variation in what you use, so any kind of assessment of drought indices really has to encompass multiple measures.

Some of the more common ones used are listed here. The Palmer Drought Severity Index was, in fact, the first real drought index, developed in the 1960s. It's been around and widely used for a long time, but others have come along.

More recently, we have a lot of water balance models, evaporative stress, things like that, and then soil moisture and vegetation health have recently become more prominent, so a number of different indices to evaluate.

These are some of the different indices that are used by the US Drought Monitor. For those who may be unfamiliar with the US Drought Monitor, it is produced weekly. It's a collaboration among NOAA, USDA, the Regional Climate Centers, the National Drought Mitigation Center, the National Centers for Environmental Information, and a few others.

They rely on local experts. There's a discussion list where people can input their perspectives of their state or their locality and provide feedback to the authors. The authors sift through all these different indices to come up with an assessment of what they think the status of drought is each week.

This status goes from no drought to D0, which is abnormally dry -- not considered drought, but a heads‑up that an area is heading into drought or coming out of it, possibly with lingering impacts -- up to D4, an exceptional, one‑in‑50‑year kind of event.

They use all these indices, and that's part of the problem: they have to look at all of them. Some of these indices are used year‑round, that first set. Some of them are used only during certain seasons, the growing season, for example.

Others are only used in certain regions of the country and may not exist in other parts of the country. For the South Central United States, we were more interested in the shorter‑term indicators especially -- not so much snowpack, because snowpack is not a major issue in most of this region -- as well as the established indices that were out there.

To start with the project, we wanted to assess how information is connected to the local levels. There have been great strides among national partners, federal agencies, state governments, other partners over the last couple of decades.

The creation of the US Drought Monitor process in 1999 improved the communication among these organizations. The Western Governors' Association efforts and the National Integrated Drought Information System all helped move forward a lot of the monitoring, planning, and communication processes.

The National Drought Mitigation Center has been a major part of this, a major focus for a lot of the efforts. They’ve helped with planning and also providing directions in some of this process. Our question was, how well connected is this to local communities?

By local communities, we mean county level or cities that are not necessarily participating directly in the Drought Monitor process, the discussion each week.

We sent out a regional county-level survey asking about drought information sources, needs, and communication across the SCIPP region. SCIPP is a NOAA RISA team. We cover six states in the South Central US. You'll see that in a moment.

We received 331 responses from across the six states, so a pretty good sample size as far as electronic surveys go.

We did hit our target audience. Most of those responding were from small- to medium-size locations, at the level of counties (parishes in Louisiana). Most were from places under 100,000 in population, with the plurality from places of 5,000 to 30,000.

This is a distribution of the survey responses. We had a lot more interest, perhaps, from Oklahoma and Texas, partly because the survey was distributed in the fall of 2014.

At the time, there was a long, multiyear drought ongoing in Oklahoma and Texas, so drought was, perhaps, more at the forefront of people's activities and a little, perhaps, more interest in participating in the survey.

We did get a higher sample there, but we also got a pretty good representation of the other four states, the wetter part of the region -- what we call the wet states. When the responses are aggregated into these groups, we can see some differences emerging that we'll discuss in a few moments.

First question was, what were their perceptions of drought? We were trying to gauge how well connected they were and how important drought was to their activities.

Most of the respondents did not have a formal role in drought management in the sense that they were not responsible for monitoring and relaying information, specifically, according to their agency mandates or org charts or official roles.

But they did, generally, pay attention to it and had some informal interactions. The Drought Monitor, as I mentioned, goes from D0, abnormally dry, when the first indications of drought perhaps developing, up to D4, the exceptional drought.

Most agencies indicated that their actions begin around D2 level, so when the Drought Monitor shows D2. This is actually pretty much in line with what we would expect. D2 is severe drought. It's about a one-in-20‑year kind of event, something that causes severe impacts and that's when most people would probably pay attention.

There was some variability in the responses. I think some, out of concern, need to get earlier interaction and may start gearing things up earlier; others may not be affected until there's a long drought.

For example, for a lot of water resources it takes a while for drought to show up in reservoir levels, so short‑term drought may not have as much impact on them.

The actions that they would take were grouped into different sectors. We had an open‑ended question asking what kind of actions they would take at various levels. This list here shows the typical actions that would be taken by water resource professionals.

For example, they'd be monitoring pond levels, streamflow, and groundwater, along with restrictions they may be able to put in place, assistance programs, and so forth down that list.

We asked them if they had triggers for action. For example, when the Drought Monitor gets to D2, we take this action, or when the Standardized Precipitation Index dips to ‑1.5, or some triggers like that.

The vast majority did not have specific triggers. They did look at a variety of measures and monitored those, but most of them indicated they didn't have a certain level where they would say, "OK, we have to do something."

These lists of measures are things that were self‑reported. The ones at the top of the list were reported by more people than the ones toward the bottom, reservoir levels being the most prominent thing monitored. But again, there weren't really many instances of, "When the lake gets to this level, we take this action."

Next, the choice of indices that they had. We asked them to rank various indicators from “not relevant” to “critical indicator.” We gave them a list of commonly used indices; you'll see the full list on the next slide.

What came out of this is that soil moisture was pretty much universally the most important indicator reported at the county and local level here. Forty-two percent ranked it as a critical indicator. Forty-four percent ranked it as highly relevant. That, across the board, was seen as the most important.

The Drought Monitor was indicated as the second most important tool, partly because a lot of USDA financial assistance actions are tied to the Drought Monitor level. So when it gets to D2 for an extended period -- eight weeks, for example -- or it hits D3, then certain aid programs go into effect.

That was monitored especially by the USDA field offices, county extension, and folks like that.

Precipitation and temperature departures from normal were also commonly used.

We also gave them a list of impact indicators to see what they look at from an impact standpoint -- not from a meteorological assessment standpoint, but the impacts. Crop status, not surprisingly, is the dominant thing measured in the South Central US. People also mentioned looking at county burn bans, direct drought reports, groundwater, vegetation health, and reservoir storage.

Here are the tables showing the breakdown of how people value different indices. They're ordered by the sum of the top two columns -- the percentage that indicated an index as either highly relevant or a critical indicator -- going down the list.

Beyond the ones I mentioned, you start to see some more variability in here, and you see some instances where there's a little more spread in the indicators. For example, for the Keetch‑Byram Drought Index, a fair number say it's not really relevant, but you still have 15 percent saying it's critical.

It's probably that people involved in fire management are looking at that, whereas a lot of others say it's not relevant. There's more of a spread as we go down.

You'll see, at the bottom of this, things like the forecasts -- the temperature ranks, seasonal forecasts -- are the least used. It may be partly because their resolution doesn't really get down to the local level.

The impact indicators, again, here's a listing going down the list. This one is interesting because there weren't as many rated as critical indicators. There was a little more spread in these, but a large number still said crop status and county burn bans were important. As you go down the list, you see wildfire is further down there.

Water quality was very important to a number of people, but its impacts were not as widespread. Streamflow and media reports ranked at the bottom.

We also asked for their sources of information, where did they get their information. What we got from this was that the National Weather Service was overwhelmingly the most frequently used source of information, with 88 percent indicating they use it at least monthly, and most, weekly.

One of the challenges with the National Weather Service is there's a lot of variability among offices. Some offices don't really relay much drought information through their websites, whereas others have a more active role.

It may not provide a common basis, but it suggests that the people who are working on drought management should be more closely interacting with those local forecast offices to better convey information.

USDA, not surprisingly, was very high up on that list, with the crop reports processes and declarations of financial assistance.

State and local mesonets were commonly used in parts, but only Oklahoma and parts of Texas really have a state‑run mesonet. Much of the region didn't have that as a data source.

It suggests that where the data source existed, it was probably a very effective means of reaching people. But that explains a lot of the “do not use” parts of it because there is no network there.

We asked questions about how accurate they thought the Drought Monitor was. Less than half thought that it was usually accurate. This was with all the efforts and all the local input that goes on. There's still a lot of people who think that it isn't quite hitting the mark. Generally, when it's off the mark, there's a tendency to view it as lagging behind.

That's where the importance of getting the impact reports into the process comes in, and of adjusting some of those meteorological variables, because the author, each week, has to decide between short‑term and long‑term indicators, and they don't always line up very well.

When one indicator is saying it's severe and another is saying there's no problem at all, it's hard to split that difference sometimes. That's where impacts can really help steer which way to indicate it.

There's some improvement that's needed in connecting these local levels with the US Drought Monitor.

Communication was something else we asked about. Most respondents, they provide drought information through their own networks, but they're perhaps not well connected to other networks.

The majority of them did not receive information or notification from other sources directly, but they did monitor their own indicators and convey to their constituency. That's another area that could be perhaps focused on a little bit.

The most common way was providing written materials about water conservation or thresholds for action. Some did other things, but those were the primary ways of communicating.

We also looked at regional variation in these responses. This region that SCIPP has in the South Central US has a very semi‑arid area in the western part of the region and very wet, humid subtropical in the east part of the region.

So droughts tend to have different characteristics -- we tend to get more intense, longer‑lasting droughts in the west than in the east. All areas are susceptible to flash drought, and we have seen a number of those over the years. We were interested in how that affected some of their responses.

We grouped these: Oklahoma and Texas, we called them the “dry states,” and Arkansas, Louisiana, Mississippi, and Tennessee we called the “wet states.” By grouping them, we looked at all drought indicators and forecasts and saw that they tended to be viewed as more relevant in the dry states, not surprisingly, because of that long‑lasting drought.

The order of importance of these indicators, though, was essentially unchanged between the two regions. For example, soil moisture was rated very highly in Oklahoma and Texas. It was also rated very highly in Arkansas, Louisiana, Mississippi, and Tennessee. The ordering was pretty consistent.

The impact indicators showed a little bit more variability. There was less emphasis on reported drought impacts in media reports in the dry states. Part of this may be that those states are very well‑connected to the Drought Monitor process, so those are already being captured in what comes out of the Drought Monitor, whereas the wet states are less active on the Drought Monitor discussion list.

There's also greater emphasis on water‑based impact indicators in the dry states. When we looked at communication sources, we did see more reliance on local mesonets and state climate offices in the dry states, because those are the offices most involved in the Drought Monitor process and that's where the mesonets exist.

CoCoRaHS filled in some of that. CoCoRaHS is a volunteer network of precipitation observers -- rain, snow, and hail -- with local and state networks; the Colorado Climate Center runs the program. They can provide daily rainfall, so they fill that need where the mesonets don't exist.

Interestingly, water sources were less consulted in dry states even though impacts were rated more highly, so that was counter‑intuitive.

Here's a look at the rankings of the drought indices by state. You see the US Drought Monitor is up there, and soil moisture and precipitation departures are near the top of pretty much every list. There's a little variability, but it's pretty similar across the board.

The Palmer Drought Severity Index and various forecasts came out in some states; in Louisiana, for example, the five‑day forecast showed up. The Palmer Drought Severity Index showed up in Tennessee and Arkansas, and a little lower on the list in Oklahoma.

There was a little bit of variability in there, but qualitatively, there's similarity. This gave us the foundation for looking at the indices that Steven will discuss here in just a moment, because those were the ones that came up as most important.

When we looked at the US Drought Monitor, the dry states, as I mentioned, were more active on the US Drought Monitor discussion list. Their reports and sources are integrated more into the weekly maps. But even in the states with this very active process -- Texas was the best‑performing state -- only 55 percent rated it as "usually accurate."

There's a lot of work that's still needed to connect those local levels to find out a little bit more about why those perceptions exist, why they think it's missing certain things.

We also asked them to identify a contact. If a user looked at that map and said, "I don't think this is right," could they talk to somebody that would be able to be connected into that drought monitor discussion process?

Oklahoma, Louisiana, and Arkansas had the highest rates of being able to identify a contact, and even there, only about a third of respondents could name an organization that was likely tied into it.

In Oklahoma, they're actually naming the state climatologist, Gary McManus, directly. They knew Gary was the one to go to with this.

Texas actually had the lowest rate, even though it had the best performance on the Drought Monitor. It may be that people believe the process is working so well, they don't really have to convey as much. Tennessee and Mississippi had the lowest connectivity and the least confidence in the Drought Monitor.

What we found on the survey was that there is this active local network, especially in the drier states. There are opportunities to connect it better to that Drought Monitor process and other monitoring efforts and planning efforts.

Within the network, there's a wealth of information. More localized information was wanted, as were historical context, additional indicators, and improved forecasts -- respondents mentioned these would have been useful to many users.

The National Weather Service and state climate offices offer a significant link between the national monitoring and local use that could be explored a little bit further.

With that, I'm going to pass this off to Steven, and he's going to talk about how these indices that we mentioned here actually perform across the region.

Steven?

Steven Quiring:  Thanks, Mark.

First of all, I'd just like to acknowledge the funding from the South Central Climate Science Center, which supported this work, and of course thank Mark, as the PI on this project, for the opportunity to be engaged in something that is really about getting better tools into the hands of decision‑makers and evaluating how well the tools that people are choosing work.

You'll notice that things like soil moisture and precipitation departures in the US Drought Monitor were ones that were indicated as places that people went to and relied upon for information.

With this second objective in the project, we're looking at assessing quantitatively the performance of the drought indices and looking at how well they do in terms of representing both soil moisture conditions -- so what drought index can be used as a proxy for soil moisture conditions -- and then which drought indices are best‑suited for monitoring impacts when we look at the major crops in the region.

You might say, "Well, why not use soil moisture information directly?" That is driven, by and large, by the sparsity of stations, even within this region of the South Central US, where we have quite a few more mesonets and soil moisture monitoring stations than other regions of the United States.

This particular map -- the black circles here show the locations of stations that were used in our analysis. These stations are primarily from the Oklahoma mesonet and west Texas mesonet, and then there's also some stations from the USDA Soil Climate Analysis Network that were used.

In this case, we were limited to stations that provided data for a prolonged period of time, so we're looking at the period from 2000 to 2014.

There are actually more stations that exist ‑‑ not a lot more but there are some ‑‑ the Climate Reference Network, the TxSON Network, which was recently added in the Hill Country, Texas, and some other stations that don't show up here, but we focused on those with the longest period of record to do our comparison.

The idea is that, if there are existing drought indicators that are highly related to soil moisture conditions, then through data sets like PRISM and others, which provide high-resolution, spatially resolved temperature and precipitation, we could calculate these indices and use them to represent soil moisture conditions in places that don't have in situ observations.

We could also look at other sources of soil moisture information, like model‑derived soil moisture simulations or satellite‑derived soil moisture, but because our focus here was on crops, the satellites directly measure soil moisture only in the top couple centimeters of the soil, not the entire root zone, and models have their own issues and limitations and biases, so we did not focus on those two sources in this particular study.

There are six indices that we evaluated, three which I can describe as precipitation‑based. The Standardized Precipitation Index, Percent of Normal Precipitation and then precipitation expressed as percentile, so zero being driest ever recorded, 100 being wettest ever recorded.

We have three indices that we could describe as water‑balance‑based, meaning they account for both supply of moisture from precipitation and demand for moisture through evapotranspiration.
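
As a point of reference, a simplified form of that supply-minus-demand idea (closest to how the SPEI is built; the Palmer indices use a more elaborate accounting, and this formula is not presented in the webinar itself) is

    D_i = P_i - PET_i

where P_i is monthly precipitation and PET_i is potential evapotranspiration. The D_i values are then standardized by location and time of year to produce a dimensionless index, much as the SPI standardizes precipitation alone.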

We calculated these indices monthly and we also aggregated the soil moisture data, which is often at 15‑minute or one‑hour resolution to monthly values, as well, at each of those stations that are shown here. That's the first part of what I'll discuss.

There's a number of different ways that we can represent how well these popular drought indices relate to soil moisture. One is to just look at the total number of stations, approximately 120, and calculate how many of those have correlations above 0.5, an arbitrary threshold that we selected.
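
As an illustration of that counting step, here is a minimal sketch in Python, assuming monthly station data in a pandas DataFrame; the column names and index list are hypothetical, not taken from the project's code.

    import pandas as pd

    # df: one row per station-month, with hypothetical columns 'station',
    # 'soil_moisture', and one column per drought index
    INDICES = ['SPI', 'SPEI', 'Z_index', 'PDSI', 'percent_normal', 'percentile']

    def count_strong_stations(df, threshold=0.5):
        """Count stations whose correlation with soil moisture exceeds the
        (arbitrary) threshold, separately for each drought index."""
        counts = {}
        for index_name in INDICES:
            # Pearson correlation between the index and soil moisture, per station
            r = df.groupby('station').apply(
                lambda g: g[index_name].corr(g['soil_moisture']))
            counts[index_name] = int((r > threshold).sum())
        return counts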

We can see from the top graph that the Standardized Precipitation Evapotranspiration Index has relatively strong correlations at the majority of stations within the region, that this drops off pretty quickly as we get to the Standardized Precipitation Index and percent of normal, and that the lowest number of stations with strong or moderate correlations with soil moisture is for the PDSI.

Another way to look at this is seasonal variations. Perhaps not surprisingly, it turns out that the relationship between drought indices and soil moisture varies significantly over the year.

In general, if we look at conditions during the warmest part of the year -- June, July, August -- we see the strongest correlations for most indices, including the SPI, SPEI, percentiles, and percent of normal, and the weakest correlations tend to be in the cool season. This is because soil moisture is influenced by recharge in fall and winter, so root zone soil moisture and the drought indices become somewhat less tightly coupled during the cool season.

We can similarly express the graph we showed previously, breaking it down into these seasonal categories rather than looking at everything together, and we can see that there is seasonal variability in the correlations between these indices and soil moisture in the cool season and the warm season.

The warm season's on the top, and we see that there's many more indices that do relatively well at representing soil moisture conditions during the summer and that falls off markedly when we get into the cool season.

Perhaps more interesting than seeing the bar graphs where we just aggregate the stations is to look at the spatial patterns. There's a decided spatial pattern for the correlations for most of these indices.

If you look in the bottom right‑hand corner where we have percentiles, you can see that the highest correlations for percentile, the orange and red colors, are located in the driest part of the study region.

As Mark mentioned, there's a strong precipitation gradient, so in West Texas and the Oklahoma Panhandle we tend to see much higher correlations, and as we move to the east, those correlations drop off. That's the case for percentiles, for the SPI, for the Z‑index, and for the PDSI.

The one exception appears to be that the SPEI ‑‑ this is, I guess, Figure C on the left‑hand side, in the middle ‑‑ has relatively strong correlations over the majority of the region. Those little crosses in the center of the circle indicate that the correlations are statistically significant.

For the SPEI, 97 percent of the stations in the study region have statistically significant correlations with the SPEI, in this case, during the months of June, July, and August.

If we look at trying to answer which drought index or which drought indices are best for representing soil moisture, we should note that there is both spatial and temporal variations in the strength of the relationship.

Depending on where you are in this region, if you're a farmer or rancher in the western part, in Texas or western Oklahoma, or whether you're in Louisiana or Arkansas, the degree of correspondence, how representative a given drought index is of root zone soil moisture conditions -- and here I should note, we looked at the top 60 centimeters as our comparison -- varies quite a bit.

Of course, there's also temporal variation, so that not all times of the year have the same strength of relationships between these indices.

This is important because, of course, we would like to have soil moisture stations in all counties in the region, and for that matter in the US, given its importance and how often it was indicated as a critical or highly important indicator through the surveys. Since that's not the case, we need to look for other sources of information to represent that.

Of those, the Standardized Precipitation Evapotranspiration Index has the strongest relationship during the warm season, and the Z‑index tends to have the strongest relationship during the cool season.

These are indices that are relatively closely related to one another. They're both water balance equations that look at supply and demand, so supply of water from precipitation, demand for water from evapotranspiration, and they standardize those differences for location and season.

Both of these tend to be best related, most strongly related to soil moisture.

We also wanted to look at crop impact. This particular study was focused on the largest group of land managers, those who are responsible for managing the greatest area of land in the South Central region, which is the farmers and ranchers.

As Mark noted, crop impacts were one of the important indicators that showed up in our survey.

Similarly, in the second part of the analysis, we compared the six drought indices that we used previously for soil moisture to look at the relationships with crops.

We focused on three crops because these three are the ones planted in the largest portion of the study region; they cover the greatest area. Here we had a longer time period we could use, 1981 to 2014, determined by the availability of the USDA county‑based yields for each of these three crops.

We also need to note the different planting and harvest seasons: winter wheat runs from September to June, corn from March to September, and cotton from May to November.

We'll separately look at how the relationships with these indices vary within the growing season -- for example, how highly the SPI in May is correlated with corn yields at the end of the season.

We de‑trended the yield data because of the influence of changes in farming technology, fertilizers, new seed varieties, and other technological innovations; without that, there's a relatively pronounced upward trend in yield over time.

We de‑trended the yield and then converted it into a z‑score, or standardized representation of yield, and that was what we used for our comparison.
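
A minimal sketch of that de-trending and standardization step, assuming annual county yields in a NumPy array; the linear-trend fit is an illustrative choice, since the webinar doesn't specify the trend model used.

    import numpy as np

    def detrend_and_standardize(yields):
        """Remove a technology trend from annual yields and convert the
        residuals to z-scores (zero mean, unit standard deviation)."""
        yields = np.asarray(yields, dtype=float)
        years = np.arange(len(yields))
        # Fit and remove a linear trend (other trend models could be used)
        slope, intercept = np.polyfit(years, yields, 1)
        residuals = yields - (slope * years + intercept)
        return (residuals - residuals.mean()) / residuals.std()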

Also, we used those crop masks which are shown on this figure in blue, orange, and green, for winter wheat, cotton, and corn respectively, and those are based on USDA's crop data layer. For each year, the USDA records what crops are grown at what locations at 30‑meter resolution.

We used the 2008 to 2015 data to identify those locations where more than half the time a particular crop was grown in a particular location, so that we could focus on the drought conditions, the drought indices, in those locations where these crops are dominant.
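
A minimal sketch of that majority rule, assuming the 2008 to 2015 Cropland Data Layer rasters are stacked into a NumPy array of class codes; the crop code value in the example is illustrative.

    import numpy as np

    def crop_mask(cdl_stack, crop_code, min_fraction=0.5):
        """Return a boolean mask of pixels where the given CDL crop code
        appears in more than half of the years.

        cdl_stack: array of shape (n_years, rows, cols) of CDL class codes.
        """
        frequency = (cdl_stack == crop_code).mean(axis=0)
        return frequency > min_fraction

    # Example usage (hypothetical code value for a crop of interest):
    # corn_pixels = crop_mask(cdl_stack, crop_code=1)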

Those areas shown in the colored shading here were what we used for our analysis, and you can see that it certainly does not cover the whole region. This is because there are significant crop rotations and changes in crop growing patterns over time.

We're focusing just on those areas where we're pretty sure winter wheat or corn or cotton is being grown during our period of record.

First, we'll look at winter wheat. This graph shows the percentage of counties where there's a statistically significant correlation between crop yield and a given drought index, and the six that we looked at are shown on the bottom figure caption.

We can see that, generally, there are two months, January and March, where we see relatively strong relationships: 60 to 80 percent of the counties have a statistically significant relationship with one or more drought indices.

We can also look at this relationship spatially. These are the counties primarily focused in West Texas and the Panhandle of Oklahoma, where winter wheat is a dominant crop. This shows the relationship, now just taking one month, the month of January, and expressing the correlations.

Blue colors indicate lower correlations, darker reds indicate higher correlations, and a dot in the center of the county indicates that the correlation is statistically significant.

In this case, we see both variations between indices, so that some indices have more red than others, and we see that in this case, the two indices with the greatest number of counties with statistically significant correlations in January are the Z‑index and the SPEI. There's some spatial relationship that the western parts of the study region tend to have higher correlations than the eastern parts of the study region. As usual, things are complicated.

We further complicated things by sub‑setting the years. In the literature, there are arguments about whether one should use all years of crop yield data to look at relationships with drought indices, when really what we're focused on is detecting yield departures during so‑called extreme years.

We re‑did this analysis by excluding the central 40 percent of the yield, so yields that were near‑normal were excluded and years with near‑normal moisture conditions were also excluded.

That took our sample size of more than 30 years and in many cases dropped it down to about 10 years where we had extreme yield and extreme moisture conditions, and then we re‑did the correlations in those years.

Places where we have an “N/A” mean that there weren't enough years to calculate a statistically significant relationship -- fewer than 10 years, in this case, to fit a regression.
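
A minimal sketch of that subsetting, assuming standardized yields and index values as arrays; whether the yield and moisture filters were applied jointly or separately isn't spelled out in the webinar, so this sketch applies both.

    import numpy as np

    def extreme_year_subset(yield_z, index_values):
        """Keep only years where both yield and the drought index fall outside
        the central 40 percent (below the 30th or above the 70th percentile)."""
        yield_z = np.asarray(yield_z, dtype=float)
        index_values = np.asarray(index_values, dtype=float)
        y_lo, y_hi = np.percentile(yield_z, [30, 70])
        i_lo, i_hi = np.percentile(index_values, [30, 70])
        keep = ((yield_z < y_lo) | (yield_z > y_hi)) & \
               ((index_values < i_lo) | (index_values > i_hi))
        return yield_z[keep], index_values[keep]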

You can see that the relationships are much stronger during the extreme years, but also the rank order of the drought indices changes. Long story short, it's complicated.

We can also summarize things for winter wheat. Here's the summary of what we said: January and March are the best times, and the best indices are the Z‑index and the SPEI.

We re‑did the same analysis with cotton and we see that for cotton, the prime period for moisture conditions, the most important in terms of influencing yield, occur in July and August for most of the indices.

We see there's a much smaller area where cotton's grown, and there's also some significant irrigation in some of these counties which also complicates the picture. We focused on unirrigated yields, but there are some issues with the data that we don't have time to get into, but that complicates the analysis.

If we look at the summary, we can see that, generally, as with winter wheat, the SPEI and Z‑index are relatively highly correlated across all years, but when we look at extreme years, the SPEI is actually not very useful.

I'm carefully watching the clock, so I'm going to accelerate a little bit and just cover corn so we can get to the end and have some questions and discussion.

Corn was the third crop that we looked at. Corn, obviously, has a very different growth cycle, phenology, region in which it's grown. Here, again, we focused on unirrigated corn.

In looking at the counties where unirrigated corn is grown in the South Central US, the two months with the highest percentage of counties with statistically significant relationships with drought indices occur in May and June, and then that falls off as we get closer to harvest.

You'll note in most of these cases, there's one drought index that does not respond like the others, and this is the Palmer Drought Severity Index. The Palmer Drought Severity Index, as Mark mentioned, is very well‑known. It's been around for a long time. It's one that people still look at and rely on.

It has some issues when we come to use it for representing soil moisture conditions or looking at crop impacts, and that's because of its memory. Because it's a recursive calculation where the current month value is what happened in the current month plus 0.897 times the previous month, it has about a nine‑month memory.
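
For reference, Palmer's standard recursion (the webinar quotes only the 0.897 factor) is

    X_i = 0.897 * X_(i-1) + Z_i / 3

where Z_i is the monthly moisture anomaly (the Z‑index). Because each month carries 0.897 of the previous value forward, anomalies decay slowly: 0.897 raised to the ninth power is roughly 0.38, which is the sense in which the index has about a nine‑month memory.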

You'll see that while all the other drought indices are dropping off in July, August, and September, the PDSI is actually peaking at this time, because it's remembering everything that happened from February through August in its calculation.

Speaking of leading and lagging indicators, the PDSI is definitely a lagging indicator, and probably one that we wouldn't recommend farmers and ranchers rely on, because it's not highly correlated in real time with conditions.

You can see we have relatively few counties where unirrigated corn dominates the time series, and again there's spatial variability in the strength of the correlations.

We find that corn yield is sensitive to water supply, to drought indices, during the flowering period centered on June, and that the performance of the SPI and Z‑index, SPEI are quite similar, and all of them are better than looking at precipitation departures from normal or PDSI.

Overall, we can say there's two indices that tend to do better than the others. However, there's significant spatial and temporal variations in performance.

Notably, there are some indices that people commonly use that are not on this list, and one of those is precipitation departures. Precipitation departures were flagged in the survey results as one of the top three sources of information that people look at.

It turns out that, whether one is interested in soil moisture or in crop yields for the dominant crops in this region, precipitation departure, here expressed as percent of normal, is not highly correlated with soil moisture conditions or with crop yields compared to the other indices.

Similarly, the PDSI is also one that does not tend to perform well in our quantitative evaluation.

Despite the importance of soil moisture, there's not a regionally available source or product that one can use that's based on observations, to serve the needs of farmers and ranchers in this region.

Therefore, there's the necessity to rely on other proxies, other drought indices that can represent soil moisture conditions. Of those, SPEI and Z‑index are the best.

There are some challenges, in that the SPEI is not as commonly calculated as some of the other drought indices, and while it's strongly correlated with soil moisture during the growing season, it's relatively weakly correlated with soil moisture during the cool season.

There are a lot of other factors influencing soil moisture that are not accounted for -- rainfall‑runoff processes, for example, are not captured by the SPEI -- so it's certainly not a perfect proxy for representing soil moisture conditions.

When it comes to crops, again, SPEI and Z‑index performed well, but it's important to look at those indices during the critical periods of growth and so there's not one index that performs well in all seasons and in all locations. With that, I'll stop there and open it up to questions for Mark or myself.

Thanks for your attention.

John Ossanna:  First question, from Ryan. In correlations with soil moisture, do you use raw volumetric moisture or soil‑adjusted metric, i.e., plant available water or saturation index? Soils across the region vary, so monthly soil moisture would vary, too, right?

Steven:  Ryan, thanks for the question. In this case, we did the analysis at each station and converted the volumetric water content at those stations into percentiles.

We took all measurements in the top 60 centimeters of the soil, and then calculated percentiles for that day using a 30‑day window centered on that day over the period of record.

It gives the relative wetness of the soil at that location with respect to the climatology, the historical observations. Percentiles are certainly not a perfect standardization approach, but it works relatively well at accounting for spatial variations in soil conditions.
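
A minimal sketch of that conversion, assuming daily volumetric water content in a pandas Series indexed by date and interpreting the 30‑day window as plus or minus 15 days of the day of year; the names and details here are illustrative, not the project's code.

    import numpy as np
    import pandas as pd
    from scipy import stats

    def soil_moisture_percentiles(vwc, half_window=15):
        """Convert daily volumetric water content to percentiles relative to a
        climatological window centered on each day of year, over the record."""
        vwc = vwc.dropna()
        doy = vwc.index.dayofyear.values
        out = pd.Series(index=vwc.index, dtype=float)
        for date, value in vwc.items():
            # circular day-of-year distance, ignoring leap days for simplicity
            delta = np.abs(doy - date.dayofyear)
            delta = np.minimum(delta, 365 - delta)
            climatology = vwc.values[delta <= half_window]
            out[date] = stats.percentileofscore(climatology, value)
        return out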

If we were to plot those results, for example, as a function of the percent of sand or clay, or as a percentage of the available water‑holding capacity for each location, we're not seeing that things group -- that clay soils respond one way and sands respond another way.

Hopefully, that gets at what you were asking.

John:  Let’s see…”Thanks. Any future plans for this work? Would be interesting to extend this to grassland species of interest to CSC.”

Steven:  Yeah, I'm definitely interested in looking at applying this to other regions and to other -- you know, our goal with this particular project was to focus on farmers and ranchers in the South Central, but I think there's need to do this for other applications, other sectors, looking at pasture, looking at grasses, ecological drought -- so potentially looking at the relationships between these indices and species abundance for various marker species.

I think there's other kinds of quantitative analyses that could be done to extend this that would be helpful and informative. As we showed here, things vary a lot depending on which crop you're looking at and which region and which time of the year.

It would be over‑simplistic to take the results of this analysis and say, "Oh great, SPEI's gonna work everywhere in the US for all of the different agricultural and climatic regions and all the different species that we're interested in."

So yeah, absolutely. The rate‑limiting step in this case was availability of data sets against which to do the comparisons.

Mark:  I'd also add to that, I know that Steve DeMaso at the Gulf Coast Joint Venture has been looking at drought effects on suitable waterfowl habitat along the Gulf Coast. It's a little different how that may be applied.

They use satellite measures of flooded area, non‑flooded area, as an estimate in correlations. He's trying to do some work in integrating this kind of approach into some habitat assessments for estimating what they'd have at various lead times through the year.

Also, we wanted to try to work directly with some of the producer groups in our region. We had the misfortune of the drought ending during our project, so the interest wasn't really there in getting people to commit to spending time looking at this.

It's something we hope to pursue when there's a little bit more attention on these kinds of areas, whenever it returns to this region.

John:  Excellent. Ryan, thank you for your note. Moving on to the next question with Margaret: "Steven, although not compared in this study, how would your research seem to indicate the satellite‑based Vegetation Condition Index might compare in the hierarchy?"

Steven:  That's a good question. There are a good number of indices that we did not include, here. The Vegetation Condition Index is one that's performed well in the past.

There are some advantages to using it. It has a higher spatial resolution, and when we're talking about crops, obviously it's specifically vegetation‑based.

We have not specifically evaluated it as part of this study. In previous work that we've done, just in the state of Texas, the Vegetation Condition Index does pretty well.

The challenge is that it is influenced by the period of record. At each pixel, there's a pixel‑wise normalization that uses the maximum and minimum NDVI values experienced for that location for that time of the year.
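
For reference, that pixel‑wise normalization is typically written as

    VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min)

where NDVI_min and NDVI_max are the historical minimum and maximum NDVI for that pixel and time of year.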

Because the satellite record's shorter than the period of record for the station observations for precipitation and temperature, it may not capture the full range of conditions.

There may be some issues, too, as you think about what resolution you want to calculate the Vegetation Condition Index at, and what satellite or satellites are available at that spatial resolution. Are they consistent over time, or are there some challenges as we switch from one sensor to another?

This is, for example, MODIS‑based measurements versus Landsat-based. Or how do we stitch together the longest‑possible time series?

Some technical issues there, but I do think the Vegetation Condition Index is one that has value for these types of studies.

John:  Thank you. I see one more question come in. There we go.

"Do you have plans to close the loop, so to speak, with the stakeholders you initially interacted with? For example, create guidance on which products to use for different scenarios."

Mark:  I'll take a first stab at that. I think yes, in short. I think the continuing work of this is going to be carried on through the RISA, for example, through SCIPP. For any of this kind of relevant information, we can look at condensing it into forms we can send out to various partners.

Ultimately, when we get attention, we'd like to do some focus group discussions, for example, to dig a little bit more into how they think these indices perform.

I think the South Central Climate Science Center may be interested as well with some of their education and outreach activities. We haven't specifically tackled that yet, but I think the existing infrastructure, both RISA and CSC, leave an avenue for continuing that kind of work.

Steven:  I'll just quickly add, the PhD student who was working with me at Texas A&M on this project just finished her PhD and graduated this summer, and was trying to develop an app that people could put in their location and their indicator of interest and it would identify, based on the quantitative analysis that we did, which indicators performed the best.

I guess the short answer is yes, but it's not quite done.

Mark:  One more complicating factor is that Steven is now at The Ohio State University. He's not here in the region, although he still has, of course, a lot of research going on in this area.

We can communicate with him easily. We know where he went. [laughs] No getting away from us.

John:  Thank you, everyone, for your participation.

I see Ryan's typing away real quick. We'll wait for that to come in, but I would like to thank everyone for their presentation and their participation.