# Evidence of Absence

## Detailed Description

This is a recorded presentation describing a statistical software package called "Evidence of Absence" that can be used to provide evidence of compliance with incidental take permits. It will be useful to wildlife managers and wind energy operators to estimate, with reasonable certainty, that a certain number of birds or bats have been killed at wind energy facilities, even when no carcasses are found.

## Details

Image Dimensions: 1280 x 720

Date Taken:

Length: 00:56:45

Location Taken: Corvallis, OR, US

## Transcript

Good morning, or good afternoon, whichever the

case may be. Welcome to this webinar on

providing

evidence of compliance with incidental take

permits.

My name is Manuela Huso, and I am a research

statistician with the U.S. Geological Survey in

Corvallis.

My colleague and co-presenter

Dan Dalthorp is also a statistician

with the USGS here in Corvallis.

Before we get started, I'd like to thank Rick

Amidon and T.J. Miller from Fish and Wildlife for

organizing this webinar, and the Fish and

Wildlife Service's Regions 1 and 3 for their

strong support of this work, that's actually

applicable to any Region considering issuing

an ITP for any species ranging from Indiana bats

to golden eagles.

And, of course, I would like to thank you for your

interest in what we have familiarly come to know

as Evidence of Absence (that's what we've

named the software that we've produced) but

what might be more aptly called Evidence of

Compliance.

As many of you know, requests to the U.S. Fish

and Wildlife Service for Incidental Take permits

from wind developers have been rapidly

increasing in recent years,

to the point that it's starting to feel overwhelming

for some wildlife managers.

Before issuing the permit, managers must

assure that any incidental take is already

minimized and that the taking will not reduce

the likelihood of survival and recovery of the species.

So the first question is, well how many is that?

This is determined typically through collision

risk models or through evidence from similar

sites

and through knowledge, sometimes very little, of

what populations can sustain.

In this example from a facility in the Midwest,

the number that they arrived at was three

Indiana bats and two northern long-eared bats

that might arrive somewhere out here over the

course of a year.

Often the permit states that if the permitted

number is exceeded,

the company will have to compensate through

some form of additional mitigation or

minimization, which can sometimes be quite

expensive. So the question we're interested in is

how will we know when the limit has been

exceeded?

Or, how can we be reasonably sure that the

company is in compliance with its ITP,

both in a given year or over the course of the life

of the permit?

So here's what we're facing:

Out there somewhere there might be no Indiana

bats, actually. But on the other hand, there

might be some.

Same for northern long-eared. When the ITP

was issued, we used the best available science

to determine that we think there will be three

Indiana bats and two northern long-eared bats

taken in a typical year.

Does this mean there will be exactly that

number every year?

Well, quite unlikely.

So in any given year there might be five. There

might be 10. But there also might be 50 out

there.

If we didn't have enough information to

accurately set the ITP level we could be quite

wrong.

So how can we know, given what we find, that

it's closer to one or two than 50?

If we find only one or two, or maybe even

none in our searches, well really it all depends

on

the probability of detecting a carcass. Or

equivalently, the probability of missing a

carcass.

Why do we miss some carcasses? Well all you

have to do is look at the sort of landscapes that

we're dealing with and you intuitively know that

it's quite likely that we will miss some of the

carcasses on our searches.

But we can quantify our chance of missing a

carcass by looking at our protocol and the

reasons for missing them.

In a typical search protocol, we start by

designating an area around a turbine to be

searched.

This area can vary from facility to facility, from

study to study.

But in addition to having variable search areas

designated, there may be parts within it that are

just too dangerous or brushy to actually search.

Within the searchable area we do our best to

systematically comb the area searching for

fallen birds and bats.

But what we find is not necessarily all

that was killed.

First, because we often only search a subset of

the turbines.

Some animals land outside our designated

search areas, while others might land inside but

in unsearchable areas.

Some carcasses are removed by scavengers

before we can even find them,

and some are present but missed because they

fall in a hole, in a deep clump of vegetation, or

shadows hide them. Just chance.

Searchers don't find them. So there are many

reasons combined why we don't find everything

that's out there.

But in order to estimate fatality accurately, we

first need to estimate what the probability of

detecting

a carcass is so that we can understand what

fraction we've actually observed.

So let's briefly go through all these components.

First we'll start with the proportion of carcasses

in our searched area,

which depends on the relative density of

carcasses and where they land.

This composite image of bat carcass locations

across several turbines shows a higher density

in the center and towards the edges.

If rather than using this particular configuration

for the search plot,

we were to reduce it by 25%.

Well we wouldn't necessarily be losing 25% of

the animals out there.

If we simply shrink the plot by removing 25% of

the perimeter, we only lose maybe 1%, maybe

5%...certainly not 25%.

On the other hand, if for some reason

the center 25% of the plot is unsearchable,

we'd lose about 85% of the bats at this site.

The question to answer then is not what fraction

of the plot is searchable

but what fraction of the carcasses can we

expect to be in the searched area?

One approach is to model the density as a

function of distance, taking into account

search effort or, essentially, detection

probability.

This model, which shows that the relative density

is highest at about 20 m and tapers off to zero

at around 80 m

can be translated into a three dimensional

surface

that integrates to one and associates a relative

density with every point beneath the turbine.

The light color in this graphic, close to the

center, indicates a higher probability that

an animal will land in a given square meter near

the turbine; the dark color farther from the

turbine indicates a lower probability.

So what if we were able to search a plot with

this kind of configuration,

essentially only the roads and pads and the

area very close by.

If we don't consider the different densities at

different distances, we would say that since we

searched only 25% of the plot,

we would find 25% of the carcasses.

However, if we take that three dimensional

surface that we had before and superimpose

that on this plot,

we would say that although we searched 25% of the

plot, within that searched area

we would expect to have about 60% of the

carcasses.

We'll see later how we can use this idea to our

advantage when considering coverage in a study

design.

Our next factor, carcass persistence,

is typically measured by placing trial carcasses

in the landscape and recording how long it takes

before they are removed by scavengers or

otherwise no longer identifiable as a carcass by

a searcher.

When we do this, typically a carcass may be

removed but feather spots or feathers may

remain behind.

Those would be identified as a carcass or at

least as a former carcass by a searcher and so

those are included as still being identifiable.

Our next factor, searcher efficiency, is also

typically measured by placing trial carcasses in

the landscape and recording whether a carcass

is observed by a searcher.

Often searcher efficiency depends on vegetation

as well as carcass size and perhaps even

coloration.

Just for fun, I show you what I hope is a pretty

rare event but was

captured by Michael Schirmacher of Bat

Conservation International to show a lonely bat

on the top of the nacelle of one of these

turbines. Of course we're not typically searching

that area.

To put this in algebraic terms, we start with M,

the actual numbers killed at a site.

But we know it doesn't necessarily equal X, the

observed number of carcasses.

Only the fraction that we call a, that arrives

within the searched area, is even potentially

detectable.

Of that, then, only a fraction r of those remain

unscavenged until the next search.

And finally, a fraction that we call p of those that

arrive in the searched area and remain

unscavenged

will actually be observed by a searcher combing

the area for carcasses.

So with this equation we can see that it's the

product of these three proportions

that forms the overall probability g that an animal

struck by the turbine will be detected during a

search.

Actually, it's not quite that simple. The equation

for g really looks something like this:

the double sum of the product of an integral. But

for our purposes, just thinking about it as the

product of these proportions gets us a long way.

So we can take this equation and simply

rearrange it so that we estimate M, the total

number killed, by dividing the number that we

observe

by the probability of detection.

We should note that because all the factors in

the denominator are less than one, the

estimated fatality

can never be less than what we observe.

To put some real numbers on this, let's say we

had a site at which 28 of the 40 turbines that

were out there were searched,

and our search radius, or our search

configuration, was such that we estimated 51%

of the

carcasses that could land out there to be within

our searchable area.

Scavengers removed about 1/4 of the carcasses

before we had a chance to search

and the vegetation, etc. and chance prevented

searchers from finding about 40% of them.

Each of these proportions by itself

doesn't seem particularly small.

But when you put them together

it results in an overall probability

of about 0.16.

That's the same as finding about one in every

six carcasses that arrive, or from another

perspective,

missing about five out of every six that arrive.
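The arithmetic above can be checked in a few lines of Python. This is a back-of-envelope sketch of the product-of-proportions approximation, not the Evidence of Absence software's calculation, and the variable names are ours:

```python
# Rough overall detection probability for the example site:
# 28 of 40 turbines searched, 51% of carcasses expected within the
# searchable area, ~3/4 of carcasses persisting until the next search,
# and ~60% of available carcasses found by searchers.
turbine_fraction = 28 / 40      # fraction of turbines searched
a = 0.51                        # fraction of carcasses in searched area
r = 0.75                        # fraction persisting (1/4 scavenged)
p = 0.60                        # searcher efficiency (40% missed)

g = turbine_fraction * a * r * p
print(round(g, 2))              # 0.16, i.e. about one carcass in six found

# Horvitz-Thompson-style estimate: observed count / detection probability
X = 10                          # hypothetical observed carcass count
M_hat = X / g                   # estimated total fatalities
```

Because g is well below one, the estimate M_hat is always several times larger than the observed count X.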

In addition, we really don't know g and we have

to estimate it so there's uncertainty in that 0.16

number.

Perhaps we have a 95% confidence interval

around g that says it's likely between 0.07

and 0.25.

If we find a substantial number of carcasses we

can use this estimator to arrive at a

95% confidence interval for M that extends

between 124 and 443 for example.

And while this is a fairly large interval for many

purposes, this is often

quite useful.

What happens though when we don't find any

animals?

To bring this idea into more familiar context, let's

consider an analogy to a parlor game.

We're at a party and someone says "Hey, let's

play Stump the Statistician!"

We need a volunteer, so

I'm a good volunteer and I leave the room. The

rest agree to roll the die five times.

When I come back, they report "We rolled no

sixes. How many times did we roll the die?"

So, I'm a good statistician and I know how to

calculate the best answer.

I divide the number of sixes observed (that's

zero) by the probability of rolling a six on any

given roll. That's one in six, or 0.167,

and the answer is zero.

The party breaks into laughter. They love

stumping the statistician. "The answer was five!"

So what happened?

In this analogy, a roll of a die represents a

carcass killed,

so five rolls, five carcasses.

The probability of a six represents the probability

that the carcass will be detected in our search

process. It's as if, for every carcass

we roll a die, if it comes up six, which it will on

average about one in six rolls, we find a

carcass.

If it doesn't, on average five out of six rolls, we

don't find the carcass.

In this case it just happened that none of the five

carcasses were found.

Our job as statisticians

is to play that game, Stump the Statistician:

guess how many carcasses there are based on

how many carcasses we found, and our

estimate of the probability of finding a carcass.

Now let's look at this graphically.

On the X-axis is the number of possible rolls. It

could be that they didn't roll at all,

or they could have rolled once or twice, or maybe

20 or 25 times.

On the Y-axis is the probability of observing

what we did.

That is, observing zero 6s given the number of

rolls on the X-axis. Of course, if you don't roll

then the probability of observing no sixes is

100%.

This is, in fact, the most likely case, and it is

the maximum likelihood estimate.

But it's just barely more likely than the next

case, when we roll once.

Even if you roll nine times

there is still a 20% chance that you will observe

no sixes.

This means that if your overall probability of

detecting a carcass is one in six, then even if

there are nine dead animals out there, you have

a 20% chance of observing none of them.

It's not until you roll 17 times that your

probability of observing no sixes

drops to 5%.

So really it wouldn't be too surprising for us to

have 10 or 15 animals out there and observe

none of them.
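The die-rolling probabilities quoted above follow from each roll independently failing to show a six with probability 5/6. A quick check in Python:

```python
# Probability of observing zero sixes in n rolls of a fair die:
# each roll fails to show a six with probability 5/6, independently,
# so P(no sixes in n rolls) = (5/6)**n.
def p_zero_sixes(n):
    return (5 / 6) ** n

print(round(p_zero_sixes(9), 2))   # ~0.19: still about a 20% chance
print(round(p_zero_sixes(17), 3))  # ~0.045: drops below 5% at 17 rolls
```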

So the main point I'm trying to make here is that

even if we know precisely the probability of

detection, which we don't (although we do with

the roll of a die, but we don't in practice), the

best we can do is bracket the range of possible

fatalities.

In this case we can be 95% certain that there

were fewer than 17 animals out there,

having observed none, and having a one in six

chance of detection.

But now let's change the game. Instead of rolling

a die, we flip a coin and count the number of

heads observed.

Again,

the party reports zero heads, and again, I

estimate no flips. In fact, no matter what the

probability of the event when zero observations

are made, my Horvitz-Thompson estimator,

the maximum likelihood estimate, will always

give a best guess of zero. But more troubling,

it will not give me any bracket on my estimate.

The variance around this is also zero.

I show you now the same graph as before, but

modified to reflect the probability

of observing a head as being 0.5.

In this case, given that we observed zero, we

can assert with 95% credibility

that we flipped the coin four or fewer times.

Or, if this were a real-life search process,

having observed zero animals,

we could assert with 95% credibility that there

were four or fewer animals killed.

But we can't claim, that is, we don't have

evidence,

that there were indeed absolutely none, even

though we didn't find any.

So my second point

is that by increasing the probability of detection,

we can narrow the bracket around our estimate

of fatality.

As the probability of detection approaches one,

the more evidence we have that the actual

fatality was indeed zero, or very close,

when we observe none.
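The brackets quoted above (four or fewer flips for the coin, fewer than 17 rolls for the die) can be reproduced with a small sketch that assumes a uniform prior on the number of trials. This illustrates the idea rather than the EoA software's exact computation:

```python
# Upper credible bound on the number of trials (or fatalities) M after
# observing zero, assuming a uniform prior: the posterior is proportional
# to (1 - g)**m, so P(M <= m | X = 0) = 1 - (1 - g)**(m + 1).
def credible_upper_bound(g, credibility=0.95):
    m = 0
    while 1 - (1 - g) ** (m + 1) < credibility:
        m += 1
    return m

print(credible_upper_bound(0.5))    # coin: M <= 4 with 95% credibility
print(credible_upper_bound(1 / 6))  # die: M <= 16, i.e. fewer than 17
```

Note how raising the detection probability from 1/6 to 1/2 shrinks the bracket from 16 down to 4, which is the second point made above.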

Our current site monitoring protocol and

statistical tools that we have for estimating

actual fatality from observed carcasses are fairly

robust when the number of observed carcasses

is relatively high.

But a non-zero estimate of the dead population

using a Horvitz-Thompson based estimator can

only be achieved if at least one carcass is found

and even then

it is likely to be biased, particularly when we

don't know the detection probability.

This type of estimator is not designed to

address compliance.

When we expect to find a small number of

carcasses, or maybe even zero carcasses, we

need a new protocol and estimators that can

give us precise estimates and allow zero or few

observed carcasses to provide evidence

that a company is likely in compliance with its

ITP.

The approach we've developed

is based on Bayes theorem,

and it's appropriate when we expect low

numbers, or even no observed carcasses and

when we need

precise estimates that the set limit has not been

exceeded.

Our focus changes from asking "What is the

estimated fatality given our observed count and

our probability of detection?"

to "What is the minimum level of take we can

reasonably rule out,

given our observed count (which might be zero)

and our probability of detection?"

With that, I would like to now introduce Dan

Dalthorp, who will talk about the

approach that we've developed based on Bayes

formula,

talk about the Evidence of Absence software

that he's written, and give examples of how the

software can be used

to design protocols and estimate likelihood of

compliance at sites with ITPs.

I'm going to start out by estimating the total

fatality

of the Indiana bats in a single season to show

how the Evidence of Absence software takes

into

account all of the factors that Manuela

discussed in her excellent introduction.

We'll start off with the sampling coverage, which

is

the fraction of carcasses that land in the

searched area,

rather than the fraction of the total area that has

been searched. What goes into the

sampling coverage is the fraction

of turbines searched,

the search radius,

and the unsearched areas within the search

radius.

The next very important factor is after the

carcasses arrive in the searched area, they have

to persist until

they're actually discovered on the ground.

So we model that (the scavenging process)

using a persistence distribution.

The exponential model is the most common,

and it's the most familiar, but we've also included

the possibilities for some other more flexible

models that in practice,

end up fitting the carcass persistence

distribution a lot better. So here is a plot of the

fraction of carcasses that remained in a search

trial

vs. the number of days that the trial has been

going on. The best fit exponential function is

shown, and it doesn't really fit the data very well.

But a Weibull gives a lot better fit

and that's fairly typical. It's also typical

for a log-logistic or a lognormal to fit better as well.
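To illustrate the difference between these persistence models, here is a small Python sketch of their survival functions, i.e. the probability a carcass is still present t days after arrival. The parameter values are made up for illustration, not taken from any trial data:

```python
import math

# Probability a carcass is still present t days after arrival.
def surv_exponential(t, mean_persistence):
    # Exponential: constant removal rate over time.
    return math.exp(-t / mean_persistence)

def surv_weibull(t, scale, shape):
    # Weibull with shape < 1: fast early removal with a long tail,
    # a pattern that often fits persistence trials better.
    return math.exp(-((t / scale) ** shape))

# Both start at 1 on day 0 but decline at different rates.
print(round(surv_exponential(7, 5.0), 2))
print(round(surv_weibull(7, 5.0, 0.7), 2))
```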

The next factor that goes into the estimation is

the searcher efficiency, or the probability of

finding a carcass given that it's out there at the

time of the search.

And in this particular example we're going to find

50% of the carcasses, given that they are there

at the time of the search.

You can also see with the searcher efficiency a

confidence interval, and that's because this is

based on field trials where

even if we know that we found 10 out of 20

carcasses, there's still some uncertainty about

what the actual searcher efficiency is.

It's going to be maybe 50%, it could be 60%,

but it will be around 50%. A range of 0.4 to 0.6

around a searcher efficiency of 0.5 is

typical of what you would see if you have 100

carcasses in your search trial.

So if we take these three basic parameters:

the search coverage, searcher efficiency, and

the probability that a carcass persists until the

first search afterward,

we can combine those into a rough estimate of

the overall detection probability as Manuela

discussed earlier.

But, multiplying those three factors is only a

rough estimate,

and there are two big issues. One is that it

assumes that a carcass that is missed in one

search

cannot be found in a later search.

We know that's not true. A lot of times if a

carcass is missed in one search, if a search

comes along

within the next few days there's a reasonable

chance you can find it in the next search.

But if the

carcasses are missed in one search, they're

probably less likely to be found on the next

search because

carcasses tend to deteriorate with age. They get

covered with dust, they get covered with leaves,

they blow into a hole, they get dragged around

and partially hidden by scavengers.

Also, the easy-to-find carcasses are removed

first

and so in later searches the more difficult

carcasses remain, and so naturally the searcher

efficiency is going to decrease with each

successive search. Evidence of Absence

includes a parameter for that, which we

call the k parameter: the factor by which

searcher efficiency changes with each search.

And k can be estimated along with p in the

search trials.
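To see what the k parameter does, here is an illustrative Python sketch of the probability that a carcass is found within a few searches when searcher efficiency starts at p and is multiplied by k after each missed search. Scavenging is ignored here to isolate the effect of k; this is not the EoA formula:

```python
# Probability a carcass is eventually found over n_searches searches,
# with searcher efficiency p on the first search and p * k**i on the
# (i+1)-th search (efficiency decays by factor k after each miss).
def p_found(p, k, n_searches):
    prob_missed_so_far = 1.0
    prob_found = 0.0
    for i in range(n_searches):
        p_i = p * k ** i               # efficiency on this search
        prob_found += prob_missed_so_far * p_i
        prob_missed_so_far *= 1 - p_i
    return prob_found

print(round(p_found(0.5, 1.0, 3), 3))  # k = 1: 1 - 0.5**3 = 0.875
print(round(p_found(0.5, 0.0, 3), 3))  # k = 0: one chance only, 0.5
```

The two extremes bracket reality: k = 1 means a missed carcass is just as findable next time, while k = 0 reproduces the overly pessimistic assumption that a missed carcass can never be found later.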

You can enter the sampling dates, what dates

you have sampled,

enter an arrival function,

the arrival function tells when the carcasses

arrive in the system (at what point in the

season).

In most normal circumstances,

the arrival function is

not important. Where it might be important is if all

the carcasses arrive at the very end of

the monitoring season or if they all arrive at the

very beginning of the monitoring season.

All other patterns

really will not have much of an impact in most

cases. So, in that case we just pick the uniform

which is the simplest, and we can take some

shortcuts in calculations if it's uniform.

The final box for inputs is the prior distribution.

The prior distribution

is an important part of the estimation because

we base it on

Bayes' theorem, which requires a prior

distribution.

There are two big advantages for using Bayes'

theorem over classical statistics,

and the first is that it gives better accuracy when

X is small. In particular,

if X is zero or one, we can get meaningful

estimates out the other side and we cannot do

that with classical statistics.

The other big advantage is that it offers the

possibility of using

prior information

to improve the current estimates. In most cases

there won't be good prior information available,

and in that case the Bayes' estimation will be

very similar to classical statistics. But when

there is good prior information available,

the Bayes' analysis gives better estimates.

So if you don't have any prior information

available, the option is to use a uniform prior,

which gives no external prejudice to the

estimation and it just lets the data speak for

itself.

The uniform prior is easy to use and it's easy to

justify,

but it does not confer the advantage of

incorporating reliable prior information.

And, we don't have any, so in that case it's the

natural choice.

Another possibility is to use a user-defined prior

where we do have

prior distribution available

that is reliable.

The advantage again is that there's potential for

improved estimates.

But a big disadvantage is that it may be difficult

to construct properly and it may be difficult to

justify

modeling choices. This is an advanced option

that is available to advanced users.

And the third possibility is to use an informed

prior. This can be used if there are search

results from prior years at the same site.

The advantage is that you've got the improved

estimates, and we've already done the work

of creating an informed prior and programming it

into Evidence of Absence software for easy use.

A disadvantage is that it's limited in scope. We

can only use data from one particular site in

previous years for estimating

this year's take only.

And one final parameter that we enter is the

level of credibility required

to conclude that a threshold is not exceeded.

So this is

like a confidence bound in classical statistics.

We want to be 80% sure that we're not

exceeding the threshold in this case.

So, we enter all the parameters and we

plug it in and ask EoA to get to work for us

and it begins with calculating the detection

probability with this

very messy looking formula that

Manuela showed a little bit earlier. It takes

into account the arrival,

the persistence, the detection probability,

the decrease in detection probability,

a factor to make sure that we don't

double count

carcasses, and it sums over all the potential

landing times for carcasses and all the potential

search times.

So once we have that, we combine that into a

Stump the Statistician game, all pre-programmed

into a binomial just like Manuela was showing

earlier, and then it takes another step of

combining

the overall detection probability

with the Stump the Statistician game

with a prior information

and constructs a posterior.

And then when it's all done

it gives us a nice graph that tells us everything

we need to know.

In the upper right-hand corner, we

can see that the overall detection

probability

is 0.167 or about 1 in 6.

This is a little greater than the

rough estimate of 0.13 that we got

earlier by multiplying the coverage

× persistence × searcher

efficiency.

The difference is that the EoA

model properly accounts for the

possibility

that if you miss a carcass in one

search, you could find it later on.

That rough estimate we made

earlier does not account for that

possibility.

So it tends to have lower detection

probability estimates

than the more accurate EoA

model.

Below that, we see that the

probability that the number of

fatalities is less than or equal to

8 is greater than 80%.

In other words, we can be 80%

sure

that there were no more than 8

fatalities.

Overall, what the graph shows us,

on the X-axis

is the total number of fatalities that

may have occurred

and on the Y-axis,

the probability that the number of

fatalities in actuality exceeded the

number on the X-axis.

For example, at m = 0,

the probability that the actual

number of fatalities was zero or more,

given our observed data

(seeing zero carcasses in this case),

is 100%.

On the far right-hand side of the

graph, we see that it's almost

impossible, essentially a 0% probability,

that there were greater than 35

fatalities; it's very unlikely, about 2%,

that there were more than 20 fatalities;

and, at the edge of the

red region, we see that

there's less than 20% chance that

there are greater than or equal to

9 fatalities

or we can assert with 80%

credibility that the actual number

of fatalities is somewhere in the

red region between 0 and 8.

In other words, the red region

gives us an interval estimate of the

number of fatalities.

Given the strength of the search

protocol and the fact that we

found zero carcasses,

we can rule out the black bars

with 80% credibility.

It may be natural to also ask what

the single best point estimate of

the number of fatalities is, but

there are a number of difficulties

with this question when zero or

few carcasses are observed.

A naïve, rule-of-thumb point

estimate that works well when

there are a lot of carcasses

observed

is just to divide the observed

number of carcasses by the

detection probability

or X/ĝ.

But when X = 0, and no

carcasses are found,

this estimate makes no distinction

between the case where detection

probability is 95%,

and we could be fairly certain that

there were no fatalities,

and the case where detection

probability is 5%,

where we can't rule out that there

were 20, or 30, or even 40

fatalities

and we just happen to miss all of

them.

An estimator that makes no

distinction between those very

different cases

has some difficulties...and there

are additional problems as well.

When g is small, X/ĝ is biased,

and when X is small

X/ĝ is unreliable

because the uncertainty is very

high compared to the mean.

So, for example, if we want to

demonstrate with 80% credibility

that take was no more than tau = 5,

when no carcasses are observed,

we can use the software

to come up with a search plan that

will get us the desired level of

credibility.

We can take into account the

searcher efficiency, the search

coverage, and the search interval,

and ask what combination of those

would give us the required

detection probability

to demonstrate compliance with 80%

credibility.

The solution is to use the

design tradeoffs module to explore

the possibilities.

So within the software, we enter

the threshold

or the number of fatalities that we

want to demonstrate we have not

exceeded,

along with the credibility level, and

in this case it's 0.8

or we want to be 80% sure that we

have not killed more than 5

assuming that we have found zero

carcasses overall.

We enter the persistence

distribution and the k

just as we did before, along with the prior

and the arrival function,

and then we can look at the trade-

offs between searcher efficiency,

coverage and search interval

to find out what combination of

these parameters

will give us the optimal way

of attaining the credibility level that

we require.

So in this particular example,

we're going to go with a searcher

efficiency of 0.5

and then we're going to compare

coverages from 25% up to 100%

and compare search intervals

from 1 to 14 days to see which

one gives us the best results.

We'll ask Evidence of Absence to

draw the graph, and what we see

is

in color, the probability that the

number of fatalities exceeded the threshold,

given that we've counted zero

carcasses.

So let's start with the blue. The

blue tells us that it is very unlikely

that we have exceeded the

threshold if we are within search

coverage of 90-100%

and we're searching on an interval

of 1 or 2 days.

Under those conditions, we would

be very unlikely to see

zero carcasses and so that's a

strong sign that the mortality

does not exceed 5.

On the other hand, if our coverage

included only 30% of the carcasses,

and our search interval were 10

days, we'd be way up here in the

yellow region

which would say we had very little

evidence that the fatality did not

exceed 5.

So what we're trying to do is

find the set of parameters that will

get us

as far into this blue region as we

can

and, as we've said,

the real target is to get to

80% credibility,

which corresponds to only a 20% probability

that we are in excess of the

threshold.

So we're shooting for this line,

essentially, this 20% line.
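Under the same uniform-prior sketch as before, the detection probability needed to reach that 20% line for a threshold of tau = 5 can be solved in closed form. This is a back-of-envelope calculation, not the design tradeoffs module itself:

```python
# Minimum overall detection probability needed to hit the "20% line":
# with zero carcasses observed and a uniform prior, we need
# (1 - g)**(tau + 1) <= alpha, so g >= 1 - alpha**(1 / (tau + 1)).
tau = 5        # take threshold we want to rule out exceeding
alpha = 0.2    # 1 - credibility (here, 80% credibility)

g_required = 1 - alpha ** (1 / (tau + 1))
print(round(g_required, 2))   # ~0.24
```

Any combination of coverage, searcher efficiency, and search interval that pushes the overall detection probability above this value lands on or past the 20% line.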

And there are a couple of ways

we can get to that 20% line.

The first is if we have 100%

coverage, and we're searching

once every 9 days. Another

possibility is to have 50%

coverage and search every day.

There are actually a few ways we

can get those parameters. One is

if we search all turbines out to

a long search radius, far enough

out so that we're sure that we'll get

all the carcasses

and we search 100% of the area

within each of those

then we can get coverage of 1. If

we do that, we only have to

search once every 9 days.

On the other hand, it's very

difficult to search out that far

at all the turbines, so

maybe we can search half the

turbines

and get a search coverage of

50%. In that case, we'd need to

search every day

and those are 2 ways to get the

same results.

But the top way is preferable. We

have to search twice as many

turbines, but

we're only searching them once

every 9 days instead of

every day. That's 22% as much effort,

but we get the same results

of 80% credibility.

Another possibility

is that we search all of the

turbines, and we have a choice

are we going to search them out

to a long, long radius?

Far enough so that we have 100%

coverage, and

in that case, we'd have to search

once every 9 days.

Or

we can search every turbine, but

only go out to 30 meters

which might cover half the

carcasses, so our coverage would

be 50%

but if we did that, we'd have to go

out every day.

In this case, the area, or the ratio

of the areas,

would be 30 squared to 100

squared, looking at the area within

30 or 100 meters

but we'd have to do 9 times as

many searches, because we're

searching every day

instead of once every 9 days. But

in that case, we'd end up with

81% as much effort as the other

option.
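The two effort comparisons can be tallied with simple arithmetic, treating effort as units searched times searches per day. This is a rough proxy, not a full cost model:

```python
# Design A: 100% coverage (long radius, all turbines), search every 9 days.
# Design B: 50% coverage, searched every day.

# Half vs. all turbines at full radius: A searches twice the turbines
# but only 1/9 as often, so A costs 2/9 of B's effort, about 22%.
effort_ratio_turbines = 2 * (1 / 9)
print(round(effort_ratio_turbines, 2))   # 0.22

# All turbines, 30 m vs. 100 m radius: daily 30 m searches cover
# (30/100)**2 of the area but happen 9 times as often: 81% of the effort.
effort_ratio_radius = 9 * (30 / 100) ** 2
print(round(effort_ratio_radius, 2))     # 0.81
```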

So we can use the software

to help determine which set of

parameters would be the most

efficient to get the results

that are required.

Another trade-off we might

consider is searcher efficiency

vs. the sampling coverage.

If we have higher searcher

efficiency and greater coverage,

well that puts us more into the blue

range. If we have low searcher

efficiency

and low coverage, that gets us

into the yellow range. Not good.

To get on to the target, what's the

best way to do that?

One way this trade-off can work in

practice is

to compare road and pad

searches vs. cleared plots.

The road and pad searches, you

can get great searcher efficiency

for bats, you can be in the

neighborhood of 80%, but the

roads and pads only cover

a small part of the area,

one that typically contains around

15% or less of the carcasses.

You can increase the coverage by

searching on cleared plots,

but the search conditions tend to

be more difficult on cleared plots

and so the searcher efficiency

might be only 30%, but

the amount of coverage can go up

to 80, 90, 100%. Expensive, but it

can be done.

So if we translate those ideas over

to our graph, the road and pad

search

we can get the high searcher

efficiency, 80-90%,

but we can only get a coverage of

15%, roughly,

which brings us well short of the

target 20% line.

So to move up to that line, it's

going to be tough to get more

searcher efficiency

but if we get higher coverage, we

can move up to that line, as long

as

our searcher efficiency doesn't

get too low.

What we can do is, add some

cleared plots, enough to boost the

coverage to 50%,

and as long as our searcher

efficiency doesn't drop below, say

40%

we'll be at the target of 0.2.
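A rough check of this trade-off, treating overall detection probability as simply coverage times searcher efficiency. That is a simplification (Evidence of Absence also accounts for carcass persistence and search interval), and the rates below are the ballpark figures from the talk.

```python
# Simplified view: overall detection probability g ~ coverage x searcher
# efficiency. (EoA also folds in carcass persistence and search interval.)

def overall_detection(coverage, searcher_efficiency):
    return coverage * searcher_efficiency

target = 0.20  # the "20% line"

road_pad = overall_detection(0.15, 0.80)    # high efficiency, low coverage
with_plots = overall_detection(0.50, 0.40)  # cleared plots boost coverage

print(f"road/pad only:      g = {road_pad:.2f} (short of {target})")
print(f"plus cleared plots: g = {with_plots:.2f} (on target)")
```

This makes the talk's point concrete: 80% searcher efficiency on 15% coverage yields only 0.12, while 40% efficiency on 50% coverage reaches the 0.2 target.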

A third feature of Evidence of

Absence that will come in very

handy is

estimation of a multiple year total.

So for example, we have several

years of search data

and we want to estimate not the

total in any particular year,

but the total that have accumulated

throughout a number of years

through years 1, 2, 3, and so on.

The software is used to calculate

the detection probability for each

of the years

and then, the series of data is

entered into a multiple years

page.

That looks something like this.

So in the first year, we had a

detection probability of 0.3,

within some confidence interval,

and didn't find any carcasses.

Didn't find any the second year,

when detection probability was slightly

lower,

and in the third year, we did find a

carcass, and at the same time,

we changed the search protocol

to only do roads and pads and

only got 7%

detection probability in that year

instead of the 25-30% we had in

the previous years.

Combining all this data, we can

ask Evidence of Absence to

estimate the total number of

fatalities

and again

it will give us the posterior

distribution for the total fatalities

over the 3 years. In this case, we

found one carcass

the detection probability averaged

20% over the 3 years,

and we can rule out fatalities

exceeding 15

but 15 or fewer, it could well be.

As a quick sanity check, let's

divide our count

which was 1, we found 1 in the

third year, divide that by the

detection probability of 0.21

and that is 5, roughly. So if we go

up to 5, well that puts us in about

the 50, 60% zone so there is a

reasonable chance that we have

more or less than 5, that's kind of

in the center of the distribution.

But if we want to be sure that we

have fewer than a certain number,

we go up to roughly the 80%

credibility level, or confidence level

in classical statistics.
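The Bayesian step behind these numbers can be sketched as follows. This is a simplification: it assumes a uniform prior on the total M and a fixed, known detection probability g, whereas Evidence of Absence integrates over the uncertainty in g, so the numbers will not exactly match its output.

```python
# Posterior for the total number of fatalities M, given x = 1 carcass found
# and an average detection probability g = 0.21. Simplifications: uniform
# prior on M and fixed, known g (EoA integrates over uncertainty in g).
from math import comb

g = 0.21     # average detection probability over the 3 years
x = 1        # carcasses actually found
m_max = 200  # truncation point for the uniform prior on M

# Likelihood of finding exactly x carcasses if M were killed: Binomial(M, g).
post = [comb(m, x) * g**x * (1 - g)**(m - x) if m >= x else 0.0
        for m in range(m_max + 1)]
norm = sum(post)
post = [p / norm for p in post]

# The talk's quick sanity check: count divided by detection probability.
naive = round(x / g)
print("naive estimate:", naive)  # 5, near the middle of the posterior

# Smallest M with at least 80% of the posterior probability at or below it.
cum, bound = 0.0, None
for m, p in enumerate(post):
    cum += p
    if cum >= 0.80:
        bound = m
        break
print("80% credible upper bound:", bound)
```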

A final use of the Evidence of

Absence software that we're going

to discuss today is

in the context of a long-term

permit. For example, suppose a

30-year permit

allows an average take of 2 per

year.

In other words, we can allow 60

total over 30 years.

Ideally, what we could do is track

the true number of fatalities

and when that true number of

cumulative fatalities exceeds the

threshold of 60

that we're allowed over 30 years

Then we implement

some sort of adaptive

management action because the

fatalities have exceeded the

threshold and we're not in

compliance with the permit

anymore.

Unfortunately, we never know for

sure the true number of fatalities

and we need to estimate.

The simplest, most obvious

approach would be to use

Evidence of Absence and track

the cumulative fatalities through time.

So we can do that, and end up

with a graph of the estimated

number of fatalities

and we can see that the total

estimated fatalities exceeds

the permitted threshold of 60 at

year 26, which is 1 year after the

true fatalities exceeded that threshold.

It could be a year or 2 late, or it

could detect it a year or 2 or 3

early

there's a lot of uncertainty about

when it will kick in, but ideally on

average it'll be pretty close.

But there are some additional

issues. First, the population may

well be able to sustain a take of

T spread out over 30 years, but

the long-term trigger will not

preclude

that take from happening over the

period of just a few years.

And if that take is all

concentrated in the first few years

of a project, say

it may be difficult for the

population to recover.

Another issue is that the actual

take rate

which we're going to call lambda,

may not be at all in line

with the permitted take rate, which

we'll call tau

and we may get a signal of that

very early in the project and have

warning that we can do something

to get the project on track.

So if take rate does look like it's

higher than the permitted rate,

we can implement some sort of

adaptive management

to reduce that rate to avoid the

final take

rate exceeding what's allowed

from the permit in the end

or in other cases, we might adjust

the permitted take level to match

what's really going on in the field.

So if we could design a

secondary trigger that fires to

give warning when our

take rate is not sustainable under

the permit, that could be very helpful

in

bringing down the take rate to

acceptable levels and in

maintaining compliance

with the permit. So the goal is to

define a short-term trigger to

signal when

the actual take rate is out of line

with expectations, tau.


So to do this, the first thing to note

is that an annual take

of 2 is much more strict than

permitting a total of 60 over 30

years.

For example,

this is a graph of the exact same

data that I showed you earlier

with the cumulative take

approaching 60 over the course

of the permit

but instead of looking at

cumulative take, it's the year by

year take.

This data was all generated from a

process

that has a mean of 2 per year,

but about half the time,

you're getting take that exceeds 2,

half the time you're getting take

that is less than 2.

So if the idea is that your take can

never exceed 2,

well, in the first year of the project,

you've exceeded 2

and you're in violation of the

permit if

you define the permit in that way.

So the point is that allowing a total

take of 60

in 30 years is much less strict

than

requiring take to be less than or

equal to 2 each and every year.
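This point can be quantified. Assuming annual take is Poisson with mean 2 (an assumed distribution; the talk doesn't specify one), the chance that any single year exceeds 2 is:

```python
# Probability that a single year's take exceeds 2 when the long-run mean is
# exactly 2, under an assumed Poisson model for annual take.
from math import exp, factorial

lam = 2.0
p_exceed = 1 - sum(lam**k * exp(-lam) / factorial(k) for k in range(3))
print(f"P(annual take > 2) = {p_exceed:.2f}")  # ~0.32
```

So under this assumption there is roughly a one-in-three chance of "violating" a strict annual cap in any given year, even when the long-run average is exactly on target.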

So we'll step back and think about

this for a moment:

if the primary concern is the

effect on viability of a population,

or whether actual take is in line

with expectations,

then we can define a short-term

trigger to test whether the

probability that the actual take rate

exceeds the threshold

is too high.

So more intuitively, we can look at

this

annual take over the course of the

30 years

and see if this pattern of take

is compatible with an average take

rate of 2 per year.

So if you think about it for a

second,

if the average take were 1 per

year, it would be pretty

strange to see a pattern like this

because almost every year,

it exceeds 1, some years it's

below, but typically it's well above

1.

So an average take rate of 1,

highly unlikely with data like that.

On the other hand, an average

take rate of 4

also highly unlikely

usually we're well below 4

sometimes we're above 4, but it's

quite implausible to have a take

rate

of 4 if that's our data. So what we

can do is draw a 99% credible

interval

around the average take rate and

it's going to be somewhere

between 1.33 and 3.

Of course, for a short-term

trigger, we don't want to go the

whole length of the project before

we run our test, we want to check

it over the short term.

So for those first 3 years, what is a

plausible range for the actual

average take rate over those first 3

years?

Well, the most likely is a take rate

of around 3 on average

but if it's 4

is this a plausible pattern to see?

Yeah, it doesn't look bad.

Or if the rate were 8, it would be

surprising to see that many counts

so much smaller than 8.

Or if it were 1, it would be

surprising to see that many counts

so much greater than 1.

So what range of plausibility is

there on that average take rate

over that period of time?

We can calculate a 99% credibility

interval

using Evidence of Absence and

we come up with a range

somewhere between 1.61 and

6.78

is the plausible range for take in

that

particular 3-year span.

The most extreme 3-year span in

this 30-year period is right in this

case where they're all above 2

and the total comes to

12 in that 3-year span

and it turns out that creating a

99% credible interval for the take

rate in that period

we come up with 2.06 to 7.68,

which excludes 2.

So the permitted take level is

outside the plausible range

of what the take actually is. In that

case, the short-term trigger fires.

So with our short-term trigger,

we're going to look at 3-year

moving averages

and test whether the data from the

3 years is compatible, or not, with

a permitted take of 2 per year.

So we check in that 3-year

period, and we do that throughout

the whole period of

the permit and at one point, the

data were

not very compatible with an

average rate of 2 per year.
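The moving-window test can be sketched like this. It is a simplification: it models annual take estimates directly as Poisson counts with a Jeffreys prior, whereas Evidence of Absence works from carcass counts and detection probabilities, so its intervals (such as the 2.06 to 7.68 above) will differ. The annual figures below are hypothetical.

```python
# 3-year moving-window trigger: fire when the 99% credible interval for the
# take rate excludes the permitted rate tau. Simplified model: Poisson counts
# with a Jeffreys prior, i.e., a Gamma(total + 0.5, years) posterior.
from math import exp, log

def credible_interval(total, years, lo=0.005, hi=0.995,
                      grid=20000, lam_max=20.0):
    """Equal-tailed credible interval for the take rate, computed from a
    gridded Gamma(total + 0.5, years) posterior."""
    shape = total + 0.5
    lams = [lam_max * (i + 0.5) / grid for i in range(grid)]
    logpost = [(shape - 1) * log(l) - years * l for l in lams]
    peak = max(logpost)
    w = [exp(v - peak) for v in logpost]
    s = sum(w)
    cdf, lo_q, hi_q = 0.0, None, None
    for l, wi in zip(lams, w):
        cdf += wi / s
        if lo_q is None and cdf >= lo:
            lo_q = l
        if hi_q is None and cdf >= hi:
            hi_q = l
    return lo_q, hi_q

tau = 2.0                               # permitted average take rate
annual = [1, 3, 2, 4, 5, 6, 1, 0, 2]   # hypothetical annual take estimates

fired = []
for start in range(len(annual) - 2):
    window = annual[start:start + 3]
    lo_q, hi_q = credible_interval(sum(window), 3)
    if lo_q > tau:                      # tau outside the plausible range
        fired.append((start + 1, start + 3))
        print(f"trigger fires for years {start + 1}-{start + 3}: "
              f"99% interval ({lo_q:.2f}, {hi_q:.2f}) excludes {tau}")
```

With these hypothetical counts, only the 3-year window totaling 15 pushes the interval's lower bound above 2 and fires the trigger.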

Ok, so the short-term trigger has

indicated that the actual take does

not

line up with the expected take or

the threshold.

And so

there are a couple of scenarios

where that can happen. One, is

that the average take rate is

clearly higher than anticipated. In

that case, there are a couple of

options.

One is to implement some sort of

minimization and monitoring. It

could be, in the case of bats,

curtailment at lower wind speeds,

increased monitoring to make

sure that

our rate really is in line with

expectations.

Or another option would be to

reset the take limit based on the

projected take

but require some sort of mitigation

offset.

Another scenario would be that

the observed take rate is

significantly lower than anticipated

at the start of the project.

For example, with bats in the

Midwest, there's new discussion

about permitting

with a prophylactic curtailment

required at 5 meters per second

wind speed at the start of projects

but if the average take rate is

clearly lower than anticipated,

one option might be to loosen that

5 meter per second requirement

and allow either free operation or

curtailment at

not quite as extreme levels.

However, demonstrating that the

average take rate is lower than a

threshold

is more difficult than

demonstrating that actual take in a

given year is below that threshold.

So this turns out to be a

multiple-year proposition, but

over the course of several years,

it could turn out to be

an important aspect of fatality and

wildlife management.

Thank you, Dan. So in summary, I

want to emphasize

that it's the combination of what

was actually killed and our

probability of detection

that results in what we observe.

If actual fatality is very small,

we're quite likely to find none.

But if actual fatality is large, and

our detection probability is low,

then we are also quite likely to find

none.

So when we're seeking evidence

of compliance with an incidental

take permit

it's quite important for us to

distinguish these two cases.

And the current Horvitz-Thompson

based estimators that we have

available to us,

and there are several,

can't provide the answers that we

need.

The optimal monitoring protocol

may be very different

if your objective is to estimate

fatality of general groups of

species, like bats or passerines,

than when your objective is to

provide evidence of compliance

with an ITP for a rare species.

The target probability of detection

is determined by the incidental

take permit itself

by the limit that's set.

And it won't differ by the species.

What will differ is the cost of

achieving it.

Generally, large species are

easier to detect than small ones.

But the Evidence of Absence

software provides tools for

optimizing design

under site-specific conditions.

The protocol can be quite flexible

and can trade off search area,

search interval, and sampling

fraction.

The new Evidence of Absence

modules that Dan talked about

earlier

take advantage of continuous,

low-level monitoring over the life of

the project to identify,

on a short-term basis, when

compliance has not been met,

and on a long-term basis when the

total take exceeds the overall

permitted take.

This approach, based on Bayes'

theorem, can be used to provide

feedback

to pre-construction risk models. It

can inform post-construction

monitoring design,

and can be used to inform

management decisions.

The calculations are not simple,

but we've packaged them into a

user-friendly, peer-reviewed

package called

Evidence of Absence,

which is publicly

available at this website.

It's available for download, it has a

users' guide with it

and as always, we are very very

pleased to hear feedback from

our users.

So with that, I want to again

acknowledge the financial support

of the Fish and Wildlife Service

and the USGS, as well as our co-

authors David Dail, Lisa Madsen,

and Jessica Tapley

for their contributions. And of

course, thank you for your

interest.