Landsat's Calibration and Validation


Detailed Description

Engineers and scientists from both the Landsat and Sentinel missions are working together to calibrate observation data and validate its quality, improving the science that relies on these resources.

Details

Image Dimensions: 1280 x 720

Length: 00:04:37

Location Taken: Sioux Falls, SD, US

Video Credits

Producer: Steve Young

Transcript

DAVID ROY: Frankly, the biggest limitation is the lack of understanding. The engineers speak one language and the users speak a different language, very often.

BRIAN MARKHAM: I don't think the users necessarily have a good understanding of what the instruments they're working with are actually measuring. Not blaming this on the users; it's just that we on the Cal/Val side either don't necessarily communicate it well, or not in terms that a typical user can understand. Cal/Val people tend to be engineers, and users vary all over the place in terms of their understanding of the physics of what's actually measured by the instruments.

DAVID: The way I explain calibration to students is I say it's what the engineers do to establish the relationship between the digital numbers, which the sensor actually records, and radiance, which we then convert to reflectance. It's critical to have a stable relationship between what we see on the ground and what the sensor sees, both in space and time.
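
As an illustration of the relationship David describes, here is a minimal sketch of the linear rescaling from digital numbers to top-of-atmosphere radiance and reflectance, assuming coefficients of the kind published in a Landsat Level-1 metadata (MTL) file; the coefficient values and the example pixel below are placeholders, not values from any real scene.

```python
import math

# Placeholder rescaling coefficients of the kind published in a Landsat
# Level-1 metadata (MTL) file. The names mirror the MTL fields; the numeric
# values are illustrative only, not calibration values for any real band.
RADIANCE_MULT = 0.012        # M_L   (RADIANCE_MULT_BAND_x)
RADIANCE_ADD = -60.0         # A_L   (RADIANCE_ADD_BAND_x)
REFLECTANCE_MULT = 2.0e-5    # M_rho (REFLECTANCE_MULT_BAND_x)
REFLECTANCE_ADD = -0.1       # A_rho (REFLECTANCE_ADD_BAND_x)
SUN_ELEVATION_DEG = 45.0     # scene-centre sun elevation angle


def dn_to_toa_radiance(dn: float) -> float:
    """Convert a quantized digital number to top-of-atmosphere spectral radiance."""
    return RADIANCE_MULT * dn + RADIANCE_ADD


def dn_to_toa_reflectance(dn: float) -> float:
    """Convert a digital number to TOA reflectance, corrected for sun elevation."""
    rho = REFLECTANCE_MULT * dn + REFLECTANCE_ADD
    return rho / math.sin(math.radians(SUN_ELEVATION_DEG))


if __name__ == "__main__":
    dn = 12000  # hypothetical pixel value
    print(f"TOA radiance:    {dn_to_toa_radiance(dn):.2f} W/(m^2 sr um)")
    print(f"TOA reflectance: {dn_to_toa_reflectance(dn):.4f}")
```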

RON MORFITT: If we do a poor job on the calibration side, it's either less accurate data or more noisy data, and then the application side ends up either characterizing things that aren't there due to mis-calibrations or not being able to characterize the things they're actually after.

JEFF MASEK: Especially if you look at long-term trends, for example, something that we're very interested in, how are ecosystems changing through time, you're trying to put together observations from multiple sensors throughout the Landsat archive. If they're not all cross-calibrated and well calibrated, then you really don't know if you're looking at real changes or just changes in the instrumentation.

BRIAN: We want to make sure that the products the user gets are consistent, so they don't have to, as a user, try to figure out something to normalize it to the previous product or make it more consistent.

DAVID: We think about calibration as what the engineers do to get the digital numbers to radiance and reflectance correct; we think about validation in terms of what we do, which is to say how good a product we're making, whether we make a tree cover map or a map of cities or a map of vegetation change. We want to independently say how good it is. As scientists, if we just make maps or datasets and then we can't say how accurate they are, how reliable they are, then we're going to have a problem; we're not really very credible as scientists. But if some of our research becomes policy relevant, then it's very important that we can say how good it is.
JEFF: I think the big thing that application scientists are looking for is quantitative estimates of uncertainty about the observations. If you have a measurement error, then you can tell whether a trend or pattern is statistically significant, statistically valid. If you don't have that, then you're sort of lost.
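
As a minimal sketch of what such an uncertainty estimate buys you, the example below checks whether an observed change between two measurements exceeds their combined stated uncertainty; the reflectance values and uncertainties are hypothetical.

```python
import math


def change_is_significant(x1: float, sigma1: float,
                          x2: float, sigma2: float,
                          z_crit: float = 1.96) -> bool:
    """True if the difference between two measurements exceeds their combined
    (root-sum-square) 1-sigma uncertainty at roughly 95% confidence."""
    combined_sigma = math.hypot(sigma1, sigma2)
    return abs(x2 - x1) > z_crit * combined_sigma


# Hypothetical reflectance of the same site in two different years, each with
# a stated 1-sigma measurement uncertainty.
print(change_is_significant(0.25, 0.01, 0.28, 0.01))  # True:  change exceeds the noise
print(change_is_significant(0.25, 0.01, 0.26, 0.01))  # False: within the noise
```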

BRIAN: There are some fundamental differences between the two instruments. They're not necessarily large, but we don't have exactly the same spectral bands on the two instruments, so you can't make them agree because they're not measuring exactly the same thing.

RON: They're actually pretty close. They're within the 2-3% that we believe each sensor is accurate to anyway, so I think it's more just validating that we are producing products that are already close together.
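
One common way to reconcile sensors with slightly different spectral bands is a per-band linear adjustment of the kind used in harmonization efforts; the sketch below shows the idea, with entirely hypothetical coefficients rather than any adopted Landsat/Sentinel-2 values.

```python
# Hypothetical per-band linear adjustment coefficients (slope, offset) that
# map one sensor's reflectance onto the other's radiometric scale. Real
# harmonization efforts derive such coefficients empirically from coincident
# observations; the numbers below are placeholders only.
BANDPASS_ADJUSTMENT = {
    "red":   (0.98, 0.004),
    "nir":   (1.01, -0.002),
    "swir1": (0.99, 0.001),
}


def adjust_reflectance(band: str, reflectance: float) -> float:
    """Apply a per-band linear adjustment so measurements from the two
    sensors can be compared on a common footing."""
    slope, offset = BANDPASS_ADJUSTMENT[band]
    return slope * reflectance + offset


print(adjust_reflectance("red", 0.12))  # ~0.1216
```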

SEBASTIEN CLERC: In some cases you can just take the raw data from Landsat and Sentinel-2, put them on the same time series, and you'd hardly notice the difference between the two sensors.

RON: I think the big thing is to continue what we've already started, working closer together with the Sentinel team so that we come to an agreement on how we should change one sensor or the other, or both, to make a more consistent product and improve the interoperability.

JIM STOREY: The hope here is that this is going to be sort of a test case or prototype of how to do this: how to take two sensors that were developed separately, each developed to its own requirements and through separate processes, and see if we can bring them together in a way that makes them essentially interoperable. And if we can show that that can be done, then that paves a path to do it for other sensors.

DAVID: It's a fantastic time to be a user of satellite data. If you're in that moderate-resolution domain, this is amazing. The Sentinels and Landsat together are going to be really a game changer. It truly is.