PRMS Time Series


Detailed Description

Instructions for time series data preparation when using the USGS Precipitation-Runoff Modeling System (PRMS).

Details

Image Dimensions: 480 x 360

Length: 00:04:47

Location Taken: Lakewood, CO, US

Transcript

Steve Markstrom: Welcome to the PRMS training video on the time series data process.

This is not really a hydrologic process, but more of an administrative task that PRMS does. Basically, it's got to read data from a data file, and PRMS has a single module that does that. It's called the obs module.

Here you can see a cut-out of Table 1-4 from the PRMS user's manual, and this part of the table shows all the different input variables that can be included in the data file. You can see there are quite a few of them.

The values normally in the data file are precipitation, streamflow, and daily maximum and minimum air temperature; those are what's typically included. This will be discussed much more in the climate process presentation.

Some rules about dealing with this: there must be a data file, because PRMS uses the dates, the time steps in the data file, to actually drive the time loop, so you have to have a data file. The data file must include time steps that cover the period you want to run the model for; you can't run the model outside of the time steps that are included in the data file.
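
As a rough sketch of that constraint (this is illustrative Python, not part of PRMS; assume the time steps have already been read from the data file into an ordered list of dates), the check amounts to:

def period_is_covered(time_steps, start, end):
    # time_steps: the time stamps read from the data file, in chronological order
    # start, end: the first and last dates of the run you want to make
    # PRMS can only run over dates that fall within the data file's time steps.
    return bool(time_steps) and time_steps[0] <= start and end <= time_steps[-1]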

The data file typically contains station data, and the climate-by-HRU file, or CBH file, which again will be discussed later, typically contains time series data by HRU. That's the distinction. Both the data file and the CBH files can contain climate data.

OK, here is a little example of the PRMS data file. Again, this image comes from the PRMS user's manual, but what I'm going to do here is just pop open a text editor, where you can see I've loaded the ACF data file from the example.

Here we can see the very first line is a tag describing the source of the data; in this case it was created by the Downsizer. Then any line that starts with two slashes (//) is a comment, and in this case what we've done is document the source of the data down below. This is station data: you can see the station ID and then the latitude, the longitude, and the elevation of the station.
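
As a rough illustration (the station ID and numbers below are placeholders, not values from the ACF file), such a comment block might look like:

// Station metadata: ID, latitude, longitude, elevation
// station_001   34.50   -84.25   350.0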

In this case it says air temperature maximum, so you can see we've got quite a few values of maximum air temperature, and then it's air temperature minimum. Again, these are comments. Then we've got streamflow data in here, so this is the gauge ID and latitude and longitude.

This part here basically tells you how many columns of each value type you're going to have. OK, so this says we have 79 tmax, 79 tmin, 79 precipitation, and then 58 streamflow values.
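
In the file itself, those declarations are variable-name and column-count pairs, something like the lines below (the names tmax, tmin, precip, and runoff follow the usual PRMS data file conventions, with runoff being the conventional name for observed streamflow; treat the exact spellings as an assumption here):

tmax 79
tmin 79
precip 79
runoff 58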

This line here delineates the header part from the actual data down here, and then you can see the data starts. And here's the time stamp: year, month, day, hour, minute, second. You can see the data for that day goes out along the line. These are pretty long lines; -999 is our missing value. This goes from the end of 1949 out to sometime in 2009.
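
To tie the pieces together, here is a minimal Python sketch of a reader for a file laid out this way. It is not part of PRMS; the assumption that the header/data delimiter is a line of '#' characters, and the parsing details generally, are illustrative guesses based on the description above.

from datetime import datetime

def read_prms_data_file(path, missing=-999.0):
    # Reads a PRMS-style data file as described above:
    #   line 1:  a tag describing the source of the data
    #   header:  lines starting with // are comments; other lines give a
    #            variable name and its column count (for example "tmax 79")
    #   a line of '#' characters separates the header from the data
    #   data:    year month day hour minute second, then that day's values
    counts = {}
    records = []
    with open(path) as f:
        f.readline()  # skip the source/description line
        in_data = False
        for line in f:
            line = line.strip()
            if not line or line.startswith("//"):
                continue  # skip blank lines and comments
            if not in_data:
                if line.startswith("####"):
                    in_data = True  # header/data delimiter reached
                else:
                    name, n = line.split()[:2]
                    counts[name] = int(n)  # for example counts["tmax"] = 79
                continue
            parts = line.split()
            stamp = datetime(*map(int, parts[:6]))
            values = [None if float(v) == missing else float(v) for v in parts[6:]]
            records.append((stamp, values))
    return counts, records

One quick sanity check with a reader like this is to confirm that the number of values on each data line matches the sum of the column counts declared in the header.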

OK, so that is a typical data file, and with that I will conclude this presentation.