tue, 21-apr-2015, 17:33

Abstract

The following is a document-style version of a presentation I gave at work a couple of weeks ago. It's a little less useful for a general audience because you don't have access to the same database I have, but I figured it might be useful for someone who is looking at using dplyr or manipulating the GHCND data from NCDC.

Introduction

Today we’re going to briefly take a look at the GHCND climate database and a couple new R packages (dplyr and tidyr) that make data import and manipulation a lot easier than using the standard library.

For further reading, consult the vignettes for dplyr and tidyr, and download the cheat sheet.

GHCND database

The GHCND database contains daily observation data from locations around the world. The GHCND README describes the data set and the way the data is formatted. I have written scripts that process the station data and the yearly download files and insert them into a PostgreSQL database (noaa).

The script for inserting a yearly file (downloaded from http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/) is here: ghcn-daily-by_year_process.py

“Tidy” data

Without going into too much detail on the subject (read Hadley Wickham’s tidy data paper for more information), the basic idea is that it is much easier to analyze data when it is in a particular, “tidy”, form. A tidy dataset has a single table for each type of real world object or type of data, and each table has one column per variable measured and one row per observation.

For example, here’s a tidy table representing daily weather observations with station × date as rows and the various variables as columns.

Station  Date        tmin_c  tmax_c  prcp  snow  ...
PAFA     2014-01-01  12      24      0.00  0.0   ...
PAFA     2014-01-02  8       11      0.02  0.2   ...
...      ...         ...     ...     ...   ...   ...

Getting raw data into this format is what we’ll look at today.

R libraries & data import

First, let’s load the libraries we’ll need:

library(dplyr)      # data import
library(tidyr)      # column / row manipulation
library(knitr)      # tabular export
library(ggplot2)    # plotting
library(scales)     # “pretty” scaling
library(lubridate)  # date / time manipulations

dplyr and tidyr are the data import and manipulation libraries we will use, knitr is used to produce tabular data in report-quality forms, ggplot2 and scales are plotting libraries, and lubridate is a library that makes date and time manipulation easier.

Also note the warnings about how several R functions have been “masked” when we imported dplyr. This just means we'll get the dplyr versions instead of the ones we might be used to. In cases where we need both, you can prefix the function with its package: stats::filter would use the normal filter function instead of the one from dplyr.

Next, connect to the database and two of the three tables we will need:

noaa_db <- src_postgres(host="mason",
                        dbname="noaa")
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

The first statement connects us to the database and the next two create table links to the observation table and the variables table.

Here’s what those two tables look like:

glimpse(ghcnd_obs)
## Observations: 29404870
## Variables:
## $ station_id  (chr) "USW00027502", "USW00027502", "USW00027502", "USW0...
## $ dte         (date) 2011-05-01, 2011-05-01, 2011-05-01, 2011-05-01, 2...
## $ variable    (chr) "AWND", "FMTM", "PRCP", "SNOW", "SNWD", "TMAX", "T...
## $ raw_value   (dbl) 32, 631, 0, 0, 229, -100, -156, 90, 90, 54, 67, 1,...
## $ meas_flag   (chr) "", "", "T", "T", "", "", "", "", "", "", "", "", ...
## $ qual_flag   (chr) "", "", "", "", "", "", "", "", "", "", "", "", ""...
## $ source_flag (chr) "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", ...
## $ time_of_obs (int) NA, NA, 0, NA, NA, 0, 0, NA, NA, NA, NA, NA, NA, N...
glimpse(ghcnd_vars)
## Observations: 82
## Variables:
## $ variable       (chr) "AWND", "EVAP", "MDEV", "MDPR", "MNPN", "MXPN",...
## $ description    (chr) "Average daily wind speed (tenths of meters per...
## $ raw_multiplier (dbl) 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0....

Each row in the observation table contains the station_id, date, a variable code, the raw value for that variable, and a series of flags indicating data quality, source, and special measurements such as the “trace” value used for precipitation under the minimum measurable value.

Each row in the variables table contains a variable code, description and the multiplier used to convert the raw value from the observation table into an actual value.

This is an example of completely “normalized” data. It’s stored this way because not all weather stations record all possible variables; rather than having a single row for each station × date with a whole bunch of empty columns for the variables that weren’t measured, each row contains a single station × date × variable observation.

We are also missing information about the stations, so let’s load that data:

fai_stations <-
    tbl(noaa_db, "ghcnd_stations") %>%
    filter(station_name %in% c("FAIRBANKS INTL AP",
                               "UNIVERSITY EXP STN",
                               "COLLEGE OBSY"))
glimpse(fai_stations)
## Observations: 3
## Variables:
## $ station_id   (chr) "USC00502107", "USW00026411", "USC00509641"
## $ station_name (chr) "COLLEGE OBSY", "FAIRBANKS INTL AP", "UNIVERSITY ...
## $ latitude     (dbl) 64.86030, 64.80389, 64.85690
## $ longitude    (dbl) -147.8484, -147.8761, -147.8610
## $ elevation    (dbl) 181.9656, 131.6736, 144.7800
## $ coverage     (dbl) 0.96, 1.00, 0.98
## $ start_date   (date) 1948-05-16, 1904-09-04, 1904-09-01
## $ end_date     (date) 2015-04-03, 2015-04-02, 2015-03-13
## $ variables    (chr) "TMIN TOBS WT11 SNWD SNOW WT04 WT14 TMAX WT05 DAP...
## $ the_geom     (chr) "0101000020E6100000A5BDC117267B62C0EC2FBB270F3750...

The first part is the same as before, loading the ghcnd_stations table, but here we filter the data down to just the Fairbanks-area stations with long-term records. To do this, we use the pipe operator %>%, which takes the data from the left side and passes it to the function on the right side, the filter function in this case.

filter requires one or more conditional statements with variable names on the left side and the condition on the right. Multiple conditions can be separated by commas if you want all the conditions to be required (AND), or combined with logical operators (& for AND, | for OR). For example: filter(latitude > 70, longitude < -140).

When used on database tables, filter can also use conditionals that are built into the database, which are passed directly as part of a WHERE clause. In the code above, we’re using the %in% operator to select the stations from a list.
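
As a side note, %in% isn’t the only way to write this; a hedged sketch of an equivalent filter using an explicit logical operator (which dplyr should also translate into the WHERE clause it sends to PostgreSQL) looks like this:

# Equivalent station filter written with | (OR) instead of %in%;
# dplyr translates either form into SQL for the database to run.
tbl(noaa_db, "ghcnd_stations") %>%
    filter(station_name == "FAIRBANKS INTL AP" |
           station_name == "UNIVERSITY EXP STN" |
           station_name == "COLLEGE OBSY")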

Now we have the station_ids we need to get just the data we want from the observation table and combine it with the other tables.

Combining data

Here’s how we do it:

fai_raw <-
    ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    inner_join(ghcnd_vars, by="variable") %>%
    mutate(value=raw_value*raw_multiplier) %>%
    filter(qual_flag=='') %>%
    select(station_name, dte, variable, value) %>%
    collect()
glimpse(fai_raw)

In order, here’s what we’re doing:

  • Assign the result to fai_raw
  • Join the observation table with the filtered station data, using station_id as the variable to combine against. Because this is an “inner” join, we only get results where station_id matches in both the observation and the filtered station data. At this point we only have observation data from our long-term Fairbanks stations.
  • Join the variable table with the Fairbanks area observation data, using variable to link the tables.
  • Add a new variable called value which is calculated by multiplying raw_value (coming from the observation table) by raw_multiplier (coming from the variable table).
  • Remove rows where the quality flag is not an empty string.
  • Select only the station name, date, variable and actual value columns from the data. Before we did this, each row would contain every column from all three tables, and most of that information is not necessary.
  • Finally, we “collect” the results. dplyr doesn’t actually run the full query until it absolutely has to; until then it retrieves only a small subset so that we can test our operations quickly. When we are happy with the results, we use collect() to grab the full data.
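
If you want to see what dplyr is doing before collecting, a minimal sketch (assuming a dplyr version where head() and explain() work on database-backed tables) is to build the same pipeline without the final collect() and inspect it:

# Build the pipeline lazily, without collect():
fai_lazy <-
    ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    inner_join(ghcnd_vars, by="variable") %>%
    mutate(value=raw_value*raw_multiplier) %>%
    filter(qual_flag=='') %>%
    select(station_name, dte, variable, value)

head(fai_lazy)     # runs the query with a LIMIT and returns a few rows
explain(fai_lazy)  # prints the generated SQL and the database's query plan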

De-normalize it

The data is still in a format that makes it difficult to analyze, with each row in the result containing a single station × date × variable observation. A tidy version of this data requires each variable be a column in the table, each row being a single date at each station.

To “pivot” the data, we use the spread function, and we'll also calculate a new variable and reduce the number of columns in the result.

fai_pivot <-
    fai_raw %>%
    spread(variable, value) %>%
    mutate(TAVG=(TMIN+TMAX)/2.0) %>%
    select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
           WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)
head(fai_pivot)
## Source: local data frame [6 x 18]
##
##   station_name        dte  TAVG TMIN TMAX TOBS PRCP SNOW SNWD WSF1 WDF1
## 1 COLLEGE OBSY 1948-05-16 11.70  5.6 17.8 16.1   NA   NA   NA   NA   NA
## 2 COLLEGE OBSY 1948-05-17 15.55 12.2 18.9 17.8   NA   NA   NA   NA   NA
## 3 COLLEGE OBSY 1948-05-18 14.40  9.4 19.4 16.1   NA   NA   NA   NA   NA
## 4 COLLEGE OBSY 1948-05-19 14.15  9.4 18.9 12.2   NA   NA   NA   NA   NA
## 5 COLLEGE OBSY 1948-05-20 10.25  6.1 14.4 14.4   NA   NA   NA   NA   NA
## 6 COLLEGE OBSY 1948-05-21  9.75  1.7 17.8 17.8   NA   NA   NA   NA   NA
## Variables not shown: WSF2 (dbl), WDF2 (dbl), WSF5 (dbl), WDF5 (dbl), WSFG
##   (dbl), WDFG (dbl), TSUN (dbl)

spread takes two parameters, the variable we want to spread across the columns, and the variable we want to use as the data value for each row × column intersection.
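
Here’s a tiny, self-contained illustration of spread using made-up numbers (the values are invented purely to show the reshaping):

# One row per station x date x variable...
toy <- data.frame(station = c("PAFA", "PAFA", "PAFA", "PAFA"),
                  dte = as.Date(c("2014-01-01", "2014-01-01",
                                  "2014-01-02", "2014-01-02")),
                  variable = c("TMIN", "TMAX", "TMIN", "TMAX"),
                  value = c(-25.0, -17.8, -22.2, -15.0))
# ...becomes one row per station x date, with TMIN and TMAX as columns:
toy %>% spread(variable, value)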

Examples

Now that we've got the data in a format we can work with, let's look at a few examples.

Find the coldest temperatures by winter year

First, let’s find the coldest winter temperatures from each station, by winter year. “Winter year” is just a way of grouping each winter into a single value. Instead of the 2014–2015 winter, it’s the 2014 winter year. We get this by subtracting 92 days (roughly the days in January, February, and March) from the date, then pulling off the year.
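
As a quick sanity check of that logic, here’s a minimal sketch using lubridate:

# Subtracting 92 days pushes January-March dates back into the previous
# calendar year, so the whole winter shares one winter_year value:
year(ymd("2015-02-15") - days(92))   # 2014: part of the 2014 winter year
year(ymd("2014-11-15") - days(92))   # 2014: also the 2014 winter year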

Here’s the code.

fai_winter_year_minimum <-
    fai_pivot %>%
        mutate(winter_year=year(dte - days(92))) %>%
        filter(winter_year < 2014) %>%
        group_by(station_name, winter_year) %>%
        select(station_name, winter_year, TMIN) %>%
        summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
        filter(n>350) %>%
        select(station_name, winter_year, tmin) %>%
        spread(station_name, tmin)

last_twenty <-
    fai_winter_year_minimum %>%
        filter(winter_year > 1993)

last_twenty
## Source: local data frame [20 x 4]
##
##    winter_year COLLEGE OBSY FAIRBANKS INTL AP UNIVERSITY EXP STN
## 1         1994       -43.96            -47.92             -47.92
## 2         1995       -45.04            -45.04             -47.92
## 3         1996       -50.98            -50.98             -54.04
## 4         1997       -43.96            -47.92             -47.92
## 5         1998       -52.06            -54.94             -54.04
## 6         1999       -50.08            -52.96             -50.98
## 7         2000       -27.94            -36.04             -27.04
## 8         2001       -40.00            -43.06             -36.04
## 9         2002       -34.96            -38.92             -34.06
## 10        2003       -45.94            -45.94                 NA
## 11        2004           NA            -47.02             -49.00
## 12        2005       -47.92            -50.98             -49.00
## 13        2006           NA            -43.96             -41.98
## 14        2007       -38.92            -47.92             -45.94
## 15        2008       -47.02            -47.02             -49.00
## 16        2009       -32.98            -41.08             -41.08
## 17        2010       -36.94            -43.96             -38.02
## 18        2011       -47.92            -50.98             -52.06
## 19        2012       -43.96            -47.92             -45.04
## 20        2013       -36.94            -40.90                 NA

See if you can follow the code above. The pipe operator makes it easy to see each operation performed along the way.

There are a couple of new functions here, group_by and summarize. group_by indicates at what level we want to group the data, and summarize uses those groupings to perform summary calculations using aggregate functions. We group by station and winter year, then use the min and n functions to get the minimum temperature and the number of days in each year where temperature data was available. You can see we are using n to remove winter years where more than two weeks of data are missing.
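
If group_by and summarize are new to you, this toy example (invented numbers, not from the climate database) shows the basic pattern:

# Minimum value and observation count for each group:
toy <- data.frame(station = c("A", "A", "A", "B", "B"),
                  tmin    = c(-10, -25, -17, -5, -30))
toy %>%
    group_by(station) %>%
    summarize(coldest = min(tmin), n = n())
# returns one row per station with the minimum tmin and the number of rows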

Also notice that we’re using spread again in order to make a single column for each station containing the minimum temperature data.

Here’s how we can write out the table data as a reStructuredText document, which can be converted into many document formats (PDF, ODF, HTML, etc.):

sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Minimum temperatures by winter year, station

winter_year  COLLEGE OBSY  FAIRBANKS INTL AP  UNIVERSITY EXP STN
1994         -43.96        -47.92             -47.92
1995         -45.04        -45.04             -47.92
1996         -50.98        -50.98             -54.04
1997         -43.96        -47.92             -47.92
1998         -52.06        -54.94             -54.04
1999         -50.08        -52.96             -50.98
2000         -27.94        -36.04             -27.04
2001         -40.00        -43.06             -36.04
2002         -34.96        -38.92             -34.06
2003         -45.94        -45.94             NA
2004         NA            -47.02             -49.00
2005         -47.92        -50.98             -49.00
2006         NA            -43.96             -41.98
2007         -38.92        -47.92             -45.94
2008         -47.02        -47.02             -49.00
2009         -32.98        -41.08             -41.08
2010         -36.94        -43.96             -38.02
2011         -47.92        -50.98             -52.06
2012         -43.96        -47.92             -45.04
2013         -36.94        -40.90             NA

Plotting

Finally, let’s plot the minimum temperatures for all three stations.

q <-
    fai_winter_year_minimum %>%
        gather(station_name, tmin, -winter_year) %>%
        arrange(winter_year) %>%
        ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Minimum temperature (degrees F)", breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                               labels=c("College Observatory",
                                        "Fairbanks Airport",
                                        "University Exp. Station"),
                               values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            # theme(legend.position = c(0.150, 0.850)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

print(q)
//media.swingleydev.com/img/blog/2015/04/min_temp_winter_year_fai_stations.svg

To plot the data, we need the data in a slightly different format with each row containing winter year, station name and the minimum temperature. We’re plotting minimum temperature against winter year, coloring the points and trendlines using the station name. That means all three of those variables need to be on the same row.

To do that we use gather. The first parameter is the name of the variable the columns will be moved into (the station names, which are currently columns, will become values in a new column named station_name). The second is the name of the column that stores the observations (tmin), and the parameters after that are the list of columns to gather together. In our case, rather than specifying the names of the columns, we're specifying the inverse: all the columns except winter_year.

The result of the gather looks like this:

fai_winter_year_minimum %>%
    gather(station_name, tmin, -winter_year)
## Source: local data frame [321 x 3]
##
##    winter_year station_name tmin
## 1         1905 COLLEGE OBSY   NA
## 2         1907 COLLEGE OBSY   NA
## 3         1908 COLLEGE OBSY   NA
## 4         1909 COLLEGE OBSY   NA
## 5         1910 COLLEGE OBSY   NA
## 6         1911 COLLEGE OBSY   NA
## 7         1912 COLLEGE OBSY   NA
## 8         1913 COLLEGE OBSY   NA
## 9         1915 COLLEGE OBSY   NA
## 10        1916 COLLEGE OBSY   NA
## ..         ...          ...  ...

ggplot2

The plot is produced using ggplot2. A full introduction would be a seminar by itself, but the basics of our plot can be summarized as follows.

ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +

aes defines variables and grouping.

geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
geom_smooth(method="lm", se=FALSE) +

geom_point draws points, geom_smooth draws fitted lines.

scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
scale_y_continuous(name="Minimum temperature (degrees F)",
                    breaks=pretty_breaks(n=10)) +
scale_color_manual(name="Station",
                    labels=c("College Observatory", "Fairbanks Airport",
                            "University Exp. Station"),
                    values=c("darkorange", "blue", "darkcyan")) +

Scale functions define how the data is scaled into the plot and control labelling.

theme_bw() +
theme(axis.text.x = element_text(angle=45, hjust=1))

Theme functions control the style.

Linear regression, winter year and minimum temperature

Finally let’s look at the significance of those regression lines:

summary(lm(data=fai_winter_year_minimum, `COLLEGE OBSY` ~ winter_year))
##
## Call:
## lm(formula = `COLLEGE OBSY` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -19.0748  -5.8204   0.1907   3.8042  17.1599
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -275.01062  105.20884  -2.614   0.0114 *
## winter_year    0.11635    0.05311   2.191   0.0325 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.599 on 58 degrees of freedom
##   (47 observations deleted due to missingness)
## Multiple R-squared:  0.07643,    Adjusted R-squared:  0.06051
## F-statistic:   4.8 on 1 and 58 DF,  p-value: 0.03249
summary(lm(data=fai_winter_year_minimum, `FAIRBANKS INTL AP` ~ winter_year))
##
## Call:
## lm(formula = `FAIRBANKS INTL AP` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.529  -4.605  -1.025   4.007  19.764
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -171.19553   43.55177  -3.931 0.000153 ***
## winter_year    0.06250    0.02221   2.813 0.005861 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.037 on 104 degrees of freedom
##   (1 observation deleted due to missingness)
## Multiple R-squared:  0.07073,    Adjusted R-squared:  0.06179
## F-statistic: 7.916 on 1 and 104 DF,  p-value: 0.005861
summary(lm(data=fai_winter_year_minimum, `UNIVERSITY EXP STN` ~ winter_year))
##
## Call:
## lm(formula = `UNIVERSITY EXP STN` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.579  -5.818  -1.283   6.029  19.977
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -158.41837   51.03809  -3.104  0.00248 **
## winter_year    0.05638    0.02605   2.164  0.03283 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 8.119 on 100 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.04474,    Adjusted R-squared:  0.03519
## F-statistic: 4.684 on 1 and 100 DF,  p-value: 0.03283

Essentially, all the models show a significant increase in minimum temperature over time, but none of them explain very much of the variation in minimum temperature.

RMarkdown

This presentation was produced with the RMarkdown package, which allows you to mix text and R code; the document is then run through R to produce output in Word, PDF, HTML, and presentation formats.

sun, 12-apr-2015, 16:38

Introduction

Last week I gave a presentation at work about the National Climate Data Center’s GHCND climate database and methods to import and manipulate the data using the dplyr and tidyr R packages (a report-style version of it is here). Along the way, I used this function to calculate the average daily temperature from the minimum and maximum daily temperatures:

mutate(TAVG=(TMIN+TMAX)/2.0)

One of the people in the audience asked why the Weather Service would calculate average daily temperature this way, rather than by averaging the continuous or hourly temperatures at each station. The answer is that many, perhaps most, of the official stations in the GHCND data set are COOP stations which only report minimum and maximum temperature, and the original instrument provided to COOP observers was likely a mercury minimum / maximum thermometer. Now that these instruments are digital, they could conceivably calculate average temperature internally, and observers could report minimum, maximum and average as calculated from the device. But that’s not how it’s done.

In this analysis, I look at the difference between calculating average daily temperature using the mean of all daily temperature observations, and using the average of the minimum and maximum reported temperature each day. I’ll use five years of data collected at our house using our Arduino-based weather station.

Methods

Our weather station records temperature every few seconds, averages this data every five minutes and stores these five minute observations in a database. For our analysis, I’ll group the data by day and calculate the average daily temperature using the mean of all the five minute observations, and using the average of the minimum and maximum daily temperature. I’ll use R to perform the analysis.

Libraries

Load the libraries we need:

library(dplyr)
library(lubridate)
library(ggplot2)
library(scales)
library(readr)

Retrieve the data

Connect to the database and retrieve the data. We’re using build_sql because the data table we’re interested in is a view (sort of like a stored SQL query), not a table, and dplyr::tbl can’t currently read from a view:

dw1454 <- src_postgres(dbname="goldstream_creek_wx",
                       user="readonly")

raw_data <- tbl(dw1454, build_sql("SELECT * FROM arduino_west"))

The raw data contains the timestamp for each five minute observation and the temperature, in degrees Fahrenheit, for that observation. The following series of functions aggregates the data to daily values and calculates the average daily temperature using the two methods.

daily_average <-
    raw_data %>%
    filter(obs_dt>'2009-12-31 23:59:59') %>%
    mutate(date=date(obs_dt)) %>%
    select(date, wtemp) %>%
    group_by(date) %>%
    summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0,
              h_avg=mean(wtemp), n=n()) %>%
    filter(n==24*60/5) %>%   # 5 minute obs
    collect()

All these steps are joined together using the “pipe” or “then” operator %>% as follows:

  • daily_average <-: assign the result of all the operations to daily_average.
  • raw_data %>%: start with the data from our database query (all the temperature observations).
  • filter(obs_dt>'2009-12-31 23:59:59') %>%: use data from 2010 and after.
  • mutate(date=date(obs_dt)) %>%: calculate the date from the timestamp.
  • select(date, wtemp) %>%: reduce the columns to our newly calculated date variable and the temperatures.
  • group_by(date) %>%: group the data by date.
  • summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0, h_avg=mean(wtemp), n=n()) %>%: summarize the data grouped by date, calculating one daily average from the average of the minimum and maximum temperature, another from the mean of all the temperatures, and the number of observations on each date.
  • filter(n==24*60/5) %>%: Only include dates where we have a complete set of five minute observations. We don’t want data with too few or too many observations because those would skew the averages.
  • collect(): This function retrieves the data from the database. Without collect(), the query is run on the database server, producing a subset of the full results. This allows us to tweak the query until it’s exactly what we want without having to wait to retrieve everything at each iteration.

Now we’ve got a table with one row for each date in the database where we had exactly 288 observations on that date. Columns include the average temperature calculated using the two methods and the number of observations on each date.

Save the data so we don’t have to do these calculations again:

write_csv(daily_average, "daily_average.csv")
save(daily_average, file="daily_average.rdata", compress=TRUE)

Calculate anomalies

How does the min/max method of calculating average daily temperature compare against the true mean of all observed temperatures in a day? We calculate the difference between the methods, the anomaly, as the mean temperature subtracted from the average of minimum and maximum. When this anomaly is positive, the min/max method is higher than the actual mean, and when it’s negative, it’s lower.

anomaly <-
    daily_average %>%
    mutate(month=month(date),
           anomaly=mm_avg-h_avg) %>%
    ungroup() %>%
    arrange(date)

We also populate a column with the month of each date so we can look at the seasonality of the anomalies.

Results

This is what the results look like:

summary(anomaly$anomaly)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## -6.8600 -1.5110 -0.1711 -0.1341  1.0740  9.3570

The average anomaly is very close to zero (-0.13), and I suspect it would be even closer to zero as more data is included. Half the data is between -1.5 and 1.1 degrees and the full range is -6.86 to +9.36°F.

Plots

Let’s take a look at some plots of the anomalies.

Raw anomaly data

The first plot shows the raw anomaly data, with positive anomalies (the min/max calculated average is higher than the mean daily average) colored red and negative anomalies in blue.

# All anomalies
q <- ggplot(data=anomaly,
            aes(x=date, ymin=0, ymax=anomaly, colour=anomaly<0)) +
    geom_linerange(alpha=0.5) +
    theme_bw() +
    scale_colour_manual(values=c("red", "blue"), guide=FALSE) +
    scale_x_date(name="") +
    scale_y_continuous(name="Difference between min/max and hourly aggregation")

print(q)
//media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily.svg

I don't see much in the way of trends in this data, but there are short periods where all the anomalies are in one direction or another. If there is a seasonal pattern, it's hard to see it when the data is presented this way.

Monthly boxplots

To examine the seasonality of the anomalies, let’s look at some boxplots, grouped by the “month” variable we calculated when calculating the anomalies.

mean_anomaly <- mean(anomaly$anomaly)

# seasonal pattern of anomaly
q <- ggplot(data=anomaly,
            aes(x=as.factor(month), y=anomaly)) +
    geom_hline(data=NULL, aes(yintercept=mean_anomaly), colour="darkorange") +
    geom_boxplot() +
    scale_x_discrete(name="",
                     labels=c("Jan", "Feb", "Mar", "Apr",
                              "May", "Jun", "Jul", "Aug",
                              "Sep", "Oct", "Nov", "Dec")) +
    scale_y_continuous(name="Difference between min/max and hourly aggregation") +
    theme_bw()

print(q)
//media.swingleydev.com/img/blog/2015/04/diff_mm_hourly_aggregation_to_daily_boxplot.svg

There does seem to be a slight seasonal pattern to the anomalies, with the spring and summer daily averages underestimated when using the min/max calculation (the actual daily average temperature is warmer than was calculated using minimum and maximum temperatures) and slightly overestimated in fall and late winter. The boxes in a boxplot show the range where half the observations fall, and in all months but April and May these ranges include zero, so there's a good chance that the pattern isn't statistically significant. The orange line under the boxplots shows the overall average anomaly, which is close to zero.

Cumulative frequency distribution

Finally, we plot the cumulative frequency distribution of the absolute value of the anomalies. These plots have the variable of interest on the x-axis and the cumulative frequency of all values to the left on the y-axis. It’s a good way of seeing how much of the data falls into certain ranges.

# distribution of anomalies
q <- ggplot(data=anomaly,
            aes(x=abs(anomaly))) +
    stat_ecdf() +
    scale_x_discrete(name="Absolute value of anomaly (+/- degrees F)",
                     breaks=0:11,
                     labels=0:11,
                     expand=c(0, 0)) +
    scale_y_continuous(name="Cumulative frequency",
                       labels=percent,
                       breaks=pretty_breaks(n=10),
                       limits=c(0,1)) +
    annotate("rect", xmin=-1, xmax=1, ymin=0, ymax=0.4, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=2, ymin=0, ymax=0.67, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=3, ymin=0, ymax=0.85, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=4, ymin=0, ymax=0.94, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=5, ymin=0, ymax=0.975, alpha=0.1, fill="darkcyan") +
    theme_bw()

print(q)
//media.swingleydev.com/img/blog/2015/04/cum_freq_distribution.svg

The overlapping rectangles on the plot show what percentages of anomalies fall in certain ranges. Starting from the innermost and darkest rectangle, 40% of the temperatures calculated using minimum and maximum are within a degree of the actual temperature. Sixty-seven percent are within two degrees, 85% within three degrees, 94% are within four degrees, and more than 97% are within five degrees of the actual value. There's probably a way to get R to calculate these intersections along the curve for you, but I looked at the plot and manually added the annotations.
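
For what it’s worth, the ecdf function in base R will compute these cumulative frequencies directly; a minimal sketch using the anomaly data from above:

# Empirical cumulative distribution of the absolute anomalies:
cum_freq <- ecdf(abs(anomaly$anomaly))
cum_freq(1:5)   # proportion of days within 1, 2, 3, 4 and 5 degrees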

Conclusion

We looked at more than five years of data from our weather station in the Goldstream Valley, comparing daily average temperatures calculated from the mean of all five minute temperature observations with those calculated from the average of the minimum and maximum daily temperature, which is the method the National Weather Service uses for its daily data. The results show that the differences between these methods average to zero, which means that on an annual (or longer) basis, there doesn't appear to be any bias in the method.

Two thirds of all daily average temperatures are within two degrees of the actual daily average, and with only a few exceptions, the error is within five degrees.

There is some evidence that there’s a seasonal pattern to the error, however, with April and May daily averages particularly low. If those seasonal patterns are real, this would indicate an important bias in this method of calculating average daily temperature.

tags: R  temperature  dplyr  climate  GHCND 
sun, 08-feb-2015, 14:13

Whenever we’re in the middle of a cold snap, as we are right now, I’m tempted to see how the current snap compares to those in the past. The one we’re in right now isn’t all that bad: sixteen days in a row where the minimum temperature is colder than −20°F. In some years, such a threshold wouldn’t even qualify as the definition of a “cold snap,” but right now, it feels like one.

Getting the length of consecutive things in a database isn’t simple. What we’ll do is get a list of all the days where the minimum daily temperature was warmer than −20°F. Then go through each record and count the number of days between the current row and the next one. Most of these will be one, but when the number of days is greater than one, that means there’s one or more observations in between the “warm” days where the minimum temperature was colder than −20°F (or there was missing data).

For example, given this set of dates and temperatures from earlier this year:

date        tmin_f
2015‑01‑02  −15
2015‑01‑03  −20
2015‑01‑04  −26
2015‑01‑05  −30
2015‑01‑06  −30
2015‑01‑07  −26
2015‑01‑08  −17

Once we select for rows where the temperature is above −20°F we get this:

date        tmin_f
2015‑01‑02  −15
2015‑01‑08  −17

Now we can grab the start and end of the period (January 2nd + one day and January 8th - one day) and get the length of the cold snap. You can see why missing data would be a problem, since it would create a gap that isn’t necessarily due to cold temperatures.
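
Before dropping into SQL, here’s a hedged sketch of the same gap-counting idea in R with dplyr. It assumes a data frame fai_daily with one row per day and columns dte (a Date) and tmin_f, which is not exactly how the data is stored in the database, and it ignores the missing-data problem that the SQL below handles:

cold_snaps <-
    fai_daily %>%
    filter(tmin_f > -20) %>%             # keep only the "warm" days
    arrange(dte) %>%
    mutate(next_warm = lead(dte),
           days_between = as.integer(next_warm - dte) - 1) %>%
    filter(days_between >= 16) %>%       # at least 16 cold days in the gap
    transmute(start = dte + 1, end = next_warm - 1, days = days_between)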

I couldn't figure out how to get the time periods and check them for validity all in one step, so I wrote a simple function that counts the days with valid data between two dates, then used this function in the real query. Only periods with non-null data on each day during the cold snap were included.

CREATE FUNCTION valid_n(date, date)
RETURNS bigint AS
  'SELECT count(*)
   FROM ghcnd_pivot
   WHERE station_name = ''FAIRBANKS INTL AP''
      AND dte BETWEEN $1 AND $2
      AND tmin_c IS NOT NULL'
LANGUAGE SQL
RETURNS NULL ON NULL INPUT;

Here we go:

SELECT rank() OVER (ORDER BY days DESC) AS rank,
       start, "end", days FROM (
   SELECT start + interval '1 day' AS start,
         "end" - interval '1 day' AS end,
         interv - 1 AS days,
         valid_n(date(start + interval '1 day'),
                  date("end" - interval '1 day')) as valid_n
   FROM (
      SELECT dte AS start,
            lead(dte) OVER (ORDER BY dte) AS end,
            lead(dte) OVER (ORDER BY dte) - dte AS interv
      FROM (
         SELECT dte
         FROM ghcnd_pivot
         WHERE station_name = 'FAIRBANKS INTL AP'
            AND tmin_c > f_to_c(-20)
      ) AS foo
   ) AS bar
   WHERE interv >= 17
) AS f
WHERE days = valid_n
ORDER BY days DESC;

And the top 10:

Top ten longest cold snaps (−20°F or colder minimum temp)

rank  start       end         days
1     1917‑11‑26  1918‑01‑01    37
2     1909‑01‑13  1909‑02‑12    31
3     1948‑11‑17  1948‑12‑13    27
4     1925‑01‑16  1925‑02‑10    26
4     1947‑01‑12  1947‑02‑06    26
4     1943‑01‑02  1943‑01‑27    26
4     1968‑12‑26  1969‑01‑20    26
4     1979‑02‑01  1979‑02‑26    26
9     1980‑12‑06  1980‑12‑30    25
9     1930‑01‑28  1930‑02‑21    25

There have been seven cold snaps that lasted 16 days (including the one we’re currently in), tied for 45th place.

Keep in mind that defining a cold snap as days where the daily minimum is −20°F or colder is pretty generous. If we require the minimum temperatures to be below −40°, the lengths are considerably shorter:

Top ten longest cold snaps (−40° or colder minimum temp)

rank  start       end         days
1     1964‑12‑25  1965‑01‑11    18
2     1973‑01‑12  1973‑01‑26    15
2     1961‑12‑16  1961‑12‑30    15
2     2008‑12‑28  2009‑01‑11    15
5     1950‑02‑04  1950‑02‑17    14
5     1989‑01‑18  1989‑01‑31    14
5     1979‑02‑03  1979‑02‑16    14
5     1947‑01‑23  1947‑02‑05    14
9     1909‑01‑14  1909‑01‑25    12
9     1942‑12‑15  1942‑12‑26    12
9     1932‑02‑18  1932‑02‑29    12
9     1935‑12‑02  1935‑12‑13    12
9     1951‑01‑14  1951‑01‑25    12

I think it’s also interesting that only three of the top ten cold snaps defined at −20°F (those starting in 1909, 1947 and 1979) also appear among those with a −40° threshold.

tue, 07-jan-2014, 15:27
Equinox Marathon finish

It’s the beginning of a new year and time for me to look back at what I learned last year. Rather than a long narrative, let’s focus on the data. The local newspaper did a “community profile” of me this year and it was focused on my curiosity about the world around us and how we can measure and analyze it to better understand our lives. This post is a brief summary of that sort of analysis for my small corner of the world in the year that was 2013.

Exercise

2013 was the year I decided to, and did, run the Equinox Marathon, so I spent a lot of time running this year and a lot less time bicycling. Since the race, I’ve been having hip problems that have kept me from skiing or running much at all. The roads aren’t cleared well enough to bicycle on them in the winter so I got a fat bike to commute on the trails I’d normally ski.

Here are my totals in tabular form:

2013 Exercise Totals
type       miles  hours  calories
Running      529     89    61,831
Bicycling  1,018     82    54,677
Skiing       475     81    49,815
Hiking        90     43    18,208
TOTAL      2,113    296   184,531

I spent just about the same amount of time running, bicycling and skiing this year, and much less time hiking around on the trails than in the past. Because of all the running, and my hip injury, I didn’t manage to commute to work with non-motorized transport quite as much this year (55% of work days instead of 63% in 2012), but the exercise totals are all higher.

One new addition this year is a heart rate monitor, which allows me to much more accurately estimate energy consumption than formulas based on the type of activity, speed, and time. Riding my fat bike, it’s pretty clear that this form of travel is so much less efficient than a road bike with smooth tires that it can barely be called “bicycling,” at least in terms of how much energy it takes to travel over a certain distance.

Here are the equations from Keytel LR, Goedecke JH, Noakes TD, Hiiloskorpi H, Laukkanen R, van der Merwe L, Lambert EV. 2005. Prediction of energy expenditure from heart rate monitoring during submaximal exercise. J Sports Sci. 23(3):289-97.

Male:   kcal = ((−55.0969 + 0.6309 × hr + 0.0901 × w + 0.2017 × a) / 4.184) × 60 × t
Female: kcal = ((−20.4022 + 0.4472 × hr − 0.0901 × w + 0.074 × a) / 4.184) × 60 × t

where

  • hr = Heart rate (in beats/minute)
  • w = Weight (in pounds)
  • a = Age (in years)
  • t = Exercise duration time (in hours)

And a PL/pgSQL function that implements the version for men (to use it, you’d replace the nnn, nn and yyyy-mm-dd placeholders with the appropriate values for you):

--- Kcalories burned based on average heart rate and number
--- of hours at that rate.
CREATE OR REPLACE FUNCTION kcal_from_hr(hr numeric, hours numeric)
RETURNS numeric
LANGUAGE plpgsql
AS $$
DECLARE
    weight_lb numeric := nnn;
    resting_hr numeric := nn;
    birthday date := 'yyyy-mm-dd';
    resting_kcal numeric;
    exercise_kcal numeric;
BEGIN
    resting_kcal := ((-55.0969+(0.6309*(resting_hr))+
                    (0.0901*weight_lb)+
                    (0.2017*(extract(epoch from now()-birthday)/
                    (365.242*24*60*60))))/4.184)*60*hours;
    exercise_kcal := ((-55.0969+(0.6309*(hr))+
                     (0.0901*weight_lb)+
                     (0.2017*(extract(epoch from now()-birthday)/
                     (365.242*24*60*60))))/4.184)*60*hours;
    RETURN exercise_kcal - resting_kcal;
END;
$$;

Here’s a graphical comparison of my exercise data over the past four years:

It was a pretty remarkable year, although the drop in exercise this fall is disappointing.

Another way to visualize the 2013 data is in the form of a heatmap, where each block represents a day on the calendar, and the color is how many calories I burned on that day. During the summer you can see my long runs on the weekends showing up in red. Equinox was on September 21st, the last deep red day of the year.

Weather

2013 was quite remarkable for the number of days where the daily temperature was dramatically different from the 30-year average. The heatmap below shows each day in 2013, and the color indicates how many standard deviations that day’s temperature was from the 30-year average. To put the numbers in perspective, approximately 95.5% of all observations will fall within two standard deviations from the mean, and 99.7% will be within three standard deviations. So the very dark red or dark blue squares on the plot below indicate temperature anomalies that happen less than 1% of the time. Of course, in a full year, you’d expect to see a few of these remarkable differences, but 2013 had a lot of remarkable differences.

2013 saw 45 days where the temperature was more than 2 standard deviations from the mean (19 that were colder than normal and 26 that were warmer), something that should only happen on about 16 days in a normal year [ 365.25 × (1 − 0.9545) ]. There were four days outside of 3 standard deviations from the mean. Normally there’d only be a single day [ 365.25 × (1 − 0.9973) ] with such a remarkably cold or warm temperature.
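
Those bracketed expectations are simple arithmetic; in R:

# Expected number of days per year outside 2 and 3 standard deviations:
365.25 * (1 - 0.9545)   # about 16.6 days beyond 2 standard deviations
365.25 * (1 - 0.9973)   # about 1 day beyond 3 standard deviations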

April and most of May were remarkably cold, resulting in many people skiing long past what is normal in Fairbanks. On May first, Fairbanks still had 17 inches of snow on the ground. Late May, almost all of June and the month of October were abnormally warm, including what may be the warmest week on record in Alaska from June 23rd to the 29th. Although it wasn’t exceptional, you can see the brief cold snap preceding and including the Equinox Marathon on September 21st this year. The result was bitter cold temperatures on race day (my hands and feet didn’t get warm until I was climbing Ester Dome Road an hour into the race), as well as an inch or two of snow on most of the trail sections of the course above 1,000 feet.

Most memorable was the ice and wind storm on November 13th and 14th that dumped several inches of snow and instantly freezing rain, followed by record high winds that knocked power out for 14,000 residents of the area, and then a drop in temperatures to colder than ‒20°F. My office didn’t get power restored for four days.

git

I’m moving more and more of my work into git, which is a distributed revision control system (or put another way, it’s a system that stores stuff and keeps track of all the changes). Because it’s distributed, anything I have on my computer at home can be easily replicated to my computer at work or anywhere else, and any changes that I make to these files on any system, are easy to recover anywhere else. And it’s all backed up on the master repository, and all changes are recorded. If I decide I’ve made a mistake, it’s easy to go back to an earlier version.

Using this sort of system for software code is pretty common, but I’m also using this for normal text files (the docs repository below), and have started moving other things into git, such as all my eBooks.

The following figure shows the number of file changes made in three of my repositories over the course of the year. I don’t know why April was such an active month for Python, but I clearly did a lot of programming that month. The large number of file changes during the summer in the docs repository is because I was keeping my running (and physical therapy) logs in that repository.

Dog Barn

The dog barn was the big summer project. It’s a seven by eleven foot building with large dog boxes inside that we keep warm. When the temperatures are too cold for the dogs to stay outside, we put them into their boxes in the dog barn and turn the heat up to 40°F. I have a real-time visualization of the conditions inside and outside the barn, and because the whole thing is run with a small Linux computer and Arduino board, I’m able to collect a lot of data about how the barn is performing.

One such analysis will be to see how much heat the dogs produce when they are in the barn. To estimate that, we need a baseline of how much heat we’re adding at various temperatures in order to keep it at temperature. I haven’t collected enough cold temperature data to really see what the relationship looks like, but here’s the pattern so far.

The graph shows the temperature differential between the outside and inside of the barn plotted against the percentage of time the heater is on in order to maintain that differential, for all 12-hour periods where the dogs weren’t in the barn and there are fewer than four missing observations. I’ve also run linear and quadratic regressions in order to predict how much heat will be required at various temperature differentials.

The two r² values show how much of the variation in heating is explained by the temperature differential for the linear and the quadratic regressions. I know that this isn’t a linear relationship, but that model still fits the data better than the quadratic model does. It may be that it’s some other form of non-linear relationship that isn’t well expressed by a second order polynomial.
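
For reference, the two fits described above could be run with something like this hedged sketch; barn_periods and its column names are assumptions about how the 12-hour summaries are stored, not the actual code:

# Assumed data frame: barn_periods with columns temp_diff (inside minus
# outside, degrees F) and pct_heat_on (percent of the period the heater ran).
linear_fit    <- lm(pct_heat_on ~ temp_diff, data = barn_periods)
quadratic_fit <- lm(pct_heat_on ~ poly(temp_diff, 2), data = barn_periods)
summary(linear_fit)$r.squared     # r-squared for the linear model
summary(quadratic_fit)$r.squared  # r-squared for the quadratic model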

Once we can predict how much heat it should take to keep the barn warm at a particular temperature differential, we can see how much less heat we’re using when the dogs are in the barn. One complication is that the dogs produce enough moisture when they are in the barn that we need to ventilate it when they are in there. So in addition to the additive heating from the dogs themselves, there will be increased heat losses because we have to keep it better ventilated.

It’ll be an interesting data set.

Power

Power consumption is a concern now that we’ve set up the dog barn and are keeping it heated with an electric heater. It’s an oil-filled radiator-style heater, and uses around 1,100 Watts when it’s on.

This table shows our overall usage by year for the period we have data.

Yearly electricity use
year  average watts  total kWh
2010            551       4822
2011            493       4318
2012            433       3792
2013            418       3661

Our overall energy use continues to go down, which is a little surprising to me, actually, since we eliminated most of the devices known to use a lot of electricity (incandescent light bulbs, halogen floodlights) years ago. Despite that, and bringing the dog barn on line in late November, we used less electricity in 2013 than in the prior three years.

Here’s the pattern by month, and year.

The spike in usage in November is a bit concerning, since it’s the highest overall monthly consumption for the past four years. Hopefully this was primarily due to the heavy use of the heater during the final phases of the dog barn construction. December wasn’t a particularly cold month relative to years past, but it’s good to see that our consumption was actually quite low even with the barn heater being on the entire month.

That wraps it up. Have a happy and productive 2014!

sat, 14-dec-2013, 15:37

I spent most of October and November building a dog barn for the dogs. Our two newest dogs (Lennier and Monte) don’t have sufficient winter coats to be outside when it’s colder than ‒15°F. A dog barn is a heated space with large, comfortable, locking dog boxes inside. The dogs sleep inside at night and are pretty much in the house with us when we’re home, but when we’re at work or out in town, the dogs can go into the barn to stay warm on cold days.

You can view the photos of the construction on my photolog.

Along with the dog boxes we’ve got a monitoring and control system in the barn:

  • An Arduino board that monitors the temperature (DS18B20 sensor) and humidity (SHT15) in the barn and controls an electric heater through a Power Tail II.
  • A BeagleBone Black board running Linux which reads the data from the Arduino board and inserts it into a database, and can change the set temperature that the Arduino uses to turn the heater on and off (typically we leave this set at 30°F, which means the heater comes on at 28 and goes off at 32°F).
  • An old Linksys WRT-54G router (running DD-WRT) which connects to the wireless network in the house and to the BeagleBone setup via Ethernet.

The system allows us to monitor the conditions inside the barn in real-time, and to change the temperature. It is a little less robust than the bi-metallic thermostat we were using initially, but as long as the Arduino has power, it is able to control the heat even if the BeagleBone or wireless router were to fail, and is far more accurate. It’s also a lot easier to keep track of how long the heater is on if we’re turning it on and off with our monitoring system.

Thursday we got an opportunity to see what happens when all the dogs are in there at ‒15°F. They were put into their boxes around 10 AM, and went outside at 3:30 PM. The windows were closed.

Here’s a series of plots showing what happened (PDF version)

The top plot shows the temperature in the barn. As expected, the temperature varies from 28°F, when the heater comes on, to a bit above 32°F when the heater goes off. There are obvious spikes in the plot when the heater comes on and rapidly warms the building. Interestingly, once the dogs were settled into the barn, the heater didn’t come on because the dogs were keeping the barn warm themselves. The temperature gradually rose while they were in there.

The next plot is the relative humidity. In addition to heating the barn, the dogs were filling it with moisture. It’s clear that we will need to deal with all that moisture in the future. We plan on experimenting with a home-built heat recovery ventilator (HRV) based on alternating sheets of Coroplast plastic. The idea is that warm air from inside travels through one set of layers to the outside, while cold air from outside passes through the other set of layers and is heated on its way in by the exiting warm air. Until that’s done, our options are to leave the two windows cracked to allow the moisture to escape (with some of the warm air, of course) or to use a dehumidifier.

The bar chart shows the number of minutes the power was on for the interval shown. Before the dogs went into the barn the heater was coming on for about 15 minutes, then was off for 60 minutes before coming back on again. As the temperature cools outside, the interval when the heater is off decreases. Again, this plot shows the heater stopped coming on once the dogs were in the barn.

The bottom plot is the outside temperature.

So far the barn is a great addition to the property, and the dogs really seem to like it, charging into the barn and into their boxes when it’s cold outside. I’m looking forward to experimenting with the HRV and seeing what happens under similar conditions but with the windows slightly open, or when the outside temperatures are much colder.

