tue, 23-apr-2013, 07:01

This morning’s weather forecast includes this section:

.WEDNESDAY...CLOUDY. A CHANCE OF SNOW IN THE MORNING...THEN SNOW LIKELY IN THE AFTERNOON. SNOW ACCUMULATION OF 1 TO 2 INCHES. HIGHS AROUND 40. WEST WINDS INCREASING TO 15 TO 20 MPH.

.WEDNESDAY NIGHT...CLOUDY. SNOW LIKELY IN THE EVENING...THEN A CHANCE OF SNOW AFTER MIDNIGHT. LOWS IN THE 20S. WEST WINDS TO 20 MPH DIMINISHING.

Here’s a look at how often Fairbanks gets two or more inches of snow later than April 23rd:

Late spring snowfall amounts, Fairbanks Airport
Date          Snow (in)      Date          Snow (in)
1915-04-27    2.0            1964-05-13    4.5
1916-05-03    2.0            1968-05-11    2.7
1918-04-26    4.1            1982-04-30    2.8
1918-05-15    2.0            1992-05-12    9.4
1923-05-03    3.0            2001-05-04    3.2
1931-05-06    2.0            2001-05-05    2.9
1948-04-26    4.0            2002-04-25    2.0
1952-05-05    2.8            2002-04-26    4.4
1962-05-07    2.0            2008-04-30    3.4

It’s not all that frequent, with only 18 occurrences in the last 98 years, and four of those forming two back-to-back pairs (May 4th and 5th, 2001 and April 25th and 26th, 2002). The pattern is also curious: several events in the early 1900s, one to three in each decade through the 1990s, and then five in the 2000s.

In any case, I’m not looking forward to it. We’ve still got a lot of hardpack on the road from the 5+ inches we got a couple weeks ago and I’ve just started riding my bicycle to work every day. If we do get 2 inches of snow, that’ll slow breakup even more, and mess up the shoulders of the road for a few days.

tags: snow  weather 
sun, 07-apr-2013, 15:50
Cold November

Several years ago I showed some R code to make a heatmap showing the rank of the Oakland A’s players for various hitting and pitching statistics.

Last week I used this same style of plot to make a new weather visualization on my web site: a calendar heatmap of the difference between daily average temperature and the “climate normal” daily temperature for all dates in the last ten years. “Climate normals” are generated every ten years and are the averages for a variety of statistics for the previous 30-year period, currently 1981–2010.

A calendar heatmap looks like a normal calendar, except that each date box is colored according to the statistic of interest, in this case the difference between the daily mean temperature on that date and the climate normal temperature for the same date. I also created a normalized version based on the standard deviations of temperature on each date.

Here’s the temperature anomaly plot showing all the temperature differences for the last ten years:

It’s a pretty incredible way to look at a lot of data at the same time, and it makes it really easy to pick out anomalous events such as the cold November and December of 2012. One thing you can see in this plot is that the more dramatic temperature differences are always in the winter; summer anomalies are generally smaller. This is because the range of likely temperatures is much larger in winter, and in order to equalize that difference, we need to normalize the anomalies by this range.

One way to do that is to divide the actual temperature difference by the standard deviation of the 30-year climate normal mean temperature. Because daily temperatures are approximately normally distributed, about 68% of the variation occurs within −1 and 1 standard deviation, 95% between −2 and 2, and 99.7% between −3 and 3 standard deviations. That means that deep red or blue dates, those outside of −3 and 3 in the normalized calendar plot, are fairly rare occurrences.
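
Computing the normalized version is then a one-liner. Here’s a minimal sketch, assuming the pafg data frame used in the code below, plus a hypothetical mean_temp_sd column holding the standard deviation of daily mean temperature for each date:

pafg$normalized_anomaly <-
    (pafg$mean_temp - pafg$average_mean_temp) / pafg$mean_temp_sd  # mean_temp_sd is hypothetical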

Here are the normalized anomalies for the last twelve months:

The tricky part in generating either of these plots is getting the temperature data into the right format. The plots are faceted by month and year (or YYYY-MM in the twelve-month plot), so each record needs to have month and year. That part is easy. Each individual plot is a single calendar month, organized by day of the week along the x-axis and the inverse of week number along the y-axis (the first week in a month is at the top of the plot, the last at the bottom).

Here’s how to get the data formatted properly:

library(lubridate)
cal <- function(dt) {
    # Takes a date and returns c(weekrow, daycol), where weekrow
    # starts at 0 for the first week of the month and daycol starts
    # at 0 for Sunday
    year <- year(dt)
    month <- month(dt)
    day <- day(dt)
    # day of the week of the first day of the month (1 = Sunday)
    wday_first <- wday(ymd(paste(year, month, 1, sep = '-'), quiet = TRUE))
    offset <- 7 + (wday_first - 2)
    weekrow <- ((day + offset) %/% 7) - 1
    daycol <- (day + offset) %% 7

    c(weekrow, daycol)
}
weekrow <- function(dt) {
    cal(dt)[1]
}
daycol <- function(dt) {
    cal(dt)[2]
}
vweekrow <- function(dts) {
    sapply(dts, weekrow)
}
vdaycol <- function(dts) {
    sapply(dts, daycol)
}
# pafg holds the daily data: mean_temp is the observed daily mean and
# average_mean_temp is the climate normal mean for that date
pafg$temp_anomaly <- pafg$mean_temp - pafg$average_mean_temp
pafg$month <- month(pafg$dt, label = TRUE, abbr = TRUE)
pafg$year <- year(pafg$dt)
pafg$weekrow <- factor(vweekrow(pafg$dt),
   # levels reversed so the first week of the month plots at the top
   levels = c(5, 4, 3, 2, 1, 0),
   labels = c('6', '5', '4', '3', '2', '1'))
pafg$daycol <- factor(vdaycol(pafg$dt),
   labels = c('u', 'm', 't', 'w', 'r', 'f', 's'))
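
As a quick check of the layout functions: April 1st, 2013 fell on a Monday, so it should land in the top week row, one column to the right of Sunday:

cal(ymd('2013-04-01'))
# [1] 0 1

The first value is the week row (0 is the top of the calendar) and the second is the day column (0 is Sunday).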

And the plotting code:

library(ggplot2)
library(scales)
library(grid)
svg('temp_anomaly_heatmap.svg', width = 11, height = 10)
q <- ggplot(data = subset(pafg, year > max(pafg$year) - 11),
            aes(x = daycol, y = weekrow, fill = temp_anomaly)) +
    theme_bw() +
    theme(axis.text.x = element_blank(),
          axis.text.y = element_blank(),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          axis.ticks.x = element_blank(),
          axis.ticks.y = element_blank(),
          axis.title.x = element_blank(),
          axis.title.y = element_blank(),
          legend.position = "bottom",
          legend.key.width = unit(1, "in"),
          legend.margin = unit(0, "in")) +
    geom_tile(colour = "white") +
    facet_grid(year ~ month) +
    scale_fill_gradient2(name = "Temperature anomaly (°F)",
          low = 'blue', mid = 'lightyellow', high = 'red',
          breaks = pretty_breaks(n = 10)) +
    ggtitle("Difference between daily mean temperature\
             and 30-year average mean temperature")
print(q)
dev.off()
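
The facet_grid(year ~ month) call is what makes the panels read like a calendar: one row of panels per year and one column per month, with each panel laid out internally by the weekrow and daycol factors computed above.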

You can find the current versions of the temperature and normalized anomaly plots on my web site.

tags: R  temperature  weather 
wed, 27-mar-2013, 18:35

Earlier today our Monitor heater stopped working and left us without heat when it was −35°F outside. I drove home and swapped the broken heater with our spare, but the heat was off for several hours and the temperature in the house dropped into the 50s before I got the replacement running. While I waited for the house to warm up, I took a look at the heat loss data for the building.

To do this, I experimented with the “Python scientific computing stack”: the IPython shell (I used the notebook functionality to produce the majority of this blog post), Pandas for data wrangling, matplotlib for plotting, and NumPy in the background. Ordinarily I would have performed the entire analysis in R, but I’m much more comfortable in Python, and the IPython notebook is pretty compelling. What is lacking, in my opinion, is the solid graphics provided by the ggplot2 package in R.

First, I pulled the data from the database for the period the heater was off (and probably a little extra on either side):

import psycopg2
from pandas.io import sql
con = psycopg2.connect(host = 'localhost', database = 'arduino_wx')
temps = sql.read_frame("""
    SELECT obs_dt, downstairs,
        (lead(downstairs) over (order by obs_dt) - downstairs) /
            interval_to_seconds(lead(obs_dt) over (order by obs_dt) - obs_dt)
            * 3600 as downstairs_rate,
        upstairs,
        (lead(upstairs) over (order by obs_dt) - upstairs) /
            interval_to_seconds(lead(obs_dt) over (order by obs_dt) - obs_dt)
            * 3600 as upstairs_rate,
        outside
    FROM arduino
    WHERE obs_dt between '2013-03-27 07:00:00' and '2013-03-27 12:00:00'
    ORDER BY obs_dt;""", con, index_col = 'obs_dt')

The SQL window functions calculate the rate at which the temperature changes from one observation to the next, converted to degrees per hour (Δ°F/hour); interval_to_seconds() is a helper function in my database that converts the interval between observations into seconds.

Passing the index_col argument to the sql.read_frame() function is very important so that the Pandas data frame doesn’t wind up with an arbitrary numerical index. When plotting, the index column is typically used for the x-axis / independent variable.
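
If the frame had been read without it, the same effect can be had after the fact with Pandas’ set_index() method (something like temps = temps.set_index('obs_dt'), which returns a new frame keyed on the timestamp column).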

Next, calculate the difference between the indoor and outdoor temperatures, which is important in any heat loss calculations (the greater this difference, the greater the loss):

temps['downstairs_diff'] = temps['downstairs'] - temps['outside']
temps['upstairs_diff'] = temps['upstairs'] - temps['outside']

I took a quick look at the data, and the downstairs temperatures appear smoother, so I subset the data to contain only the downstairs (and outside) temperature records. (Despite the name temps_up below, it holds the downstairs series.)

temps_up = temps[['outside', 'downstairs', 'downstairs_diff', 'downstairs_rate']]
print(u"Minimum temperature loss (°f/hour) = {0}".format(
    temps_up['downstairs_rate'].min()))
temps_up.head(10)

Minimum temperature loss (°F/hour) = -3.7823079517

obs_dt                 outside   downstairs   downstairs_diff   downstairs_rate
2013-03-27 07:02:32     -33.09        65.60             98.70             0.897
2013-03-27 07:07:32     -33.19        65.68             98.87             0.661
2013-03-27 07:12:32     -33.26        65.73             98.99             0.239
2013-03-27 07:17:32     -33.52        65.75             99.28            -2.340
2013-03-27 07:22:32     -33.60        65.56             99.16            -3.782
2013-03-27 07:27:32     -33.61        65.24             98.85            -3.545
2013-03-27 07:32:31     -33.54        64.95             98.49            -2.930
2013-03-27 07:37:32     -33.58        64.70             98.28            -2.761
2013-03-27 07:42:32     -33.48        64.47             97.95            -3.603
2013-03-27 07:47:32     -33.28        64.17             97.46            -3.780

You can see from the first bit of data that when the heater first went off, the differential between inside and outside was almost 100 degrees, and the temperature was dropping at a rate of 3.8 degrees per hour. Starting at 65°F, we’d be below freezing in just under nine hours at this rate, but as the differential drops, the rate that the inside temperature drops will slow down. I'd guess the house would stay above freezing for more than twelve hours even with outside temperatures as cold as we had this morning.
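
A rough way to quantify that slowdown is Newton’s law of cooling, which assumes the rate of loss is proportional to the inside/outside differential. From the numbers above, the cooling constant is roughly k = 3.78 / 98.7 ≈ 0.038 per hour, so the time to fall from 65.6°F to freezing with the outside near −33.5°F works out to ln((65.6 + 33.5) / (32 + 33.5)) / 0.038 ≈ 11 hours. That’s a back-of-the-envelope figure that ignores solar gain, so the actual time would likely be longer.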

Here’s a plot of the data. The plot looks pretty reasonable with very little code:

import matplotlib.pyplot as plt
plt.figure()
temps_up.plot(subplots = True, figsize = (8.5, 11),
    title = u"Heat loss from our house at −35°F",
    style = ['bo-', 'ro-', 'ro-', 'ro-', 'go-', 'go-', 'go-'])
plt.legend()
# plt.subplots_adjust(hspace = 0.15)
plt.savefig('downstairs_loss.pdf')
plt.savefig('downstairs_loss.svg')

You’ll notice that even before I came home and replaced the heater, the temperature in the house had started to rise. This is almost certainly solar heating: it was a clear day, and we’re already getting more than twelve hours of sunlight.

The plot shows what looks like a relationship between the rate of change inside and the temperature differential between inside and outside, so we’ll test this hypothesis using linear regression.

First, get the data where the temperature in the house was dropping.

cooling = temps_up[temps_up['downstairs_rate'] < 0]

Now run the regression between rate of change and outside temperature:

import pandas as pd
results = pd.ols(y = cooling['downstairs_rate'], x = cooling.ix[:, 'outside'])
results
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         38
Number of Degrees of Freedom:   2

R-squared:         0.9214
Adj R-squared:     0.9192

Rmse:              0.2807

F-stat (1, 36):   421.7806, p-value:     0.0000

Degrees of Freedom: model 1, resid 36

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x     0.1397     0.0068      20.54     0.0000     0.1263     0.1530
     intercept     1.3330     0.1902       7.01     0.0000     0.9603     1.7057
---------------------------------End of Summary---------------------------------

You can see there’s a very strong positive relationship between the outside temperature and the rate that the inside temperature changes. As it warms outside, the drop in inside temperature slows.

The real relationship is more likely to be related to the differential between inside and outside. In this case, the relationship isn’t quite as strong. I suspect that the heat from the sun is confounding the analysis.

results = pd.ols(y = cooling['downstairs_rate'], x = cooling.ix[:, 'downstairs_diff'])
results
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         38
Number of Degrees of Freedom:   2

R-squared:         0.8964
Adj R-squared:     0.8935

Rmse:              0.3222

F-stat (1, 36):   311.5470, p-value:     0.0000

Degrees of Freedom: model 1, resid 36

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x    -0.1032     0.0058     -17.65     0.0000    -0.1146    -0.0917
     intercept     6.6537     0.5189      12.82     0.0000     5.6366     7.6707
---------------------------------End of Summary---------------------------------

con.close()
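
One way to read these coefficients: in a simple Newton’s law of cooling model, the slope would be the negative of the house’s cooling constant, so −0.103 per hour implies a time constant of roughly ten hours, and the positive intercept (about 6.7°F/hour of warming independent of the differential) is consistent with my suspicion that the sun was adding heat throughout the morning.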

I’m not sure how much information I really got out of this, but I am pleasantly surprised that the house held its heat as well as it did despite the very cold temperatures. It might be interesting to intentionally turn off the heater in the middle of winter and examine these relationships over a longer period and without the influence of the sun.

And I’ve enjoyed learning a new set of tools for data analysis. Thanks to my friend Ryan for recommending them.

tags: house  weather  Python  Pandas  IPython 
tue, 05-feb-2013, 18:19
House from the slough

A couple days ago I got an email from a Galoot who was hoping to come north to see the aurora and wondered if March was a good time to visit Fairbanks. March and September are two of my favorite months, but I wanted to check whether my sense of March as a sunny month is because it really is sunny, or because March is when winter begins to turn to spring in Fairbanks and everything just seems brighter and sunnier, with longer days and white snow on the ground.

I found three sources of data for “cloudiness.” I’ve been parsing the Fairbanks Airport daily climate summary since 2002, and it has a value in it called Average Sky Cover which ranges from 0.0 (completely clear) to 1.0 (completely cloudy). I’ll call this data “pafa.”

The second source is the Global Historical Climatology Network-Daily (GHCN-Daily) data for the Fairbanks Airport station. There’s a variable in there named ACMH, which is described as cloudiness, midnight to midnight (percentage). For the Airport station, this value appears in the database from 1965 through 1997. One reassuring thing about this parameter is that it specifically covers midnight to midnight, so it would include cloudiness when it was dark outside (when the aurora would be visible, if present). This data set is named “ghcnd.”

The final source is modelled data from the North American Regional Reanalysis. This data set includes TCDC, or total cloud cover (percentage), and is available in three-hour increments over a grid covering North America. I chose the nearest grid point to the Fairbanks Airport and retrieved the daily mean of total cloud cover for the period of the database I have downloaded (1979–2012). In the plots that follow, this is named “narr.”

After reading the data and merging the three data sets together, I generate monthly means of cloud cover (scaled to percentages from 0 to 100) in each of the data sets, in R:

library(plyr)
library(lubridate)  # for month() and year()
cloud_cover <- merge(pafa, ghcnd, by = 'date', all = TRUE)
cloud_cover <- merge(cloud_cover, narr, by = 'date', all = TRUE)
cloud_cover$month <- month(cloud_cover$date)

by_month_mean <- ddply(
    subset(cloud_cover,
           select = c('month', 'pafa', 'ghcnd', 'narr')),
   .(month),
   summarise,
   pafa = mean(pafa, na.rm = TRUE),
   ghcnd = mean(ghcnd, na.rm = TRUE),
   narr = mean(narr, na.rm = TRUE))
by_month_mean$mon <- factor(by_month_mean$month,
                            labels = c('jan', 'feb', 'mar',
                                       'apr', 'may', 'jun',
                                       'jul', 'aug', 'sep',
                                       'oct', 'nov', 'dec'))

In order to plot it, I generate text labels for the year range of each data set and melt the data so it can be faceted:

library(lubridate)
library(reshape2)
text_labels <- rbind(
    data.frame(variable = 'pafa',
        str = paste(min(year(pafa$date)), '-', max(year(pafa$date)))),
    data.frame(variable = 'ghcnd',
        str = paste(min(year(ghcnd$date)), '-', max(year(ghcnd$date)))),
    data.frame(variable = 'narr',
        str = paste(min(year(narr$date)), '-', max(year(narr$date)))))

mean_melted <- melt(by_month_mean,
                    id.vars = 'mon',
                    measure.vars = c('pafa', 'ghcnd', 'narr'))

Finally, the plotting:

library(ggplot2)
q <- ggplot(data = mean_melted, aes(x = mon, y = value))
q +
    theme_bw() +
    geom_bar(stat = 'identity', colour = "darkred", fill = "darkorange") +
    facet_wrap(~ variable, ncol = 1) +
    scale_x_discrete(name = "Month") +
    scale_y_continuous(name = "Mean cloud cover") +
    ggtitle('Cloud cover data for Fairbanks Airport Station') +
    geom_text(data = text_labels, aes(x = 'feb', y = 70, label = str), size = 4) +
    geom_text(aes(label = round(value, digits = 1)), vjust = 1.5, size = 3)

The good news for the guy coming to see the northern lights is that March is indeed the least cloudy month in Fairbanks, and all three data sources show similar patterns. The exception is the NARR dataset, which has September and October as the cloudiest months; anyone who has lived in Fairbanks knows that August is the rainiest (and probably cloudiest) month, and the PAFA and GHCND data show a late-summer pattern closer to what I recall.

Another way to slice the data is to calculate the percentage of days in each month with less than 20% cloud cover, a measure of the clearest days. This is a pretty easy calculation:

by_month_less_than_20 <- ddply(
    subset(cloud_cover,
           select = c('month', 'pafa', 'ghcnd', 'narr')),
    .(month),
    summarise,
    pafa = sum(pafa < 20, na.rm = TRUE) / sum(!is.na(pafa)) * 100,
    ghcnd = sum(ghcnd < 20, na.rm = TRUE) / sum(!is.na(ghcnd)) * 100,
    narr = sum(narr < 20, na.rm = TRUE) / sum(!is.na(narr)) * 100)

And the results:

We see the same pattern as in the mean cloudiness plot. March is the month with the greatest number of days with less than 20% cloud cover. Depending on the data set, between 17 and 24 percent of March days are quite clear. In contrast, the summer months rarely see days with no cloud cover. In June and July, the days are long and convection often builds large clouds in the late afternoon, and by August, the rain has started. Just like in the previous plot, NARR has September as the month with the fewest clear days, which doesn’t match my experience.

tags: Fairbanks  R  weather  cloud cover 
tue, 15-jan-2013, 08:48

Over the past couple days in Fairbanks, there has been a strong flow of warm, moist air from the Pacific, which culminated in a record (for January 14th) 0.22 inches of precipitation, most of which fell as rain. Nasty. Similar events happened in 2011 and in November 2010, which everyone will remember for the inch or more of ice that glazed the roads for the rest of that winter.

The question people always ask after a series of events like this is whether this is a new weather pattern (let’s hope not!) and whether it may be the result of global climate change (which I probably can’t answer).

To look at this, I examined the historical record for Fairbanks, searching for dates that met the following criteria:

  • At least six inches of snow on the ground
  • During the winter months (October through February)
  • Daily high temperature above freezing
  • Precipitation falling as rain

The last criterion isn’t part of the historical record, but we can guess the amount of rain by comparing the amount of snow (measured each day on a snow board that is cleared after measurement) with the amount of liquid precipitation gathered in a tube and melted, if necessary. In my experience, the ratio of snow to liquid precipitation is almost always greater than 10 to 1 (meaning that 10 inches of snow melts down to less than an inch of liquid), so I’m looking for dates where the liquid precipitation is more than one tenth of the snowfall for that date. I’m also estimating the amount of rain by subtracting one tenth of the snowfall from the precipitation total.
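
As a worked example, yesterday’s event produced 0.22 inches of precipitation and 0.4 inches of snow, so the rain estimate is 0.22 − (0.4 / 10) = 0.18 inches, which is the value that appears for January 2013 in the table below.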

Here’s the query:

SELECT dte, tmin_f, tmax_f, prcp_in, snow_in, rain_in,
       row_number() OVER (ORDER BY rain_in desc) AS rank
FROM (
    -- aggregate the qualifying days into monthly events
    SELECT to_char(dte, 'YYYY-MM') AS dte, round(avg(tmin_f), 1) AS tmin_f,
           round(avg(tmax_f), 1) AS tmax_f, sum(prcp_in) AS prcp_in,
           sum(snow_in) AS snow_in, sum(rain_in) AS rain_in
    FROM (
        -- winter days with snow on the ground, above-freezing highs, and
        -- more liquid precipitation than the snowfall alone can explain
        SELECT dte, tmin_f, tmax_f, prcp_in, snow_in, snwd_in,
               round(prcp_in - (snow_in / 10.0), 2) AS rain_in
        FROM get_ghcnd('Fairbanks Intl Ap')
        WHERE extract(month from dte) IN (10, 11, 12, 1, 2)
            AND snwd_in > 6
            AND tmax_f > 32
            AND prcp_in * 10 > snow_in
        ORDER BY dte
    ) AS foo
    GROUP BY to_char(dte, 'YYYY-MM')
) AS bar
ORDER BY dte;
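
The row_number() window function in the outer query assigns each event a rank by descending rain amount, while the final ORDER BY dte keeps the output in chronological order; that’s how the table below can show both the date sequence and each event’s rank.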

And the results, ordered by the year and month of the event. None of the winter rain events stretched across a month boundary, so it was convenient to aggregate them this way (although 1937 is problematic as I mention below).

Winter rains, Fairbanks Airport station
Date      Min Temp (°F)   Max Temp (°F)   Precip (in)   Snow (in)   “Rain” (in)   Rank
1920-02            27.7            38.4          0.26         0.9          0.17     11
1931-01            12.0            33.1          0.13         0.0          0.13     12
1932-02             7.0            33.1          0.77         7.1          0.06     16
1933-11            25.0            41.0          0.11         0.0          0.11     14
1935-11            30.4            37.2          1.51         3.2          1.19      2
1936-11            30.0            37.0          0.44         0.0          0.44      5
1937-01            24.3            36.2          2.83        16.1          1.22      1
1941-02            28.0            42.1          0.02         0.0          0.02     23
1941-11            -2.9            33.1          0.20         0.9          0.11     15
1943-02            30.5            41.0          0.12         0.0          0.12     13
1944-02            21.5            36.5          0.65         2.9          0.36      7
1948-01             7.0            33.1          0.01         0.0          0.01     26
1957-01            30.9            35.1          0.03         0.0          0.03     22
1961-01            17.1            33.1          0.04         0.0          0.04     20
1963-01            22.5            35.1          0.56         0.7          0.49      4
1967-12            20.0            33.1          0.43         0.5          0.38      6
1970-02            10.9            43.0          0.05         0.0          0.05     17
1970-10            28.0            44.1          0.04         0.0          0.04     19
1970-12             5.0            36.0          0.43         2.4          0.19      9
1986-02            10.9            37.9          0.03         0.0          0.03     21
1989-02            24.1            37.0          0.40         3.8          0.02     24
2003-02            27.0            35.0          0.29         0.0          0.29      8
2006-02            17.1            42.1          0.06         0.1          0.05     18
2010-11            26.1            34.3          0.95         0.1          0.94      3
2011-12            26.1            46.9          0.03         0.2          0.01     25
2013-01            24.0            37.0          0.22         0.4          0.18     10

The 2010 event had the third highest rainfall in the historical record; yesterday’s rain was the tenth highest. The January 1937 event is actually two events, one on the 10th and 11th and one on the 20th and 21st. If we split them up into two events, the 2010 rainfall amount is the second largest, and the two January 1937 rainfalls come in third and tied for fifth, with November 1935 holding the record.

Grouping the events into decades, we get the following:

Winter rains by decade
Decade    Rain events
1920s     1
1930s     6
1940s     5
1950s     1
1960s     3
1970s     3
1980s     2
1990s     0
2000s     2
2010s     3

Here’s a visualization of the same data:

I don’t think there’s evidence that what we’ve seen in the last few years is exceptional in the historical record, but it does seem like the frequency of winter rainfall comes in cycles, with a peak in the 30s and 40s and something of a decline in the 80s and 90s. That we’ve already had three events this decade, in just over two years, seems like a bad sign to me. I wonder if there are larger-scale climatological phenomena that could help explain the pattern shown here?

tags: SQL  weather  winter  rain 
