sun, 22-feb-2015, 11:33

Last night we got a quarter of an inch of rain at our house, making roads “impassable” according to the Fairbanks Police Department, and turning the dog yard, deck, and driveway into an icy mess. There are videos floating around Facebook showing Fairbanks residents playing hockey in the street in front of their houses, and a reported seven vehicles off the road on Ballaine Hill.

Here’s a video of a group of Goldstream Valley musicians ice skating on Goldstream Road: http://youtu.be/_afC7UF0NXk

Let’s check out the weather database and take a look at how often Fairbanks experiences this type of event, and when they usually happen. I’m going to skip the parts of the code showing how we get pivoted daily data from the database, but they’re in this post.

Starting with the pivoted data, we want to look for dates from November through March with more than a tenth of an inch of precipitation, less than two tenths of an inch of snowfall, and a daily high temperature above 20°F. Then we group by winter year and month, and aggregate the rain events into a single event. These occurrences are rare enough that this aggregation shouldn’t combine events from different parts of the month.

Here’s the R code:

winter_rain <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92)),
               wdoy=yday(dte + days(61)),
               month=month(dte),
               SNOW=ifelse(is.na(SNOW), 0, SNOW),
               TMAX=TMAX*9/5+32,
               TAVG=TAVG*9/5+32,
               TMIN=TMIN*9/5+32,
               PRCP=PRCP/25.4,
               SNOW=SNOW/25.4) %>%
      filter(station_name == 'FAIRBANKS INTL AP',
               winter_year < 2014,
               month %in% c(11, 12, 1, 2, 3),
               TMAX > 20,
               PRCP > 0.1,
               SNOW < 0.2) %>%
      group_by(winter_year, month) %>%
      summarize(date=min(dte), tmax=mean(TMAX),
                prcp=sum(PRCP), days=n()) %>%
      ungroup() %>%
      mutate(month=month(date)) %>%
      select(date, month, tmax, prcp, days) %>%
      arrange(date)

And the results:

List of winter rain events, Fairbanks Airport
Date Month Max temp (°F) Rain (inches) Days
1921-03-07 3 44.06 0.338 1
1923-02-06 2 33.98 0.252 1
1926-01-12 1 35.96 0.142 1
1928-03-02 3 39.02 0.110 1
1931-01-19 1 33.08 0.130 1
1933-11-03 11 41.00 0.110 1
1935-11-02 11 38.30 0.752 3
1936-11-24 11 37.04 0.441 1
1937-01-10 1 32.96 1.362 3
1948-11-10 11 48.02 0.181 1
1963-01-19 1 35.06 0.441 1
1965-03-29 3 35.96 0.118 1
1979-11-11 11 35.96 0.201 1
2003-02-08 2 34.97 0.291 2
2003-11-02 11 34.97 0.268 2
2010-11-22 11 34.34 0.949 3

This year’s event doesn’t compare to 2010, when almost an inch of rain fell over the course of three days in November, but it does look like it comes at an unusual time of the year.

Here’s the counts and frequency of winter rainfall events by month:

by_month <-
   winter_rain %>%
      group_by(month) %>%
      summarize(n=n()) %>%
      mutate(freq=n/sum(n)*100)
Winter rain events by month
Month n Freq
1 4 25.00
2 2 12.50
3 3 18.75
11 7 43.75

There haven’t been any rain events in December, which is a little surprising; after that, February rains are the least common.

I looked at this two years ago (Winter freezing rain) using slightly different criteria. At the bottom of that post I looked at the frequency of rain events over time and concluded that they seem to come in cycles, but that the three events in this decade were a bad sign. Now we can add another rain event to the total for the 2010s.

tags: R  weather  winter  rain  dplyr  climate 
sun, 08-feb-2015, 14:13

Whenever we’re in the middle of a cold snap, as we are right now, I’m tempted to see how the current snap compares to those in the past. The one we’re in right now isn’t all that bad: sixteen days in a row where the minimum temperature is colder than −20°F. In some years, such a threshold wouldn’t even qualify as the definition of a “cold snap,” but right now, it feels like one.

Getting the length of a run of consecutive days out of a database isn’t simple. What we’ll do is get a list of all the days where the minimum daily temperature was warmer than −20°F. Then we go through each record and count the number of days between the current row and the next one. Most of these will be one, but when the number of days is greater than one, that means there’s one or more observations in between the “warm” days where the minimum temperature was colder than −20°F (or there was missing data).

For example, given this set of dates and temperatures from earlier this year:

date tmin_f
2015‑01‑02 −15
2015‑01‑03 −20
2015‑01‑04 −26
2015‑01‑05 −30
2015‑01‑06 −30
2015‑01‑07 −26
2015‑01‑08 −17

Once we select for rows where the temperature is above −20°F we get this:

date tmin_f
2015‑01‑02 −15
2015‑01‑08 −17

Now we can grab the start and end of the period (January 2nd + one day and January 8th - one day) and get the length of the cold snap. You can see why missing data would be a problem, since it would create a gap that isn’t necessarily due to cold temperatures.

I couldn't figure out how to get the time periods and check them for validity all in one step, so I wrote a simple function that counts the days with valid data between two dates, then used this function in the real query. Only periods with non-null data on each day during the cold snap were included.

CREATE FUNCTION valid_n(date, date)
RETURNS bigint AS
  'SELECT count(*)
   FROM ghcnd_pivot
   WHERE station_name = ''FAIRBANKS INTL AP''
      AND dte BETWEEN $1 AND $2
      AND tmin_c IS NOT NULL'
LANGUAGE SQL
RETURNS NULL ON NULL INPUT;

Here we go:

SELECT rank() OVER (ORDER BY days DESC) AS rank,
       start, "end", days FROM (
   SELECT start + interval '1 day' AS start,
         "end" - interval '1 day' AS end,
         interv - 1 AS days,
         valid_n(date(start + interval '1 day'),
                  date("end" - interval '1 day')) as valid_n
   FROM (
      SELECT dte AS start,
            lead(dte) OVER (ORDER BY dte) AS end,
            lead(dte) OVER (ORDER BY dte) - dte AS interv
      FROM (
         SELECT dte
         FROM ghcnd_pivot
         WHERE station_name = 'FAIRBANKS INTL AP'
            AND tmin_c > f_to_c(-20)
      ) AS foo
   ) AS bar
   WHERE interv >= 17
) AS f
WHERE days = valid_n
ORDER BY days DESC;

And the top 10:

Top ten longest cold snaps (−20°F or colder minimum temp)
rank start end days
1 1917‑11‑26 1918‑01‑01 37
2 1909‑01‑13 1909‑02‑12 31
3 1948‑11‑17 1948‑12‑13 27
4 1925‑01‑16 1925‑02‑10 26
4 1947‑01‑12 1947‑02‑06 26
4 1943‑01‑02 1943‑01‑27 26
4 1968‑12‑26 1969‑01‑20 26
4 1979‑02‑01 1979‑02‑26 26
9 1980‑12‑06 1980‑12‑30 25
9 1930‑01‑28 1930‑02‑21 25

There have been seven cold snaps that lasted 16 days (including the one we’re currently in), tied for 45th place.

Keep in mind that defining days where the daily minimum is −20°F or colder is a pretty generous definition of a cold snap. If we require the minimum temperatures be below −40° the lengths are considerably shorter:

Top ten longest cold snaps (−40° or colder minimum temp)
rank start end days
1 1964‑12‑25 1965‑01‑11 18
2 1973‑01‑12 1973‑01‑26 15
2 1961‑12‑16 1961‑12‑30 15
2 2008‑12‑28 2009‑01‑11 15
5 1950‑02‑04 1950‑02‑17 14
5 1989‑01‑18 1989‑01‑31 14
5 1979‑02‑03 1979‑02‑16 14
5 1947‑01‑23 1947‑02‑05 14
9 1909‑01‑14 1909‑01‑25 12
9 1942‑12‑15 1942‑12‑26 12
9 1932‑02‑18 1932‑02‑29 12
9 1935‑12‑02 1935‑12‑13 12
9 1951‑01‑14 1951‑01‑25 12

I think it’s also interesting that only three of the top ten cold snaps defined at −20°F (those starting in January 1909, January 1947, and February 1979) also appear in the list using the −40° threshold.

sun, 25-jan-2015, 08:26

Following up on yesterday’s post about minimum temperatures, I was thinking that a cumulative measure of cold temperatures would probably be a better measure of how cold a winter is. We all remember the extremely cold days each winter when the propane gells or the car won’t start, but it’s the long periods of deep cold that really take their toll on buildings, equipment, and people in the Interior.

One way of measuring this is to find all the days in a winter year when the average temperature is below freezing and sum all the temperatures below freezing for that winter year. For example, if the temperature is 50°F, that’s not below freezing so it doesn’t count. If the temperature is −40°, that’s 72 freezing degrees (Fahrenheit). Do this for each day in a year and add up all the values.
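Here’s that arithmetic as a tiny R function, just as a sanity check. The name fdd_one_day is purely illustrative and isn’t used in the code below; it takes the daily average temperature in °C, which is what TAVG holds in fai_pivot, and that’s why the pipeline below multiplies by 9/5.

fdd_one_day <- function(tavg_c) {
   # degrees Fahrenheit below freezing contributed by one day
   ifelse(tavg_c < 0, -tavg_c * 9/5, 0)
}

fdd_one_day(-40)   # -40°C (also -40°F) contributes 72 freezing degrees
fdd_one_day(10)    # 10°C (50°F) is above freezing, so it contributes 0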

Here’s the code to make the plot below (see my previous post for how we got fai_pivot).

fai_winter_year_freezing_degree_days <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92)),
               fdd=ifelse(TAVG < 0, -1*TAVG*9/5, 0)) %>%
      filter(winter_year < 2014) %>%
      group_by(station_name, winter_year) %>%
      select(station_name, winter_year, fdd) %>%
      summarize(fdd=sum(fdd, na.rm=TRUE), n=n()) %>%
      filter(n>350) %>%
      select(station_name, winter_year, fdd) %>%
      spread(station_name, fdd)

fdd_gathered <-
   fai_winter_year_freezing_degree_days %>%
      gather(station_name, fdd, -winter_year) %>%
      arrange(winter_year)
q <-
   fdd_gathered %>%
      ggplot(aes(x=winter_year, y=fdd, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(data=subset(fdd_gathered, winter_year<1975),
                        method="lm", se=FALSE) +
            geom_smooth(data=subset(fdd_gathered, winter_year>=1975),
                        method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year",
                               breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Freezing degree days (degrees F)",
                               breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                              labels=c("College Observatory",
                                       "Fairbanks Airport",
                                       "University Exp. Station"),
                              values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            theme(legend.position = c(0.875, 0.120)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

rescale <- 0.65
svg('freezing_degree_days.svg', height=10*rescale, width=16*rescale)
print(q)
dev.off()

And the plot.

//media.swingleydev.com/img/blog/2015/01/freezing_degree_days.svg

Cumulative freezing degree days by winter year

You’ll notice I’ve split the trend lines at 1975. When I ran the regressions for the entire period, none of them were statistically significant, but looking at the plot, it seems like something happens in 1975 where the cumulative freezing degree days suddenly drop. Since then, they’ve been increasing at a faster, and statistically significant, rate.
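If you want to check those regressions yourself, a sketch like the following should work, assuming the fdd_gathered data frame from above. The lm() calls fit the same station-by-station lines that geom_smooth(method="lm") draws for each period; fdd_trends and period are names I’ve made up for this example.

fdd_trends <-
   fdd_gathered %>%
      mutate(period=ifelse(winter_year < 1975, "pre-1975", "post-1975")) %>%
      group_by(station_name, period) %>%
      do({
         # one linear model per station and period
         model <- lm(fdd ~ winter_year, data=.)
         data.frame(slope=coef(model)["winter_year"],
                    p_value=summary(model)$coefficients["winter_year", 4])
      })

fdd_trends

Positive slopes here mean more freezing degree days, in other words colder winters.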

This is odd, and it makes me wonder if I've made a mistake in the calculations because what this says is that, at least since 1975, the winters are getting colder as measured by the total number of degrees below freezing each winter. My previous post (and studies of climate in general) show that the climate is warming, not cooling.

One possible bias with cumulative calculations like this is that missing data becomes more important, but I looked at the same relationships when I only included years with at least 364 days of valid data (only one or two missing days), and the same pattern exists.
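That check is a minor variation on the pipeline above; something like this sketch, where n_valid (my name, not something from the earlier code) counts the days with a non-missing average temperature:

fdd_strict <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92)),
             fdd=ifelse(TAVG < 0, -1*TAVG*9/5, 0)) %>%
      filter(winter_year < 2014) %>%
      group_by(station_name, winter_year) %>%
      summarize(fdd=sum(fdd, na.rm=TRUE),
                n_valid=sum(!is.na(TAVG))) %>%
      filter(n_valid >= 364) %>%
      select(station_name, winter_year, fdd)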

Curious. When combined, this analysis and yesterday's suggest that winters in Fairbanks are getting colder overall, but that the minimum temperature in any year is likely to be warmer than in the past.

tags: R  weather  dplyr  climate  tidyr 
sat, 24-jan-2015, 12:41

The Weather Service is calling for our first −40° temperatures of the winter, which is pretty remarkable given how late in the winter it is. The 2014/2015 winter is turning out to be one of the warmest on record, and until this upcoming cold snap, we’ve only had a few days below normal, and mostly it’s been significantly warmer. You can see this on my Normalized temperature anomaly plot, where most of the last four months has been reddish.

I thought I’d take a look at the minimum winter temperatures for the three longest running Fairbanks weather stations to see what patterns emerge. This will be a good opportunity to further experiment with the dplyr and tidyr R packages I’m learning.

The data set is the Global Historical Climatology Network - Daily (GHCND) data from the National Climatic Data Center (NCDC). The data, at least as I’ve been collecting it, has been fully normalized, which is another way of saying that it’s stored in a way that makes database operations efficient, but not necessarily the way people want to look at it.

There are three main tables: ghcnd_stations containing data about each station, ghcnd_variables containing information about the variables in the data, and ghcnd_obs which contains the observations. We need ghcnd_stations in order to find the stations we’re interested in, by name or location, for example. And we need ghcnd_variables to convert the values in the observation table to the proper units. The observation table looks something like this:

ghcnd_obs
station_id dte variable raw_value qual_flag
USW00026411 2014-12-25 TMIN -205  
USW00026411 2014-12-25 TMAX -77  
USW00026411 2014-12-25 PRCP 15  
USW00026411 2014-12-25 SNOW 20  
USW00026411 2014-12-25 SNWD 230  

There are a few problems with using this table directly. First, the station_id column doesn’t tell us anything about the station (name, location, etc.) without joining it to the stations table. Second, we need to use the variables table to convert the raw values listed in the table to their actual values. For example, temperatures are stored as degrees Celsius × 10, so we need to divide the raw value by ten to get actual temperatures. Finally, to get the data so that we have one row per date, with columns for the variables we’re interested in, we have to “pivot” the data (to use Excel terminology).

Here’s how we get all the data using R.

Load the libraries we will need:

library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(lubridate)
library(knitr)

Connect to the database and get the tables we need, choosing only the stations we want from the stations table. In the filter statement you can see we’re using a PostgreSQL-specific regular expression operator (~, written as %~% so dplyr passes it through to the database) to do the filtering. In other databases we’d probably use %in% and include the station names as a list.

noaa_db <- src_postgres(host="localhost", user="cswingley", port=5434, dbname="noaa")

# Construct database table objects for the data
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

# Filter stations to just the long term Fairbanks stations:
fai_stations <-
   tbl(noaa_db, "ghcnd_stations") %>%
   filter(station_name %~% "(FAIRBANKS INT|UNIVERSITY EXP|COLLEGE OBSY)")

Here’s where we grab the data. We are using the magrittr package’s pipe operator (%>%) to chain operations together, making it really easy to follow exactly how we’re manipulating the data along the way.

# Get the raw data
fai_raw <-
   ghcnd_obs %>%
   inner_join(fai_stations, by="station_id") %>%
   inner_join(ghcnd_vars, by="variable") %>%
   mutate(value=raw_value*raw_multiplier) %>%
   filter(qual_flag=='') %>%
   select(station_name, dte, variable, value) %>%
   collect()

# Save it
save(fai_raw, file="fai_raw.rdata", compress="xz")

In order, we start with the complete observation table (which contains 29 million rows at this moment), then we join it with our filtered stations using inner_join(fai_stations, by="station_id"). Now we’re down to 723 thousand rows of data. We join it with the variables table, then create a new column called value that is the raw value from the observation table multiplied by the multiplier from the variable table. We remove any observation that doesn’t have an empty string for the quality flag (a value in this field indicates there’s something wrong with the data). Finally, we reduce the number of columns we’re keeping to just the station name, date, variable name, and the actual value.

We then use collect() to actually run all these operations and collect the results into an R object. One of the neat things about database operations using dplyr is that the SQL isn’t actually performed until it is actually necessary, which really speeds up the testing phase of the analysis. You can play around with joining, filtering and transforming the data using operations that are fast until you have it just right, then collect() to finalize the steps.
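If you’re curious what SQL dplyr will eventually run, you can inspect it before calling collect(). Depending on your version of dplyr, show_query() or explain() will do it (explain() also includes PostgreSQL’s query plan). A minimal sketch, using a query object that hasn’t been collected yet (fai_query is just a name for this example):

fai_query <-
   ghcnd_obs %>%
   inner_join(fai_stations, by="station_id") %>%
   inner_join(ghcnd_vars, by="variable") %>%
   mutate(value=raw_value*raw_multiplier) %>%
   filter(qual_flag=='') %>%
   select(station_name, dte, variable, value)

show_query(fai_query)   # the SQL dplyr generates
explain(fai_query)      # the SQL plus the database's EXPLAIN output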

At this stage, the data is still in its normalized form. We’ve fixed the station name and the values in the data are now what was observed, but we still need to pivot the data to make it useful.

We’ll use the tidyr spread() function to make the value that appears in the variable column (TMIN, TMAX, etc.) appear as columns in the output, and put the data in the value column into the cells in each column and row. We’re also calculating an average daily temperature from the minimum and maximum temperatures and selecting just the columns we want.

# pivot, calculate average temp, include useful vars
fai_pivot <-
   fai_raw %>%
   spread(variable, value) %>%
   transform(TAVG=(TMIN+TMAX)/2.0) %>%
   select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
         WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)

Now we’ve got a table with rows for each station name and date, and columns with all the observed variables we might be interested in.

Time for some analysis. Let’s get the minimum temperatures by year and station. When looking at winter temperatures, it makes more sense to group by “winter year” rather than the actual year. In our case, we’re subtracting 92 days from the date and getting the year. This makes the winter year start in April instead of January and means that the 2014/2015 winter has a winter year of 2014.
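A few quick examples of how that offset works, using lubridate (the dates are just for illustration):

# Both halves of the 2014/2015 winter get a winter year of 2014:
year(ymd("2014-12-25") - days(92))   # 2014
year(ymd("2015-01-15") - days(92))   # 2014
# A date in mid-April falls into the next winter year:
year(ymd("2015-04-15") - days(92))   # 2015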

# Find coldest temperatures by winter year, as a nice table
fai_winter_year_minimum <-
   fai_pivot %>%
      mutate(winter_year=year(dte - days(92))) %>%
      filter(winter_year < 2014) %>%
      group_by(station_name, winter_year) %>%
      select(station_name, winter_year, TMIN) %>%
      summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
      filter(n>350) %>%
      select(station_name, winter_year, tmin) %>%
      spread(station_name, tmin)

In order, we’re taking the pivoted data (fai_pivot), adding a column for winter year (mutate), removing the data from the current year since the winter isn’t over (filter), grouping by station and winter year (group_by), reducing the columns down to just minimum temperature (select), summarizing by minimum temperature after converting to Fahrenheit and the number of days with valid data (summarize), keeping only years with 350 or more days of data (filter), and finally grabbing and formatting just the columns we want (select, spread).

Here’s the last 20 years and how we get a nice table of them.

last_twenty <-
   fai_winter_year_minimum %>%
      filter(winter_year > 1993)

# Write to an RST table
sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Minimum temperatures, last 20 years
Winter Year College Obsy Fairbanks Airport University Exp Stn
1994 -43.96 -47.92 -47.92
1995 -45.04 -45.04 -47.92
1996 -50.98 -50.98 -54.04
1997 -43.96 -47.92 -47.92
1998 -52.06 -54.94 -54.04
1999 -50.08 -52.96 -50.98
2000 -27.94 -36.04 -27.04
2001 -40.00 -43.06 -36.04
2002 -34.96 -38.92 -34.06
2003 -45.94 -45.94 NA
2004 NA -47.02 -49.00
2005 -47.92 -50.98 -49.00
2006 NA -43.96 -41.98
2007 -38.92 -47.92 -45.94
2008 -47.02 -47.02 -49.00
2009 -32.98 -41.08 -41.08
2010 -36.94 -43.96 -38.02
2011 -47.92 -50.98 -52.06
2012 -43.96 -47.92 -45.04
2013 -36.94 -40.90 NA

To plot it, we need to re-normalize it so that each row in the data has winter_year, station_name, and tmin in it.

Here’s the plotting code, including the commands to re-normalize.

q <-
   fai_winter_year_minimum %>%
      gather(station_name, tmin, -winter_year) %>%
      arrange(winter_year) %>%
      ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year",
                               breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Minimum temperature (degrees F)",
                               breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                              labels=c("College Observatory",
                                       "Fairbanks Airport",
                                       "University Exp. Station"),
                              values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            theme(legend.position = c(0.875, 0.120)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

The lines are the linear regression lines between winter year and minimum temperature. You can see that the trend is for increasing minimum temperatures. Each of these lines is statistically significant (both the coefficients and the overall model), but they only explain about 7% of the variation in temperatures. Given the spread of the points, that’s not surprising. The data shows that the lowest winter temperature at the Fairbanks airport is rising by 0.062 degrees each year.
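That 0.062 degrees per year figure is the slope of the regression line for the airport station. Here’s a sketch of how you could pull the slope, R², and p-value out for each station; it re-creates the gathered data as tmin_gathered (my name) since the plotting code above pipes it straight into ggplot().

tmin_gathered <-
   fai_winter_year_minimum %>%
      gather(station_name, tmin, -winter_year)

tmin_trends <-
   tmin_gathered %>%
      group_by(station_name) %>%
      do({
         # lm() drops the NA years for stations with missing data
         model <- lm(tmin ~ winter_year, data=.)
         data.frame(slope=coef(model)["winter_year"],
                    r_squared=summary(model)$r.squared,
                    p_value=summary(model)$coefficients["winter_year", 4])
      })

tmin_trends   # the airport slope should match the 0.062°F/year mentioned above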

tags: R  weather  dplyr  climate  tidyr 
sat, 08-nov-2014, 12:50

Following up on my previous post, I tried the regression approach for predicting future snow depth from current values. As you recall, I produced a plot that showed how much snow we’ve had on the ground on each date at the Fairbanks Airport between 1917 and 2013. These boxplots gave us an idea of what a normal snow depth looks like on each date, but couldn’t really tell us much about what we might expect for snow depth for the rest of the winter.

Regression

I ran a linear regression analysis looking at how snow depth on November 8th relates to snow depth on November 27th and December 25th of the same year. Here’s the SQL:

SELECT * FROM (
    SELECT extract(year from dte) AS year,
        max(CASE WHEN to_char(dte, 'mm-dd') = '11-08'
                 THEN round(snwd_mm/25.4, 1)
                 ELSE NULL END) AS nov_8,
        max(CASE WHEN to_char(dte, 'mm-dd') = '11-27'
                 THEN round(snwd_mm/25.4, 1)
                 ELSE NULL END) AS nov_27,
        max(CASE WHEN to_char(dte, 'mm-dd') = '12-25'
                 THEN round(snwd_mm/25.4, 1)
                 ELSE NULL END) AS dec_25
    FROM ghcnd_pivot
    WHERE station_name = 'FAIRBANKS INTL AP'
        AND snwd_mm IS NOT NULL
    GROUP BY extract(year from dte)
    ORDER BY year
) AS sub
WHERE nov_8 IS NOT NULL
    AND nov_27 IS NOT NULL
    AND dec_25 IS NOT NULL;

I’m grouping on year, then grabbing the snow depth for the three dates of interest. I would have liked to include dates in January and February in order to see how the relationship weakens as the winter progresses, but that’s more complicated because those dates fall in the following calendar year, so the year-based grouping I used in the query above wouldn’t pair them with the right November.

One note on this analysis: linear regression has a bunch of assumptions that need to be met before considering the analysis to be valid. One of these assumptions is that observations are independent from one another, which is problematic in this case because snow depth is a cumulative statistic; the depth tomorrow is necessarily related to the depth of the snow today (snow depth tomorrow = snow depth today + snowfall). Whether it’s necessarily related to the depth of the snow a month from now is less certain, and I’m making the possibly dubious assumption that autocorrelation disappears when the time interval between observations is longer than a few weeks.

Results

Here are the results comparing the snow depth on November 8th to November 27th:

> reg <- lm(data=results, nov_27 ~ nov_8)
> summary(reg)

Call:
lm(formula = nov_27 ~ nov_8, data = results)

Residuals:
    Min      1Q  Median      3Q     Max
-8.7132 -3.0490 -0.6063  1.7258 23.8403

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.1635     0.9707   3.259   0.0016 **
nov_8         1.1107     0.1420   7.820 1.15e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.775 on 87 degrees of freedom
Multiple R-squared:  0.4128,    Adjusted R-squared:  0.406
F-statistic: 61.16 on 1 and 87 DF,  p-value: 1.146e-11

And between November 8th and December 25th:

> reg <- lm(data=results, dec_25 ~ nov_8)
> summary(reg)

Call:
lm(formula = dec_25 ~ nov_8, data = results)

Residuals:
    Min      1Q  Median      3Q     Max
-10.209  -3.195  -1.195   2.781  10.791

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   6.2227     0.8723   7.133 2.75e-10 ***
nov_8         0.9965     0.1276   7.807 1.22e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.292 on 87 degrees of freedom
Multiple R-squared:  0.412,     Adjusted R-squared:  0.4052
F-statistic: 60.95 on 1 and 87 DF,  p-value: 1.219e-11

Both regressions are very similar. The coefficients and the overall model are both very significant, and the R² value indicates that in each case the snow depth on November 8th explains about 40% of the variation in the snow depth on the later date. The amount of variation explained hardly changes at all, despite almost a month’s difference between the two analyses.

Here's a plot of the relationship between snow depth on today’s date and snow depth on Christmas (PDF version):

//media.swingleydev.com/img/blog/2014/11/snow_depth_nov_dec.svg

The blue line is the linear regression model.

Conclusions

For 2014, we’ve got 2 inches of snow on the ground on November 8th. The models predict we’ll have 5.4 inches on November 27th and 8 inches on December 25th. That isn’t great, but keep in mind that even though the relationship is quite strong, it explains less than half of the variation in the data, which means that it’s quite possible we will have a lot more, or less. Looking back at the plot, you can see that for all the years where we had two inches of snow on November 8th, we had between five and fifteen inches of snow in that same year on December 25th. I’m certainly hoping we’re closer to fifteen.
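Those predictions come straight out of the fitted models; here’s a quick way to reproduce them with predict(), assuming the two regressions above are saved as reg_nov27 and reg_dec25 (in the output above both were simply called reg):

reg_nov27 <- lm(nov_27 ~ nov_8, data=results)
reg_dec25 <- lm(dec_25 ~ nov_8, data=results)

today <- data.frame(nov_8=2)          # two inches on the ground on November 8th
predict(reg_nov27, newdata=today)     # about 5.4 inches on November 27th
predict(reg_dec25, newdata=today)     # about 8.2 inches on December 25th

# a prediction interval shows how wide the plausible range really is:
predict(reg_dec25, newdata=today, interval="prediction")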

tags: R  SQL  weather  snow depth 
