Abstract (tl;dr)
We’re getting some bad home Internet service from Alaska Communications, and it’s getting worse. There are clear patterns indicating lower quality service in the evening, and very poor download rates over the past couple days. Scroll down to check out the plots.
Introduction
Over the past year we’ve started having trouble watching streaming video over our Internet connection. We’re paying around $100/month for phone, long distance and a 4 Mbps DSL Internet connection, which is a lot of money if we’re not getting a quality product. The connection was pretty great when we first signed up (and frankly, it’s better than a lot of people in Fairbanks), but over time, the quality has degraded and despite having a technician out to take a look, it hasn’t gotten better.
Methods
In September last year I started monitoring our bandwidth, once every two hours, using the Python speedtest-cli tool, which uses speedtest.net to get the data.
To use it, install the package:
$ pip install speedtest-cli
Then set up a cron job on your server to run it once every two hours. I have it running on the Raspberry Pi that collects our weather data. I use this script, which appends data to a file each time it is run. You’ll want to change the server to whichever is closest and most reliable at your location.
#! /bin/bash
results_dir="/path/to/results"
date >> ${results_dir}/speedtest_results
speedtest --server 3191 --simple >> ${results_dir}/speedtest_results
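To schedule the script, a crontab entry like the following would work (the path is just an example; point it at wherever you saved the script, and make sure it’s executable):

```
# Run the speedtest script at the top of every even-numbered hour
0 */2 * * * /home/pi/bin/speedtest_cron.sh
```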
The raw output file just keeps growing, and looks like this:
Mon Sep 1 09:20:08 AKDT 2014
Ping: 199.155 ms
Download: 2.51 Mbits/s
Upload: 0.60 Mbits/s
Mon Sep 1 10:26:01 AKDT 2014
Ping: 158.118 ms
Download: 3.73 Mbits/s
Upload: 0.60 Mbits/s
...
This isn’t a very good format for analysis, so I wrote a Python script to process the data into a tidy data set with one row per observation, and columns for ping time, download and upload rates as numbers.
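The processing script isn’t shown here, but a minimal Python sketch of the approach might look like this (the regular expression, date handling, and field names are my reconstruction, not the exact script):

```python
import re
from datetime import datetime

# Each record in the raw file is four lines: a date line from the
# shell `date` command, then ping, download, and upload lines.
RECORD_RE = re.compile(
    r'(?P<date>\w{3} \w{3}\s+\d+ [\d:]+ \w+ \d{4})\s+'
    r'Ping: (?P<ping>[\d.]+) ms\s+'
    r'Download: (?P<download>[\d.]+) Mbits?/s\s+'
    r'Upload: (?P<upload>[\d.]+) Mbits?/s')

def iso_date(raw):
    # Drop the timezone abbreviation ("AKDT"), which strptime cannot
    # portably parse, then reformat the date as ISO 8601
    parts = raw.split()
    cleaned = ' '.join(parts[:4] + parts[5:])
    return datetime.strptime(cleaned, '%a %b %d %H:%M:%S %Y').isoformat(sep=' ')

def parse(text):
    """Return one dict per observation: ISO date plus numeric rates."""
    return [dict(date=iso_date(m.group('date')),
                 ping=float(m.group('ping')),
                 download=float(m.group('download')),
                 upload=float(m.group('upload')))
            for m in RECORD_RE.finditer(text)]
```

Writing the resulting rows out with the csv module yields the one-row-per-observation speedtest_results.csv file used below.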
From here, we can look at the data in R. First, let’s see how our rates change during the day. One thing we’ve noticed is that our Internet will be fine until around seven or eight in the evening, at which point we can no longer stream video successfully. Hulu is particularly bad at handling a lower quality connection.
Code to get the data and add some columns to group the data appropriately for plotting:
#! /usr/bin/env Rscript
# Prep:
# parse_speedtest_results.py speedtest_results speedtest_results.csv
library(lubridate)
library(ggplot2)
library(dplyr)
speed <- read.csv('speedtest_results.csv', header=TRUE) %>%
tbl_df() %>%
mutate(date=ymd_hms(as.character(date)),
yyyymm=paste(year(date), sprintf("%02d", month(date)), sep='-'),
month=month(date),
hour=hour(date))
Plot it:
q <- ggplot(data=speed, aes(x=factor(hour), y=download)) +
geom_boxplot() +
scale_x_discrete(name="Hour of the day") +
scale_y_continuous(name="Download speed (Mbps)") +
ggtitle(paste("Speedtest results (",
min(floor_date(speed$date, "day")), " - " ,
max(floor_date(speed$date, "day")), ")", sep="")) +
theme_bw() +
facet_wrap(~ yyyymm)
Results and Discussion
Here’s the result:
Box and whisker plots (boxplots) show how data is distributed. The box covers the range where half the data lies (from the 25th to the 75th percentile), and the line through the box marks the median value. The vertical lines extending above and below the box (the whiskers) show where most of the rest of the observations are, and the dots are extreme values. The figure above has a single boxplot for each two-hour period, and the plots are split into month-long periods so we can see whether there are any trends over time.
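As a quick illustration of what the box edges mean, here’s how the quartiles for a hypothetical two-hour bin of download rates could be computed in Python:

```python
import statistics

# Hypothetical download rates (Mbps) observed during one two-hour bin
rates = [3.9, 4.0, 3.8, 2.1, 4.0, 3.7, 1.9, 4.0, 3.6, 4.0, 2.5, 3.9]

# 25th, 50th, and 75th percentiles; the box spans q1 to q3
# (half the observations fall inside it), with the median line at q2
q1, median, q3 = statistics.quantiles(rates, n=4)
print(q1, median, q3)
```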
There are some clear patterns across all months: for most of the day, our bandwidth is pretty close to what we’re paying for. The boxes are all up near 4 Mbps and they’re skinny, indicating that most of the observations are close to 4 Mbps. Starting in the early evening, the boxes get larger, demonstrating that we’re not always getting our expected rate. The boxes are very large between eight and ten, which means we’re as likely to get 2 Mbps as the 4 we pay for.
Patterns over time are also showing up. Starting in January, there’s another drop in our bandwidth around noon and by February it’s rare that we’re getting the full speed of our connection at any time of day.
One note: it is possible that some of the decline in our bandwidth during the evening is because the speedtest script is competing with the other things we are doing on the Internet when we are home from work. This doesn’t explain the drop around noon, however, and when I look at the actual Internet usage diagrams collected from our router using SNMP / MRTG, it doesn’t appear that we are using enough bandwidth to explain the dramatic and consistent drops seen in the plot above.
Since February is starting to look different from the other months, I took a closer look at the data for that month. I filtered the data to just February and, based on an initial version of this plot, added trend lines for the periods before and after noon on February 12th.
library(dplyr)
library(lubridate)
library(ggplot2)
library(scales)
speeds <- tbl_df(read.csv('speedtest_results.csv', header=TRUE))
speed_plot <-
speeds %>%
mutate(date=ymd_hms(date),
grp=ifelse(date<'2015-02-12 12:00:00', 'before', 'after')) %>%
filter(date > '2015-01-31 23:59:59') %>%
ggplot(aes(x=date, y=download)) +
geom_point() +
theme_bw() +
geom_smooth(aes(group=grp), method="lm", se=FALSE) +
scale_y_continuous(limits=c(0,4.5),
breaks=c(0,1,2,3,4),
name="Download speed (Mbps)") +
theme(axis.title.x=element_blank())
The result:
Ouch. Throughout the month our bandwidth has been going down, but you can also see that after noon on the 12th, we’re no longer getting 4 Mbps no matter what time of day it is. The trend line probably isn’t statistically significant for this period, but it’s clear that our Internet service, for which we pay a lot of money, is getting worse and worse, now averaging less than 2 Mbps.
Conclusion
I think there’s enough evidence here that we aren’t getting what we are paying for from our ISP. Time to contact Alaska Communications and get them to either reduce our rates based on the poor quality of service they are providing, or upgrade their equipment to handle the traffic on our line. I suspect they probably oversold the connection and the equipment can’t handle all the users trying to get their full bandwidth at the same time.
Whenever we’re in the middle of a cold snap, as we are right now, I’m tempted to see how the current snap compares to those in the past. The one we’re in right now isn’t all that bad: sixteen days in a row where the minimum temperature has been colder than −20°F. In some years, such a stretch wouldn’t even qualify as a “cold snap,” but right now, it feels like one.
Getting the length of consecutive runs out of a database isn’t simple. What we’ll do is get a list of all the days where the minimum daily temperature was warmer than −20°F, then go through each record and count the number of days between the current row and the next one. Most of these gaps will be one day, but when the gap is greater than one day, there are one or more observations in between the “warm” days where the minimum temperature was colder than −20°F (or there was missing data).
For example, given this set of dates and temperatures from earlier this year:
date | tmin_f |
---|---|
2015‑01‑02 | −15 |
2015‑01‑03 | −20 |
2015‑01‑04 | −26 |
2015‑01‑05 | −30 |
2015‑01‑06 | −30 |
2015‑01‑07 | −26 |
2015‑01‑08 | −17 |
Once we select for rows where the temperature is above −20°F we get this:
date | tmin_f |
---|---|
2015‑01‑02 | −15 |
2015‑01‑08 | −17 |
Now we can grab the start and end of the period (January 2nd + one day and January 8th - one day) and get the length of the cold snap. You can see why missing data would be a problem, since it would create a gap that isn’t necessarily due to cold temperatures.
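The gap-counting logic is easier to see in a short sketch than in SQL. Here it is in Python, using the example dates above (without the missing-data check, which the real query handles separately):

```python
from datetime import date, timedelta

# Days where the minimum temperature was WARMER than the threshold
warm_days = [date(2015, 1, 2), date(2015, 1, 8)]

snaps = []
for a, b in zip(warm_days, warm_days[1:]):
    gap = (b - a).days       # 1 means two consecutive "warm" days
    if gap > 1:
        # everything strictly between a and b was colder than -20°F
        snaps.append((a + timedelta(days=1), b - timedelta(days=1), gap - 1))

print(snaps)  # one snap: January 3rd through 7th, five days long
```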
I couldn't figure out how to get the time periods and check them for validity all in one step, so I wrote a simple function that counts the days with valid data between two dates, then used this function in the real query. Only periods with non-null data on each day during the cold snap were included.
CREATE FUNCTION valid_n(date, date)
RETURNS bigint AS
'SELECT count(*)
FROM ghcnd_pivot
WHERE station_name = ''FAIRBANKS INTL AP''
AND dte BETWEEN $1 AND $2
AND tmin_c IS NOT NULL'
LANGUAGE SQL
RETURNS NULL ON NULL INPUT;
Here we go:
SELECT rank() OVER (ORDER BY days DESC) AS rank,
start, "end", days FROM (
SELECT start + interval '1 day' AS start,
"end" - interval '1 day' AS end,
interv - 1 AS days,
valid_n(date(start + interval '1 day'),
date("end" - interval '1 day')) as valid_n
FROM (
SELECT dte AS start,
lead(dte) OVER (ORDER BY dte) AS end,
lead(dte) OVER (ORDER BY dte) - dte AS interv
FROM (
SELECT dte
FROM ghcnd_pivot
WHERE station_name = 'FAIRBANKS INTL AP'
AND tmin_c > f_to_c(-20)
) AS foo
) AS bar
WHERE interv >= 17
) AS f
WHERE days = valid_n
ORDER BY days DESC;
And the top 10:
rank | start | end | days |
---|---|---|---|
1 | 1917‑11‑26 | 1918‑01‑01 | 37 |
2 | 1909‑01‑13 | 1909‑02‑12 | 31 |
3 | 1948‑11‑17 | 1948‑12‑13 | 27 |
4 | 1925‑01‑16 | 1925‑02‑10 | 26 |
4 | 1947‑01‑12 | 1947‑02‑06 | 26 |
4 | 1943‑01‑02 | 1943‑01‑27 | 26 |
4 | 1968‑12‑26 | 1969‑01‑20 | 26 |
4 | 1979‑02‑01 | 1979‑02‑26 | 26 |
9 | 1980‑12‑06 | 1980‑12‑30 | 25 |
9 | 1930‑01‑28 | 1930‑02‑21 | 25 |
There have been seven cold snaps that lasted 16 days (including the one we’re currently in), all tied for 45th place.
Keep in mind that counting days where the daily minimum is −20°F or colder is a pretty generous definition of a cold snap. If we instead require the minimum temperatures to be below −40°, the lengths are considerably shorter:
rank | start | end | days |
---|---|---|---|
1 | 1964‑12‑25 | 1965‑01‑11 | 18 |
2 | 1973‑01‑12 | 1973‑01‑26 | 15 |
2 | 1961‑12‑16 | 1961‑12‑30 | 15 |
2 | 2008‑12‑28 | 2009‑01‑11 | 15 |
5 | 1950‑02‑04 | 1950‑02‑17 | 14 |
5 | 1989‑01‑18 | 1989‑01‑31 | 14 |
5 | 1979‑02‑03 | 1979‑02‑16 | 14 |
5 | 1947‑01‑23 | 1947‑02‑05 | 14 |
9 | 1909‑01‑14 | 1909‑01‑25 | 12 |
9 | 1942‑12‑15 | 1942‑12‑26 | 12 |
9 | 1932‑02‑18 | 1932‑02‑29 | 12 |
9 | 1935‑12‑02 | 1935‑12‑13 | 12 |
9 | 1951‑01‑14 | 1951‑01‑25 | 12 |
I think it’s also interesting that only three (marked with a grey background) of the top ten cold snaps defined at −20°F appear in those that have a −40° threshold.

I’ve been a bit behind in mentioning the 2015 Tournament of Books. The contestants were announced last month. As usual, here’s the list, with a three-star rating system for those I've read: ☆ - not worthy, ☆☆ - good, ★★★ - great.
- Silence Once Begun by Jesse Ball ☆☆
- A Brave Man Seven Storeys Tall by Will Chancellor ☆
- All the Light We Cannot See by Anthony Doerr ★★★
- Those Who Leave and Those Who Stay by Elena Ferrante
- An Untamed State by Roxane Gay ★★★
- Wittgenstein Jr by Lars Iyer
- A Brief History of Seven Killings by Marlon James
- Redeployment by Phil Klay
- Station Eleven by Emily St. John Mandel ☆☆
- The Bone Clocks by David Mitchell ★★★
- Everything I Never Told You by Celeste Ng ☆☆
- Dept. of Speculation by Jenny Offill ★★★
- Adam by Ariel Schrag
- The Paying Guests by Sarah Waters ☆
- Annihilation by Jeff VanderMeer ☆☆
- All the Birds, Singing by Evie Wyld ★★★
Thus far, my early favorite is, of course, The Bone Clocks by David Mitchell. It's a fantastic book, similar in design to Cloud Atlas, but better. Both All the Light We Cannot See and Dept. of Speculation are distant runners-up. All the Light is a great story, told in very short, easy-to-digest chapters, and Speculation is a funny, heartrending, strange, and ultimately redemptive story of marriage.
Following up on yesterday’s post about minimum temperatures, I was thinking that a cumulative measure of cold temperatures would probably be a better measure of how cold a winter is. We all remember the extremely cold days each winter when the propane gels or the car won’t start, but it’s the long periods of deep cold that really take their toll on buildings, equipment, and people in the Interior.
One way of measuring this is to find all the days in a winter year when the average temperature is below freezing and, for each of those days, count the number of degrees below freezing. For example, if the average temperature is 50°F, that’s above freezing, so it doesn’t count. If it’s −40°, that’s 72 freezing degrees (Fahrenheit). Do this for each day in a winter year and add up all the values.
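Sketched in Python with a made-up week of average daily temperatures, the calculation looks like this:

```python
# Freezing degree days for a hypothetical week of average
# daily temperatures (degrees Fahrenheit)
daily_avg_f = [50, 35, 32, 20, 0, -40, 31]

# Only days below freezing count; each contributes (32 - temperature)
fdd = sum(32 - t for t in daily_avg_f if t < 32)
print(fdd)  # 12 + 32 + 72 + 1 = 117
```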
Here’s the code to make the plot below (see my previous post for how we got fai_pivot).
fai_winter_year_freezing_degree_days <-
fai_pivot %>%
mutate(winter_year=year(dte - days(92)),
fdd=ifelse(TAVG < 0, -1*TAVG*9/5, 0)) %>%
filter(winter_year < 2014) %>%
group_by(station_name, winter_year) %>%
select(station_name, winter_year, fdd) %>%
summarize(fdd=sum(fdd, na.rm=TRUE), n=n()) %>%
filter(n>350) %>%
select(station_name, winter_year, fdd) %>%
spread(station_name, fdd)
fdd_gathered <-
fai_winter_year_freezing_degree_days %>%
gather(station_name, fdd, -winter_year) %>%
arrange(winter_year)
q <-
fdd_gathered %>%
ggplot(aes(x=winter_year, y=fdd, colour=station_name)) +
geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
geom_smooth(data=subset(fdd_gathered, winter_year<1975),
method="lm", se=FALSE) +
geom_smooth(data=subset(fdd_gathered, winter_year>=1975),
method="lm", se=FALSE) +
scale_x_continuous(name="Winter Year",
breaks=pretty_breaks(n=20)) +
scale_y_continuous(name="Freezing degree days (degrees F)",
breaks=pretty_breaks(n=10)) +
scale_color_manual(name="Station",
labels=c("College Observatory",
"Fairbanks Airport",
"University Exp. Station"),
values=c("darkorange", "blue", "darkcyan")) +
theme_bw() +
theme(legend.position = c(0.875, 0.120)) +
theme(axis.text.x = element_text(angle=45, hjust=1))
rescale <- 0.65
svg('freezing_degree_days.svg', height=10*rescale, width=16*rescale)
print(q)
dev.off()
And the plot.
You’ll notice I’ve split the trend lines at 1975. When I ran the regressions for the entire period, none of them were statistically significant, but looking at the plot, it seems like something happens in 1975 where the cumulative freezing degree days suddenly drop. Since then, they've been increasing at a faster, statistically significant rate.
This is odd, and it makes me wonder if I've made a mistake in the calculations because what this says is that, at least since 1975, the winters are getting colder as measured by the total number of degrees below freezing each winter. My previous post (and studies of climate in general) show that the climate is warming, not cooling.
One bias that's possible with cumulative calculations like this is that missing data becomes more important, but I looked at the same relationships when I only include years with at least 364 days of valid data (only one or two missing days) and the same pattern exists.
Curious. When combined, this analysis and yesterday's suggest that winters in Fairbanks are getting colder overall, but that the minimum temperature in any year is likely to be warmer than in the past.
The Weather Service is calling for our first −40° temperatures of the winter, which is pretty remarkable given how late in the winter it is. The 2014/2015 winter is turning out to be one of the warmest on record, and until this upcoming cold snap, we’ve only had a few days below normal, and mostly it’s been significantly warmer. You can see this on my Normalized temperature anomaly plot, where most of the last four months has been reddish.
I thought I’d take a look at the minimum winter temperatures for the three longest running Fairbanks weather stations to see what patterns emerge. This will be a good opportunity to further experiment with the dplyr and tidyr R packages I’m learning.
The data set is the Global Historical Climatology Network - Daily (GHCND) data from the National Climatic Data Center (NCDC). The data, at least as I’ve been collecting it, has been fully normalized, which is another way of saying that it’s stored in a way that makes database operations efficient, but not necessarily the way people want to look at it.
There are three main tables: ghcnd_stations, containing data about each station; ghcnd_variables, containing information about the variables in the data; and ghcnd_obs, which contains the observations. We need ghcnd_stations in order to find the stations we’re interested in, by name or location, for example. And we need ghcnd_variables to convert the values in the observation table to the proper units. The observation table looks something like this:
station_id | dte | variable | raw_value | qual_flag |
---|---|---|---|---|
USW00026411 | 2014-12-25 | TMIN | -205 | |
USW00026411 | 2014-12-25 | TMAX | -77 | |
USW00026411 | 2014-12-25 | PRCP | 15 | |
USW00026411 | 2014-12-25 | SNOW | 20 | |
USW00026411 | 2014-12-25 | SNWD | 230 |
There are a few problems with using this table directly. First, the station_id column doesn’t tell us anything about the station (name, location, etc.) without joining it to the stations table. Second, we need to use the variables table to convert the raw values listed in the table to their actual values. For example, temperatures are stored as degrees Celsius × 10, so we need to divide the raw value by ten to get actual temperatures. Finally, to get the data into a form with one row per date and columns for the variables we’re interested in, we have to “pivot” the data (to use Excel terminology).
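To make the scale-and-pivot idea concrete, here’s a toy Python version using the sample rows above (the multiplier values are illustrative; the real ones come from the ghcnd_variables table):

```python
from collections import defaultdict

# Illustrative raw_multiplier values; the real ones live in ghcnd_variables
multipliers = {'TMIN': 0.1, 'TMAX': 0.1}

obs = [
    ('USW00026411', '2014-12-25', 'TMIN', -205),
    ('USW00026411', '2014-12-25', 'TMAX', -77),
]

# One row per (station, date), one column per variable
pivoted = defaultdict(dict)
for station_id, dte, variable, raw_value in obs:
    pivoted[(station_id, dte)][variable] = raw_value * multipliers[variable]

row = pivoted[('USW00026411', '2014-12-25')]
print(row['TMIN'], row['TMAX'])  # -20.5 and -7.7 degrees Celsius
```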
Here’s how we get all the data using R.
Load the libraries we will need:
library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(lubridate)
library(knitr)
Connect to the database and get the tables we need, choosing only the stations we want from the stations table. In the filter statement you can see we’re using a PostgreSQL specific operator ~ to do the filtering. In other databases we’d probably use %in% and include the station names as a list.
noaa_db <- src_postgres(host="localhost", user="cswingley", port=5434, dbname="noaa")
# Construct database table objects for the data
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")
# Filter stations to just the long term Fairbanks stations:
fai_stations <-
tbl(noaa_db, "ghcnd_stations") %>%
filter(station_name %~% "(FAIRBANKS INT|UNIVERSITY EXP|COLLEGE OBSY)")
Here’s where we grab the data. We are using the magrittr package’s pipe operator (%>%) to chain operations together, making it really easy to follow exactly how we’re manipulating the data along the way.
# Get the raw data
fai_raw <-
ghcnd_obs %>%
inner_join(fai_stations, by="station_id") %>%
inner_join(ghcnd_vars, by="variable") %>%
mutate(value=raw_value*raw_multiplier) %>%
filter(qual_flag=='') %>%
select(station_name, dte, variable, value) %>%
collect()
# Save it
save(fai_raw, file="fai_raw.rdata", compress="xz")
In order, we start with the complete observation table (which contains 29 million rows at this moment), then we join it with our filtered stations using inner_join(fai_stations, by="station_id"). Now we’re down to 723 thousand rows of data. We join it with the variables table, then create a new column called value that is the raw value from the observation table multiplied by the multiplier from the variable table. We remove any observation that doesn’t have an empty string for the quality flag (a value in this field indicates there’s something wrong with the data). Finally, we reduce the number of columns we’re keeping to just the station name, date, variable name, and the actual value.
We then use collect() to actually run all these operations and collect the results into an R object. One of the neat things about database operations using dplyr is that the SQL isn’t run until it’s actually necessary, which really speeds up the testing phase of the analysis. You can play around with joining, filtering and transforming the data using operations that are fast until you have it just right, then collect() to finalize the steps.
At this stage, the data is still in its normalized form. We’ve fixed the station name and the values in the data are now what was observed, but we still need to pivot the data to make it useful.
We’ll use the tidyr spread() function to turn the values that appear in the variable column (TMIN, TMAX, etc.) into columns in the output, putting the data from the value column into the cells at each column and row. We’re also calculating an average daily temperature from the minimum and maximum temperatures, and selecting just the columns we want.
# pivot, calculate average temp, include useful vars
fai_pivot <-
fai_raw %>%
spread(variable, value) %>%
transform(TAVG=(TMIN+TMAX)/2.0) %>%
select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)
Now we’ve got a table with rows for each station name and date, and columns with all the observed variables we might be interested in.
Time for some analysis. Let’s get the minimum temperatures by year and station. When looking at winter temperatures, it makes more sense to group by “winter year” rather than the actual year. In our case, we’re subtracting 92 days from the date and getting the year. This makes the winter year start in April instead of January, and means that the 2014/2015 winter has a winter year of 2014.
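The winter year trick is simple enough to sketch outside of R; with the 92-day offset, it behaves like this:

```python
from datetime import date, timedelta

def winter_year(d):
    # Shift back 92 days so that a winter spanning the new year
    # falls entirely in one "winter year"
    return (d - timedelta(days=92)).year

# Both halves of the 2014/2015 winter land in winter year 2014
print(winter_year(date(2014, 12, 25)), winter_year(date(2015, 1, 15)))  # 2014 2014
```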
# Find coldest temperatures by winter year, as a nice table
fai_winter_year_minimum <-
fai_pivot %>%
mutate(winter_year=year(dte - days(92))) %>%
filter(winter_year < 2014) %>%
group_by(station_name, winter_year) %>%
select(station_name, winter_year, TMIN) %>%
summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
filter(n>350) %>%
select(station_name, winter_year, tmin) %>%
spread(station_name, tmin)
In order, we’re taking the pivoted data (fai_pivot), adding a column for winter year (mutate), removing the data from the current year since the winter isn’t over (filter), grouping by station and winter year (group_by), reducing the columns down to just minimum temperature (select), summarizing by the minimum temperature after converting to Fahrenheit and by the number of days with valid data (summarize), keeping only years with 350 or more days of data (filter), and finally grabbing and formatting just the columns we want (select, spread).
Here’s the last 20 years and how we get a nice table of them.
last_twenty <-
fai_winter_year_minimum %>%
filter(winter_year > 1993)
# Write to an RST table
sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Winter Year | College Obsy | Fairbanks Airport | University Exp Stn |
---|---|---|---|
1994 | -43.96 | -47.92 | -47.92 |
1995 | -45.04 | -45.04 | -47.92 |
1996 | -50.98 | -50.98 | -54.04 |
1997 | -43.96 | -47.92 | -47.92 |
1998 | -52.06 | -54.94 | -54.04 |
1999 | -50.08 | -52.96 | -50.98 |
2000 | -27.94 | -36.04 | -27.04 |
2001 | -40.00 | -43.06 | -36.04 |
2002 | -34.96 | -38.92 | -34.06 |
2003 | -45.94 | -45.94 | NA |
2004 | NA | -47.02 | -49.00 |
2005 | -47.92 | -50.98 | -49.00 |
2006 | NA | -43.96 | -41.98 |
2007 | -38.92 | -47.92 | -45.94 |
2008 | -47.02 | -47.02 | -49.00 |
2009 | -32.98 | -41.08 | -41.08 |
2010 | -36.94 | -43.96 | -38.02 |
2011 | -47.92 | -50.98 | -52.06 |
2012 | -43.96 | -47.92 | -45.04 |
2013 | -36.94 | -40.90 | NA |
To plot it, we need to re-normalize it so that each row in the data has winter_year, station_name, and tmin in it.
Here’s the plotting code, including the commands to re-normalize.
q <-
fai_winter_year_minimum %>%
gather(station_name, tmin, -winter_year) %>%
arrange(winter_year) %>%
ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
geom_smooth(method="lm", se=FALSE) +
scale_x_continuous(name="Winter Year",
breaks=pretty_breaks(n=20)) +
scale_y_continuous(name="Minimum temperature (degrees F)",
breaks=pretty_breaks(n=10)) +
scale_color_manual(name="Station",
labels=c("College Observatory",
"Fairbanks Airport",
"University Exp. Station"),
values=c("darkorange", "blue", "darkcyan")) +
theme_bw() +
theme(legend.position = c(0.875, 0.120)) +
theme(axis.text.x = element_text(angle=45, hjust=1))
The lines are the linear regression lines between winter year and minimum temperature. You can see that the trend is for increasing minimum temperatures. Each of these lines is statistically significant (both the coefficients and the overall model), but they only explain about 7% of the variation in temperatures. Given the spread of the points, that’s not surprising. The data shows that the lowest winter temperature at the Fairbanks airport is rising by 0.062 degrees each year.