Time Series - Resampling & Moving Window Functions in Python using Pandas



1. Resampling

Resampling time series generally refers to:

  • Enforcing a frequency on data that was measured without one (e.g. data collected with varying time deltas between measurements).
  • Changing the frequency of already measured data to a different frequency.

We need methods that let us enforce a frequency on data so that analysis becomes easier. The Python library pandas is quite commonly used to hold time series data and provides a set of tools to handle resampling. We'll be exploring ways to resample time series data using pandas.

In [1]:
import pandas as pd
import numpy as np

import matplotlib.pyplot as plt

import warnings

warnings.filterwarnings("ignore")

%matplotlib inline

Resampling is generally performed in two ways:

  • Up Sampling: It happens when you convert a time series from a lower frequency to a higher frequency, like from month-based to day-based or hour-based to minute-based. When a time series is converted from a lower frequency to a higher frequency, the number of observations increases, hence we need a method to fill in the newly created indexes. We'll explain the available methods below when going through examples.
  • Down Sampling: It happens when you convert a time series from a higher frequency to a lower frequency, like from week-based to month-based, hour-based to day-based, etc. When a time series is converted from a higher frequency to a lower frequency, the number of samples decreases, which can also result in the loss of some values. We'll explain it below when going through examples.

1.1 asfreq()

The first method we'd like to introduce for resampling is asfreq(). Both pandas Series and DataFrame objects have this method available.

asfreq() method accepts important parameters like freq, method, and fill_value.

  • freq parameter lets us specify a new frequency for time series object.
  • method parameter accepts ffill, bfill, backfill, and pad for filling in newly created indexes when we up-sample time series data. Forward fill (ffill, with pad as an alias) fills newly created indexes with the value of the previous index, whereas backward fill (bfill, with backfill as an alias) fills them with the value of the next index. The default value for the method parameter is None, which puts NaNs in newly created indexes when upsampling.
  • fill_value lets us fill NaNs with the value specified as this parameter. It does not fill NaNs already present in the data, only NaNs generated by asfreq() when upsampling/downsampling data.
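One subtlety worth verifying is that fill_value only touches the slots asfreq() creates, leaving NaNs that were already in the data alone. A minimal sketch (the series here is made up for illustration):

```python
import pandas as pd
import numpy as np

# Hourly series with a pre-existing NaN at 01:00.
idx = pd.date_range("2020-01-01", periods=3, freq="H")
s = pd.Series([1.0, np.nan, 3.0], index=idx)

# Upsample to 30 minutes, filling only the NEW slots with 0.0;
# the NaN that was already in the data is left untouched.
out = s.asfreq("30min", fill_value=0.0)
print(out)
```

If we also wanted to replace the pre-existing NaNs, we'd need a separate fillna() call on the result.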

We'll explore the usage of asfreq() below with a few examples.

In [2]:
rng = pd.date_range(start = "1-1-2020", periods=5, freq="H")
ts = pd.Series(data=range(5), index=rng)
ts
Out[2]:
2020-01-01 00:00:00    0
2020-01-01 01:00:00    1
2020-01-01 02:00:00    2
2020-01-01 03:00:00    3
2020-01-01 04:00:00    4
Freq: H, dtype: int64

Below we are trying a few examples to demonstrate upsampling. We'll explore various methods to fill in newly created indexes.

In [3]:
ts.asfreq(freq="30min")
Out[3]:
2020-01-01 00:00:00    0.0
2020-01-01 00:30:00    NaN
2020-01-01 01:00:00    1.0
2020-01-01 01:30:00    NaN
2020-01-01 02:00:00    2.0
2020-01-01 02:30:00    NaN
2020-01-01 03:00:00    3.0
2020-01-01 03:30:00    NaN
2020-01-01 04:00:00    4.0
Freq: 30T, dtype: float64

We can notice from the above example that the asfreq() method by default puts NaN in all newly created indexes. We can either pass a value to fill into these newly created indexes by setting the fill_value parameter, or we can call a fill method. We'll explain both below with a few examples.

In [4]:
ts.asfreq(freq="30min", fill_value=0.0)
Out[4]:
2020-01-01 00:00:00    0.0
2020-01-01 00:30:00    0.0
2020-01-01 01:00:00    1.0
2020-01-01 01:30:00    0.0
2020-01-01 02:00:00    2.0
2020-01-01 02:30:00    0.0
2020-01-01 03:00:00    3.0
2020-01-01 03:30:00    0.0
2020-01-01 04:00:00    4.0
Freq: 30T, dtype: float64

We can see that the above example filled in all NaNs with 0.0.

In [5]:
ts.asfreq(freq="30min", method="ffill")
Out[5]:
2020-01-01 00:00:00    0
2020-01-01 00:30:00    0
2020-01-01 01:00:00    1
2020-01-01 01:30:00    1
2020-01-01 02:00:00    2
2020-01-01 02:30:00    2
2020-01-01 03:00:00    3
2020-01-01 03:30:00    3
2020-01-01 04:00:00    4
Freq: 30T, dtype: int64

We can notice from the above example that the ffill method filled each newly created index with the value of the previous index.

In [6]:
ts.asfreq(freq="45min", method="ffill")
Out[6]:
2020-01-01 00:00:00    0
2020-01-01 00:45:00    0
2020-01-01 01:30:00    1
2020-01-01 02:15:00    2
2020-01-01 03:00:00    3
2020-01-01 03:45:00    3
Freq: 45T, dtype: int64
In [7]:
ts.asfreq(freq="45min", method="bfill")
Out[7]:
2020-01-01 00:00:00    0
2020-01-01 00:45:00    1
2020-01-01 01:30:00    2
2020-01-01 02:15:00    3
2020-01-01 03:00:00    3
2020-01-01 03:45:00    4
Freq: 45T, dtype: int64
In [8]:
ts.asfreq(freq="45min", method="pad")
Out[8]:
2020-01-01 00:00:00    0
2020-01-01 00:45:00    0
2020-01-01 01:30:00    1
2020-01-01 02:15:00    2
2020-01-01 03:00:00    3
2020-01-01 03:45:00    3
Freq: 45T, dtype: int64
In [9]:
df = pd.DataFrame({"TimeSeries":ts})
df
Out[9]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 01:00:00 1
2020-01-01 02:00:00 2
2020-01-01 03:00:00 3
2020-01-01 04:00:00 4
In [10]:
df.asfreq(freq="45min")
Out[10]:
TimeSeries
2020-01-01 00:00:00 0.0
2020-01-01 00:45:00 NaN
2020-01-01 01:30:00 NaN
2020-01-01 02:15:00 NaN
2020-01-01 03:00:00 3.0
2020-01-01 03:45:00 NaN
In [11]:
df.asfreq(freq="45min", fill_value=0.0)
Out[11]:
TimeSeries
2020-01-01 00:00:00 0.0
2020-01-01 00:45:00 0.0
2020-01-01 01:30:00 0.0
2020-01-01 02:15:00 0.0
2020-01-01 03:00:00 3.0
2020-01-01 03:45:00 0.0
In [12]:
df.asfreq("30min", method="ffill")
Out[12]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 00:30:00 0
2020-01-01 01:00:00 1
2020-01-01 01:30:00 1
2020-01-01 02:00:00 2
2020-01-01 02:30:00 2
2020-01-01 03:00:00 3
2020-01-01 03:30:00 3
2020-01-01 04:00:00 4
In [13]:
df.asfreq("30min", method="bfill")
Out[13]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 00:30:00 1
2020-01-01 01:00:00 1
2020-01-01 01:30:00 2
2020-01-01 02:00:00 2
2020-01-01 02:30:00 3
2020-01-01 03:00:00 3
2020-01-01 03:30:00 4
2020-01-01 04:00:00 4
In [14]:
df.asfreq("30min", method="pad")
Out[14]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 00:30:00 0
2020-01-01 01:00:00 1
2020-01-01 01:30:00 1
2020-01-01 02:00:00 2
2020-01-01 02:30:00 2
2020-01-01 03:00:00 3
2020-01-01 03:30:00 3
2020-01-01 04:00:00 4

We'll now explain a few examples of downsampling.

In [15]:
ts.asfreq(freq="1H30min")
Out[15]:
2020-01-01 00:00:00    0.0
2020-01-01 01:30:00    NaN
2020-01-01 03:00:00    3.0
Freq: 90T, dtype: float64
In [16]:
ts.asfreq(freq="1H30min", fill_value=0.0)
Out[16]:
2020-01-01 00:00:00    0.0
2020-01-01 01:30:00    0.0
2020-01-01 03:00:00    3.0
Freq: 90T, dtype: float64
In [17]:
ts.asfreq(freq="1H30min", method="ffill")
Out[17]:
2020-01-01 00:00:00    0
2020-01-01 01:30:00    1
2020-01-01 03:00:00    3
Freq: 90T, dtype: int64
In [18]:
ts.asfreq(freq="1H30min", method="bfill")
Out[18]:
2020-01-01 00:00:00    0
2020-01-01 01:30:00    2
2020-01-01 03:00:00    3
Freq: 90T, dtype: int64
In [19]:
ts.asfreq(freq="1H30min", method="pad")
Out[19]:
2020-01-01 00:00:00    0
2020-01-01 01:30:00    1
2020-01-01 03:00:00    3
Freq: 90T, dtype: int64
In [20]:
df.asfreq(freq="1H30min")
Out[20]:
TimeSeries
2020-01-01 00:00:00 0.0
2020-01-01 01:30:00 NaN
2020-01-01 03:00:00 3.0
In [21]:
df.asfreq(freq="1H30min", fill_value=0.0)
Out[21]:
TimeSeries
2020-01-01 00:00:00 0.0
2020-01-01 01:30:00 0.0
2020-01-01 03:00:00 3.0
In [22]:
df.asfreq(freq="1H30min", method="ffill")
Out[22]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 01:30:00 1
2020-01-01 03:00:00 3
In [23]:
df.asfreq(freq="1H30min", method="bfill")
Out[23]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 01:30:00 2
2020-01-01 03:00:00 3
In [24]:
df.asfreq(freq="1H30min", method="pad")
Out[24]:
TimeSeries
2020-01-01 00:00:00 0
2020-01-01 01:30:00 1
2020-01-01 03:00:00 3

We can sometimes lose data when downsampling, and the asfreq() method uses only a simple approach: it provides just the bfill, ffill, and pad methods for filling in data when upsampling or downsampling. What if we need to apply some function other than these three? We need a more flexible approach to handle downsampling. Pandas provides another method called resample() which can help us with that.

1.2 resample()

The resample() method accepts the new frequency to be applied to time series data and returns a Resampler object. We can apply various methods other than bfill, ffill, and pad for filling in data when doing upsampling/downsampling. The Resampler object supports a list of aggregation functions like mean, std, var, count, etc., which will be applied to the time-series data when doing upsampling or downsampling. We'll explain the usage of resample() below with a few examples.

Below we try various ways to downsample the data.

In [25]:
ts.resample("1H30min").mean()
Out[25]:
2020-01-01 00:00:00    0.5
2020-01-01 01:30:00    2.0
2020-01-01 03:00:00    3.5
Freq: 90T, dtype: float64

The above example takes the mean of the values falling into each 1-hour-30-minute window. Our time series is sampled at 1 hour, so generally 2 values fall into each 90-minute window, and their mean becomes the value at the new index when downsampling. We can call functions other than mean() like std(), var(), sum(), count(), interpolate(), etc.
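The Resampler object also supports agg(), so several of these statistics can be computed per bin in one pass. A small sketch on the same kind of hourly series as above:

```python
import pandas as pd

rng = pd.date_range("1-1-2020", periods=5, freq="H")
ts = pd.Series(range(5), index=rng)

# Apply several aggregations per 90-minute bin in one pass.
summary = ts.resample("1H30min").agg(["mean", "sum", "count"])
print(summary)
```

Each column of the result holds one aggregation; the mean column matches the resample("1H30min").mean() output shown above.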

In [26]:
ts.resample("1H15min").mean()
Out[26]:
2020-01-01 00:00:00    0.5
2020-01-01 01:15:00    2.0
2020-01-01 02:30:00    3.0
2020-01-01 03:45:00    4.0
Freq: 75T, dtype: float64
In [27]:
ts.resample("1H15min").std()
Out[27]:
2020-01-01 00:00:00    0.707107
2020-01-01 01:15:00         NaN
2020-01-01 02:30:00         NaN
2020-01-01 03:45:00         NaN
Freq: 75T, dtype: float64
In [28]:
ts.resample("1H15min").var()
Out[28]:
2020-01-01 00:00:00    0.5
2020-01-01 01:15:00    NaN
2020-01-01 02:30:00    NaN
2020-01-01 03:45:00    NaN
Freq: 75T, dtype: float64
In [29]:
ts.resample("1H15min").sum()
Out[29]:
2020-01-01 00:00:00    1
2020-01-01 01:15:00    2
2020-01-01 02:30:00    3
2020-01-01 03:45:00    4
Freq: 75T, dtype: int64
In [30]:
ts.resample("1H15min").count()
Out[30]:
2020-01-01 00:00:00    2
2020-01-01 01:15:00    1
2020-01-01 02:30:00    1
2020-01-01 03:45:00    1
Freq: 75T, dtype: int64
In [31]:
ts.resample("1H15min").bfill()
Out[31]:
2020-01-01 00:00:00    0
2020-01-01 01:15:00    2
2020-01-01 02:30:00    3
2020-01-01 03:45:00    4
Freq: 75T, dtype: int64
In [32]:
ts.resample("1H15min").ffill()
Out[32]:
2020-01-01 00:00:00    0
2020-01-01 01:15:00    1
2020-01-01 02:30:00    2
2020-01-01 03:45:00    3
Freq: 75T, dtype: int64

We'll now try a few examples of upsampling the time series.

In [33]:
ts.resample("45min").bfill()
Out[33]:
2020-01-01 00:00:00    0
2020-01-01 00:45:00    1
2020-01-01 01:30:00    2
2020-01-01 02:15:00    3
2020-01-01 03:00:00    3
2020-01-01 03:45:00    4
Freq: 45T, dtype: int64
In [34]:
ts.resample("45min").apply(lambda x: x**2 if x.values.tolist() else np.nan)
Out[34]:
2020-01-01 00:00:00     0.0
2020-01-01 00:45:00     1.0
2020-01-01 01:30:00     4.0
2020-01-01 02:15:00     NaN
2020-01-01 03:00:00     9.0
2020-01-01 03:45:00    16.0
Freq: 45T, dtype: float64
In [35]:
ts.resample("45min").interpolate()
Out[35]:
2020-01-01 00:00:00    0.00
2020-01-01 00:45:00    0.75
2020-01-01 01:30:00    1.50
2020-01-01 02:15:00    2.25
2020-01-01 03:00:00    3.00
2020-01-01 03:45:00    3.00
Freq: 45T, dtype: float64
In [36]:
df.resample("45min").mean().fillna(0.0)
Out[36]:
TimeSeries
2020-01-01 00:00:00 0.0
2020-01-01 00:45:00 1.0
2020-01-01 01:30:00 2.0
2020-01-01 02:15:00 0.0
2020-01-01 03:00:00 3.0
2020-01-01 03:45:00 4.0

The above examples clearly show that resample() is a very flexible method that lets us resample a time series by applying a variety of functions.

2. Moving Window Functions

Moving window functions are functions applied to time-series data by moving a fixed- or variable-size window over the data and computing descriptive statistics over the window's data each time. Here a window generally refers to a number of consecutive samples taken from the time series and represents a particular period of time.

There are 2 kinds of window functions:

  • Rolling Window Functions: They perform aggregate operations on a window containing the same number of samples each time.
  • Expanding Window Functions: They perform aggregate operations on a window that expands with time.

Pandas provides a set of functions for performing window operations. We'll start with the rolling() function.

2.1 rolling()

The rolling() function lets us perform rolling window computations on time series data. It can be called on both Series and DataFrames in pandas. It accepts a window size as a parameter and returns a Rolling object whose values are grouped according to that window size. We can then apply various aggregate functions to this object as per our needs. We'll create a simple dataframe of random data to explain this further.

In [37]:
df = pd.DataFrame(np.random.randn(100, 4),
                  index = pd.date_range('1/1/2020', periods = 100),
                  columns = ['A', 'B', 'C', 'D'])

df.head()
Out[37]:
A B C D
2020-01-01 0.792758 0.262306 -1.033230 -1.913741
2020-01-02 2.279012 0.704082 1.021807 0.995765
2020-01-03 2.715893 0.262504 -0.156704 -0.255339
2020-01-04 -0.858527 1.132931 -0.173379 0.052590
2020-01-05 -0.675983 1.259856 0.581401 -0.336817
In [38]:
df.plot(figsize=(8,4));
In [39]:
r = df.rolling(3)
r
Out[39]:
Rolling [window=3,center=False,axis=0]

Above, we have created a rolling object with a window size of 3. We can now apply various aggregate functions to this object to get a modified time series. We'll start by applying the mean function to the rolling object and then visualize column B of the original dataframe alongside the rolled output.

In [40]:
df["B"].plot(color="grey", figsize=(8,4));
r.mean()["B"].plot(color="red");

There are many other descriptive statistics functions available that can be applied to the rolling object, like count(), median(), std(), var(), quantile(), skew(), etc. We'll try a few below.

In [41]:
df["B"].plot(color="grey", figsize=(8,4));
r.quantile(0.25)["B"].plot(color="red");
In [42]:
df["B"].plot(color="grey", figsize=(8,4));
r.skew()["B"].plot(color="red");
In [43]:
df["B"].plot(color="grey", figsize=(8,4));
r.var()["B"].plot(color="red");

We can even apply our own function by passing it to apply(). We demonstrate its usage below with an example.

In [44]:
df["B"].plot(color="grey", figsize=(8,4));
r.apply(lambda x: x.sum())["B"].plot(color="red");

We can apply more than one aggregate function by passing them to agg(). We'll explain it below with an example. We can also apply aggregate functions to just one column, ignoring the others.

In [45]:
r.agg(["mean", "std"]).head()
Out[45]:
A B C D
mean std mean std mean std mean std
2020-01-01 NaN NaN NaN NaN NaN NaN NaN NaN
2020-01-02 NaN NaN NaN NaN NaN NaN NaN NaN
2020-01-03 1.929221 1.008155 0.409631 0.255002 -0.056042 1.031210 -0.391105 1.459497
2020-01-04 1.378792 1.949850 0.699839 0.435229 0.230575 0.685278 0.264339 0.651877
2020-01-05 0.393794 2.013066 0.885097 0.542903 0.083773 0.431039 -0.179855 0.205385
In [46]:
r["A"].agg(["mean", "std"]).head()
Out[46]:
mean std
2020-01-01 NaN NaN
2020-01-02 NaN NaN
2020-01-03 1.929221 1.008155
2020-01-04 1.378792 1.949850
2020-01-05 0.393794 2.013066

We can also perform a rolling window computation on data sampled at a different frequency than the original. Below we load the data as hourly and then apply a rolling window function after resampling that data daily.

In [47]:
df = pd.DataFrame(np.random.randn(100, 4),
                  index = pd.date_range('1/1/2020', freq="H", periods = 100),
                  columns = ['A', 'B', 'C', 'D'])
df.head()
Out[47]:
A B C D
2020-01-01 00:00:00 -0.661877 -1.309971 -0.222158 -0.839181
2020-01-01 01:00:00 1.670444 0.305705 -0.479218 1.202464
2020-01-01 02:00:00 0.010780 1.395900 -0.997947 2.104720
2020-01-01 03:00:00 0.250527 -0.556719 -0.309415 0.242392
2020-01-01 04:00:00 -0.800937 -0.915483 -1.090798 0.273126
In [48]:
df.resample("1D").mean().rolling(3).mean().head()
Out[48]:
A B C D
2020-01-01 NaN NaN NaN NaN
2020-01-02 NaN NaN NaN NaN
2020-01-03 -0.089716 0.142506 0.009205 -0.024009
2020-01-04 -0.112366 0.079827 0.033722 -0.122137
2020-01-05 -0.193246 0.120248 0.346680 0.024083
In [49]:
df.resample("1D").mean().rolling(3).mean().plot();

We can notice above that our output has daily frequency rather than the hourly frequency of the original data.

2.2 expanding()

Pandas provides a function named expanding() to perform expanding window computations on time series data. expanding() can be called on both Series and DataFrames in pandas. As discussed above, expanding window functions take all previous values into consideration, unlike the rolling window, which takes a fixed-size sample into consideration. We'll explain its usage below with a few examples.
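An expanding mean is just the running sum divided by the running count, which we can verify by hand. A small sketch on a made-up series:

```python
import pandas as pd
import numpy as np

s = pd.Series([2.0, 4.0, 6.0, 8.0])

expanding_mean = s.expanding(min_periods=1).mean()

# Equivalent computation by hand: running sum divided by running count.
manual = s.cumsum() / np.arange(1, len(s) + 1)

print(expanding_mean.tolist())  # [2.0, 3.0, 4.0, 5.0]
assert np.allclose(expanding_mean, manual)
```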

In [50]:
df.expanding(min_periods=1).mean().head()
Out[50]:
A B C D
2020-01-01 00:00:00 -0.661877 -1.309971 -0.222158 -0.839181
2020-01-01 01:00:00 0.504284 -0.502133 -0.350688 0.181642
2020-01-01 02:00:00 0.339782 0.130545 -0.566441 0.822668
2020-01-01 03:00:00 0.317469 -0.041271 -0.502184 0.677599
2020-01-01 04:00:00 0.093788 -0.216114 -0.619907 0.596704
In [51]:
df.expanding(min_periods=1).mean().plot();

We can notice from the above plot that the output of the expanding window fluctuates at the beginning but then settles as more samples come into the computation. The output fluctuates a bit initially because fewer samples are taken into consideration at the start; the number of samples keeps increasing as the computation moves forward through the whole time series.

We can apply various aggregation functions to an expanding window, like count(), median(), std(), var(), quantile(), skew(), etc. We'll explain a few of them below with examples.

In [52]:
df.expanding(min_periods=1).std().plot();
In [53]:
df.expanding(min_periods=1).var().plot();

We can apply more than one aggregation function by passing their names as a list to agg(), and we can apply our own function by passing it to apply(). We have explained both usages below with examples.

In [54]:
df.expanding(min_periods=1).agg(["mean", "var"]).head()
Out[54]:
A B C D
mean var mean var mean var mean var
2020-01-01 00:00:00 -0.661877 NaN -1.309971 NaN -0.222158 NaN -0.839181 NaN
2020-01-01 01:00:00 0.504284 2.719860 -0.502133 1.305205 -0.350688 0.033040 0.181642 2.084157
2020-01-01 02:00:00 0.339782 1.441112 0.130545 1.853446 -0.566441 0.156168 0.822668 2.274822
2020-01-01 03:00:00 0.317469 0.962733 -0.041271 1.353714 -0.502184 0.120628 0.677599 1.600728
2020-01-01 04:00:00 0.093788 0.972216 -0.216114 1.168134 -0.619907 0.159764 0.596704 1.233266
In [55]:
df.expanding(min_periods=1).apply(lambda x: x.sum()).plot();
In [56]:
df["A"].expanding(min_periods=1).apply(lambda x: x.sum()).plot();

We'll generally use expanding() window functions when all past samples of the time series matter, even as new samples are added. We'll use rolling() window functions when only the last few samples are important and everything before them can be ignored.
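Another way to see the relationship: expanding() behaves like a rolling window as large as the whole series with min_periods=1, since both then use every sample seen so far. A quick sketch (the series is made up):

```python
import pandas as pd
import numpy as np

s = pd.Series([1.0, 3.0, 5.0, 7.0])

# expanding() is equivalent to a rolling window as large as the whole
# series with min_periods=1: both use every sample seen so far.
a = s.expanding(min_periods=1).mean()
b = s.rolling(window=len(s), min_periods=1).mean()

assert np.allclose(a, b)
print(a.tolist())  # [1.0, 2.0, 3.0, 4.0]
```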

2.3 ewm()

An exponentially weighted moving average is a weighted moving average of the last n samples of time-series data, where the weight assigned to each sample decreases the further back in time it lies. The ewm() function can be called on both Series and DataFrames in pandas. We'll explain its usage by comparing it with the rolling() window function.
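Under the hood, ewm(span=n) maps the span to a smoothing factor alpha = 2/(n+1), and with the default adjust=True each output is a weighted average of past samples with weights (1-alpha)^i. A sketch verifying this by hand on a tiny made-up series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])
span = 10
alpha = 2 / (span + 1)  # pandas derives this smoothing factor from span

ewm_mean = s.ewm(span=span).mean()

# Recompute with the default adjust=True weighting:
# y_t = sum_i (1-alpha)^i * x_{t-i} / sum_i (1-alpha)^i
manual = []
x = s.to_numpy()
for t in range(len(x)):
    w = (1 - alpha) ** np.arange(t + 1)  # weight 1 for the newest sample
    manual.append(np.dot(w, x[: t + 1][::-1]) / w.sum())

assert np.allclose(ewm_mean, manual)
```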

In [57]:
df["A"].ewm(span=10).mean().plot(color="tab:red");
df["A"].rolling(window=10).mean().plot(color="tab:green");
In [58]:
df["A"].ewm(span=10, min_periods=5).mean().plot(color="tab:red");
df["A"].rolling(window=10).mean().plot(color="tab:green");

We can apply different kinds of aggregation functions, like we applied above with the rolling() and expanding() functions. We'll try a few examples below for explanation purposes.

In [59]:
df.ewm(span=10).std().plot();
In [60]:
df.ewm(span=10).agg(["mean", "var"]).head()
Out[60]:
A B C D
mean var mean var mean var mean var
2020-01-01 00:00:00 -0.661877 NaN -1.309971 NaN -0.222158 NaN -0.839181 NaN
2020-01-01 01:00:00 0.620900 2.719860 -0.421350 1.305205 -0.363541 0.033040 0.283724 2.084157
2020-01-01 02:00:00 0.375636 1.359974 0.309173 1.794198 -0.618568 0.161951 1.015752 2.149708
2020-01-01 03:00:00 0.334418 0.817961 0.023900 1.297502 -0.516716 0.125473 0.760965 1.464669
2020-01-01 04:00:00 0.008489 0.884884 -0.245771 1.100328 -0.681519 0.170145 0.620919 1.044235

This concludes our small tutorial on resampling and moving window functions with time-series data using pandas. Please feel free to let us know your views in the comments section below.

Sunny Solanki