
Quickly store 2,370,886 rows of historic options data with ArcticDB

Over 1,200,000 options contracts trade daily. Storing options data for analysis has long been something only professionals with sophisticated tools could do.

Recently, one of those professionals open sourced its tools for lightning-fast data storage and retrieval.

ArcticDB is a DataFrame database used in production by the systematic trading company Man Group.

It’s used for the storage, retrieval, and processing of petabyte-scale data in DataFrame format.

In today’s newsletter, we’ll use it to store 2,370,886 rows of historic options data.

Quickly store 2,370,886 rows of historic options data

ArcticDB is an embedded, serverless database engine, tailored for integration with pandas and the Python data science ecosystem.

It can efficiently store a 20-year historical record of over 400,000 distinct securities under a single symbol with sub-second retrieval.

In ArcticDB, each symbol is treated as an independent entity without data overlap.
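In practice, that means you can write each underlying to its own symbol and read one back without touching the others. Here’s a minimal sketch, assuming a library handle lib like the one we create below and two hypothetical DataFrames of option chains:

# spx_chains and rut_chains are hypothetical DataFrames of options quotes
lib.write("options/SPX", spx_chains)
lib.write("options/RUT", rut_chains)

# reading one symbol is independent of data stored under the others
rut = lib.read("options/RUT").data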

The engine operates independently of any additional infrastructure, requiring only a functional Python environment and access to storage.

It supports common object storage solutions such as S3-compatible systems and Azure Blob Storage, but can also store data locally in the LMDB format, which we’ll use today.
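The backend is selected by the URI you pass to adb.Arctic. Here’s a rough sketch; the endpoint, bucket, and container values are placeholders, so check the ArcticDB documentation for the exact connection options:

import arcticdb as adb

# S3-compatible object storage (placeholder endpoint and bucket)
# arctic = adb.Arctic("s3://s3.eu-west-1.amazonaws.com:my-bucket?aws_auth=true")

# Azure Blob Storage (placeholder connection string)
# arctic = adb.Arctic("azure://Container=my-container;...")

# local LMDB storage, which we use today
arctic = adb.Arctic("lmdb://equity_options")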

You can follow along with today’s newsletter using the data you can download here.

Ready to get started?

Imports and setup

First, import the libraries we need. We’ll use a few standard library modules to locate the CSV files on disk.

import datetime as dt
import glob
import os

import arcticdb as adb
import matplotlib.pyplot as plt
import pandas as pd

Create a locally hosted LMDB instance in the current directory and set up an ArcticDB library to store the options data.

arctic = adb.Arctic("lmdb://equity_options")
lib = arctic.get_library("options", create_if_missing=True)
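You can confirm the library was created with a quick check:

print(arctic.list_libraries())  # ['options']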

Now build a helper function that accepts a file path, reads the CSV file, sets the date column as the index, and parses the date strings into pandas Timestamps.

def read_chains(fl):
    # read one file of option chains and index it by quote date
    df = (
        pd
        .read_csv(fl)
        .set_index("date")
    )
    # parse the date strings into pandas Timestamps
    df.index = pd.to_datetime(df.index)
    return df

After you download the sample data, unzip it to the same directory where you’re writing the code.

files = glob.glob(os.path.join("rut-eod", "*.csv"))
for fl in files:
    chains = read_chains(fl)
    chains.option_expiration = pd.to_datetime(chains.option_expiration)
    underlyings = chains.symbol.unique()
    for underlying in underlyings:
        # store each underlying under its own symbol,
        # updating the symbol if it already exists
        df = chains[chains.symbol == underlying]
        adb_sym = f"options/{underlying}"
        adb_fcn = lib.update if lib.has_symbol(adb_sym) else lib.write
        adb_fcn(adb_sym, df)

It first retrieves a list of CSV files from the “rut-eod” directory and iterates through them.

For each file, it reads the options chains, converts the ‘option_expiration’ field to a datetime format, and then processes each unique underlying symbol found in these chains.

The code then decides whether to update an existing symbol or write a new one, based on whether the symbol already exists in the library.

The result is end-of-day historic options data with 2,370,886 quotes between 2006-07-28 and 2014-09-04.
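You can sanity check the load with a quick read. This is a minimal sketch, assuming the RUT data was stored as above:

rut = lib.read("options/RUT").data
print(len(rut))  # expect 2,370,886 rows
print(rut.index.min(), rut.index.max())  # expect 2006-07-28 to 2014-09-04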

Using the ArcticDB query builder

We’ll use the powerful QueryBuilder class to retrieve options data for a given as-of date and expiration date. We’ll also filter the data based on a range of delta values.

def read_vol_curve(as_of_date, underlying, expiry, delta_low, delta_high):
    q = adb.QueryBuilder()
    # match the expiry and keep deltas within the range on
    # both the call (positive) and put (negative) sides
    filter = (
        (q["option_expiration"] == expiry)
        & (
            ((q["delta"] >= delta_low) & (q["delta"] <= delta_high))
            | ((q["delta"] >= -delta_high) & (q["delta"] <= -delta_low))
        )
    )
    # average the implied volatility at each strike
    q = (
        q[filter]
        .groupby("strike")
        .agg({"iv": "mean"})
    )
    return lib.read(
        f"options/{underlying}", 
        date_range=(as_of_date, as_of_date),
        query_builder=q
    ).data

This function builds a query to filter options that match the expiry and that fall within the given delta range.

It then groups these options by their strike price and calculates the average implied volatility for each group.

Finally, the function returns the aggregated data for the specified underlying asset and as-of date.
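Here’s a quick usage sketch with the values we’ll set below:

curve = read_vol_curve(
    pd.Timestamp("2013-06-03"),  # as-of date
    "RUT",                       # underlying
    pd.Timestamp("2013-06-22"),  # expiration date
    0.05,                        # delta_low
    0.50,                        # delta_high
)
print(curve.head())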

Next, use the same filtering pattern to extract the expiration dates.

def query_expirations(as_of_date, underlying, dte=30):
    q = adb.QueryBuilder()
    # keep expirations more than dte days past the as-of date
    filter = (q["option_expiration"] > as_of_date + dt.timedelta(days=dte))
    # total the traded volume by expiration date
    q = q[filter].groupby("option_expiration").agg({"volume": "sum"})
    return (
        lib
        .read(
            f"options/{underlying}", 
            date_range=(as_of_date, as_of_date), 
            query_builder=q
        )
        .data
        .sort_index()
        .index
    )

This function retrieves the expiration dates with total trading volume for a given underlying asset. It filters options expiring more than dte days after the specified as_of_date, aggregates the trading volume by expiration date, and then sorts the dates.
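As a quick usage sketch with the same parameters:

expiries = query_expirations(pd.Timestamp("2013-06-03"), "RUT", dte=30)
print(list(expiries))  # sorted expiration dates more than 30 days out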

Chart the implied volatility curves

We’ll use the stored options data to create a chart of the implied volatility curves on a specified date. These curves are referred to as the implied volatility skew.

First, set some parameters.

as_of_date = pd.Timestamp("2013-06-03")
underlying = "RUT"
dte = 30
delta_low = 0.05
delta_high = 0.50

Then generate the chart.

expiries = query_expirations(as_of_date, underlying, dte)
_, ax = plt.subplots(1, 1)
cmap = plt.get_cmap("rainbow", len(expiries))
format_kw = {"linewidth": 0.5, "alpha": 0.85}
for i, expiry in enumerate(expiries):
    # fetch the average IV by strike for this expiration
    curve = read_vol_curve(
        as_of_date, 
        underlying, 
        expiry, 
        delta_low, 
        delta_high
    )
    # plot each expiration's curve in its own color
    (
        curve
        .sort_index()
        .plot(
            ax=ax, 
            y="iv", 
            label=expiry.strftime("%Y-%m-%d"),
            grid=True,
            color=cmap(i),
            **format_kw
        )
    )
ax.set_ylabel("implied volatility")
ax.legend(loc="upper right", framealpha=0.7)
plt.show()

The result is a chart of the implied volatility skew across the expiration dates on a given date.


The code first retrieves a list of expiration dates, then iterates over them, fetching the IV curve for each.

Each curve is plotted on a subplot with a unique color from a rainbow colormap, labeled with its expiration date, and formatted according to specified parameters. The plot is finalized with labels for implied volatility on the y-axis and a legend indicating each expiration date’s curve.

Next steps

There are two steps you can take to further the analysis. First, read the documentation for the ArcticDB QueryBuilder, which is an important feature of the library. Then, recreate this example and apply it to other options data to further your analysis.