{ "cells": [ { "cell_type": "markdown", "id": "03c87582", "metadata": {}, "source": [ "# DES Hub Netbacks - Percentage of Terminals \"In the Money\"\n", "Determining how many terminals are considered \"in the money\", as a percentage of the all the major LNG European terminals on Spark Access. Useful as a signal for European LNG flows." ] }, { "cell_type": "markdown", "id": "21bd6181", "metadata": {}, "source": [ "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__\n" ] }, { "cell_type": "markdown", "id": "0037c795", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "This section can remain unchanged for most Spark API users." ] }, { "cell_type": "code", "execution_count": 31, "id": "5cbcba1b", "metadata": {}, "outputs": [], "source": [ "# import libraries for callin the API\n", "import json\n", "import os\n", "import sys\n", "import pandas as pd\n", "from base64 import b64encode\n", "from urllib.parse import urljoin\n", "from pprint import pprint\n", "import requests\n", "from io import StringIO\n", "import time\n", "import numpy as np\n", "import datetime\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")" ] }, { "cell_type": "code", "execution_count": 32, "id": "b7442d2b", "metadata": {}, "outputs": [], "source": [ "# defining query functions \n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like to be a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", " print(\n", " \">>>> Client_id={}****, client_secret={}****\".format(\n", " client_id[:5], client_secret[:5]\n", " )\n", " )\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " \"\"\"\n", " OAuth2 authentication requires a POST request with client credentials before accessing the API. 
\n", " This POST request will return an Access Token which will be used for the API GET request.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. Raise an error if this is not the case\n", " assert response.status == 201, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def do_api_get_query(uri, access_token):\n", " \"\"\"\n", " After receiving an Access Token, we can request information from the API.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"accept\": \"application/json\",\n", " }\n", "\n", " print(f\"Fetching {url}\")\n", "\n", " # HTTP GET request\n", " req = request.Request(url, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. Raise an error if this is not the case\n", " assert response.status == 200, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def get_access_token(client_id, client_secret):\n", " \"\"\"\n", " Get a new access_token. Access tokens are the thing that applications use to make\n", " API requests. Access tokens must be kept confidential in storage.\n", "\n", " # Procedure:\n", "\n", " Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n", " HTTP header is required. The \"Basic\" HTTP authentication scheme is defined in\n", " RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n", " using base64.\n", " \"\"\"\n", "\n", " # Note: for the sake of this example, we choose to use the Python urllib from the\n", " # standard lib. One should consider using https://requests.readthedocs.io/\n", "\n", " payload = \"{}:{}\".format(client_id, client_secret).encode()\n", " headers = {\n", " \"Authorization\": b64encode(payload).decode(),\n", " \"Accept\": \"application/json\",\n", " \"Content-Type\": \"application/json\",\n", " }\n", " body = {\n", " \"grantType\": \"clientCredentials\",\n", " \"scopes\": \"read:access,read:prices\"\n", " }\n", "\n", " content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n", "\n", " print(\n", " \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n", " content[\"accessToken\"][:5], content[\"expiresIn\"]\n", " )\n", " )\n", "\n", " return content[\"accessToken\"]" ] }, { "cell_type": "markdown", "id": "9b4b250b", "metadata": {}, "source": [ "## N.B. Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. 
{ "cell_type": "markdown", "id": "9b4b250b", "metadata": {}, "source": [ "## N.B. Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. Please refer to the API documentation if you have not downloaded it already.\n", "\n", "The code then prints confirmation that your credentials were found and that a valid access token was fetched." ] }, { "cell_type": "code", "execution_count": null, "id": "3ec2647c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ">>>> Found credentials!\n", ">>>> Client_id=01c23****, client_secret=80763****\n", ">>>> Successfully fetched an access token eyJhb****, valid 604799 seconds.\n" ] } ], "source": [ "# Insert file path to your client credentials here\n", "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n", "\n", "# Authenticate:\n", "access_token = get_access_token(client_id, client_secret)" ] }, { "cell_type": "markdown", "id": "e0328738", "metadata": {}, "source": [ "## 2. DES Hub Netbacks\n", "\n", "Calling the DES Hub Netbacks data and sorting it into a historical DataFrame." ] }, { "cell_type": "code", "execution_count": 34, "id": "35759cdd", "metadata": {}, "outputs": [], "source": [ "## Defining the function to import the data\n", "\n", "def fetch_deshub_releases(access_token, unit, limit=None, offset=None, terminal=None):\n", "\n", "    query_params = \"?unit={}\".format(unit)\n", "    if limit is not None:\n", "        query_params += \"&limit={}\".format(limit)\n", "    if offset is not None:\n", "        query_params += \"&offset={}\".format(offset)\n", "    if terminal is not None:\n", "        query_params += \"&terminal={}\".format(terminal)\n", "\n", "    content = do_api_get_query(\n", "        uri=\"/beta/access/des-hub-netbacks/{}\".format(query_params), access_token=access_token\n", "    )\n", "\n", "    return content" ] }, { "cell_type": "code", "execution_count": null, "id": "d490f453", "metadata": {}, "outputs": [], "source": [ "# Sorting the JSON into a Pandas DataFrame\n", "\n", "def deshub_organise_dataframe(data):\n", "    \"\"\"\n", "    This function sorts the API content into a DataFrame. The columns available are Release Date, Terminal, Month Index,\n", "    Delivery Month, the DES Hub netbacks (TTF basis and outright) and the individual regasification cost components.\n", "    Essentially, this function parses the Access data using the Release Date, Terminal and Month Index fields as reference.\n", "    \"\"\"\n", "    # create columns\n", "    data_dict = {\n", "        'Release Date': [],\n", "        'Terminal': [],\n", "        'Month Index': [],\n", "        'Delivery Month': [],\n", "        'DES Hub Netback - TTF Basis': [],\n", "        'DES Hub Netback - Outright': [],\n", "        'Total Regas': [],\n", "        'Basic Slot (Berth)': [],\n", "        'Basic Slot (Unload/Stor/Regas)': [],\n", "        'Basic Slot (B/U/S/R)': [],\n", "        'Additional Storage': [],\n", "        'Additional Sendout': [],\n", "        'Gas in Kind': [],\n", "        'Entry Capacity': [],\n", "        'Commodity Charge': []\n", "    }\n", "\n", "    # loop over each release/terminal entry\n", "    for l in data['data']:\n", "\n", "        # assigning values to each column\n", "        data_dict['Release Date'].append(l[\"releaseDate\"])\n", "        data_dict['Terminal'].append(data['metaData']['terminals'][l['terminalUuid']])\n", "        data_dict['Month Index'].append(l['monthIndex'])\n", "        data_dict['Delivery Month'].append(l['deliveryMonth'])\n", "\n", "        data_dict['DES Hub Netback - TTF Basis'].append(float(l['netbackTtfBasis']))\n", "        data_dict['DES Hub Netback - Outright'].append(float(l['netbackOutright']))\n", "        data_dict['Total Regas'].append(float(l['totalRegasificationCost']))\n", "        data_dict['Basic Slot (Berth)'].append(float(l['slotBerth']))\n", "        data_dict['Basic Slot (Unload/Stor/Regas)'].append(float(l['slotUnloadStorageRegas']))\n", "        data_dict['Basic Slot (B/U/S/R)'].append(float(l['slotBerthUnloadStorageRegas']))\n", "        data_dict['Additional Storage'].append(float(l['additionalStorage']))\n", "        data_dict['Additional Sendout'].append(float(l['additionalSendout']))\n", "        data_dict['Gas in Kind'].append(float(l['gasInKind']))\n", "        data_dict['Entry Capacity'].append(float(l['entryCapacity']))\n", "        data_dict['Commodity Charge'].append(float(l['commodityCharge']))\n", "\n", "    # convert into dataframe\n", "    df = pd.DataFrame(data_dict)\n", "\n", "    df['Delivery Month'] = pd.to_datetime(df['Delivery Month'])\n", "    df['Release Date'] = pd.to_datetime(df['Release Date'])\n", "\n", "    # defining \"Variable Regas Costs only\" - here, we treat slot costs as the only fixed regas cost component,\n", "    # so all three slot cost columns are added back onto the TTF-basis netback\n", "    df['DES Hub Netback - TTF Basis - Var Regas Costs Only'] = df['DES Hub Netback - TTF Basis'] \\\n", "        + df['Basic Slot (Berth)'] \\\n", "        + df['Basic Slot (Unload/Stor/Regas)'] \\\n", "        + df['Basic Slot (B/U/S/R)']\n", "\n", "    return df" ] }, { "cell_type": "markdown", "id": "4aa81e2c", "metadata": {}, "source": [ "### Historical Data Function\n", "\n", "Currently, a maximum of 30 historical datasets can be called at one time due to the size of the data file.\n", "\n", "If more data points are required, the code below can be used. It calls 30 historical datasets at a time, but utilises the 'offset' parameter to call datasets further back in the historical database. To call more history, increase the 'n_offset' parameter in the first line of the code: it sets the number of historical data requests to be executed. A minimal two-call sketch of this offset pagination is shown first, followed by the full loop." ] },
] }, { "cell_type": "code", "execution_count": 36, "id": "9fae6c01", "metadata": {}, "outputs": [], "source": [ "def loop_historical_data(token,n_offset):\n", " # initalise first set of historical data and initialising dataframe\n", " historical = fetch_deshub_releases(access_token, unit='usd-per-mmbtu', limit=30)\n", " hist_df = deshub_organise_dataframe(historical)\n", " terminal_list = list(historical['metaData']['terminals'].values())\n", "\n", " # Looping through earlier historical data and adding to the historical dataframe\n", " for i in range(1,n_offset+1):\n", " historical = fetch_deshub_releases(access_token, unit='usd-per-mmbtu', limit=30, offset=i*30)\n", " hist_df = pd.concat([hist_df,deshub_organise_dataframe(historical)])\n", "\n", " return hist_df, terminal_list" ] }, { "cell_type": "code", "execution_count": 37, "id": "8fcf6066", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=30\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=60\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=90\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=120\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=150\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=180\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=210\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=240\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=270\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=300\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=330\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=360\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=390\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=420\n", "Fetching https://api.sparkcommodities.com/beta/access/des-hub-netbacks/?unit=usd-per-mmbtu&limit=30&offset=450\n" ] } ], "source": [ "loops = 15\n", "hdf, full_terms = loop_historical_data(access_token,loops)" ] }, { "cell_type": "markdown", "id": "66b16cd0", "metadata": {}, "source": [ "# 3. 
{ "cell_type": "markdown", "id": "66b16cd0", "metadata": {}, "source": [ "## 3. SparkNWE & SparkSWE - Data Import\n", "\n", "Calling the SparkNWE & SparkSWE front month prices (in TTF basis format) to compare against the terminal DES Hub netbacks and determine which terminals are \"in the money\"." ] }, { "cell_type": "code", "execution_count": 38, "id": "1c4262ca", "metadata": {}, "outputs": [], "source": [ "# Calling contracts endpoint to import cargo data\n", "def fetch_cargo_releases(access_token, ticker, limit=4, offset=None):\n", "\n", "    print(\">>>> Get price releases for {}\".format(ticker))\n", "\n", "    query_params = \"?limit={}\".format(limit)\n", "    if offset is not None:\n", "        query_params += \"&offset={}\".format(offset)\n", "\n", "    content = do_api_get_query(\n", "        uri=\"/v1.0/contracts/{}/price-releases/{}\".format(ticker, query_params),\n", "        access_token=access_token,\n", "    )\n", "\n", "    return content['data']\n", "\n", "# Function to import data and then sort it into a DataFrame\n", "def cargo_to_dataframe(access_token, ticker, limit, month):\n", "\n", "    # imports front month or forward curve prices, depending on the \"month\" user input\n", "    if month == 'M+1':\n", "        full_tick = ticker + '-b-f'\n", "    else:\n", "        full_tick = ticker + '-b-fo'\n", "    hist_data = fetch_cargo_releases(access_token, full_tick, limit)\n", "\n", "    # parse the month number from the 'M+n' input (handles two-digit months such as 'M+10' and 'M+11')\n", "    mi = int(month.split('+')[1]) - 2\n", "\n", "    release_dates = []\n", "    period_start = []\n", "    tickers = []\n", "    spark = []\n", "\n", "    # iterating through historical data points to fetch the relevant data\n", "    for release in hist_data:\n", "        release_dates.append(release[\"releaseDate\"])\n", "        tickers.append(release['contractId'])\n", "\n", "        data_point = release['data'][0]['dataPoints'][mi]\n", "\n", "        period_start.append(data_point[\"deliveryPeriod\"][\"startAt\"])\n", "        spark.append(data_point['derivedPrices']['usdPerMMBtu']['spark'])\n", "\n", "    # Converting into a DataFrame\n", "    hist_df = pd.DataFrame({\n", "        'Release Date': release_dates,\n", "        'ticker': tickers,\n", "        'Period Start': period_start,\n", "        'Price': spark,\n", "    })\n", "\n", "    hist_df['Price'] = pd.to_numeric(hist_df['Price'])\n", "    hist_df['Release Date'] = pd.to_datetime(hist_df['Release Date'])\n", "    hist_df['Release Date'] = hist_df['Release Date'].dt.tz_localize(None)\n", "\n", "    return hist_df" ] },
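{ "cell_type": "markdown", "id": "e8a4c2d6", "metadata": {}, "source": [ "A quick sketch (with a hypothetical month value) of how the \"month\" input maps to a ticker suffix and a forward-curve index in the function above: \"M+1\" uses the front-month contract ('-b-f'), while later months use the forward-curve contract ('-b-fo') and index into its data points." ] }, { "cell_type": "code", "execution_count": null, "id": "f2b8d4e0", "metadata": {}, "outputs": [], "source": [ "# Sketch only: hypothetical month value, not used elsewhere in this notebook\n", "demo_month = 'M+3'\n", "demo_tick = 'sparknwe' + ('-b-f' if demo_month == 'M+1' else '-b-fo')  # -> 'sparknwe-b-fo'\n", "demo_mi = int(demo_month.split('+')[1]) - 2  # -> 1, i.e. the second forward-curve point\n", "print(demo_tick, demo_mi)" ] },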
Analysis\n", "\n", "- Input contract month required\n", "- Subtracting the SparkNWE/SWE prices from the DES Hub Netbacks to determine whether terminals are \"in\" or \"out of the money\".\n", " - This will be labelled as \"WTP\", or the \"Willingness to Pay\" metric\n", "- Calculating the percentage of terminals in/out of the money historically\n", "- Plotting this fractional WTP historical evolution over time" ] }, { "cell_type": "markdown", "id": "86be1584", "metadata": {}, "source": [ "### Inputs" ] }, { "cell_type": "code", "execution_count": 39, "id": "1d15ea9c", "metadata": {}, "outputs": [], "source": [ "# Choose which forward month you'd like to analyse - either front month (\"M+1\") or any other month up until M+11\n", "month = 'M+1'\n", "\n", "# Here we define which terminals we want to use in the analytics. The default is all terminals, but you can choose a subset if preferred (as demonstrated in comment below)\n", "terms = full_terms.copy()\n", "#terms = ['gate', 'dunkerque', 'zeebrugge']" ] }, { "cell_type": "markdown", "id": "ec85e243", "metadata": {}, "source": [ "### Data Calling & Analytical Procedures" ] }, { "cell_type": "code", "execution_count": 40, "id": "6fb701b6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ">>>> Get price releases for sparknwe-b-f\n", "Fetching https://api.sparkcommodities.com/v1.0/contracts/sparknwe-b-f/price-releases/?limit=450\n", ">>>> Get price releases for sparkswe-b-f\n", "Fetching https://api.sparkcommodities.com/v1.0/contracts/sparkswe-b-f/price-releases/?limit=450\n" ] } ], "source": [ "# Import NWE/SWE LNG prices\n", "sparknwe = cargo_to_dataframe(access_token, 'sparknwe', loops*30, month=month)\n", "sparkswe = cargo_to_dataframe(access_token, 'sparkswe', loops*30, month=month)\n", "\n", "# retrieve the same amount of historical data for SparkSWE as SparkNWE \n", "sparkswe = sparkswe[sparkswe['Release Date'] >= sparknwe['Release Date'].iloc[-1]].copy()\n", "\n", "# Combine datasets and backfill SWE data as needed (due to reduced assessment frequency)\n", "cargo_df = pd.merge(sparknwe, sparkswe, how='left', on='Release Date')\n", "cargo_df['Price_y'] = cargo_df['Price_y'].bfill().copy()\n", "\n", "cargo_df = cargo_df[['Release Date', 'Price_x', 'Price_y']].copy()\n", "cargo_df = cargo_df.rename(columns={'Price_x': 'SparkNWE',\n", " 'Price_y': 'SparkSWE'})" ] }, { "cell_type": "code", "execution_count": 41, "id": "7f44cd95", "metadata": {}, "outputs": [], "source": [ "# Defining which terminals belong to NWE or SWE, so that the relevant DES LNG price can be subtracted to calculate the terminals' WTP\n", "terminal_region_dict = {\n", " 'gate': 'nwe',\n", " 'grain-lng': 'nwe',\n", " 'zeebrugge': 'nwe',\n", " 'south-hook': 'nwe',\n", " 'dunkerque': 'nwe',\n", " 'le-havre': 'nwe',\n", " 'montoir': 'nwe',\n", " 'eems-energy-terminal': 'nwe',\n", " 'brunsbuttel': 'nwe',\n", " 'deutsche-ostsee': 'nwe',\n", " 'wilhelmshaven': 'nwe',\n", " 'wilhelmshaven-2': 'nwe',\n", " 'stade': 'nwe',\n", " 'fos-cavaou': 'swe',\n", " 'adriatic': 'swe',\n", " 'olt-toscana': 'swe',\n", " 'piombino': 'swe',\n", " 'ravenna': 'swe',\n", " 'tvb': 'swe'\n", "}" ] }, { "cell_type": "code", "execution_count": 42, "id": "62688b87", "metadata": {}, "outputs": [], "source": [ "# Initialising the \"month\" dataframe, which uses the \"month\" user input to create a DataFrame with all the relevant DES Hub netbacks data for that month for each terminal\n", "# the Gate DES Hub Netbacks data is used to set the \"Release Date\" and \"Delivery 
Month\" columns as Gate has the longest historical dataset\n", "# Here, we also use the \"Variable Regas Costs only\" Netbacks, which considers slot costs sunk (defined in the \"deshub_organise_dataframe\" function)\n", "month_df = hdf[(hdf['Terminal'] == 'gate') & (hdf['Month Index'] == month)][['Release Date', 'Delivery Month', 'DES Hub Netback - TTF Basis - Var Regas Costs Only']]\n", "month_df = month_df.rename(columns={'DES Hub Netback - TTF Basis - Var Regas Costs Only':'gate'})\n", "\n", "# defining a new list of terminals without \"gate\" in it\n", "terms2 = [x if x != 'gate' else None for x in terms]\n", "\n", "# iterating through list of terminals and adding data to the Terminal WTP dataframe\n", "for t in terms2:\n", " if t is not None:\n", " tdf = hdf[(hdf['Terminal'] == t) & (hdf['Month Index'] == month)][['Release Date', 'DES Hub Netback - TTF Basis - Var Regas Costs Only']]\n", " month_df = month_df.merge(tdf, on='Release Date', how='left')\n", " month_df = month_df.rename(columns={'DES Hub Netback - TTF Basis - Var Regas Costs Only':t})\n", "\n", "# Calculating the Average, Minimum and Maximum WTP values for all terminals (i.e. for Europe)\n", "month_df['Ave'] = month_df[terms].mean(axis=1)\n", "month_df['Min'] = month_df[terms].min(axis=1)\n", "month_df['Max'] = month_df[terms].max(axis=1)\n", "\n", "# Merging Cargo prices with the DataFrame, and backfilling data as needed so that the datasets are the same length (needed due to differing price release frequency)\n", "month_df = month_df.merge(cargo_df, how='left', on='Release Date')\n", "month_df['SparkNWE'] = month_df['SparkNWE'].bfill().copy()\n", "month_df['SparkSWE'] = month_df['SparkSWE'].bfill().copy()" ] }, { "cell_type": "code", "execution_count": 43, "id": "26c07bca", "metadata": {}, "outputs": [], "source": [ "# Creating a WTP dataframe, subtracting NWE/SWE prices from each terminals' DES Hub netbacks data.\n", "# The use of NWE or SWE prices for each terminal is set by the \"terminal_region_dict\" defined earlier in the script \n", "wtp_df = month_df[['Release Date', 'Delivery Month', 'SparkNWE', 'SparkSWE']].copy()\n", "\n", "for t in terms:\n", " if terminal_region_dict[t] == 'nwe':\n", " wtp_df[t] = month_df[t].copy() - month_df['SparkNWE'].copy()\n", " elif terminal_region_dict[t] == 'swe':\n", " wtp_df[t] = month_df[t].copy() - month_df['SparkSWE'].copy()\n", " else:\n", " wtp_df[t] = month_df[t].copy() - month_df['SparkNWE'].copy()" ] }, { "cell_type": "code", "execution_count": 44, "id": "f0fc11fc", "metadata": {}, "outputs": [], "source": [ "# Calculating the Average, Min and Max WTP metric over all terminals\n", "wtp_df['Ave'] = wtp_df[terms].mean(axis=1)\n", "wtp_df['Min'] = wtp_df[terms].min(axis=1)\n", "wtp_df['Max'] = wtp_df[terms].max(axis=1)\n", " \n", "# Calculate fraction of terminals in the money\n", "wtp_df['Fraction'] = wtp_df[terms].gt(0).sum(axis=1)/(len(terms)-wtp_df.isna().sum(axis=1))" ] }, { "cell_type": "code", "execution_count": 45, "id": "adab54c5", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", " | Release Date | \n", "Delivery Month | \n", "SparkNWE | \n", "SparkSWE | \n", "adriatic | \n", "brunsbuttel | \n", "deutsche-ostsee | \n", "dunkerque | \n", "eems-energy-terminal | \n", "fos-cavaou | \n", "... | \n", "south-hook | \n", "stade | \n", "tvb | \n", "wilhelmshaven | \n", "wilhelmshaven-2 | \n", "zeebrugge | \n", "Ave | \n", "Min | \n", "Max | \n", "Fraction | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | \n", "2025-06-13 | \n", "2025-07-01 | \n", "-0.460 | \n", "-0.440 | \n", "0.135 | \n", "1.350 | \n", "0.676 | \n", "0.684 | \n", "0.799 | \n", "-0.263 | \n", "... | \n", "-0.722 | \n", "1.376 | \n", "-0.011 | \n", "1.350 | \n", "1.275 | \n", "0.258 | \n", "0.666526 | \n", "-1.038 | \n", "2.332 | \n", "0.736842 | \n", "
1 | \n", "2025-06-12 | \n", "2025-07-01 | \n", "-0.465 | \n", "-0.440 | \n", "0.139 | \n", "1.381 | \n", "0.714 | \n", "0.691 | \n", "0.809 | \n", "-0.263 | \n", "... | \n", "-0.816 | \n", "1.405 | \n", "-0.013 | \n", "1.381 | \n", "1.309 | \n", "0.262 | \n", "0.670789 | \n", "-1.121 | \n", "2.343 | \n", "0.736842 | \n", "
2 | \n", "2025-06-11 | \n", "2025-07-01 | \n", "-0.440 | \n", "-0.435 | \n", "0.137 | \n", "1.342 | \n", "0.687 | \n", "0.627 | \n", "0.779 | \n", "-0.299 | \n", "... | \n", "-0.784 | \n", "1.365 | \n", "-0.022 | \n", "1.342 | \n", "1.271 | \n", "0.227 | \n", "0.645737 | \n", "-1.083 | \n", "2.322 | \n", "0.736842 | \n", "
3 | \n", "2025-06-10 | \n", "2025-07-01 | \n", "-0.455 | \n", "-0.435 | \n", "0.125 | \n", "1.339 | \n", "0.698 | \n", "0.631 | \n", "0.796 | \n", "-0.305 | \n", "... | \n", "-0.785 | \n", "1.361 | \n", "-0.033 | \n", "1.339 | \n", "1.271 | \n", "0.306 | \n", "0.649316 | \n", "-1.075 | \n", "2.300 | \n", "0.736842 | \n", "
4 | \n", "2025-06-09 | \n", "2025-07-01 | \n", "-0.450 | \n", "-0.460 | \n", "0.065 | \n", "1.322 | \n", "0.677 | \n", "0.634 | \n", "0.788 | \n", "-0.271 | \n", "... | \n", "-0.806 | \n", "1.345 | \n", "-0.021 | \n", "1.322 | \n", "1.252 | \n", "0.261 | \n", "0.628316 | \n", "-1.099 | \n", "2.235 | \n", "0.736842 | \n", "
5 rows × 27 columns
\n", "