{ "cells": [ { "cell_type": "markdown", "id": "03c87582", "metadata": {}, "source": [ "# DES Hub Netbacks - Country Competitiveness Comparison\n", "\n", "This script is used to compare the regas terminal competitiveness, averaged by country, across Europe\n", "\n", "For a full explanation of how to import our DES Hub Netbacks data, please refer to our Python Jupyter Notebook Code Samples:\n", "\n", "https://www.sparkcommodities.com/api/code-examples/jupyter.html" ] }, { "cell_type": "markdown", "id": "21bd6181", "metadata": {}, "source": [ "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__\n" ] }, { "cell_type": "markdown", "id": "0037c795", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "This section can remain unchanged for most Spark API users." 
] }, { "cell_type": "code", "execution_count": 120, "id": "5cbcba1b", "metadata": {}, "outputs": [], "source": [ "# import libraries for callin the API\n", "import json\n", "import os\n", "import sys\n", "import pandas as pd\n", "from base64 import b64encode\n", "from urllib.parse import urljoin\n", "from pprint import pprint\n", "import requests\n", "from io import StringIO\n", "import time\n", "import numpy as np\n", "import datetime\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")" ] }, { "cell_type": "code", "execution_count": 121, "id": "b7442d2b", "metadata": {}, "outputs": [], "source": [ "# defining query functions \n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like to be a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", " print(\n", " \">>>> 
Client_id={}****, client_secret={}****\".format(\n", " client_id[:5], client_secret[:5]\n", " )\n", " )\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " \"\"\"\n", " OAuth2 authentication requires a POST request with client credentials before accessing the API. \n", " This POST request will return an Access Token which will be used for the API GET request.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. Raise an error if this is not the case\n", " assert response.status == 201, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def do_api_get_query(uri, access_token, format='json'):\n", " \"\"\"\n", " After receiving an Access Token, we can request information from the API.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", " print(url)\n", "\n", " if format == 'json':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"application/json\",\n", " }\n", " elif format == 'csv':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"text/csv\"\n", " }\n", "\n", " # HTTP GET request\n", " req = request.Request(url, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", " status = response.status\n", " print(status)\n", "\n", " # The server must return HTTP 200. 
Raise an error if this is not the case\n", " assert response.status == 200, resp_content\n", "\n", " # Storing response based on requested format\n", " if format == 'json':\n", " content = json.loads(resp_content)\n", " elif format == 'csv':\n", " content = resp_content\n", "\n", " return content\n", "\n", "def get_access_token(client_id, client_secret):\n", " \"\"\"\n", " Get a new access_token. Access tokens are the thing that applications use to make\n", " API requests. Access tokens must be kept confidential in storage.\n", "\n", " # Procedure:\n", "\n", " Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n", " HTTP header is required. The \"Basic\" HTTP authentication scheme is defined in\n", " RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n", " using base64.\n", " \"\"\"\n", "\n", " # Note: for the sake of this example, we choose to use the Python urllib from the\n", " # standard lib. One should consider using https://requests.readthedocs.io/\n", "\n", " payload = \"{}:{}\".format(client_id, client_secret).encode()\n", " headers = {\n", " \"Authorization\": b64encode(payload).decode(),\n", " \"Accept\": \"application/json\",\n", " \"Content-Type\": \"application/json\",\n", " }\n", " body = {\n", " \"grantType\": \"clientCredentials\",\n", " \"scopes\": \"read:access,read:prices\"\n", " }\n", "\n", " content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n", "\n", " print(\n", " \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n", " content[\"accessToken\"][:5], content[\"expiresIn\"]\n", " )\n", " )\n", "\n", " return content[\"accessToken\"]" ] }, { "cell_type": "markdown", "id": "9b4b250b", "metadata": {}, "source": [ "## N.B. Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. 
Please refer to the API documentation if you have not downloaded them already." ] }, { "cell_type": "code", "execution_count": null, "id": "3ec2647c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ">>>> Found credentials!\n", ">>>> Client_id=01c23****, client_secret=80763****\n", ">>>> Successfully fetched an access token eyJhb****, valid 604799 seconds.\n" ] } ], "source": [ "# Insert file path to your client credentials here\n", "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n", "\n", "# Authenticate:\n", "access_token = get_access_token(client_id, client_secret)" ] }, { "cell_type": "markdown", "id": "e0328738", "metadata": {}, "source": [ "## 2. DES Hub Netbacks - Data Import\n", "\n", "Calling the DES Hub Netbacks data and sorting it into a historical DataFrame" ] }, { "cell_type": "code", "execution_count": 123, "id": "35759cdd", "metadata": {}, "outputs": [], "source": [ "from io import StringIO\n", "\n", "## Defining the function\n", "\n", "def fetch_historical_price_releases(access_token, unit, latest=False, start=None, end=None, limit=None, offset=None, terminal_uuid=None, format='json'):\n", " \n", " # if latest=True, set query_params to call the \".../latest/\" endpoint\n", " if latest:\n", " query_params = 'latest/'\n", " query_params += \"?unit={}\".format(unit)\n", " \n", " else:\n", " query_params = \"?unit={}\".format(unit)\n", "\n", " if start is not None:\n", " query_params += \"&start={}\".format(start)\n", " if end is not None:\n", " query_params += \"&end={}\".format(end)\n", " \n", " if terminal_uuid is not None:\n", " query_params += \"&terminal-uuid={}\".format(terminal_uuid)\n", " \n", "\n", " uri = \"/v1.0/lng/access/des-hub-netbacks/{}\".format(query_params)\n", "\n", " content = do_api_get_query(\n", " uri=uri, access_token=access_token, format=format\n", " )\n", "\n", " if format == 'json':\n", " data = content\n", " elif format == 'csv':\n", " # if 
there's no data to show, returns raw response (empty string) and \"No Data to Show\" message\n", " if len(content) == 0:\n", " data = content\n", " print('No Data to Show')\n", " else:\n", " data = content.decode('utf-8')\n", " data = pd.read_csv(StringIO(data)) # automatically converting into a Pandas DataFrame when choosing CSV format\n", "\n", " return data\n" ] }, { "cell_type": "markdown", "id": "4aa81e2c", "metadata": {}, "source": [ "### Historical Data Function\n", "\n", "Currently, a maximum of 1 year's worth of historical data can be called at one time due to the size of the data file. \n", "\n", "If more data points are required, the code below can be used: it calls the historical data one year at a time and combines the results into a single Pandas DataFrame" ] }, { "cell_type": "code", "execution_count": 124, "id": "9fae6c01", "metadata": {}, "outputs": [], "source": [ "import datetime\n", "\n", "def loop_historical_data(token, hist_unit, hist_start, hist_end):\n", " \n", " hist_diff = (datetime.datetime.strptime(hist_end, '%Y-%m-%d') - datetime.datetime.strptime(hist_start, '%Y-%m-%d')).days\n", " t = 0 # days covered so far\n", "\n", " starts = []\n", " ends = []\n", "\n", " w = 365 # window length in days (one API call per year)\n", "\n", " while t < hist_diff:\n", " if t == 0:\n", " diff_end = datetime.datetime.strftime(datetime.datetime.strptime(hist_start, '%Y-%m-%d') + pd.Timedelta(w, unit='days'), '%Y-%m-%d')\n", " hist_df = fetch_historical_price_releases(access_token, unit=hist_unit, start=hist_start, end=diff_end, format='csv')\n", "\n", " starts.append(hist_start)\n", " ends.append(diff_end)\n", "\n", " else:\n", " if t < hist_diff-w:\n", " diff_start = datetime.datetime.strftime(datetime.datetime.strptime(hist_start, '%Y-%m-%d') + pd.Timedelta(t+1, unit='days'), '%Y-%m-%d')\n", " diff_end = datetime.datetime.strftime(datetime.datetime.strptime(diff_start, '%Y-%m-%d') + pd.Timedelta(w, unit='days'), '%Y-%m-%d')\n", " historical_addition = fetch_historical_price_releases(access_token, unit=hist_unit, 
start=diff_start, end=diff_end, format='csv')\n", " try:\n", " hist_df = pd.concat([hist_df,historical_addition])\n", " # hist_df is missing or not a DataFrame if earlier calls returned no data\n", " except (NameError, TypeError):\n", " hist_df = historical_addition.copy()\n", " starts.append(hist_start)\n", " ends.append(diff_end)\n", " else:\n", " diff_start = datetime.datetime.strftime(datetime.datetime.strptime(hist_start, '%Y-%m-%d') + pd.Timedelta(t+1, unit='days'), '%Y-%m-%d')\n", " diff_end = datetime.datetime.strftime(datetime.datetime.strptime(diff_start, '%Y-%m-%d') + pd.Timedelta(hist_diff-t, unit='days'), '%Y-%m-%d')\n", " historical_addition = fetch_historical_price_releases(access_token, unit=hist_unit, start=diff_start, end=diff_end, format='csv')\n", " hist_df = pd.concat([hist_df, historical_addition])\n", " starts.append(hist_start)\n", " ends.append(diff_end)\n", " \n", " # advance the window by one year\n", " t += w\n", "\n", " for c in list(hist_df.columns)[8:]:\n", " hist_df[c] = pd.to_numeric(hist_df[c])\n", " hist_df['ReleaseDate'] = pd.to_datetime(hist_df['ReleaseDate'])\n", "\n", " return hist_df" ] }, { "cell_type": "code", "execution_count": 125, "id": "8fcf6066", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "https://api.sparkcommodities.com/v1.0/lng/access/des-hub-netbacks/?unit=usd-per-mmbtu&start=2024-01-01&end=2024-12-31\n", "200\n", "https://api.sparkcommodities.com/v1.0/lng/access/des-hub-netbacks/?unit=usd-per-mmbtu&start=2025-01-01&end=2026-01-01\n", "200\n", "https://api.sparkcommodities.com/v1.0/lng/access/des-hub-netbacks/?unit=usd-per-mmbtu&start=2026-01-01&end=2026-01-11\n", "200\n" ] }, { "data": { "text/html": [ "
| \n", " | ReleaseDate | \n", "DeliveryMonth | \n", "DeliveryMonthName | \n", "DeliveryMonthIndex | \n", "TerminalUUID | \n", "TerminalName | \n", "GasHubPriceSourceFrontMonth | \n", "GasHubPriceSourceForwardCurve | \n", "NetbackTTFBasis | \n", "NetbackOutright | \n", "... | \n", "SlotBerth | \n", "SlotBerthUnloadStorageRegas | \n", "SlotLtCapacityEstimate | \n", "AdditionalStorage | \n", "AdditionalSendout | \n", "FuelGasLossesGasInKind | \n", "EntryCapacity | \n", "Emissions | \n", "Power | \n", "EntryVariable | \n", "
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | \n", "2024-12-31 | \n", "2025-02-01 | \n", "Feb25 | \n", "M+1 | \n", "00317185-978a-4df5-970c-2c28d3ab893c | \n", "Isle of Grain | \n", "m | \n", "ukd | \n", "-0.871 | \n", "13.966 | \n", "... | \n", "NaN | \n", "NaN | \n", "0.305 | \n", "NaN | \n", "NaN | \n", "0.231 | \n", "0.48 | \n", "0.036 | \n", "0.228 | \n", "0.028 | \n", "
| 1 | \n", "2024-12-31 | \n", "2025-03-01 | \n", "Mar25 | \n", "M+2 | \n", "00317185-978a-4df5-970c-2c28d3ab893c | \n", "Isle of Grain | \n", "m | \n", "ukd | \n", "-1.139 | \n", "13.765 | \n", "... | \n", "NaN | \n", "NaN | \n", "0.305 | \n", "NaN | \n", "NaN | \n", "0.228 | \n", "0.48 | \n", "0.036 | \n", "0.224 | \n", "0.028 | \n", "
| 2 | \n", "2024-12-31 | \n", "2025-04-01 | \n", "Apr25 | \n", "M+3 | \n", "00317185-978a-4df5-970c-2c28d3ab893c | \n", "Isle of Grain | \n", "m | \n", "ukd | \n", "-1.196 | \n", "13.648 | \n", "... | \n", "NaN | \n", "NaN | \n", "0.305 | \n", "NaN | \n", "NaN | \n", "0.226 | \n", "0.48 | \n", "0.036 | \n", "0.195 | \n", "0.028 | \n", "
| 3 | \n", "2024-12-31 | \n", "2025-05-01 | \n", "May25 | \n", "M+4 | \n", "00317185-978a-4df5-970c-2c28d3ab893c | \n", "Isle of Grain | \n", "m | \n", "ukd | \n", "-1.262 | \n", "13.536 | \n", "... | \n", "NaN | \n", "NaN | \n", "0.305 | \n", "NaN | \n", "NaN | \n", "0.224 | \n", "0.48 | \n", "0.036 | \n", "0.226 | \n", "0.028 | \n", "
| 4 | \n", "2024-12-31 | \n", "2025-06-01 | \n", "Jun25 | \n", "M+5 | \n", "00317185-978a-4df5-970c-2c28d3ab893c | \n", "Isle of Grain | \n", "m | \n", "ukd | \n", "-1.436 | \n", "13.353 | \n", "... | \n", "NaN | \n", "NaN | \n", "0.305 | \n", "NaN | \n", "NaN | \n", "0.223 | \n", "0.48 | \n", "0.036 | \n", "0.300 | \n", "0.028 | \n", "
5 rows × 22 columns
\n", "| \n", " | Release Date | \n", "SparkNWE | \n", "SparkSWE | \n", "
|---|---|---|---|
| 0 | \n", "2026-01-14 | \n", "-0.570 | \n", "-0.545 | \n", "
| 1 | \n", "2026-01-13 | \n", "-0.540 | \n", "-0.545 | \n", "
| 2 | \n", "2026-01-12 | \n", "-0.520 | \n", "-0.535 | \n", "
| 3 | \n", "2026-01-09 | \n", "-0.530 | \n", "-0.535 | \n", "
| 4 | \n", "2026-01-08 | \n", "-0.545 | \n", "-0.535 | \n", "
| \n", " | ReleaseDate | \n", "DeliveryMonthIndex | \n", "Gate | \n", "Isle of Grain | \n", "Zeebrugge | \n", "Spain TVB | \n", "Dunkerque | \n", "Brunsbuttel | \n", "OLT Toscana | \n", "Le Havre | \n", "... | \n", "Deutsche Ostsee | \n", "Deutsche Ostsee Phase 1 | \n", "Ravenna | \n", "Wilhelmshaven 2 | \n", "Ave | \n", "Min | \n", "Max | \n", "Release Date | \n", "SparkNWE | \n", "SparkSWE | \n", "
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 507 | \n", "2026-01-09 | \n", "M+1 | \n", "-0.412 | \n", "-0.791 | \n", "-0.183 | \n", "-0.362 | \n", "-0.345 | \n", "0.246 | \n", "0.542 | \n", "-0.729 | \n", "... | \n", "-0.110 | \n", "NaN | \n", "0.461 | \n", "0.017 | \n", "-0.139722 | \n", "-0.791 | \n", "0.598 | \n", "2026-01-09 | \n", "-0.530 | \n", "-0.535 | \n", "
| 508 | \n", "2026-01-08 | \n", "M+1 | \n", "-0.412 | \n", "-0.787 | \n", "-0.181 | \n", "-0.355 | \n", "-0.346 | \n", "0.222 | \n", "0.545 | \n", "-0.725 | \n", "... | \n", "-0.130 | \n", "NaN | \n", "0.466 | \n", "-0.004 | \n", "-0.142556 | \n", "-0.787 | \n", "0.602 | \n", "2026-01-08 | \n", "-0.545 | \n", "-0.535 | \n", "
| 509 | \n", "2026-01-07 | \n", "M+1 | \n", "-0.414 | \n", "-0.754 | \n", "-0.178 | \n", "-0.357 | \n", "-0.346 | \n", "0.222 | \n", "0.542 | \n", "-0.735 | \n", "... | \n", "-0.136 | \n", "NaN | \n", "0.460 | \n", "-0.010 | \n", "-0.141667 | \n", "-0.754 | \n", "0.598 | \n", "2026-01-07 | \n", "-0.540 | \n", "-0.535 | \n", "
| 510 | \n", "2026-01-06 | \n", "M+1 | \n", "-0.414 | \n", "-0.731 | \n", "-0.182 | \n", "-0.368 | \n", "-0.348 | \n", "0.209 | \n", "0.546 | \n", "-0.730 | \n", "... | \n", "-0.144 | \n", "NaN | \n", "0.465 | \n", "-0.017 | \n", "-0.140833 | \n", "-0.731 | \n", "0.602 | \n", "2026-01-06 | \n", "-0.535 | \n", "-0.535 | \n", "
| 511 | \n", "2026-01-05 | \n", "M+1 | \n", "-0.413 | \n", "-0.757 | \n", "-0.214 | \n", "-0.283 | \n", "-0.338 | \n", "0.206 | \n", "0.487 | \n", "-0.716 | \n", "... | \n", "-0.144 | \n", "NaN | \n", "0.408 | \n", "-0.018 | \n", "-0.151389 | \n", "-0.757 | \n", "0.544 | \n", "2026-01-05 | \n", "-0.535 | \n", "-0.490 | \n", "
5 rows × 27 columns
\n", "