{ "cells": [ { "cell_type": "markdown", "id": "92475c69", "metadata": {}, "source": [ "# Python API Example - European Gas Contracts\n", "## Calling the latest & historical Spark European Gas Prices\n", "\n", "### Have any questions?\n", "\n", "If you have any questions regarding our API, or need help accessing specific datasets, please contact us at:\n", "\n", "__data@sparkcommodities.com__\n", "\n", "or refer to our API website for more information about this endpoint:\n", "https://www.sparkcommodities.com/api/" ] }, { "cell_type": "markdown", "id": "c5716130", "metadata": {}, "source": [ "## 1. Importing Data\n", "\n", "Here we define the functions that allow us to retrieve the valid credentials to access the Spark API.\n", "\n", "__This section can remain unchanged for most Spark API users.__" ] }, { "cell_type": "code", "execution_count": null, "id": "33fb0640", "metadata": {}, "outputs": [], "source": [ "import json\n", "import os\n", "import sys\n", "import pandas as pd\n", "import numpy as np\n", "from base64 import b64encode\n", "from pprint import pprint\n", "from urllib.parse import urljoin\n", "from datetime import datetime\n", "\n", "\n", "try:\n", " from urllib import request, parse\n", " from urllib.error import HTTPError\n", "except ImportError:\n", " raise RuntimeError(\"Python 3 required\")\n", "\n", "\n", "API_BASE_URL = \"https://api.sparkcommodities.com\"\n", "\n", "\n", "def retrieve_credentials(file_path=None):\n", " \"\"\"\n", " Find credentials either by reading the client_credentials file or reading\n", " environment variables\n", " \"\"\"\n", " if file_path is None:\n", " client_id = os.getenv(\"SPARK_CLIENT_ID\")\n", " client_secret = os.getenv(\"SPARK_CLIENT_SECRET\")\n", " if not client_id or not client_secret:\n", " raise RuntimeError(\n", " \"SPARK_CLIENT_ID and SPARK_CLIENT_SECRET environment vars required\"\n", " )\n", " else:\n", " # Parse the file\n", " if not os.path.isfile(file_path):\n", " raise RuntimeError(\"The file {} 
doesn't exist\".format(file_path))\n", "\n", " with open(file_path) as fp:\n", " lines = [l.replace(\"\\n\", \"\") for l in fp.readlines()]\n", "\n", " if lines[0] in (\"clientId,clientSecret\", \"client_id,client_secret\"):\n", " client_id, client_secret = lines[1].split(\",\")\n", " else:\n", " print(\"First line read: '{}'\".format(lines[0]))\n", " raise RuntimeError(\n", " \"The specified file {} doesn't look like to be a Spark API client \"\n", " \"credentials file\".format(file_path)\n", " )\n", "\n", " print(\">>>> Found credentials!\")\n", " print(\n", " \">>>> Client_id={}, client_secret={}****\".format(client_id, client_secret[:5])\n", " )\n", "\n", " return client_id, client_secret\n", "\n", "\n", "def do_api_post_query(uri, body, headers):\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " data = json.dumps(body).encode(\"utf-8\")\n", "\n", " # HTTP POST request\n", " req = request.Request(url, data=data, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 201. 
Raise an error if this is not the case\n", " assert response.status == 201, resp_content\n", "\n", " # The server returned a JSON response\n", " content = json.loads(resp_content)\n", "\n", " return content\n", "\n", "\n", "def do_api_get_query(uri, access_token, format='json'):\n", " \"\"\"\n", " After receiving an Access Token, we can request information from the API.\n", " \"\"\"\n", " url = urljoin(API_BASE_URL, uri)\n", "\n", " if format == 'json':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"application/json\",\n", " }\n", " elif format == 'csv':\n", " headers = {\n", " \"Authorization\": \"Bearer {}\".format(access_token),\n", " \"Accept\": \"text/csv\"\n", " }\n", " else:\n", " raise ValueError('The format parameter only takes `csv` or `json` as inputs')\n", "\n", " # HTTP GET request\n", " req = request.Request(url, headers=headers)\n", " try:\n", " response = request.urlopen(req)\n", " except HTTPError as e:\n", " print(\"HTTP Error: \", e.code)\n", " print(e.read())\n", " sys.exit(1)\n", "\n", " resp_content = response.read()\n", "\n", " # The server must return HTTP 200. Raise an error if this is not the case\n", " assert response.status == 200, resp_content\n", "\n", " # Storing response based on requested format\n", " if format == 'json':\n", " content = json.loads(resp_content)\n", " elif format == 'csv':\n", " content = resp_content\n", "\n", " return content\n", "\n", "\n", "def get_access_token(client_id, client_secret):\n", " \"\"\"\n", " Get a new access_token. Access tokens are what applications use to make\n", " API requests. Access tokens must be kept confidential in storage.\n", "\n", " # Procedure:\n", "\n", " Do a POST query with `grantType` and `scopes` in the body. A basic authorization\n", " HTTP header is required. 
The \"Basic\" HTTP authentication scheme is defined in\n", " RFC 7617, which transmits credentials as `clientId:clientSecret` pairs, encoded\n", " using base64.\n", " \"\"\"\n", "\n", " # Note: for the sake of this example, we choose to use the Python urllib from the\n", " # standard lib. One should consider using https://requests.readthedocs.io/\n", "\n", " payload = \"{}:{}\".format(client_id, client_secret).encode()\n", " headers = {\n", " \"Authorization\": \"Basic {}\".format(b64encode(payload).decode()),\n", " \"Accept\": \"application/json\",\n", " \"Content-Type\": \"application/json\",\n", " }\n", " body = {\n", " \"grantType\": \"clientCredentials\",\n", " }\n", "\n", " content = do_api_post_query(uri=\"/oauth/token/\", body=body, headers=headers)\n", "\n", " print(\n", " \">>>> Successfully fetched an access token {}****, valid {} seconds.\".format(\n", " content[\"accessToken\"][:5], content[\"expiresIn\"]\n", " )\n", " )\n", "\n", " return content[\"accessToken\"]\n", "\n", "\n" ] }, { "cell_type": "markdown", "id": "fd3171a8", "metadata": {}, "source": [ "## N.B. Credentials\n", "\n", "Here we call the above functions, and input the file path to our credentials.\n", "\n", "N.B. You must have downloaded your client credentials CSV file before proceeding. You can create and download API credentials from the Spark Platform:\n", "\n", "https://app.sparkcommodities.com/data-integrations/api\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "fd7e89bf", "metadata": {}, "outputs": [], "source": [ "# Insert file path to your client credentials here\n", "client_id, client_secret = retrieve_credentials(file_path=\"/tmp/client_credentials.csv\")\n", "\n", "# Authenticate:\n", "access_token = get_access_token(client_id, client_secret)\n", "print(access_token)" ] }, { "cell_type": "markdown", "id": "0994ce16", "metadata": {}, "source": [ "# 2. 
Fetching Contract Prices\n", "\n", "Here we define the function used to call price releases for Spark Gas Contracts. This endpoint takes 5 parameters - details on these parameters can be found on the API docs website:\n", "\n", "https://www.sparkcommodities.com/api/gas/contracts.html\n", "\n", "\n", "Users can also choose which format to retrieve the data in, 'json' or 'csv'.\n", "\n", "__N.B__ Metadata is only available via the JSON format." ] }, { "cell_type": "code", "execution_count": 3, "id": "dff2524b", "metadata": {}, "outputs": [], "source": [ "from io import StringIO\n", "\n", "## Defining the function\n", "\n", "def fetch_prices(access_token, ticker, unit, start, end, limit=None, offset=None, format='json'):\n", "\n", " query_params = '?unit={}'.format(unit)\n", "\n", " query_params += '&start={}'.format(start)\n", "\n", " query_params += '&end={}'.format(end)\n", "\n", " if limit is not None:\n", " query_params += '&limit={}'.format(limit)\n", "\n", " if offset is not None:\n", " query_params += '&offset={}'.format(offset)\n", "\n", "\n", " uri=\"v1.0/gas/contracts/{}/{}\".format(ticker,query_params)\n", " print(uri)\n", " \n", " content = do_api_get_query(\n", " uri=uri, access_token=access_token, format=format\n", " )\n", " \n", " if format == 'json':\n", " my_dict = content\n", " elif format == 'csv':\n", " # if there's no data to show, returns raw response (empty string) and \"No Data to Show\" message\n", " if len(content) == 0:\n", " my_dict = content\n", " print('No Data to Show')\n", " else:\n", " my_dict = content.decode('utf-8')\n", " my_dict = pd.read_csv(StringIO(my_dict)) # automatically converting into a Pandas DataFrame when choosing CSV format\n", " \n", " return my_dict" ] }, { "cell_type": "code", "execution_count": 9, "id": "6844b461", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "v1.0/gas/contracts/sparkleba-ttf-fo/?unit=eur-per-mwh&start=2026-01-01&end=2026-02-01\n" ] }, { "data": { 
"text/plain": [ "{'tickers': [{'tickerName': 'SparkLEBA-TTF-DA',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:00:05.141428Z',\n", " 'revisedAtUtc': '2026-02-10T18:00:05.141428Z'},\n", " {'tickerName': 'SparkLEBA-TTF-F',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:00:05.141428Z',\n", " 'revisedAtUtc': '2026-02-10T18:00:05.141428Z'},\n", " {'tickerName': 'SparkLEBA-TTF-Fo',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:00:05.141428Z',\n", " 'revisedAtUtc': '2026-02-10T18:00:05.141428Z'},\n", " {'tickerName': 'SparkLEBA-THE-TTF-DA',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-THE-TTF-BOM',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-THE-TTF-F',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-THE-TTF-Fo',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-TTF-BOM',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:00:05.141428Z',\n", " 'revisedAtUtc': '2026-02-10T18:00:05.141428Z'},\n", " {'tickerName': 'SparkLEBA-THE-DA',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-THE-F',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-THE-Fo',\n", " 'revision': 0,\n", " 'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'},\n", " {'tickerName': 'SparkLEBA-THE-BOM',\n", " 'revision': 0,\n", " 
'publishedAtUtc': '2026-02-10T18:03:38.733786Z',\n", " 'revisedAtUtc': '2026-02-10T18:03:38.733786Z'}],\n", " 'unit': 'eur-per-mwh',\n", " 'start': '2026-01-01',\n", " 'end': '2026-02-01'}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calling the data as a JSON\n", "data_json = fetch_prices(access_token, ticker='sparkleba-ttf-fo', unit='eur-per-mwh', start='2026-01-01', end='2026-02-01', format='json')\n", "\n", "# printing metaData\n", "data_json['metaData']" ] }, { "cell_type": "code", "execution_count": null, "id": "c8bc66b2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "v1.0/gas/contracts/sparkleba-ttf-fo/?unit=eur-per-mwh&start=2026-01-01&end=2026-02-10\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ReleaseDateTickerNamePeriodFromPeriodToPeriodTypePeriodNamePeriodIndexDailyHighDailyLowCloseRevisionPublishedAtUTCRevisedAtUTC
02026-01-02SparkLEBA-TTF-Fo2027-01-012027-12-31calCal27Cal+126.03125.54825.92202026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
12026-01-02SparkLEBA-TTF-Fo2028-01-012028-12-31calCal28Cal+224.53824.01624.48302026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
22026-01-02SparkLEBA-TTF-Fo2029-01-012029-12-31calCal29Cal+323.00822.65222.96802026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
32026-01-02SparkLEBA-TTF-Fo2030-01-012030-12-31calCal30Cal+421.87221.65821.83802026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
42026-01-02SparkLEBA-TTF-Fo2026-03-012026-03-31monthMar26M+228.62027.80028.51002026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
..........................................
25362026-02-10SparkLEBA-TTF-Fo2028-10-012029-03-31seasonWin28S+623.47822.88322.88302026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
25372026-02-10SparkLEBA-TTF-Fo2029-04-012029-09-30seasonSum29S+720.72320.04520.04502026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
25382026-02-10SparkLEBA-TTF-Fo2029-10-012030-03-31seasonWin29S+822.20321.56321.61802026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
25392026-02-10SparkLEBA-TTF-Fo2030-04-012030-09-30seasonSum30S+920.29819.56819.90302026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
25402026-02-10SparkLEBA-TTF-Fo2030-10-012031-03-31seasonWin30S+1022.71021.26022.63002026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
\n", "

2541 rows × 13 columns

\n", "
" ], "text/plain": [ " ReleaseDate TickerName PeriodFrom PeriodTo PeriodType \\\n", "0 2026-01-02 SparkLEBA-TTF-Fo 2027-01-01 2027-12-31 cal \n", "1 2026-01-02 SparkLEBA-TTF-Fo 2028-01-01 2028-12-31 cal \n", "2 2026-01-02 SparkLEBA-TTF-Fo 2029-01-01 2029-12-31 cal \n", "3 2026-01-02 SparkLEBA-TTF-Fo 2030-01-01 2030-12-31 cal \n", "4 2026-01-02 SparkLEBA-TTF-Fo 2026-03-01 2026-03-31 month \n", "... ... ... ... ... ... \n", "2536 2026-02-10 SparkLEBA-TTF-Fo 2028-10-01 2029-03-31 season \n", "2537 2026-02-10 SparkLEBA-TTF-Fo 2029-04-01 2029-09-30 season \n", "2538 2026-02-10 SparkLEBA-TTF-Fo 2029-10-01 2030-03-31 season \n", "2539 2026-02-10 SparkLEBA-TTF-Fo 2030-04-01 2030-09-30 season \n", "2540 2026-02-10 SparkLEBA-TTF-Fo 2030-10-01 2031-03-31 season \n", "\n", " PeriodName PeriodIndex DailyHigh DailyLow Close Revision \\\n", "0 Cal27 Cal+1 26.031 25.548 25.922 0 \n", "1 Cal28 Cal+2 24.538 24.016 24.483 0 \n", "2 Cal29 Cal+3 23.008 22.652 22.968 0 \n", "3 Cal30 Cal+4 21.872 21.658 21.838 0 \n", "4 Mar26 M+2 28.620 27.800 28.510 0 \n", "... ... ... ... ... ... ... \n", "2536 Win28 S+6 23.478 22.883 22.883 0 \n", "2537 Sum29 S+7 20.723 20.045 20.045 0 \n", "2538 Win29 S+8 22.203 21.563 21.618 0 \n", "2539 Sum30 S+9 20.298 19.568 19.903 0 \n", "2540 Win30 S+10 22.710 21.260 22.630 0 \n", "\n", " PublishedAtUTC RevisedAtUTC \n", "0 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "1 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "2 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "3 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "4 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "... ... ... 
\n", "2536 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "2537 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "2538 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "2539 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "2540 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "\n", "[2541 rows x 13 columns]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calling data as a CSV\n", "csv_df = fetch_prices(access_token, ticker='sparkleba-ttf-fo', unit='eur-per-mwh', start='2026-01-01', end='2026-02-10', format='csv')\n", "csv_df.head(5)" ] }, { "cell_type": "code", "execution_count": null, "id": "a928dd2c", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ReleaseDateTickerNamePeriodFromPeriodToPeriodTypePeriodNamePeriodIndexDailyHighDailyLowCloseRevisionPublishedAtUTCRevisedAtUTC
18202026-01-30SparkLEBA-TTF-Fo2027-01-012027-12-31calCal27Cal+128.13327.01227.36302026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
18212026-01-30SparkLEBA-TTF-Fo2028-01-012028-12-31calCal28Cal+224.34322.96924.05502026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
18222026-01-30SparkLEBA-TTF-Fo2029-01-012029-12-31calCal29Cal+323.00621.87822.54302026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
18232026-01-30SparkLEBA-TTF-Fo2030-01-012030-12-31calCal30Cal+422.62521.75022.22502026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
18242026-01-30SparkLEBA-TTF-Fo2026-03-012026-03-31monthMar26M+239.49037.46539.40502026-02-10T18:00:05.141428Z2026-02-10T18:00:05.141428Z
\n", "
" ], "text/plain": [ " ReleaseDate TickerName PeriodFrom PeriodTo PeriodType \\\n", "1820 2026-01-30 SparkLEBA-TTF-Fo 2027-01-01 2027-12-31 cal \n", "1821 2026-01-30 SparkLEBA-TTF-Fo 2028-01-01 2028-12-31 cal \n", "1822 2026-01-30 SparkLEBA-TTF-Fo 2029-01-01 2029-12-31 cal \n", "1823 2026-01-30 SparkLEBA-TTF-Fo 2030-01-01 2030-12-31 cal \n", "1824 2026-01-30 SparkLEBA-TTF-Fo 2026-03-01 2026-03-31 month \n", "\n", " PeriodName PeriodIndex DailyHigh DailyLow Close Revision \\\n", "1820 Cal27 Cal+1 28.133 27.012 27.363 0 \n", "1821 Cal28 Cal+2 24.343 22.969 24.055 0 \n", "1822 Cal29 Cal+3 23.006 21.878 22.543 0 \n", "1823 Cal30 Cal+4 22.625 21.750 22.225 0 \n", "1824 Mar26 M+2 39.490 37.465 39.405 0 \n", "\n", " PublishedAtUTC RevisedAtUTC \n", "1820 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "1821 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "1822 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "1823 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z \n", "1824 2026-02-10T18:00:05.141428Z 2026-02-10T18:00:05.141428Z " ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# retrieving a single release date\n", "df = csv_df[csv_df['ReleaseDate'] == '2026-01-30']\n", "df.head(5)" ] } ], "metadata": { "kernelspec": { "display_name": "base", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.5" } }, "nbformat": 4, "nbformat_minor": 5 }