{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Aggregating and downscaling timeseries data\n", "\n", "The **pyam** package offers many tools to facilitate processing of scenario data.\n", "In this notebook, we illustrate methods to aggregate and downscale timeseries data of an **IamDataFrame** across regions and sectors, as well as to check the consistency of the given data along these dimensions.\n", "\n", "In this tutorial, we show how to make the most of **pyam** to compute such aggregate timeseries data, and to check that a scenario ensemble (or just a single scenario) is complete and that timeseries data \"add up\" across regions and along the variable tree (i.e., that the sum of the values of subcategories such as `Primary Energy|*` is identical to the value of the category `Primary Energy`).\n", "\n", "There are two distinct use cases for these features.\n", "\n", "### Use case 1: compute data at higher/lower sectoral or spatial aggregation\n", "\n", "Given scenario results at a specific (usually very detailed) sectoral and spatial resolution, **pyam** offers a suite of functions to easily compute aggregate timeseries. For example, this allows you to sum up national energy demand to regional or global values,\n", "or to compute the average of a global carbon price weighted by regional emissions.\n", "\n", "These functions can be used as part of an automated workflow to generate complete scenario results from raw model outputs.\n", "\n", "### Use case 2: check the consistency of data across sectoral or spatial levels\n", "\n", "In model comparison exercises or ensemble compilation projects, a user needs to verify the internal consistency of submitted scenario results (cf. Huppmann et al., 2018, doi: [10.1038/s41558-018-0317-4](http://rdcu.be/9i8a)).\n", "Such inconsistencies can be due to incomplete variable hierarchies, reporting templates incompatible with model specifications, or user error.\n", "\n", "## Overview\n", "\n", "This notebook illustrates the following features:\n", "\n", "0. Import data from file and inspect the scenario\n", "1. Aggregate timeseries over sectors (i.e., sub-categories)\n", "2. Aggregate timeseries over regions, including a weighted average\n", "3. Downscale timeseries given at a region level to sub-regions using a proxy variable\n", "4. Downscale timeseries using an explicit weighting dataframe\n", "5. Check the internal consistency of a scenario (ensemble)\n", "\n", "
\n", "\n", "**See Also**\n", "\n", "The **pyam** package also supports algebraic operations (addition, subtraction, multiplication, division)\n", "on the timeseries data along any axis or dimension.\n", "See the [algebraic operations tutorial notebook](https://pyam-iamc.readthedocs.io/en/stable/tutorials/algebraic_operations.html)\n", "for more information.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0. Import data from file and inspect the scenario\n", "\n", "The stylized scenario used in this tutorial has data for two regions (`reg_a` & `reg_b`) as well as the `World` aggregate, and for categories of variables: primary energy demand, emissions, carbon price, and population." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pyam import IamDataFrame\n", "\n", "df = IamDataFrame(data=\"tutorial_data_aggregating_downscaling.csv\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.region" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.variable" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Aggregating timeseries across sectors\n", "\n", "Let's first display the data for the components of primary energy demand." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.filter(variable=\"Primary Energy|*\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we are going to use the [aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.aggregate) function to compute the total `Primary Energy` from its components (wind and coal) in each region (including `World`).\n", "\n", "The function returns an **IamDataFrame**, so we can use [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.timeseries) to display the resulting data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.aggregate(\"Primary Energy\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we are interested in **use case 1**, we could use the argument `append=True` to directly add the computed aggregate to the **IamDataFrame** instance.\n", "\n", "However, in this tutorial, the data already includes the total primary energy demand. Therefore, we illustrate **use case 2** and apply the [check_aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.check_aggregate) function to verify whether a given variable is the sum of its sectoral components\n", "(i.e., `Primary Energy` should be equal to `Primary Energy|Coal` plus `Primary Energy|Wind`).\n", "The validation is performed separately for each region.\n", "\n", "The function returns `None` if the validation is correct (which it is for primary energy demand)\n", "or a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) highlighting where the aggregate does not match (this will be illustrated in the next section)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.check_aggregate(\"Primary Energy\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function also returns useful logging messages when there is nothing to check (because there are no sectors below `Primary Energy|Wind`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.check_aggregate(\"Primary Energy|Wind\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 
Aggregating timeseries across subregions\n", "\n", "As in the previous example, we now use the [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.aggregate_region) function to compute regional aggregates.\n", "By default, this method sums all regions in the dataframe into a `World` region; this can be changed with the keyword arguments `region` and `subregions` (an explicit example is shown at the end of the next sub-section)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.aggregate_region(\"Primary Energy\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding regional components\n", "\n", "As a next step, we use [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.check_aggregate_region) to verify that the regional aggregate of CO2 emissions matches the timeseries data given in the scenario." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.check_aggregate_region(\"Emissions|CO2\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As noted above, this validation fails, and we see a dataframe comparing the data given at the `region` level with the aggregate computed from the `subregions`.\n", "\n", "Let's look at the entire emissions timeseries in the scenario to find out what is going on." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.filter(variable=\"Emissions*\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Investigating the data carefully, you will notice that emissions from the energy sector and agriculture, forestry & land use (AFOLU) are given in the subregions and the `World` region, whereas emissions from bunker fuels are only defined at the global level.\n", "This is a common issue in emissions data, where some sources (e.g., global aviation and maritime transport) cannot be attributed to one region.\n", "\n", "Luckily, the functions [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.aggregate_region)\n", "and [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.check_aggregate_region)\n", "support this use case:\n", "by adding `components=True`, the regional aggregation will include any sub-categories of the variable that are present at the `region` level but not in any subregion." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.aggregate_region(\"Emissions|CO2\", components=True).timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The regional aggregate now matches the data given at the `World` level in the tutorial data.\n", "\n", "Note that the components to be included at the region level can also be specified directly via a list of variables; in this case, we would use `components=['Emissions|CO2|Bunkers']`, as shown in the next cell." ] },
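{ "cell_type": "markdown", "metadata": {}, "source": [ "The next cell is a minimal sketch of this explicit form; it also spells out the `region` and `subregions` arguments mentioned at the start of this section.\n", "For this dataset, it gives the same result as `components=True` with the default region settings." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# illustrative sketch: explicit subregions and an explicit list of components\n", "df.aggregate_region(\n", "    \"Emissions|CO2\",\n", "    region=\"World\",\n", "    subregions=[\"reg_a\", \"reg_b\"],\n", "    components=[\"Emissions|CO2|Bunkers\"],\n", ").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Computing a weighted average across regions\n", "\n", "One other frequent requirement when aggregating across regions is a weighted average.\n", "\n", "To illustrate this feature, the tutorial data includes carbon price data.\n", "Naturally, the appropriate weighting data are the regional carbon emissions.\n", "\n", "The following cells show:\n", "\n", "0. The carbon price data across the regions\n", "1. 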
A (failing) validation that the regional aggregation (without weights) matches the reported prices at the `World` level\n", "2. The emissions-weighted average of carbon prices returned as a new **IamDataFrame**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.filter(variable=\"Price|Carbon\").timeseries()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.check_aggregate_region(\"Price|Carbon\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.aggregate_region(\"Price|Carbon\", weight=\"Emissions|CO2\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Downscaling timeseries data to subregions using a proxy\n", "\n", "The inverse operation of regional aggregation is \"downscaling\" of timeseries data given at a regional level to a number of subregions, usually using some other data as a proxy to divide and allocate the total to the subregions.\n", "\n", "This section shows an example using the [downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.downscale_region) function to divide the total primary energy demand using population as a proxy." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.filter(variable=\"Population\").timeseries()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.downscale_region(\"Primary Energy\", proxy=\"Population\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By the way, the functions\n", "[aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.aggregate),\n", "[aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.aggregate_region) and\n", "[downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.downscale_region)\n", "also take lists of variables as the `variable` argument.\n", "See the next cell for an example." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "var_list = [\"Primary Energy\", \"Primary Energy|Coal\"]\n", "df.downscale_region(var_list, proxy=\"Population\").timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Downscaling timeseries data to subregions using a weighting dataframe\n", "\n", "In cases where using existing data directly as a proxy (as illustrated in the previous section) is not practical,\n", "a user can also create a weighting dataframe and pass that directly to the `downscale_region()` function.\n", "\n", "The example below uses the weighting factors implied by the population variable for easy comparison to the previous section." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "weight = pd.DataFrame(\n", "    [[0.66, 0.6], [0.33, 0.4]],\n", "    index=pd.Series([\"reg_a\", \"reg_b\"], name=\"region\"),\n", "    columns=pd.Series([2005, 2010], name=\"year\"),\n", ")\n", "weight" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.downscale_region(var_list, weight=weight).timeseries()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. 
Checking the internal consistency of a scenario (ensemble)\n", "\n", "The previous sections illustrated two functions to validate specific variables across their sectoral components (sub-categories) or their regional disaggregation.\n", "These two functions are combined in the [check_internal_consistency()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.check_internal_consistency) feature.\n", "\n", "
\n", "\n", "This feature of the **pyam** package currently supports \"consistency\" only in the sense of a strictly hierarchical variable tree\n", "(with subcategories summing up to the category value, including components as discussed above)\n", "and of all regions summing up to the `World` region.\n", "See [this issue](https://github.com/IAMconsortium/pyam/issues/106) for more information.\n", "\n", "
\n", "\n", "If we have an internally consistent scenario ensemble (or single scenario), the function will return `None`; otherwise, it will return a concatenation of [pandas.DataFrames](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indicating all detected inconsistencies.\n", "\n", "For this section, we use a tutorial scenario constructed to highlight the individual validation features.\n", "The scenario has two inconsistencies:\n", "\n", "1. In year `2010`, in the regions `region_b` & `World`, the values of coal and wind do not add up to the total `Primary Energy` value\n", "2. In year `2020`, in the `World` region, the values of `Primary Energy` and `Primary Energy|Coal` are not the sum of `region_a` and `region_b`
\n", " (but coal and wind still add up to `Primary Energy` within each subregion)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tutorial_df = IamDataFrame(\n", "    pd.DataFrame(\n", "        [\n", "            [\"World\", \"Primary Energy\", \"EJ/yr\", 7, 15],\n", "            [\"World\", \"Primary Energy|Coal\", \"EJ/yr\", 4, 11],\n", "            [\"World\", \"Primary Energy|Wind\", \"EJ/yr\", 2, 4],\n", "            [\"region_a\", \"Primary Energy\", \"EJ/yr\", 4, 8],\n", "            [\"region_a\", \"Primary Energy|Coal\", \"EJ/yr\", 2, 6],\n", "            [\"region_a\", \"Primary Energy|Wind\", \"EJ/yr\", 2, 2],\n", "            [\"region_b\", \"Primary Energy\", \"EJ/yr\", 3, 6],\n", "            [\"region_b\", \"Primary Energy|Coal\", \"EJ/yr\", 2, 4],\n", "            [\"region_b\", \"Primary Energy|Wind\", \"EJ/yr\", 0, 2],\n", "        ],\n", "        columns=[\"region\", \"variable\", \"unit\", 2010, 2020],\n", "    ),\n", "    model=\"model_a\",\n", "    scenario=\"scen_a\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All checking functions take arguments for [numpy.isclose()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html) as keyword arguments. We show our recommended settings and how to use them here." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np_isclose_args = {\n", "    \"equal_nan\": True,\n", "    \"rtol\": 1e-03,\n", "    \"atol\": 1e-05,\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tutorial_df.check_internal_consistency(**np_isclose_args)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output of this function reports both types of illustrative inconsistencies in the scenario constructed for this section.\n", "The log also shows that the other two variables (coal and wind) cannot be assessed because they have no subcategories.\n", "\n", "
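\n", "\n", "As one illustrative way to locate the problems (a sketch using the functions introduced earlier in this tutorial), the two cells below re-run the sectoral and regional checks for `Primary Energy` on this scenario." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sectoral check: where do coal and wind not add up to the reported total?\n", "tutorial_df.check_aggregate(\"Primary Energy\", **np_isclose_args)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# regional check: where do the subregions not add up to the `World` value?\n", "tutorial_df.check_aggregate_region(\"Primary Energy\", **np_isclose_args)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "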
\n", "\n", "In practice, it would now be up to the user to determine\n", "the cause of the inconsistency (or confirm that this is expected for some reason).\n", "\n", "
" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.0" } }, "nbformat": 4, "nbformat_minor": 2 }