https://github.com/catalyst-cooperative/open-grid-emissions

Tools for producing high-quality hourly generation and emissions data for U.S. electric grids


Science Score: 51.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Committers with academic emails
    1 of 3 committers (33.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (17.8%) to scientific vocabulary

Keywords from Contributors

carbon-accounting carbon-emissions climate climate-change decarbonization eia emissions epa ghg ghg-emissions
Last synced: 6 months ago

Repository

Tools for producing high-quality hourly generation and emissions data for U.S. electric grids

Basic Info
  • Host: GitHub
  • Owner: catalyst-cooperative
  • License: mit
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 52.5 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Fork of singularity-energy/open-grid-emissions
Created over 3 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License Citation

README.md

Open Grid Emissions Initiative

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

The Open Grid Emissions Initiative seeks to fill a critical need for high-quality, publicly-accessible, hourly grid emissions data that can be used for GHG accounting, policymaking, academic research, and energy attribute certificate markets. The initiative includes this repository of open-source grid emissions data processing tools that use peer-reviewed, well-documented, and validated methodologies to create the accompanying public dataset of hourly, monthly, and annual U.S. electric grid generation, GHG, and air pollution data.

Please check out our documentation for more details about the Open Grid Emissions methodology.

The Open Grid Emissions Dataset can be downloaded here. An archive of previous versions of the dataset and intermediate data outputs (for research and validation purposes) can be found on Zenodo.

Installing and running the data pipeline

To install and run the pipeline on your computer, open anaconda prompt, navigate to the folder where you want to save the repository, and run the following commands:

conda install git
git clone https://github.com/singularity-energy/open-grid-emissions.git
conda update conda
cd open-grid-emissions
conda env create -f environment.yml
conda activate open_grid_emissions
cd src
python data_pipeline.py --year 2021

A more detailed walkthrough of these steps can be found below in the "Development Setup" section.

Data Availability and Release Schedule

The latest release includes data for years 2019-2021 covering the contiguous United States, Alaska, and Hawaii. In future releases, we plan to expand the geographic coverage to additional U.S. territories (dependent on data availability), and to expand the historical coverage of the data.

Parts of the input data used for the Open Grid Emissions dataset are released by the U.S. Energy Information Administration in the autumn following the end of each year (2022 data should be available in autumn 2023). Each release will include the most recent year of available data, as well as updates to all previously available years based on any updates to the OGEI methodology. All previous versions of the data will be archived on Zenodo.

Updated datasets will also be published whenever a new version of the open-grid-emissions repository is released.

Contribute

There are many ways that you can contribute!
  • Tell us how you are using the dataset or python tools
  • Request new features or data outputs by submitting a feature request or emailing us at <>
  • Tell us how we can make the datasets even easier to use
  • Ask a question about the data or methods in our discussion forum
  • Submit an issue if you've identified a way the methods or assumptions could be improved
  • Contribute your subject matter expertise to the discussion about open issues and questions
  • Submit a pull request to help us fix open issues

Repository Structure

Modules

  • column_checks: functions that check that all data outputs have the correct column names
  • data_pipeline: main script for running the data pipeline from start to finish
  • download_data: functions that download data from the internet
  • data_cleaning: functions that clean loaded data
  • eia930: functions for cleaning and formatting EIA-930 data
  • emissions: functions used for imputing emissions data
  • filepaths: Used to identify where repository files are located on the user's computer
  • gross_to_net_generation: Functions for identifying subplants and gross to net generation conversion factors
  • impute_hourly_profiles: functions related to assigning an hourly profile to monthly data
  • load_data: functions for loading data from downloaded files
  • output_data: functions for writing intermediate and final data to csvs
  • validation: functions for testing and validating data outputs
  • visualization: functions for visualizing data in notebooks
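For illustration, a column-name check like the one `column_checks` provides might look like the following minimal sketch. This is hypothetical: the function name, expected column names, and error handling here are invented for the example and do not reflect the actual module's implementation.

```python
import pandas as pd

# Assumed column names for the example only
EXPECTED_COLUMNS = {"plant_id", "datetime_utc", "net_generation_mwh"}

def check_columns(df: pd.DataFrame, expected: set[str]) -> None:
    """Raise if the dataframe's columns differ from the expected set."""
    missing = expected - set(df.columns)
    extra = set(df.columns) - expected
    if missing or extra:
        raise ValueError(f"missing columns: {missing}, unexpected columns: {extra}")

df = pd.DataFrame(columns=["plant_id", "datetime_utc", "net_generation_mwh"])
check_columns(df, EXPECTED_COLUMNS)  # passes silently
```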

Notebooks

Notebooks are organized into directories based on their purpose:
  • explore_data: notebooks used for exploring data outputs and results
  • explore_methods: notebooks that can be used to explore specific methods step-by-step
  • manual_data: notebooks that are used to create/update certain files in data/manual
  • validation: notebooks related to validating results
  • visualization: notebooks used to visualize data
  • work_in_progress: temporary notebooks being used for development purposes on specific branches

Data Structure

  • data/downloads contains all files that are downloaded by functions in load_data
  • data/manual contains all manually-created files, including the egrid static tables
  • data/outputs contains intermediate outputs from the data pipeline (any files created by our code that are not final results)
  • data/results contains all final output files that will be published

Development Setup

If you would like to run the code on your own computer and/or contribute updates to the code, the following steps can help get you started.

Users unfamiliar with git / python

Install conda and python

We suggest using Miniconda or Anaconda to manage the packages needed to run the Open Grid Emissions code. Both install a similar environment, but Anaconda installs more packages by default while Miniconda installs them as needed. They can be downloaded from the Miniconda or Anaconda websites.

Install a code editor

If you want to edit the code and do not already have an integrated development environment (IDE) installed, one good option is Visual Studio Code (download: https://code.visualstudio.com/).

Install and set up git

In order to download the repository, you will need to use git. You can either install Git Bash from https://git-scm.com/downloads, or install it using conda. To do so, after installing Anaconda or Miniconda, open an Anaconda Command Prompt (Windows) or Terminal.app (Mac) and type the following command:

conda install git

Then you will need to set up git following these instructions: https://docs.github.com/en/get-started/quickstart/set-up-git
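The linked GitHub guide begins with configuring your commit identity; the commands boil down to something like the following (the name and email values are placeholders to replace with your own):

```shell
# Tell git who you are, so your commits are attributed correctly
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```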

Once you have git and conda installed

Download the codebase to a local repository

Using Anaconda command prompt or Git Bash, use the cd and mkdir commands to create and/or enter the directory where you would like to download the code (e.g. "Users/myusername/GitHub"). Then run:

git clone https://github.com/singularity-energy/open-grid-emissions.git

Setup the conda environment

Open anaconda prompt, use cd to navigate to the directory where your local files are stored (e.g. "GitHub/open-grid-emissions"), and then run:

conda update conda
conda env create -f environment.yml

Installation requires that the conda channel-priority be set to "flexible". This is the default behavior, so if you've never manually changed it, you shouldn't need to do anything. However, if you receive an error message like "Found conflicts!" when creating the environment, set your channel priority to flexible by running conda config --set channel_priority flexible and then re-running the above commands.

Running the complete data pipeline

If you would like to run the full data pipeline to generate all intermediate outputs and results files, open anaconda prompt, navigate to open-grid-emissions/src, and run the following (replacing 2021 with whichever year you want to run):

conda activate open_grid_emissions
python data_pipeline.py --year 2021

Keeping the code updated

From time to time, the code will be updated on GitHub. To ensure that you are keeping your local version of the code up to date, open git bash and follow these steps:

```
# change the directory to wherever your local git repository is saved
# after hitting enter, it should show the name of the git branch (e.g. "(main)")
cd GitHub/open-grid-emissions

# save any changes that you might have made locally to your copy of the code
git add .

# fetch and merge the updated code from github
git pull origin main
```

Contribution Guidelines

If you plan on contributing edits to the codebase that will be merged into the main branch, please follow these best practices:

  1. Please do not make edits directly to the main branch. Any new features or edits should be completed in a new branch. To do so, open git bash, navigate to your local repo (e.g. cd GitHub/open-grid-emissions), and create a new branch, giving it a descriptive name related to the edit you will be doing:

    git checkout -b branch_name

  2. As you code, it is a good practice to 'save' your work frequently by opening git bash, navigating to your local repo (cd GitHub/open-grid-emissions), making sure that your current feature branch is active (you should see the feature name in parentheses next to the command line), and running

    git add .

  3. You should commit your work to the branch whenever you have working code or whenever you stop working on it using:

    git add .
    git commit -m "short message about updates"

  4. Once you are done with your edits, save and commit your code using step #3 and then push your changes:

    git push

  5. Now open the GitHub repo web page. You should see the branch you pushed up in a yellow bar at the top of the page with a button to "Compare & pull request".

    • Click "Compare & pull request". This will take you to the "Open a pull request" page.
    • From here, you should write a brief description of what you actually changed.
    • Click "Create pull request"
    • The changes will be reviewed and discussed. Once any edits have been made, the code will be merged into the main branch.

Conventions and standards

  • We generally follow the naming conventions used by the Public Utility Data Liberation Project: https://catalystcoop-pudl.readthedocs.io/en/latest/dev/naming_conventions.html
  • Functions should include descriptive docstrings (using the Google style guide https://google.github.io/styleguide/pyguide.html#383-functions-and-methods), inline comments should be used to describe individual steps, and variable names should be made descriptive (e.g. cems_plants_with_missing_co2_data not cems_missing or cpmco2)
  • All pandas merge operations should include the validate parameter to ensure that unintentional duplicate entries are not created (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html)
  • All pandas groupby operations should include the dropna=False parameter so that data with missing groupby keys are not unintentionally dropped from the data.
  • All code should be formatted using black
  • Clear all outputs from notebooks before committing your work.
  • Any manual changes to reported categorical data, conversion factors, or manual data mappings should be loaded from a .csv file in data/manual rather than stored in a dictionary or variable in the code.
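The merge and groupby conventions above can be illustrated with a minimal example (the dataframes and column names here are invented for the illustration):

```python
import pandas as pd

plants = pd.DataFrame({"plant_id": [1.0, 2.0], "fuel": ["coal", "gas"]})
gen = pd.DataFrame(
    {"plant_id": [1.0, 1.0, 2.0, None], "mwh": [10.0, 5.0, 7.0, 3.0]}
)

# validate="many_to_one" raises MergeError if plant_id is duplicated in
# `plants`, preventing unintentional row duplication during the merge
merged = gen.merge(plants, on="plant_id", how="left", validate="many_to_one")

# dropna=False keeps the row whose groupby key is missing (NaN) instead
# of silently dropping it from the aggregation
totals = gen.groupby("plant_id", dropna=False)["mwh"].sum()
```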

Owner

  • Name: Catalyst Cooperative
  • Login: catalyst-cooperative
  • Kind: organization
  • Email: hello@catalyst.coop
  • Location: United States of America

Catalyst is a small data engineering cooperative working on electricity regulation and climate change.

Citation (CITATION.cff)

cff-version: 1.2.0
title: Open Grid Emissions Initiative
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Gregory
    family-names: Miller
    orcid: 'https://orcid.org/0000-0003-3750-9292'
  - given-names: Gailin
    family-names: Pease
    orcid: 'https://orcid.org/0000-0003-3528-6048'
    affiliation: "Singularity Energy"
  - given-names: Milo
    family-names: Knowles
    orcid: 'https://orcid.org/0000-0003-4052-5517'
    affiliation: "Singularity Energy"
  - given-names: Wenbo
    family-names: Shi
    affiliation: "Singularity Energy"
identifiers:
  - type: doi
    value: 'https://doi.org/10.5281/zenodo.7495818'
version: 0.2.0
license: MIT
date-released: '2022-12-30'


Committers

Last synced: about 2 years ago

All Time
  • Total Commits: 426
  • Total Committers: 3
  • Avg Commits per committer: 142.0
  • Development Distribution Score (DDS): 0.401
Past Year
  • Commits: 118
  • Committers: 3
  • Avg Commits per committer: 39.333
  • Development Distribution Score (DDS): 0.203
Top Committers
Name Email Commits
grgmiller g****r@u****u 255
gailin-p g****e@g****m 133
Milo Knowles m****7@g****m 38
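The Development Distribution Score above can be reproduced from the commit counts, assuming the common definition DDS = 1 − (top committer's commits / total commits); this definition is an assumption, not taken from the page itself:

```python
# Assumed definition: DDS = 1 - (top committer's commits / total commits)
def development_distribution_score(commit_counts: list[int]) -> float:
    return 1 - max(commit_counts) / sum(commit_counts)

# All-time commit counts from the table above: 255 + 133 + 38 = 426
dds = development_distribution_score([255, 133, 38])
print(round(dds, 3))  # → 0.401, matching the reported all-time DDS
```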

Issues and Pull Requests

Last synced: about 2 years ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

environment.yml conda
  • black
  • blas *
  • coloredlogs
  • cvxopt
  • cvxpy 1.2.1.*
  • flake8
  • ipykernel
  • nomkl
  • notebook
  • numpy
  • openpyxl
  • pandas
  • pip
  • plotly
  • pyarrow
  • pytest
  • python >=3.10,<3.11
  • python-snappy
  • qdldl-python 0.1.5,!=0.1.5.post2
  • requests >=2.28.1
  • seaborn
  • setuptools
  • sqlalchemy
  • sqlite
  • statsmodels