geo-tide-backend

Backend source code to produce geospatial data layers for the MCSC's Geospatial Trucking Industry Decarbonization Explorer (Geo-TIDE)

https://github.com/mcsc-impact-climate/geo-tide-backend

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Committers with academic emails
    28 of 34 committers (82.4%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.9%) to scientific vocabulary

Keywords

biofuels decarbonization decision-support electrification geospatial hydrogen trucking
Last synced: 6 months ago

Repository


Basic Info
Statistics
  • Stars: 3
  • Watchers: 1
  • Forks: 3
  • Open Issues: 0
  • Releases: 2
Topics
biofuels decarbonization decision-support electrification geospatial hydrogen trucking
Created about 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License Citation

README.md


Backend to produce layers for the Geo-TIDE tool

This repo contains code to produce, synthesize, and visualize publicly available geospatial data to support trucking fleets in navigating the transition to alternative energy carriers. The tool uses data from the Freight Analysis Framework (FAF5) database and other public data sources.

The layers can be interactively visualized using the Geo-TIDE tool (link to the Geo-TIDE code repo).

Prerequisites

  • python3

Setup

Clone the repo:

```bash
git clone git@github.com:mcsc-impact-climate/FAF5-Analysis.git
```

Install the Python requirements:

```bash
pip install -r requirements.txt
```

Downloading the data

The script download_data.sh downloads all the data needed to run this code into the data directory. Note that it will only download a file if it doesn't already exist in the data directory. To run:

```bash
bash download_data.sh
```
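The skip-if-present behavior described above could be sketched in Python as follows; `download_if_missing` is a hypothetical helper for illustration (the actual logic lives in the bash script):

```python
from pathlib import Path
from urllib.request import urlretrieve

def download_if_missing(url: str, dest: Path) -> bool:
    """Download url to dest unless dest already exists,
    mirroring download_data.sh's skip-if-present behavior."""
    dest = Path(dest)
    if dest.exists():
        return False  # skip: file is already in the data directory
    dest.parent.mkdir(parents=True, exist_ok=True)
    urlretrieve(url, dest)
    return True
```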

Processing highway assignments

The script ProcessFAFHighwayData.py reads in both the FAF5 network links for the entire US and the associated highway network assignments for total trucking flows, and joins the total flows for 2022 (all commodities combined) with the FAF5 network links via their common link IDs to produce a combined shapefile.

To run:

```bash
python source/ProcessFAFHighwayData.py
```

This should produce a shapefile in data/highway_assignment_links.
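The link-ID join described above amounts to a table merge. A minimal sketch with pandas (in the real script these would be GeoDataFrames loaded with geopandas; the column names here are illustrative only):

```python
import pandas as pd

# Stand-ins for the FAF5 network links and highway assignment tables;
# "id" and "tot_tons_22" are hypothetical column names.
links = pd.DataFrame({"id": [1, 2, 3], "geometry": ["seg1", "seg2", "seg3"]})
flows = pd.DataFrame({"id": [1, 2, 3], "tot_tons_22": [10.5, 3.2, 7.8]})

# Join total 2022 flows (all commodities combined) onto the network
# links via their common link ID
merged = links.merge(flows, on="id", how="left")
```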

Processing eGRID emission intensity data

The script ProcessGridData.py reads in the shapefile containing the borders of the subregions within which eGRID reports grid emissions data, along with the associated eGRID data, and joins the shapefile with the eGRID data via the subregion ID to produce a combined shapefile.

To run:

```bash
python source/ProcessGridData.py
```

This should produce shapefiles in data/egrid2020_subregions_merged and data/eia2020_subregions_merged.

Processing electricity prices and demand charges

The script ProcessPrices.py reads in the shapefiles containing borders of zip codes and states, along with the associated electricity price and demand charge data, and joins the shapefiles with the price data via the subregion ID to produce combined shapefiles. It also evaluates electricity price, demand charge, and diesel price by state.

To run:

```bash
python source/ProcessPrices.py
```
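The by-state evaluation described above boils down to a groupby aggregation. A minimal sketch with pandas, using made-up numbers and illustrative column names:

```python
import pandas as pd

# Hypothetical zip-code-level electricity rates; the real data and
# column names come from the downloaded price datasets.
rates = pd.DataFrame({
    "state": ["TX", "TX", "CA"],
    "rate_cents_per_kwh": [11.0, 13.0, 22.0],
})

# Average electricity price by state, as ProcessPrices.py evaluates
state_rates = rates.groupby("state", as_index=False)["rate_cents_per_kwh"].mean()
```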

Processing State-level Incentives and Regulations

The script ProcessStateSupport.py reads in the shapefile containing borders of US states, along with CSV files containing state-level incentives relevant to trucking from the AFDC website, and joins the CSV files with the shapefile to produce a set of shapefiles with the number of incentives of each type (fuel, vehicle purchase, emissions and infrastructure) and fuel target (electrification, hydrogen, ethanol, etc.) for each state.

To run:

```bash
python source/ProcessStateSupport.py
```
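Counting incentives of each type per state, as described above, is a pivot-style aggregation. A sketch with pandas (the rows and column names are invented for illustration; the real inputs are the AFDC CSV files):

```python
import pandas as pd

# Toy stand-in for the AFDC state-level incentive listings
incentives = pd.DataFrame({
    "state": ["CA", "CA", "TX"],
    "incentive_type": ["fuel", "vehicle purchase", "fuel"],
})

# Number of incentives of each type for each state
counts = (incentives.groupby(["state", "incentive_type"])
          .size().unstack(fill_value=0))
```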

Processing planned infrastructure corridors for heavy duty vehicles

The script PrepareInfrastructureCorridors.py reads in either a shapefile with the US highway system, or shapefiles with specific regions of planned heavy duty vehicle infrastructure corridors announced by the Biden-Harris administration. For corridors represented as subsets of the national highway system, the code produces shapefiles for each highway segment with a planned infrastructure project. For corridors represented as regions of the US, the code produces shapefiles showing the region(s) where the planned infrastructure project will take place.

To run:

```bash
python source/PrepareInfrastructureCorridors.py
```

This should produce shapefiles for zipcode-level and state-level electricity prices in data/electricity_rates_merged.

Analyzing VIUS data

The script AnalyzeVius.py produces distributions of GREET vehicle class, fuel type, age, and payload from the VIUS data. To run:

```bash
python source/AnalyzeVius.py
```

Processing VIUS data to evaluate average product of fuel efficiency and payload

Run the script ViusTools.py to produce an output file tabulating the product of fuel efficiency (mpg) times payload for each commodity, along with the associated standard deviation:

```bash
python source/ViusTools.py
```

This should produce the following output file: data/VIUS_Results/mpg_times_payload.csv.
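The per-commodity mean and standard deviation of mpg × payload can be sketched as a pandas aggregation; the records and column names below are invented for illustration:

```python
import pandas as pd

# Hypothetical per-truck VIUS-style records
vius = pd.DataFrame({
    "commodity": ["Logs", "Logs", "Coal"],
    "mpg": [6.0, 5.0, 7.0],
    "payload_tons": [20.0, 22.0, 25.0],
})

# Product of fuel efficiency (mpg) and payload for each record
vius["mpg_times_payload"] = vius["mpg"] * vius["payload_tons"]

# Mean and standard deviation of the product, per commodity
summary = vius.groupby("commodity")["mpg_times_payload"].agg(["mean", "std"])
```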

Producing shapefiles to visualize freight flows and emission intensities

The script Point2PointFAF.py combines outputs from VIUS, GREET, and FAF5 and merges them with geospatial shapefiles containing the contours of FAF5 regions, associating each region with the tons, ton-miles, and associated emissions of imports to and exports from that region, along with areal densities of these three quantities (i.e. divided by the surface area of the region). There is also functionality to evaluate these quantities for a user-specified mode, commodity, origin region, or destination region.

Before running this code, you'll need to have first run the following:

```bash
python source/ViusTools.py
```

To run:

```bash
python source/Point2PointFAF.py -m user_specified_mode -c "user_specified_commodity" -o user_specified_origin_ID -d user_specified_destination_ID
```

Each argument defaults to 'all' if left unspecified. The mode is one of {all, truck, water, rail}. The available commodities can be found in the 'Commodity (SCTG2)' sheet of data/FAF5_regional_flows_origin_destination/FAF5_metadata.xlsx ('Description' column). The origin and destination region IDs can be found in the 'FAF Zone (Domestic)' sheet of the same excel file ('Numeric Label' column).

NOTE: The quotes around the commodity option are important because some commodity names contain spaces, and the shell would otherwise split them into separate arguments.

This should produce a csv and shapefile in data/Point2Point_outputs/mode_truck_commodity_Logs_origin_11_dest_all.[extension].

For example, to filter for logs carried by trucks from FAF5 region 11 to FAF5 region 139:

```bash
python source/Point2PointFAF.py -m truck -c Logs -o 11 -d 139
```
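The command-line interface described above, with each option falling back to 'all', can be sketched with argparse; the flag names follow the example invocations, but the long option names are assumptions:

```python
import argparse

# Sketch of a Point2PointFAF.py-style interface: every filter
# defaults to 'all' when left unspecified
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--mode", default="all",
                    choices=["all", "truck", "water", "rail"])
parser.add_argument("-c", "--commodity", default="all")
parser.add_argument("-o", "--origin", default="all")
parser.add_argument("-d", "--dest", default="all")

# Equivalent of: python source/Point2PointFAF.py -m truck -c Logs -o 11 -d 139
args = parser.parse_args(["-m", "truck", "-c", "Logs", "-o", "11", "-d", "139"])
```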

There's also a bash script in source/run_all_Point2Point.sh that can be executed to produce merged shapefiles for all combinations of modes, commodities, origins and destinations.

To run:

```bash
bash source/run_all_Point2Point.sh
```

WARNING: This may take several hours to run in full, and the shapefiles and csv files produced will take up ~100 GB. To reduce this, you can comment out items that you don't want in the COMMODITIES, REGIONS and MODES variables.

Creating shapefiles for hydrogen production facilities

The script PrepareHydrogenHubs.py combines locations and information about operating and planned hydrogen production facilities in the U.S. and Canada into shapefiles located in data/hydrogen_hubs/shapefiles. To run:

```bash
python source/PrepareHydrogenHubs.py
```

Identifying truck stops and hydrogen production facilities within a given radius

The script IdentifyFacilitiesInRadius.py identifies truck stops and hydrogen production facilities within a user-provided radius of a central location ((33°N, 97°W) and 600 miles by default).
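A radius filter like the one described above can be sketched with a great-circle distance check (the repo depends on geopy, which offers `geopy.distance.geodesic` for this; the plain haversine version and the facility list below are illustrative only):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * asin(sqrt(a))  # mean Earth radius in miles

CENTER = (33.0, -97.0)   # default central location from the README
RADIUS_MILES = 600.0     # default radius

# Hypothetical facility locations
facilities = [("Dallas TX", 32.78, -96.80), ("Boston MA", 42.36, -71.06)]
in_radius = [name for name, lat, lon in facilities
             if haversine_miles(CENTER[0], CENTER[1], lat, lon) <= RADIUS_MILES]
```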

Analyzing potential infrastructure investment savings from collective investment in truck stop charging

The script AnalyzeTruckStopCharging.py quantifies the charging demand at truck stops along U.S. interstates (sparsified to support the specified truck range), and estimates the potential difference in infrastructure costs if the entire trucking fleet were to electrify and either:

  • the full electrified fleet shared the investment in and usage of charging infrastructure, or
  • the fleet was divided in half, and each half invested in and used its charging infrastructure separately.

The idea of this exercise is to understand the potential infrastructure savings from trucking fleets pooling infrastructure investments in charging infrastructure based on real-world freight flow data.

The methodology is detailed in MacDonell and Borrero, 2024.

To run with a specified set of options:

```bash
python source/AnalyzeTruckStopCharging.py -c [charging time (hours)] -m [max allowable wait time (hours)] -r [truck range (miles)]
```

To run over all options visualized in the geospatial mapping tool:

```bash
bash source/run_all_AnalyzeTruckStopCharging.sh
```

Evaluating state-level electricity demand if trucking is fully electrified

The script EvaluateTruckingEnergyDemand.py aggregates highway-level FAF5 commodity flows and trips to evaluate the approximate annual energy demand (in MWh) that would be placed on the grid for each state if all trucking operations were to be fully electrified. The energy demand is calculated assuming that the flows are carried by the Tesla Semi, using the mileage with respect to payload calibrated using code in this repo (link to relevant section of README). The underlying calibration is performed in this repo using data from the PepsiCo Tesla Semi pilot.

To run:

```bash
python source/EvaluateTruckingEnergyDemand.py
```

This produces an output shapefile in data/trucking_energy_demand containing the energy demand for each state from electrified trucking, both as an absolute value (in MWh) and as a percent of each of the following:

  • the total energy generated in the state in 2022
  • the theoretical total energy generation capacity for the state in 2022 (if the grid were to run at its full summer generating capacity 24/7)
  • the theoretical excess energy generation capacity (i.e. theoretical capacity minus actual energy generated in 2022)
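The three comparisons can be illustrated with toy arithmetic; every number below, including the kWh-per-ton-mile figure, is made up and does not reflect the repo's calibrated Tesla Semi mileage:

```python
# Illustrative annual electrified-truck activity for one state
flows_ton_miles = 2.0e9      # ton-miles carried per year (made up)
kwh_per_ton_mile = 1.0       # assumed energy intensity (made up)

# Absolute annual energy demand placed on the grid, in MWh
demand_mwh = flows_ton_miles * kwh_per_ton_mile / 1000.0

generated_mwh_2022 = 4.0e8   # actual generation in 2022 (made up)
capacity_mwh_2022 = 6.0e8    # theoretical 24/7 summer-capacity generation (made up)

# Demand as a percent of each of the three reference quantities
pct_of_generated = 100.0 * demand_mwh / generated_mwh_2022
pct_of_capacity = 100.0 * demand_mwh / capacity_mwh_2022
pct_of_excess = 100.0 * demand_mwh / (capacity_mwh_2022 - generated_mwh_2022)
```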

Comparing electricity demand for full trucking electrification with historical load in Texas ERCOT weather zones

Visualizing demand for each charging site

The script TT_charging_analysis.py produces a plot visualizing the demands associated with electrifying trucking with charging at 8 sites in the Texas triangle region. To run:

```bash
python source/TT_charging_analysis.py
```

This will produce Texas_charger_locations.png in the plots directory.

Producing daily electricity demand curves for each charging site

The script MakeChargingLoadByZone.py produces a csv file for each ERCOT weather zone containing one or more charging sites. For each such zone, the csv file contains the daily load from each charging site in the weather zone, assuming it follows the most extreme variation found in Borlaug et al (2021) for immediate charging (see red curve in Fig. 5 in the paper).

To run:

```bash
python source/MakeChargingLoadByZone.py
```

This will produce a csv file daily_ev_load_[zone].csv for each zone.
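Writing a per-zone daily load csv can be sketched by scaling a normalized 24-hour profile by each site's daily demand; the profile shape, zone names, and site demands below are invented (the real profile follows Borlaug et al. (2021), Fig. 5):

```python
import csv, os, tempfile

# Hypothetical normalized 24-hour "immediate charging" profile (sums to 1)
profile = [0.0] * 18 + [0.25, 0.25, 0.2, 0.15, 0.1, 0.05]

# Hypothetical daily charging demand (MWh/day) per site, keyed by zone
zone_sites = {"coast": {"site_A": 40.0, "site_B": 10.0}}

outdir = tempfile.mkdtemp()
for zone, sites in zone_sites.items():
    # One csv per zone: hourly load from each charging site in that zone
    path = os.path.join(outdir, f"daily_ev_load_{zone}.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hour"] + list(sites))
        for hour, frac in enumerate(profile):
            writer.writerow([hour] + [round(frac * mwh, 3) for mwh in sites.values()])
```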

Comparing daily EV demand with historical load for each month

The script AnalyzeErcotData.py compares the daily EV demand at each charging site in a zone (along with the total combined demand) with the estimated excess capacity of the grid over the day.

To run:

```bash
python source/AnalyzeErcotData.py
```

This will produce a plot for each zone and month called daily_ev_load_with_excess_[zone]_[month].png in the plots directory.

Owner

  • Name: MIT Climate & Sustainability Consortium
  • Login: mcsc-impact-climate
  • Kind: organization

Contains repos developed by the MIT Climate & Sustainability Consortium

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "MacDonell"
  given-names: "Danika"
  orcid: "https://orcid.org/0000-0001-5533-6300"
- family-names: "Borrero"
  given-names: "Micah"
  orcid: "https://orcid.org/0009-0005-1745-2972"
- family-names: "Kasami"
  given-names: "Brilant"
  orcid: "https://orcid.org/0000-0002-0897-151X"
- family-names: "Helena"
  given-names: "De Figueiredo Valente"
title: "FAF5-Analysis"
version: v0.1.0
doi: 10.5281/zenodo.13205855
date-released: 2024-08-03
url: "https://github.com/mcsc-impact-climate/FAF5-Analysis"

GitHub Events

Total
  • Release event: 1
  • Delete event: 4
  • Push event: 7
  • Pull request event: 5
  • Create event: 4
Last Year
  • Release event: 1
  • Delete event: 4
  • Push event: 7
  • Pull request event: 5
  • Create event: 4

Committers

Last synced: 6 months ago

All Time
  • Total Commits: 309
  • Total Committers: 34
  • Avg Commits per committer: 9.088
  • Development Distribution Score (DDS): 0.657
Past Year
  • Commits: 29
  • Committers: 2
  • Avg Commits per committer: 14.5
  • Development Distribution Score (DDS): 0.069
Top Committers
Name Email Commits
Danika MacDonell 4****m 106
Danika MacDonell d****l@D****l 69
cubicalknight m****o@g****m 12
Danika MacDonell d****l@d****U 11
Danika MacDonell d****l@d****U 10
Danika MacDonell d****l@d****U 7
Danika MacDonell d****l@d****U 6
Danika MacDonell d****l@d****U 6
Danika MacDonell d****l@d****U 6
Danika MacDonell d****l@d****U 6
Danika MacDonell d****l@d****u 6
Danika MacDonell d****l@d****u 6
brookebao 1****o 6
Danika MacDonell d****l@d****U 5
Danika MacDonell d****l@d****U 5
Danika MacDonell d****l@d****U 4
Danika MacDonell d****l@d****U 4
Danika MacDonell d****l@d****u 4
Danika MacDonell d****l@d****u 3
Danika MacDonell d****l@d****U 3
Danika MacDonell d****l@d****u 3
Danika MacDonell d****l@d****U 3
Brilant Kasami b****i@g****m 3
Danika MacDonell d****l@d****u 2
Danika MacDonell d****l@d****U 2
Danika MacDonell d****l@d****u 2
Danika MacDonell d****l@d****U 2
Danika MacDonell d****l@d****u 1
Danika MacDonell d****l@d****u 1
Danika MacDonell d****l@d****u 1
and 4 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 21
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 days
  • Total issue authors: 0
  • Total pull request authors: 4
  • Average comments per issue: 0
  • Average comments per pull request: 0.14
  • Merged pull requests: 19
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 10
  • Average time to close issues: N/A
  • Average time to close pull requests: less than a minute
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 8
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
  • danikam (15)
  • cubicalknight (4)
  • brookebao (1)
  • helena380 (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements.txt pypi
  • flask ==2.3.3
  • flask-cors ==4.0.0
  • geopandas ==0.12.2
  • geopy ==2.3.0
  • openpyxl ==3.0.10
  • pandas ==1.5.3
  • scipy ==1.11.2
  • seaborn ==0.12.2
  • tqdm ==4.64.1