Zoom City Carbon Model::traffic: CO2 emissions at street level using Machine Learning

Introduction

We introduce the Zoom City Carbon Model (ZCCM), an R-based tool for calculating net CO2 fluxes from urban areas at high spatial and temporal resolutions. ZCCM incorporates the major sources and sinks of carbon in cities, such as road traffic, buildings, human breathing, and vegetation and soils. This document presents the ZCCM::traffic model, which provides hourly estimates of traffic flow, average speed, and CO2 emissions at the road segment and whole-city level using local traffic data, meteorological data, spatial data, and Machine Learning (ML) techniques. The ZCCM::traffic model is divided into three components: the Learn ML model, the Deploy ML model, and the Emission Geographic Information platform. The Learn ML model trains and tests the ML model, allowing users to assess its performance for traffic estimates on their dataset. The Deploy ML model generates timeseries (.csv) and maps (multipolylines) of traffic estimates and CO2 emissions, while the Emission Geographic Information platform communicates the outcomes of ZCCM to users, stakeholders, the research community, and the general public. The platform displays the outcomes of ZCCM interactively through zoomable CO2 maps and summary statistics of emissions, e.g., a dashboard (see the Emission Geographic Information platform section below).

The ZCCM::traffic model is still undergoing peer review and should be used with caution. The methodology is based on: Anjos, M.; Meier, F. Zooming into City and tracking CO2 traffic emissions at street level. Carbon Balance and Management (submitted).

People

The development of the ZCCM::traffic model was led by Dr. Max Anjos, joined by Dr. Fred Meier, and it is hosted at the Chair of Climatology, Institute of Ecology, Technische Universität Berlin.

Funding

This project was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Finance Code 001, and by the Alexander von Humboldt Foundation.

Contact

Please feel free to contact us with any questions or suggestions by emailing maxanjos@campus.ul.pt. If you are interested in contributing to the development of the model, we welcome you to join our team.

Happy coding!

Input and setting requirements

To ensure the model runs correctly, it is necessary to load the following inputs:

  1. Traffic data .csv (required) with a minimum of two columns labeled date and id.
  2. Counting traffic stations .csv or .shp (required) with at least three columns labeled id, Latitude, and Longitude.
  3. Meteorological data .csv (conditionally required) with at least one column labeled date.
  4. Other variables (optional), in .csv or .shp format, should follow the same date column convention.

Note that the model converts the date-time into an R-formatted version, e.g., "2023-03-13 11:00:00" or "2023-03-13".
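For illustration, here is a minimal, hypothetical sketch of that conversion and of the smallest valid traffic input (assuming the lubridate package, which is loaded in the next step; `traffic_example` is not part of ZCCM):

```{r}
# Hypothetical sketch: parsing raw date strings into R's date-time classes
library(lubridate)
ymd_hms("2023-03-13 11:00:00")  # POSIXct date-time: "2023-03-13 11:00:00 UTC"
ymd("2023-03-13")               # Date: "2023-03-13"

# The smallest valid traffic input: the two required columns, date and id
traffic_example <- data.frame(
  date = ymd_hms(c("2023-03-13 11:00:00", "2023-03-13 12:00:00")),
  id   = c(1, 1)
)
```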

The following R packages should be installed on your PC:

```{r}
# Install pacman if it is not already installed, then use it to load all packages
if (!require("pacman")) install.packages("pacman")
pacman::p_load(lubridate, tidyverse, data.table, sf, httr, openair, osmdata,
               tmap, recipes, timetk, caret, ranger, rmarkdown)

library(lubridate)  # Easier work with dates and times in R
library(tidyverse)  # Data manipulation and visualization (dplyr, ggplot2, tidyr, ...)
library(data.table) # Fast and efficient data manipulation
library(sf)         # Spatial data using the Simple Features (SF) standard
library(httr)       # HTTP requests
library(openair)    # Air quality data analysis and visualization
library(osmdata)    # Accessing and working with OpenStreetMap data
library(tmap)       # Static and interactive maps in R
library(recipes)    # Preprocessing data using a formula-based interface
library(timeDate)   # Working with dates and times in R
library(timetk)     # Manipulating time series data in R
library(ranger)     # Fast and accurate random forest models
library(caret)      # Training and evaluating machine learning models in R
```

Create a folder on your PC and define the path. Then, import the ZCCM_functions.R file, which contains all the necessary functions.

```{r}
setwd("myFolder")                    # Set the working directory to the specified path
source("myFolder/ZCCM_functions.R")  # Load the file that contains all ZCCM-specific functions
```

Learn ML model

In this code, we create an ML model to estimate hourly traffic flow and average speed at street level in Berlin, Germany. We are using the following data:

  • Hourly volume of vehicles and average speed for different vehicle types from lane-specific detectors at 583 counting stations, from August to September 2022. These data are sourced from the Digital Platform City Traffic Berlin / Traffic Detection Berlin and are named trafficberlin20220809.csv and countingstations_berlin.csv.

  • Hourly meteorological data such as air temperature, relative humidity, sunshine, rainfall, wind direction, and wind speed from the Berlin-Dahlem weather station (latitude 52.4537 and longitude 13.3017) managed by the German Weather Service Climate Data Center. The file is named weatherberlin20220809.csv.

  • An ESRI Shapefile describing the different land use classes in Berlin from the Berlin Digital Environmental Atlas. The file is named var1berlin_landuse.shp.

The relevant files are stored as traffic, stations, and weather. The var1, var2, ... objects are optional variables that users may include to improve the model's power for spatial and temporal traffic predictions. In the traffic object, the columns for vehicle volume and average speed should be renamed icars and ispeed, respectively.

Load data

```{r}
# Load traffic data
traffic <- fread("Data/trafficberlin20220809.csv")

traffic <- traffic %>%
  # rename(icars = flowautomovel, ispeed = speedautomovel) %>%         # Rename the vehicle type and speed columns
  # group_by(Longitude, Latitude) %>% mutate(id = cur_group_id()) %>%  # Create an id for each station based on latitude and longitude
  dplyr::select(date, id, icars, ispeed)

# Get the stations shapefile
stations_csv <- fread("Data/countingstations_berlin.csv", dec = ",")  # Read the counting stations csv

# Convert the stations csv to a spatial object based on the Latitude and Longitude columns
stations <- sf::st_as_sf(stations_csv, coords = c("Longitude", "Latitude"), crs = 4326)

# Alternatively, derive the stations from the traffic data if it still contains coordinates:
# stations <- traffic %>%
#   distinct(Longitude, Latitude, .keep_all = TRUE) %>%  # Eliminate duplicates
#   sf::st_as_sf(coords = c("Longitude", "Latitude"), crs = 4326)

tmap_mode("view")
qtm(stations)  # Plot map

# Get meteorological data
weather <- fread("Data/weatherberlin20220809.csv") %>%
  dplyr::select(-V1)  # Drop the index column

# Load other variables named var1, var2, var3, ...
var1 <- sf::read_sf("shps/var1berlin_landuse.shp")
qtm(var1, fill = "lndsAtl")  # Plot map
```

Get GIS features

Next, you need to obtain the road network for your city using the getOSMfeatures function. This function uses the osmdata package to download OpenStreetMap (OSM) features and the sf package to convert them into spatial objects. It then geographically joins the OSM features (iNetRoad) and var1 with the road class segments using the st_join and st_nearest_feature functions (GIS_road). It is recommended that users save the iNetRoad and GIS_road files.

```{r}
# Get the study area polygon from OpenStreetMap data
icity <- "Berlin"
shpverify <- osmdata::getbb(icity, format_out = "sf_polygon", limit = 1, featuretype = "city")

# Check whether the polygon was obtained successfully
if (!is.null(shpverify$geometry) & !inherits(shpverify, "list")) {
  study_area <- shpverify$geometry %>%
    st_make_valid() %>%
    st_as_sf() %>%
    st_transform(crs = "+proj=longlat +datum=WGS84 +no_defs")
} else {
  study_area <- shpverify$multipolygon %>%
    st_make_valid() %>%
    st_as_sf() %>%
    st_transform(crs = "+proj=longlat +datum=WGS84 +no_defs")
}
qtm(study_area)  # Plot map

# Define the road OSM classes. For more details: https://wiki.openstreetmap.org/wiki/Key:highway
class_roads <- c("motorway", "trunk", "primary", "secondary", "tertiary")

# Apply the getOSMfeatures function to get the road network with aggregated OSM spatial data
iNetRoad <- getOSMfeatures(city = icity,
                           road_class = class_roads,
                           city_area = study_area,
                           ishp = TRUE,   # If TRUE, all feature shapefiles are saved in the output folder
                           iplot = TRUE)  # If TRUE, all feature maps are saved in the output folder

st_write(iNetRoad, "myFolder/name.shp")
iNetRoad <- st_read("myFolder/name.shp")

# Aggregate var1 (and var2, var3, ...) to iNetRoad
GIS_road <- st_join(iNetRoad, var1, join = st_nearest_feature, left = FALSE)
GIS_road <- st_join(GIS_road, var2, join = st_is_within_distance, dist = 0.1)
GIS_road <- st_join(GIS_road, var3, join = st_is_within_distance, dist = 0.1)
```

Roads categories

The next step is to divide all road segments into two categories: those with traffic count points, labeled "sampled", and those without, labeled "non-sampled". This task uses the previously obtained iNetRoad or GIS_road object.

```{r}
# Road categories
road_sampled <- st_join(GIS_road, stations, join = st_is_within_distance, dist = 20, left = FALSE) %>%
  mutate(category = "sampled") %>%
  st_as_sf() %>%
  st_transform(crs = 4326)

road_nonsampled <- GIS_road[!GIS_road$osm_id %in% road_sampled$osm_id, ]
road_nonsampled <- mutate(road_nonsampled, category = "nonsampled")

qtm(road_sampled, lines.col = "blue") + qtm(road_nonsampled, lines.col = "orange")  # Plot map
```

Data splitting

The next step consists of dividing our dataset into two distinct sets: training and testing. First, we randomly assigned 80 % of our traffic count stations to the training set and 20 % to the test set using the R package caret. We made sure to distribute the number of stations evenly across the different sampled road categories to ensure a representative sample (fclass, defined in class_roads). Next, we selected two months (August and September) of 2022 and split each month into the same training and testing sets. In the last task, we joined the split traffic with the split counting stations by the column id to create train_dataset and test_dataset.

```{r}
# Station splitting
stations_split <- road_sampled %>%
  distinct(id, .keep_all = TRUE) %>%  # Keep one row per unique station id
  dplyr::select(-id) %>%
  st_join(stations, join = st_nearest_feature, left = FALSE)
stations_split$fclass <- as.factor(stations_split$fclass)  # Convert the road class to a factor

set.seed(1232)
Index <- createDataPartition(stations_split$fclass,  # Partition the stations by road class
                             p = 0.8,                # 80/20 %
                             list = FALSE)
train_stations <- stations_split[Index, ]   # Training stations
test_stations  <- stations_split[-Index, ]  # Test stations

qtm(train_stations, dots.col = "darkblue") + qtm(test_stations, dots.col = "lightblue")

# Split the traffic timeseries into training and testing sets
df_split <- traffic %>% openair::selectByDate(year = 2022, month = 8:9)  # Select the study period
df_split$split <- rep(x = c("training", "test"),
                      times = c(floor(0.8 * nrow(df_split)),     # 80 % for training
                                ceiling(0.2 * nrow(df_split))))  # 20 % for test
traffic_train <- df_split[df_split$split == "training", ]
traffic_test  <- df_split[df_split$split == "test", ]

train_stations$id <- as.character(train_stations$id)
test_stations$id  <- as.character(test_stations$id)
traffic_train$id  <- as.character(traffic_train$id)
traffic_test$id   <- as.character(traffic_test$id)

train_dataset <- inner_join(traffic_train, train_stations, by = "id")  # Join traffic and stations by "id"
test_dataset  <- inner_join(traffic_test, test_stations, by = "id")
```

Feature engineering and selection

This task involves imputing missing values and transforming the data to select the most relevant predictors using the R package recipes. Temporal predictors, such as time of day, weekday, weekend, and holiday indicators, were generated using the step_timeseries_signature function of the R package timetk, which converts the date-time column (e.g., 2023-01-01 01:00:00) into a set of indexes or new predictors. This task results in train_recipe and test_recipe, which contain all spatial and temporal features and the dependent variables of the model. In the present example, the dependent variables are the mean traffic flow (icars) and the mean speed (ispeed) at the road link (osm_id). The weather object is joined by the column "date".

```{r}
features_train <- train_dataset %>%
  group_by(date, osm_id) %>%  # Group by date and osm_id
  summarise(mean_cars  = round(mean(icars), digits = 0),   # Mean traffic flow as a dependent variable
            mean_speed = round(mean(ispeed), digits = 0),  # Mean speed as a dependent variable
            .groups = "drop") %>%
  filter(mean_cars > 10, mean_speed > 10) %>%  # Filter by mean cars and mean speed
  inner_join(road_sampled, by = "osm_id") %>%  # Join the sampled roads
  inner_join(weather, by = "date") %>%         # Join the weather data
  as_tibble() %>%
  dplyr::select(-Latitude, -Longitude, -id, -name, -osm_id, -category, -geometry) %>%  # Drop unused features
  mutate_if(is.character, as.factor)  # Convert character variables to factors

features_test <- test_dataset %>%
  group_by(date, osm_id) %>%
  summarise(mean_cars  = round(mean(icars), digits = 0),
            mean_speed = round(mean(ispeed), digits = 0),
            .groups = "drop") %>%
  filter(mean_cars > 10, mean_speed > 10) %>%
  inner_join(road_sampled, by = "osm_id") %>%
  inner_join(weather, by = "date") %>%
  as_tibble() %>%
  dplyr::select(-Latitude, -Longitude, -id, -name, -osm_id, -category, -geometry) %>%
  mutate_if(is.character, as.factor)

recipe_steps <- recipe(mean_cars + mean_speed ~ ., data = features_train) %>%  # Dependent variables
  step_ts_impute(all_numeric()) %>%      # Impute values for numeric predictors and outcomes
  step_impute_mode(lanes, maxspeed) %>%  # Impute values for nominal/categorical variables
  step_unknown(all_nominal_predictors()) %>%
  step_other(all_nominal_predictors(), -lanes, -maxspeed) %>%
  step_timeseries_signature(date) %>%    # Create indexes from the date-time column
  step_rm(date, contains("index.num"), contains("iso"), contains("xts"))

train_recipe <- recipe_steps %>%  # Recipe for the training data
  prep(features_train) %>%
  bake(features_train)

test_recipe <- recipe_steps %>%   # Recipe for the test data
  prep(features_test) %>%
  bake(features_test)
```

Selection and training of ML algorithm

To train and test the ML model, we used Random Forest (RF), a popular ensemble learning technique known for its ability to combine a large number of decision trees for classification or regression (Breiman, 2001). The R package ranger was used to run the RF for traffic flow and speed predictions.

```{r}
# Train the RF for traffic flow predictions
train_processed <- train_recipe %>% dplyr::select(-mean_speed)  # Drop mean_speed when the RF models traffic flow
test_processed  <- test_recipe %>% dplyr::select(-mean_speed)

set.seed(1234)
rfModel_cars <- ranger(dependent.variable.name = "mean_cars",
                       data = train_processed,
                       num.trees = 100,
                       importance = "permutation")

rfModel_pred_cars <- predict(rfModel_cars, data = test_processed)  # Predict on the test data

rfModel_df_cars <- rfModel_pred_cars$predictions %>%  # Data frame with predictions and other variables
  bind_cols(features_test %>% dplyr::select(date)) %>%
  bind_cols(test_processed) %>%
  rename(pred_cars = ...1)
write_csv(rfModel_df_cars, "rfModel_df_cars.csv")

# Train the RF for average speed predictions
train_processed <- train_recipe %>% dplyr::select(-mean_cars)  # Drop mean_cars when the RF models average speed
test_processed  <- test_recipe %>% dplyr::select(-mean_cars)

set.seed(1234)
rfModel_speed <- ranger(dependent.variable.name = "mean_speed",
                        data = train_processed,
                        num.trees = 100,
                        importance = "permutation")

rfModel_pred_speed <- predict(rfModel_speed, data = test_processed)

rfModel_df_speed <- rfModel_pred_speed$predictions %>%
  bind_cols(features_test %>% dplyr::select(date)) %>%
  bind_cols(test_processed) %>%
  rename(pred_speed = ...1)
write_csv(rfModel_df_speed, "rfModel_df_speed.csv")
```

Model evaluation and interpretability

Because RF is a black-box model, we utilized the permutation feature importance method, which measures the contribution of each feature to the final predictions. The R package openair was used to generate the plots and calculate the metrics from rfModel_df_cars and rfModel_df_speed.
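For intuition, permutation importance compares the model's error before and after randomly shuffling a single feature. Here is a minimal, hypothetical sketch of the idea (assuming a fitted ranger model such as rfModel_cars above; this helper is not part of ZCCM):

```{r}
# Hypothetical helper illustrating permutation importance with a ranger model:
# shuffle one predictor, re-predict, and report the increase in RMSE.
perm_importance <- function(model, data, outcome, feature) {
  obs <- data[[outcome]]
  base_rmse <- sqrt(mean((predict(model, data = data)$predictions - obs)^2))
  shuffled <- data
  shuffled[[feature]] <- sample(shuffled[[feature]])  # Break the feature-outcome link
  perm_rmse <- sqrt(mean((predict(model, data = shuffled)$predictions - obs)^2))
  perm_rmse - base_rmse  # A larger increase means a more important feature
}

# e.g., perm_importance(rfModel_cars, test_processed, outcome = "mean_cars", feature = "date_hour")
# ("date_hour" is a hypothetical feature name from step_timeseries_signature)
```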

ML model for traffic flow predictions

```{r}
# Plot the timeseries of observed and modelled values
rfModel_df_cars %>%
  openair::timePlot(pollutant = c("mean_cars", "pred_cars"),
                    group = TRUE, avg.time = "hour",
                    name.pol = c("Observed", "ML-model"),
                    auto.text = TRUE, cols = c("#4a8bad", "#ffa500"),
                    fontsize = 16, lwd = 2, lty = 1,
                    ylab = "Traffic flow", main = "")

# Plot the time variation
rfModel_df_cars %>%
  openair::timeVariation(pollutant = c("mean_cars", "pred_cars"),
                         name.pol = c("Observed", "Modelled"),
                         cols = c("#FAAB18", "#1380A1"),
                         ci = TRUE, lwd = 3, fontsize = 14,
                         ylim = c(0, 800), key.position = "bottom",
                         ylab = "Traffic flow")

# Metrics by hour via the argument "type"
metrics_cars <- openair::modStats(rfModel_df_cars, mod = "pred_cars", obs = "mean_cars", type = "hour")
write_csv(metrics_cars, "metrics_cars.csv")  # Save the metrics table

# Variable importance
variables_cars <- as.data.frame(importance(rfModel_cars), type = 1)
colnames(variables_cars)[1] <- "importance"
variables_cars <- cbind(var.names = rownames(variables_cars), variables_cars)
variables_cars <- mutate(variables_cars,
                         importance = importance / sum(importance) * 100,
                         importance = round(importance, digits = 1)) %>%
  arrange(desc(importance))
write_csv(variables_cars, "importance_cars.csv")

# Top 20 features
iplot <- variables_cars %>% head(20) %>%
  ggplot(aes(x = reorder(var.names, importance), y = importance, fill = importance)) +
  geom_bar(stat = "identity", position = "dodge", show.legend = FALSE) +
  ylab("Contribution (%)") +
  coord_flip() +
  xlab("Top 20 features") +
  labs(subtitle = "Traffic flow predictions") +
  geom_text(aes(label = importance), hjust = 0, size = 5) +
  # scale_y_continuous(limits = c(0, 20)) +
  scale_fill_viridis_c(direction = -1) +
  theme_classic(base_size = 15)

ggsave("importance_cars_plot.png", iplot)  # Save the plot
```

ML model for average speed predictions

```{r}
# Plot the timeseries
rfModel_df_speed %>%
  openair::timePlot(pollutant = c("mean_speed", "pred_speed"),
                    group = TRUE, avg.time = "hour",
                    name.pol = c("Observed", "ML-model"),
                    auto.text = TRUE, cols = c("forestgreen", "brown2"),
                    fontsize = 16, lwd = 2, lty = 1,
                    ylab = "Average speed [km/h]", main = "")

# Get the metrics and assess your model
metrics_speed <- modStats(rfModel_df_speed, mod = "pred_speed", obs = "mean_speed", type = "hour")
write_csv(metrics_speed, "metrics_speed.csv")

# Variable importance
variables_speed <- as.data.frame(importance(rfModel_speed), type = 1)
colnames(variables_speed)[1] <- "importance"
variables_speed <- cbind(var.names = rownames(variables_speed), variables_speed)
variables_speed <- mutate(variables_speed,
                          importance = importance / sum(importance) * 100,
                          importance = round(importance, digits = 1)) %>%
  arrange(desc(importance))
write_csv(variables_speed, "importance_speed.csv")

# Top 20 features
variables_speed %>% head(20) %>%
  ggplot(aes(x = reorder(var.names, importance), y = importance, fill = importance)) +
  geom_bar(stat = "identity", position = "dodge", show.legend = FALSE) +
  ylab("Contribution (%)") +
  coord_flip() +
  xlab("Top 20 features") +
  labs(subtitle = "Average speed predictions") +
  geom_text(aes(label = importance), hjust = 0, size = 5) +
  # scale_y_continuous(limits = c(0, 20)) +
  scale_fill_viridis_c(direction = -1, option = "E") +
  theme_classic(base_size = 15)
```

Deploy ML model

After fine-tuning and evaluating the ML model on the dataset, it is deployed to predict traffic flow and average speed for each road segment in the city using the DeployMLtraffic function. This function calculates traffic CO2 emissions at the street level and produces time series and maps of traffic predictions and CO2 emissions.

To use the DeployMLtraffic function, you need to input data such as traffic, stations, and weather, as well as the GIS_road object obtained in the Get GIS features section. The function performs all the steps described in the Learn ML model, except for data splitting and model evaluation.

The DeployMLtraffic function has several arguments, including:

  • input: a data frame defining the period (months and years) for the calculations.

  • traffic_data, stations_data, and weather_data: the input data for the function.

  • road_data: a shapefile describing the road segments with OSM features, named GIS_road in this example.

  • n.trees: the number of decision trees in the Random Forest. The default is 100.

  • cityStreet: if TRUE, the function calculates all prediction values for each road segment within the city area and provides a dataframe (.Rds) for each day in the output_cityStreet folder.

  • cityCount: if TRUE, the function sums all prediction values within the city area and provides a dataframe (.csv) for each day in the output_citycount folder.

  • cityMap: if TRUE, the function calculates all prediction values for each road segment within the city area and provides a stack raster at 100 m resolution in .tiff and shapefile (.GPKG) formats for each day in the output_citymap folder.

  • tempRes: the temporal resolution, which can be "sec", "min", "hour", "day", "DSTday", "week", "month", "quarter", or "year".

  • spatRes: the spatial resolution of the cityMap. The default is 100 meters.

  • iunit: the unit of the cityMap CO2 emissions, which can be "micro" (CO2 emissions in micromole per square meter per second), "grams" (CO2 emissions in grams per meter), or "gramsCarbon" (carbon emissions in grams per meter). Note that cityStreet and cityCount include all units.

  • ista: the statistic applied when aggregating the data, which can be "mean", "max", "min", "median", "frequency", "sd", or "percentile". The default is "sum".

Once all arguments are defined, the DeployMLtraffic function can be run using the apply function for the selected period. In this example, the result is stored as myMLtraffic. If cityCount is TRUE and cityMap is FALSE, the do.call function can be used to merge the list of days of myMLtraffic into a unique dataframe with the complete time series, which is named CO2_count. If cityCount is FALSE and cityMap is TRUE, the unlist function can be used to obtain the stack raster, which is named CO2_map.

```{r}
# Define the period (input dates)
# imonth can be c("jan", "feb", ..., "dec") or c(1:12)
# iyear can be c(2015:2020), c(2015, 2017, 2020), or c(2020)
imonth <- c("aug", "sep")
iyear <- c(2022)
input <- expand.grid(imonth, iyear)

# Apply the DeployMLtraffic function for each row of the selected period
myMLtraffic <- pbapply::pbapply(input, 1, function(i)
  DeployMLtraffic(city = "Berlin",
                  input = i,
                  traffic_data = traffic,
                  stations_data = stations,
                  weather_data = weather,
                  road_data = iNetRoad,
                  n.trees = 100,
                  cityStreet = TRUE,
                  cityCount = TRUE,
                  cityMap = TRUE,
                  tempRes = "hour",
                  spatRes = 100,
                  iunit = "grams",
                  ista = "sum"))

# Take your timeseries and save it based on the selected arguments:
CO2_street <- do.call(rbind.data.frame, myMLtraffic)  # Use for cityStreet
saveRDS(CO2_street, "CO2streetBerlin202208_09.rds")   # Save file

CO2_count <- do.call(rbind.data.frame, myMLtraffic)   # Use for cityCount
write_csv(CO2_count, "CO2countBerlin20220809.csv")    # Save file

CO2_map <- unlist(myMLtraffic)                        # Use for cityMap
raster::writeRaster(CO2_map, "CO2mapBerlin202208_09.TIF", format = "GTiff", overwrite = TRUE)  # Save file
```

Data - ZCCM::traffic outcomes

The ZCCM::traffic dataset consists of four data formats:

  • Geopackage (.gpkg): osm_id + hourly (0:23) CO2 emission values per day + road link geometries (coordinate reference system EPSG:3246)

  • Raster (.tiff): stack raster with hourly (0:23) CO2 emission values per day at 100 m resolution (EPSG:3246)

  • DataStreet (.Rds): hourly (0:23) CO2 emission values per day for each road segment

  • CSV (.csv): timeseries of hourly CO2 emissions + attributes
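
As a rough sketch of how these outputs could be read back into R (file names follow the Deploy ML model example above; the .gpkg name is hypothetical):

```{r}
# Illustrative sketch only; file names follow the Deploy ML model example
library(sf); library(raster); library(data.table)

co2_street <- readRDS("CO2streetBerlin202208_09.rds")          # DataStreet: per-segment values
co2_series <- data.table::fread("CO2countBerlin20220809.csv")  # CSV: citywide hourly timeseries
co2_raster <- raster::stack("CO2mapBerlin202208_09.TIF")       # Raster: hourly stack at 100 m
co2_roads  <- sf::st_read("CO2mapBerlin202208_09.gpkg")        # Geopackage (hypothetical file name)
```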

Emission Geographic Information platform

To generate your dashboard, you can follow the dashboard section to get started.
