greem

A repository to measure the energy impact of video processes.

https://github.com/cd-athena/greem

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.3%) to scientific vocabulary

Keywords

energy-consumption energy-monitor python3 video-processing
Last synced: 6 months ago

Repository

A repository to measure the energy impact of video processes.

Basic Info
Statistics
  • Stars: 6
  • Watchers: 2
  • Forks: 2
  • Open Issues: 5
  • Releases: 0
Topics
energy-consumption energy-monitor python3 video-processing
Created over 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.Docker.md

Docker

This section contains the instructions to create a Docker container with all dependencies required to execute benchmarks within GREEM.

Prerequisites

  • Docker
  • Docker Compose
  • NVIDIA Container Toolkit (or Runtime)

To use the Docker container with NVIDIA GPU support, the NVIDIA Container Toolkit (or Runtime) must be installed. The toolkit provides the components that expose NVIDIA GPUs to Docker containers. Note: when running a Docker container, the container runtime has to be set to nvidia.

"Installing the NVIDIA Container Toolkit" is the official NVIDIA guide for installing the required packages.

If you prefer to install the NVIDIA Container Runtime instead, pass the `--gpus` flag to `docker run <cmd>` instead of `--runtime=nvidia`.
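
As a setup sketch (assuming the toolkit's `nvidia-ctk` helper is installed, as described in NVIDIA's guide), the Docker daemon can be configured to use the NVIDIA runtime like this:

```bash
# Register the NVIDIA runtime in Docker's daemon configuration
sudo nvidia-ctk runtime configure --runtime=docker

# Restart the Docker daemon so the new runtime is picked up
sudo systemctl restart docker
```

After this, `--runtime=nvidia` is available to `docker run`.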

To test if the NVIDIA Container Toolkit is properly installed, use this sample container:

```bash
# NVIDIA Container Toolkit
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

# NVIDIA Container Runtime
sudo docker run --rm --gpus all ubuntu nvidia-smi
```

This should output something similar to:

```bash
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10    Driver Version: 535.86.10    CUDA Version: 12.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

This starts a Docker container that prints the NVIDIA monitoring output. If an error occurs when executing this command, NVIDIA GPU support is likely not properly set up within Docker.

Pulling Docker Image

A prebuilt Docker image is available on Docker Hub (GREEM) and can be pulled via the command:

```bash
docker pull fendanez/greem:version1
```
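
Once pulled, the image can be started the same way as a locally built one. A minimal example, assuming the `fendanez/greem:version1` tag from the pull command above (the trailing `bash` simply opens an interactive shell inside the container):

```bash
docker run --rm --runtime=nvidia -it fendanez/greem:version1 bash
```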

Building and running GREEM

If you prefer to build the Docker image on your own system, or want to make changes to the Dockerfile, you can build the image via:

```bash
docker compose up --build
```
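
For reference, a minimal sketch of a compose file with GPU access is shown below. This is an illustrative assumption, not the repository's actual compose file; the service name `greem` and the GPU reservation block are placeholders following Docker Compose's documented GPU syntax:

```yaml
# compose.yaml -- illustrative sketch, not the repository's actual file
services:
  greem:
    build: .                # build the Dockerfile in this directory
    runtime: nvidia         # use the NVIDIA container runtime
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```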

Finally, to run the Docker container, execute the following command:

```bash
docker run --rm --runtime=nvidia -it gaiatools-greem bash
```

Deploying your application to the cloud

First, build your image, e.g.: `docker build -t myapp .`. If your cloud uses a different CPU architecture than your development machine (e.g., you are on a Mac M1 and your cloud provider is amd64), you'll want to build the image for that platform, e.g.: `docker build --platform=linux/amd64 -t myapp .`.

Then, push it to your registry, e.g. `docker push myregistry.com/myapp`.

Consult Docker's getting started docs for more detail on building and pushing.
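
The two steps above can be sketched end-to-end; `myapp` and `myregistry.com` are placeholders, not names from this repository:

```bash
# Build for the target platform (here: linux/amd64)
docker build --platform=linux/amd64 -t myapp .

# Tag the image with the registry's name (placeholder registry)
docker tag myapp myregistry.com/myapp

# Push the tagged image to the registry
docker push myregistry.com/myapp
```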

References

Owner

  • Name: ATHENA Christian Doppler (CD) Laboratory
  • Login: cd-athena
  • Kind: organization
  • Location: Klagenfurt, Austria

Adaptive Streaming over HTTP and Emerging Networked Multimedia Services

GitHub Events

Total
  • Watch event: 3
Last Year
  • Watch event: 3

Issues and Pull Requests

Last synced: almost 2 years ago

All Time
  • Total issues: 18
  • Total pull requests: 0
  • Average time to close issues: 3 months
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.44
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 18
  • Pull requests: 0
  • Average time to close issues: 3 months
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.44
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • MyGodItsFull0fStars (8)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

setup.py pypi
.github/workflows/python-package-conda.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
environment.yml conda
  • codecarbon
  • dacite
  • ipython
  • jupyter
  • numpy
  • pandas
  • pip
  • python 3.10.*
Dockerfile docker
  • nvidia/cuda 11.6.2-base-ubuntu"${UBUNTU_VER}" build