diffmjstep

Custom Autograd Function for Differentiable MuJoCo Dynamics

https://github.com/eladsharony/diffmjstep

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.7%) to scientific vocabulary

Keywords

autograd finite-difference gradient mujoco pytorch rl
Last synced: 6 months ago

Repository

Custom Autograd Function for Differentiable MuJoCo Dynamics

Basic Info
  • Host: GitHub
  • Owner: EladSharony
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 43 KB
Statistics
  • Stars: 6
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
autograd finite-difference gradient mujoco pytorch rl
Created almost 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

DiffMjStep: Custom Autograd Function for Differentiable MuJoCo Dynamics

License: MIT

Description

An efficient integration between PyTorch and MuJoCo. Enables automatic differentiation through MuJoCo simulation trajectories, allowing for gradient-based optimization of control policies directly within PyTorch.

Features

Efficient Gradient Computations: Significantly more efficient than naive finite-difference Jacobian computation, as it uses MuJoCo's built-in finite-difference routine `mjd_transitionFD`.

Multi-Step Calculations: Estimates gradients over multiple simulation steps by propagating gradients through the entire trajectory.

Batch Simulation Support: Enables batched simulations and gradient computations, significantly improving computational efficiency for large-scale experiments.
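As an illustration of the multi-step idea (not DiffMjStep's actual code), per-step Jacobians can be chained over a trajectory. The sketch below uses hypothetical toy linear dynamics in place of a MuJoCo step, with central finite differences standing in for `mjd_transitionFD`:

```python
import numpy as np

# Toy linear dynamics standing in for one MuJoCo step:
# x' = A x + B u, so the exact per-step Jacobians are A and B.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])

def step(x, u):
    return A @ x + B @ u

def step_jacobians(x, u, eps=1e-6):
    """Central finite differences, analogous in spirit to mjd_transitionFD."""
    nx, nu = x.size, u.size
    dydx = np.zeros((nx, nx))
    dydu = np.zeros((nx, nu))
    for i in range(nx):
        dx = np.zeros(nx); dx[i] = eps
        dydx[:, i] = (step(x + dx, u) - step(x - dx, u)) / (2 * eps)
    for i in range(nu):
        du = np.zeros(nu); du[i] = eps
        dydu[:, i] = (step(x, u + du) - step(x, u - du)) / (2 * eps)
    return dydx, dydu

# Propagate gradients over n_steps via the chain rule:
# d x_n / d x_0 = J_{n-1} @ ... @ J_0
x = np.array([0.5, -0.2])
u = np.array([0.3])
n_steps = 4
total = np.eye(2)
for _ in range(n_steps):
    dydx, _ = step_jacobians(x, u)
    total = dydx @ total
    x = step(x, u)
```

For these linear dynamics, `total` recovers the analytic multi-step Jacobian `A` raised to the power `n_steps`.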

Execution Benchmark

Benchmark Results

Usage

```python
import torch
import mujoco as mj
from DiffMjStep import MjStep

# Initialize MuJoCo model and data
xml_path = 'path/to/your/model.xml'
mj_model = mj.MjModel.from_xml_path(filename=xml_path)
mj_data = mj.MjData(mj_model)

# Define initial state and control input tensors
state = torch.rand(mj_model.nq + mj_model.nv + mj_model.na, requires_grad=True)
ctrl = torch.rand(mj_model.nu, requires_grad=True)

# Compute next state and gradients over n_steps simulation steps
n_steps = 4
next_state, dydx, dydu = MjStep.apply(state, ctrl, n_steps, mj_model, mj_data)
```
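`MjStep.apply` follows PyTorch's custom `torch.autograd.Function` pattern: the forward pass runs the simulation and caches the step Jacobians, and the backward pass applies them as vector-Jacobian products. A minimal sketch of that pattern, with hypothetical toy linear dynamics in place of `mj_step` and analytic Jacobians in place of `mjd_transitionFD`:

```python
import torch

# Toy linear dynamics x' = A x + B u; the per-step Jacobians are A and B.
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])
B = torch.tensor([[0.0], [0.1]])

class ToyStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, state, ctrl):
        # Cache the step Jacobians (dydx, dydu) for the backward pass.
        ctx.save_for_backward(A, B)
        return A @ state + B @ ctrl

    @staticmethod
    def backward(ctx, grad_out):
        dydx, dydu = ctx.saved_tensors
        # Vector-Jacobian products, one per forward input.
        return dydx.T @ grad_out, dydu.T @ grad_out

state = torch.tensor([0.5, -0.2], requires_grad=True)
ctrl = torch.tensor([0.3], requires_grad=True)
next_state = ToyStep.apply(state, ctrl)
next_state.sum().backward()
```

After `backward()`, `state.grad` and `ctrl.grad` hold the analytic gradients `A.T @ 1` and `B.T @ 1`.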

Notes

  • As of MuJoCo 3.1.2, the initial state passed to `rollout()` must include the simulation time, so that `nstate = mj_stateSize(model, mjtState.mjSTATE_FULLPHYSICS)`.

Citation

If you use this package in your research, a citation would be appreciated:

@software{DiffMjStep2024,
  author = {Sharony, Elad},
  title = {{DiffMjStep: Custom Autograd Function for Differentiable MuJoCo Dynamics}},
  year = {2024},
  version = {1.0},
  howpublished = {\url{https://github.com/EladSharony/DiffMjStep}},
}

Owner

  • Name: Elad Sharony
  • Login: EladSharony
  • Kind: user

Citation (CITATION.cff)

@software{DiffMjStep2024,
  author = {Sharony, Elad},
  title = {{DiffMjStep: Custom Autograd Extension for Differentiable MuJoCo Dynamics}},
  year = {2024},
  version = {1.0},
  howpublished = {\url{https://github.com/EladSharony/DiffMjStep}},
}

GitHub Events

Total
  • Watch event: 4
  • Fork event: 1
Last Year
  • Watch event: 4
  • Fork event: 1