predicate_learning

Code for paper "On Learning Scene-aware Generative State Abstractions for Task-level Mobile Manipulation Planning"

https://github.com/ethz-asl/predicate_learning

Science Score: 52.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
    Organization ethz-asl has institutional domain (www.asl.ethz.ch)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.5%) to scientific vocabulary
Last synced: 7 months ago

Repository

Code for paper "On Learning Scene-aware Generative State Abstractions for Task-level Mobile Manipulation Planning"

Basic Info
  • Host: GitHub
  • Owner: ethz-asl
  • License: gpl-3.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 967 KB
Statistics
  • Stars: 6
  • Watchers: 6
  • Forks: 2
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed about 2 years ago
Metadata Files
Readme License Citation

README.md

Predicate Learning

This repository contains the code associated with our paper "On Learning Scene-aware Generative State Abstractions for Task-level Mobile Manipulation Planning" (link will be added as soon as it is available online).

If you are viewing a downloaded copy of this repository (ZIP archive), consider obtaining the latest version online from https://github.com/ethz-asl/predicate_learning.

Installation

Install torch as per the instructions on the PyTorch website, as well as torch_geometric via the PyTorch Geometric website (use pip for both).

Create a virtual environment and install the remaining dependencies:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -e .
```

Obtaining dataset

This step is optional. Instead of using our data, you can generate your own as described below.

To get you started quickly, we provide the dataset used in our paper. This includes both predicates on_clutter and inside_drawer. For each predicate, we provide:

  • Demonstrations (physics simulator states)
  • Extracted bounding box features
  • Point clouds of single objects that can be used to train point cloud auto-encoders
  • Point clouds of all objects in the demonstration scenes for encoding the scene
  • Encoded point clouds for all demonstration scenes, for a range of encoder models

Furthermore, each predicate comes with a training set of 20'000 samples, and a test set of 2'000 samples.

The data can be downloaded from the following link: http://hdl.handle.net/20.500.11850/634113

Usage instructions

There are two options for where to save data and models.

First, they can be stored in a directory within the repo. For this option, use the --paths local flag, or set PATHS_SELECTOR = "local" for scripts that do not parse arguments. Save the data you downloaded in the previous step into training/predicates/data (such that training/predicates/data/on_clutter etc. exist), relative to this readme file.

Second, they can be stored in a data directory in your home directory. In that case, use the --paths home flag, or set PATHS_SELECTOR = "home", and extract the data into ~/Data/highlevel_planning/predicates/data. The paths can also be adjusted in src/highlevel_planning_py/tools_pl/path.py if desired.
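The two layouts above amount to a simple path lookup. The following is a hypothetical sketch of such a resolver (the function name and structure are illustrative, not the actual contents of src/highlevel_planning_py/tools_pl/path.py):

```python
from pathlib import Path

def resolve_data_root(paths_selector: str, repo_root: Path) -> Path:
    """Map the --paths flag to the data directory described above."""
    if paths_selector == "local":
        # Data lives inside the repository checkout.
        return repo_root / "training" / "predicates" / "data"
    if paths_selector == "home":
        # Data lives in a shared directory under the user's home.
        return Path.home() / "Data" / "highlevel_planning" / "predicates" / "data"
    raise ValueError(f"Unknown paths selector: {paths_selector}")

local_root = resolve_data_root("local", Path("/repo"))
print(local_root)  # /repo/training/predicates/data (on POSIX systems)
```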

Generate data

These steps are only needed if you want to generate data yourself. If you prefer to use the data we provide, refer to the description above for instructions on how to download, and skip this section.

  1. To generate demonstrations, use the script scripts/data_generation/on_clutter_data_generation.py for the on_clutter predicate, and scripts/data_generation/inside_data_generation.py for the inside_drawer predicate. Modify scripts to set options for data generation (# of samples, # of objects, object scales, etc.).
  2. Use scripts/predicate_learning/extract_features.py to extract features and/or point clouds. Modify script to select which features or point clouds to extract.
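To illustrate what step 2 produces in the bounding-box case, here is a minimal, generic sketch of computing an axis-aligned bounding box from an object's points (the repository's actual feature format may differ):

```python
import numpy as np

def axis_aligned_bbox(points: np.ndarray) -> tuple:
    """Return (min_corner, max_corner) of the axis-aligned bounding box
    over an (N, 3) array of object points."""
    return points.min(axis=0), points.max(axis=0)

pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5], [0.5, 1.0, 1.5]])
lo, hi = axis_aligned_bbox(pts)
print(lo, hi)  # [0. 0. 0.] [1. 2. 1.5]
```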

Train point cloud encoders

  1. Use scripts/predicate_learning/training_pointcloud_autoencoder.py to train point cloud autoencoders. Use the --help flag to see available options.
  2. Encode point clouds that were extracted in the previous step using scripts/predicate_learning/encode_pointclouds.py. Modify script to select which point clouds to encode, and which encoder to use.
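Conceptually, the autoencoder in step 1 compresses each point cloud to a fixed-length code that is later used to encode the scene. A linear, PCA-style stand-in (not the repository's learned model) illustrates the idea:

```python
import numpy as np

def fit_linear_encoder(clouds: np.ndarray, code_dim: int) -> np.ndarray:
    """Fit a PCA-style linear encoder on flattened point clouds.
    clouds: (num_clouds, num_points * 3). Returns a (code_dim, num_points * 3)
    projection matrix whose rows are orthonormal principal directions."""
    centered = clouds - clouds.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:code_dim]

rng = np.random.default_rng(0)
clouds = rng.normal(size=(50, 30))  # 50 clouds of 10 points, flattened
W = fit_linear_encoder(clouds, code_dim=4)
codes = (clouds - clouds.mean(axis=0)) @ W.T
print(codes.shape)  # (50, 4)
```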

Train predicate models

Proposed method

For training models based on bounding box features, use the script scripts/predicate_learning/training_ctat_bb_features.py. An example command would be:

```bash
python training_ctat_bb_features.py --paths local --evaluate_gen --evaluate_class \
    --predicate_name on_clutter --dataset_id 220831-175353_demonstrations_features \
    --random_seed 12 --num_class_it 30000 --num_adversarial_it 30000 --batch_size 16 \
    --learning_rate 0.001 --feature_version v1 --gen_loss_components disc \
    --data_normalization_class first_arg --data_normalization_disc_gen first_arg \
    --gen_normalize_output False --dataset_size -1 \
    hybrid --model_version v2 --include_surrounding True --scene_encoding_dim 16 \
    --class_encoder_type mlp --class_encoder_layers [64,32] --class_main_net_layers [64,32] \
    --disc_encoder_type mlp --disc_encoder_layers [] --disc_main_net_layers [64,32,12] \
    --gen_encoder_type mlp --gen_encoder_layers [64,32] --gen_main_net_layers [12,12]
```

Square brackets may need to be escaped on your shell. Use the --help flag to see available options.

For training models based on point cloud features, use the script scripts/predicate_learning/training_ctat_pc_features.py. The --help flag documents available options.

Baselines

For training decision trees, use the script scripts/predicate_learning/training_sklearn_class.py. The --help flag documents available options.

Uniform samplers do not need to be trained. Instead, only a configuration is needed that can then be evaluated directly. To create a configuration for later evaluation, use the script scripts/predicate_learning/create_uniform_sampler_config.py.
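As a generic illustration of such a uniform sampler (not the repository's implementation; the bounds below are made up), candidate placements can be drawn uniformly within an axis-aligned region:

```python
import numpy as np

def sample_uniform_placements(lo, hi, num_samples, seed=0):
    """Draw candidate placement positions uniformly within the
    axis-aligned region [lo, hi]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(low=lo, high=hi, size=(num_samples, len(lo)))

samples = sample_uniform_placements(lo=[0.0, 0.0, 0.4], hi=[0.6, 0.4, 0.5], num_samples=100)
print(samples.shape)  # (100, 3)
```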

Evaluation

The script scripts/predicate_learning/evaluate.py is used to evaluate all models and baselines, classifiers and generators, as well as bounding box and point cloud based features. For example, to evaluate a proposed generator, run

```bash
python evaluate.py --model gen --training_type gan --autodetect_dataset_names --filter_str 230818_100813
```

The --training_type flag selects between the proposed BB- and PC-based methods as well as the baselines. The --model flag selects between classifier and generator. The --filter_str flag selects which models to evaluate: all runs whose name contains the given string as a substring will be evaluated. The --autodetect_dataset_names flag automatically detects which datasets are available for evaluation. --dry_run can be used to see which runs would be evaluated for a given set of arguments. Use the --help flag to see all available options.
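The substring matching behind --filter_str can be sketched as follows (the run names are made up for illustration):

```python
def select_runs(run_names, filter_str):
    """Keep runs whose name contains the filter string,
    mirroring the --filter_str behavior described above."""
    return [name for name in run_names if filter_str in name]

runs = ["230818_100813_on_clutter_gan", "230817_151200_inside_drawer_gan"]
print(select_runs(runs, "230818_100813"))  # ['230818_100813_on_clutter_gan']
```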

TAMP Integration

To use a trained generator in a simple task and motion planning (TAMP) pipeline, use the script scripts/integrated_planning/run_planner.py. The --help flag documents available options. An example command would be:

```bash
python3 run_planner.py --paths local --method direct --config_file_path ../../configs/config_tamp.yaml \
    --num_repetitions 100 --random_seed 12 --budget_time_seconds 30 \
    --placing_ignore_orientation false --keep_successful_samples true --sampler_id <sampler_id>
```

The config file config_tamp.yaml can be found in the configs directory; among other settings, it specifies the goal the agent shall achieve.

Owner

  • Name: ETHZ ASL
  • Login: ethz-asl
  • Kind: organization
  • Location: Zurich, Switzerland

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Förster"
  given-names: "Julian"
  orcid: "https://orcid.org/0000-0002-1163-1065"
- family-names: "Chung"
  given-names: "Jen Jen"
  orcid: "https://orcid.org/0000-0001-7828-0741"
- family-names: "Ott"
  given-names: "Lionel"
  orcid: "https://orcid.org/0000-0001-6554-0575"
- family-names: "Siegwart"
  given-names: "Roland"
  orcid: "https://orcid.org/0000-0002-2760-7983"
title: "On Learning Scene-aware Generative State Abstractions for Task-level Mobile Manipulation Planning - Code Release"
version: 1.0.0
# doi: 10.5281/zenodo.1234
date-released: 2023-09-27
url: "https://github.com/ethz-asl/predicate_learning"
preferred-citation:
  type: unpublished
  authors:
  - family-names: "Förster"
    given-names: "Julian"
    orcid: "https://orcid.org/0000-0002-1163-1065"
  - family-names: "Chung"
    given-names: "Jen Jen"
    orcid: "https://orcid.org/0000-0001-7828-0741"
  - family-names: "Ott"
    given-names: "Lionel"
    orcid: "https://orcid.org/0000-0001-6554-0575"
  - family-names: "Siegwart"
    given-names: "Roland"
    orcid: "https://orcid.org/0000-0002-2760-7983"
  # doi: "10.0000/00000"
  # journal: "Journal Title"
  month: 9
  # start: 1 # First page number
  # end: 10 # Last page number
  title: "On Learning Scene-aware Generative State Abstractions for Task-level Mobile Manipulation Planning"
  # issue: 1
  # volume: 1
  year: 2023

GitHub Events

Total
  • Watch event: 2
Last Year
  • Watch event: 2

Committers

Last synced: 12 months ago

All Time
  • Total Commits: 16
  • Total Committers: 1
  • Avg Commits per committer: 16.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Julian Förster f****n 16

Issues and Pull Requests

Last synced: 12 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements.txt pypi
  • PyYAML *
  • Shapely *
  • argparse *
  • icecream *
  • igibson ==2.0.2
  • ipython *
  • matplotlib *
  • networkx *
  • numpy ==1.23.5
  • open3d *
  • pandas ==1.4.2
  • plyfile *
  • pybullet-svl ==3.1.6.4
  • pytest *
  • scikit-learn ==1.0.2
  • scipy ==1.10.1
  • seaborn *
  • sklearn ==0.0
  • tensorboardX *
  • tqdm *
  • urdf-parser-py ==0.0.3