3d-semideformable-objecttracking

This repository contains a suite of programs that aid in detecting semi-deformable objects; the example object used throughout is a bell pepper.

https://github.com/gustavodlra/3d-semideformable-objecttracking

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: GustavoDLRA
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 31.3 MB
Statistics
  • Stars: 2
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created over 1 year ago · Last pushed 9 months ago
Metadata Files
Readme License Citation

README.md

3D-SemiDeformable-ObjectTracking

This repository contains a suite of programs that aid in detecting semi-deformable objects; the example object used throughout is a bell pepper.

Python Requirements

The code was developed on a Windows 11 machine using the Anaconda Python distribution, in an Anaconda virtual environment with Python 3.10.13. A requirements.txt file is included so that the environment used to develop this code can be recreated.

The Reference Notebooks Folder

The Reference Notebooks folder contains six Jupyter notebooks that detail the process used to perform detection and pose estimation of a bell pepper in 3-D. The notebooks should be read in the following order:

  1. imgsegmentationpc_creation.ipynb: This notebook illustrates the process of extracting color and depth data from images captured by an Azure Kinect. The result is a colored point cloud of the object of interest.
  2. createpcfrom_mesh.ipynb: The workflow in this notebook takes a 3D model in STL format as input. The model is converted into a point cloud with a number of points specified in the notebook and saved to a specified directory.
  3. pcaligningand_scaling: This notebook performs the key part of the process. It illustrates and explains the series of steps required to accurately deform the point cloud of a canonical 3D model so that it matches the recognizable characteristics of the deformable object. A defining feature line, running from the centroid to the furthest point, is found in both the point cloud of the canonical model and in the cloud of the object scanned by the Kinect. These lines are used to perform an orientation alignment, which is later refined. Once both clouds are properly aligned, the point cloud of the canonical model is scaled to match the dimensions of the Kinect-scanned real-world object. The output of this notebook is an accurately scaled canonical model.
  4. pose_registration.ipynb: Using the scaled and aligned model point cloud obtained in the prior notebook, this notebook applies the RAndom SAmple Consensus (RANSAC) and Iterative Closest Point (ICP) algorithms, in that order, to register the scaled canonical model against the real-world point cloud and approximate its pose. RANSAC produces a quality initial pose estimate, which is then refined with ICP. This notebook outputs a transformation matrix that aligns the model point cloud with the best pose the notebook could obtain.
  5. generate2D_rep.ipynb: The scaled and deformed point cloud is placed in the pose given by the prior notebook. The point cloud is then transformed using an orthographic projection with matplotlib to generate a 2D representation that can be superimposed onto the color image.
  6. overlay2dog_img.ipynb: This code superimposes the orthographic view generated in the prior notebook onto the color image at the centroid of the object, allowing for an in-context visualization.
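The extraction step in the first notebook reduces to back-projecting each valid depth pixel through the pinhole camera model to obtain a colored point cloud. The following NumPy-only sketch shows that idea; the function name, the intrinsics (fx, fy, cx, cy), and the depth scale are hypothetical placeholders, and the actual notebook works with data captured by the Azure Kinect.

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into a colored 3-D point cloud using the
    pinhole camera model. Intrinsics and scale here are hypothetical."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale   # millimetres -> metres
    valid = z > 0                                # drop pixels with no depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = color[valid] / 255.0                # normalise RGB to [0, 1]
    return points, colors
```

Pixels with a depth reading of zero (no return) are discarded, so the resulting cloud contains only valid measurements.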
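Converting an STL model into a point cloud, as the second notebook does, amounts to sampling points on the mesh surface. One common approach, sketched here in plain NumPy with hypothetical names (the notebook may instead use a library routine such as Open3D's samplers), is to pick triangles with probability proportional to their area and then draw uniform barycentric coordinates:

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points, rng=None):
    """Uniformly sample n_points on a triangle mesh surface.
    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    rng = np.random.default_rng(rng)
    tris = vertices[faces]                                   # (F, 3, 3)
    # Triangle areas via the cross product, used as sampling weights.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Square-root trick gives uniform barycentric coordinates per triangle.
    r1 = np.sqrt(rng.random(n_points))
    r2 = rng.random(n_points)
    a, b, c = tris[idx, 0], tris[idx, 1], tris[idx, 2]
    return (1 - r1)[:, None] * a + (r1 * (1 - r2))[:, None] * b + (r1 * r2)[:, None] * c
```

The square-root reparameterisation is what keeps the samples uniform within each triangle; drawing two raw uniforms would bias points toward one vertex.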
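The centroid-to-furthest-point feature line of the third notebook can be illustrated in a few lines. This NumPy-only sketch (all names hypothetical) finds the feature line of each cloud, rotates the model's line onto the target's with the Rodrigues formula, and scales by the ratio of line lengths; the notebook's subsequent refinement of the alignment is not reproduced here:

```python
import numpy as np

def feature_line(pcd):
    """Centroid and centroid-to-furthest-point vector of an (N, 3) cloud."""
    centroid = pcd.mean(axis=0)
    far = pcd[np.argmax(np.linalg.norm(pcd - centroid, axis=1))]
    return centroid, far - centroid

def rotation_between(a, b):
    """Rodrigues rotation matrix sending unit(a) onto unit(b)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    if np.isclose(c, -1.0):                      # opposite vectors: 180-degree turn
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def align_and_scale(model, target):
    """Rotate and scale `model` so its feature line matches `target`'s."""
    mc, mv = feature_line(model)
    tc, tv = feature_line(target)
    R = rotation_between(mv, tv)
    s = np.linalg.norm(tv) / np.linalg.norm(mv)
    return (model - mc) @ R.T * s + tc
```

Aligning a single line leaves the rotation about that line undetermined, which is presumably why the notebook follows this step with a refinement stage.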
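The refinement half of the fourth notebook's pipeline can be conveyed with a minimal point-to-point ICP loop: alternate nearest-neighbour matching with the closed-form Kabsch/SVD rigid transform. This NumPy sketch uses hypothetical names, brute-force matching that only suits small clouds, and omits the RANSAC initialisation; the notebook itself presumably relies on library implementations:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rigid transform mapping src onto dst
    (rows are assumed to correspond)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(source, target, iterations=30):
    """Point-to-point ICP; returns a 4x4 transform aligning source to target."""
    src = source.copy()
    T = np.eye(4)
    for _ in range(iterations):
        # Brute-force nearest neighbour per source point (small clouds only).
        d = np.linalg.norm(src[:, None] - target[None, :], axis=2)
        matches = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                         # accumulate the incremental transform
    return T
```

ICP only converges from a nearby starting pose, which is exactly why the notebook runs RANSAC first to supply the initial estimate.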
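Conceptually, the last two notebooks perform an orthographic projection (dropping one coordinate axis) and then paste the rendered view into the color image centred on the object's centroid. A NumPy-only sketch of both ideas, with hypothetical names and without the matplotlib rendering the notebooks actually use:

```python
import numpy as np

def orthographic_projection(points, axis=2):
    """Drop one coordinate axis of an (N, 3) cloud to get a 2-D orthographic view."""
    keep = [i for i in range(3) if i != axis]
    return points[:, keep]

def overlay_at_centroid(image, sprite, centroid_uv):
    """Paste `sprite` (h, w, 3) into `image` centred on pixel centroid_uv,
    clipping at the image borders."""
    out = image.copy()
    h, w = sprite.shape[:2]
    top = int(centroid_uv[1]) - h // 2
    left = int(centroid_uv[0]) - w // 2
    y0, y1 = max(top, 0), min(top + h, out.shape[0])
    x0, x1 = max(left, 0), min(left + w, out.shape[1])
    out[y0:y1, x0:x1] = sprite[y0 - top:y1 - top, x0 - left:x1 - left]
    return out
```

A real overlay would typically also mask out the sprite's background pixels; this sketch pastes the full rectangle for brevity.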

Owner

  • Login: GustavoDLRA
  • Kind: user

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: 3D-SemiDeformable-ObjectTracking
message: >-
  This repository contains a suite of programs meant to aid
  in the detection of semi-deformable objects. The object
  used in this repository consisted of a bell pepper.
type: software
authors:
  - given-names: Gustavo
    family-names: De Los Ríos Alatorre
    email: gustavodlra1999@gmail.com
    affiliation: ITESM
    orcid: 'https://orcid.org/0009-0000-1910-0691'
repository-code: >-
  https://github.com/GustavoDLRA/3D-SemiDeformable-ObjectTracking.git
abstract: >-
  This repository contains a suite of programs meant to aid
  in the detection of semi-deformable objects. The object
  used in this repository consisted of a bell pepper. 
keywords:
  - Computer Vision
  - Robot Vision
  - 3D
  - Pose Estimation
  - Object Detection
license: Apache-2.0
commit: 536ab17
version: '1.0'
date-released: '2024-06-13'

GitHub Events

Total
  • Watch event: 2
  • Push event: 1
Last Year
  • Watch event: 2
  • Push event: 1