https://github.com/advrhumanoids/nn_laser_spot_tracking

Laser spot tracking employing a neural network (NN) and ROS


Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: sciencedirect.com, ieee.org, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.5%) to scientific vocabulary
Last synced: 7 months ago

Repository

Laser spot tracking employing a neural network (NN) and ROS

Basic Info
  • Host: GitHub
  • Owner: ADVRHumanoids
  • License: bsd-3-clause
  • Language: Python
  • Default Branch: master
  • Size: 2.05 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created about 2 years ago · Last pushed 9 months ago
Metadata Files
Readme License

README.md

nn_laser_spot_detection

(Figure: logic scheme)

Detects (and, in practice, tracks) in the ROS world a laser spot emitted by a common laser pointer.

Requirements:

  • It should work on a CPU-only PC, but a GPU (NVIDIA only) is preferable
  • PyTorch with a matching CUDA version. conda (preferably miniconda) can be used, but you will probably also need to install ROS in the environment to run everything
  • ROS (1)
  • If using YOLOv5, its requirements (see Setup and running)

Setup and running

The tracking comprises two main nodes: 2D_detection for 2D neural network detection and 3D_matching for pixel-point matching and TF broadcasting.

In brief: the 2D_detection node subscribes to /$(arg camera)/$(arg image)/$(arg transport), and the 3D_matching node subscribes to /$(arg point_cloud). A TF is broadcast to the ROS tf tree from the point cloud reference frame to $(arg laser_spot_frame).
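The topic composition above can be sketched in plain Python (illustrative only; the real nodes resolve these names through roslaunch argument substitution):

```python
def image_topic(camera: str,
                image: str = "color/image_raw",
                transport: str = "compressed") -> str:
    """Compose the image topic the 2D detection node subscribes to,
    mirroring /$(arg camera)/$(arg image)/$(arg transport)."""
    return "/{}/{}/{}".format(camera, image, transport)

print(image_topic("camera"))  # → /camera/color/image_raw/compressed
```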

Identify the model/weights you want. Some are provided at https://zenodo.org/records/10471835; in any case, they must have been trained on laser spots. Put the model you want to use in a folder; the models folder of this repo is the default.

[Optional, but suggested] If YOLOv5 is used, it is better to clone their repo and provide its path via the yolo_path argument; otherwise, PyTorch will download it every time (since the default is "ultralytics/yolov5"). If cloning, enter the folder and install the requirements: pip install -r requirements.txt.

  • Run the launch file: roslaunch nn_laser_spot_tracking laser_tracking.launch model_name:=<> camera:=<> point_cloud:=<>
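The steps above might look like the following in practice (a hypothetical quick-start; paths, the camera name, the point cloud topic, and the weight file name are all illustrative):

```shell
# Clone YOLOv5 locally so PyTorch does not re-download it each run
git clone https://github.com/ultralytics/yolov5.git ~/yolov5
pip install -r ~/yolov5/requirements.txt

# Weights downloaded from https://zenodo.org/records/10471835 go in the
# package's models/ folder (the default model_path)
roslaunch nn_laser_spot_tracking laser_tracking.launch \
    model_name:=laser_yolov5.pt \
    camera:=camera \
    point_cloud:=/camera/depth/color/points \
    yolo_path:=$HOME/yolov5
```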

Launch File Arguments

Required

  • model_name: Name of the neural network model, a ".pt" file located in the model_path folder
  • camera: Camera "ROS" name, used to define the topic where images are published; it is the root of the image topic name.
  • point_cloud: Name of the topic where the point cloud is published (pcl::PointCloud<pcl::PointXYZ>)
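The 3D_matching step pairs a 2D pixel detection with the corresponding point in the cloud. One plausible sketch of that lookup, assuming an organized cloud stored row-major as width x height (names and storage layout are assumptions, not the package's actual code):

```python
import math
from typing import Optional, Tuple

def pixel_to_point(points, width: int, u: int, v: int
                   ) -> Optional[Tuple[float, float, float]]:
    """Look up the 3D point for pixel (u, v) in an organized point cloud,
    stored row-major as a flat list of (x, y, z) tuples."""
    p = points[v * width + u]
    if any(math.isnan(c) for c in p):  # no depth measured at this pixel
        return None
    return p

# 2x2 toy "cloud": three valid points, one invalid (NaN) point
cloud = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0),
         (float("nan"),) * 3, (0.1, 0.1, 1.0)]
print(pixel_to_point(cloud, width=2, u=1, v=0))  # → (0.1, 0.0, 1.0)
```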

Optional

  • model_path (default: "$(find nn_laser_spot_tracking)/models/"): Path to the neural network model directory.
  • yolo_path (default: "ultralytics/yolov5"): Path to the YOLOv5 repo; better to clone it so you have it locally.
  • image (default: "color/image_raw"): Image topic name.
  • transport (default: "compressed"): Image transport type.
  • dl_rate (default: 30): Rate of the 2D_detection node. Note that the inference step is blocking, so the node may not reach this rate.
  • tracking_rate (default: 100): Rate of the 3D_matching node.

  • detection_confidence_threshold (default: 0.55): Confidence threshold for 2D detections.

  • cloud_detection_max_sec_diff (default: 4): Maximum timestamp difference (in seconds) between 2D detections and point clouds; if exceeded, the result is discarded.

  • position_filter_enable (default: true): Enable laser spot position filtering to smooth erratic movements of the laser spot.

  • position_filter_bw (default: 9): Bandwidth of the position filter.

  • position_filter_damping (default: 1): Damping factor for the position filter. Both filter parameters are settable online with a ddynamic_reconfigure server.

  • keypoint_topic (default: "/nn_laser_spot_tracking/detection_output_keypoint"): Topic over which 2D_detection and 3D_matching communicate. Better not to touch.

  • laser_spot_frame (default: "laser_spot_frame"): Frame name for the laser spot.

  • pub_out_images (default: true): Publish 2D images with a rectangle drawn on the detected laser spot.

  • pub_out_images_all_keypoints (default: false): Publish RGB 2D images with all keypoints (not only the best one above the confidence threshold).

  • pub_out_images_topic (default: "/detection_output_img"): Topic name for publishing output images.

  • gdb (default: false): Enable GDB debugging.

  • rviz (default: false): Launch RViz visualization.
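The position filter's internals are not documented here; a minimal sketch of one plausible interpretation — a second-order low-pass driven by the bandwidth and damping parameters, run per axis at tracking_rate — could look like this (all names are illustrative, not the package's actual API):

```python
class SecondOrderLowPass:
    """Second-order low-pass: x'' = bw^2 (target - x) - 2 zeta bw x'.

    bw ~ position_filter_bw, zeta ~ position_filter_damping,
    dt = 1 / tracking_rate. Applied independently to x, y, z to
    smooth erratic movements of the detected laser spot.
    """
    def __init__(self, bw: float = 9.0, zeta: float = 1.0,
                 dt: float = 1.0 / 100.0, x0: float = 0.0):
        self.bw, self.zeta, self.dt = bw, zeta, dt
        self.x, self.v = x0, 0.0

    def update(self, target: float) -> float:
        # Explicit Euler integration of the second-order dynamics
        acc = (self.bw ** 2 * (target - self.x)
               - 2.0 * self.zeta * self.bw * self.v)
        self.v += acc * self.dt
        self.x += self.v * self.dt
        return self.x

f = SecondOrderLowPass()
for _ in range(1000):      # 10 s at 100 Hz
    y = f.update(1.0)      # constant setpoint: output converges to 1.0
```

With the default bw = 9 and zeta = 1 the continuous-time poles are a critically damped double pole at -9 rad/s, so the output settles without overshoot.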

Training new models

Testing/comparing models

Image dataset:

Available at https://zenodo.org/records/15230870. Two formats are given:

  • COCO format (for non-YOLO models):
    • a folder containing images and annotations folders
    • in images, all the images (not divided into train, val, test; this split is done by the code)
    • in annotations, an instances_default.json file
  • YOLOv5 PyTorch format for the YOLOv5 model:
    • a folder containing a data.yaml file which points to two folders in the same location:
      • train
      • valid
    • both have images and labels folders
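Since the COCO-format images ship unsplit, the training code performs the train/val/test division itself. A minimal sketch of such a split (helper names and split fractions are illustrative, not the repository's actual code):

```python
import random

def split_coco_image_ids(coco: dict, val_frac: float = 0.15,
                         test_frac: float = 0.15, seed: int = 0) -> dict:
    """Split the image ids of a COCO-style annotation dict into
    disjoint train/val/test id lists."""
    ids = [img["id"] for img in coco["images"]]
    random.Random(seed).shuffle(ids)       # deterministic given the seed
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    return {"test": ids[:n_test],
            "val": ids[n_test:n_test + n_val],
            "train": ids[n_test + n_val:]}

# Tiny inline stand-in for annotations/instances_default.json
coco = {"images": [{"id": i, "file_name": f"img_{i}.png"} for i in range(10)],
        "annotations": [],
        "categories": [{"id": 1, "name": "laser_spot"}]}
splits = split_coco_image_ids(coco)  # 1 test, 1 val, 8 train images
```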

Trained models:

Available at https://zenodo.org/records/10471835

Paper

https://www.sciencedirect.com/science/article/pii/S092188902500140X

@article{TORIELLI2025105054,
  title = {An intuitive tele-collaboration interface exploring laser-based interaction and behavior trees},
  journal = {Robotics and Autonomous Systems},
  pages = {105054},
  year = {2025},
  issn = {0921-8890},
  doi = {https://doi.org/10.1016/j.robot.2025.105054},
  url = {https://www.sciencedirect.com/science/article/pii/S092188902500140X},
  author = {Davide Torielli and Luca Muratore and Nikos Tsagarakis},
  keywords = {Human–robot interface, Human-centered robotics, Visual servoing, Motion planning},
}

https://ieeexplore.ieee.org/document/10602529

@ARTICLE{10602529,
  author = {Torielli, Davide and Bertoni, Liana and Muratore, Luca and Tsagarakis, Nikos},
  journal = {IEEE Robotics and Automation Letters},
  title = {A Laser-Guided Interaction Interface for Providing Effective Robot Assistance to People With Upper Limbs Impairments},
  year = {2024},
  volume = {9},
  number = {9},
  pages = {7653-7660},
  keywords = {Robots;Lasers;Task analysis;Keyboards;Magnetic heads;Surface emitting lasers;Grippers;Human-robot collaboration;physically assistive devices;visual servoing},
  doi = {10.1109/LRA.2024.3430709}
}

Owner

  • Name: ADVR Humanoids
  • Login: ADVRHumanoids
  • Kind: organization
  • Email: luca.muratore@iit.it
  • Location: Istituto Italiano di Tecnologia

Humanoids & Human Centered Mechatronics, HHCM, IIT

GitHub Events

Total
  • Delete event: 2
  • Push event: 14
  • Fork event: 1
  • Create event: 2
Last Year
  • Delete event: 2
  • Push event: 14
  • Fork event: 1
  • Create event: 2

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 29
  • Total Committers: 1
  • Avg Commits per committer: 29.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 12
  • Committers: 1
  • Avg Commits per committer: 12.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Davide Torielli t****s@g****m 29

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

setup.py pypi