basketball_trainer

:basketball: BasketballDetector training utilities

https://github.com/peiva-git/basketball_trainer

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file: found
  • .zenodo.json file: found
  • DOI references
  • Academic publication links: links to arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low similarity (13.8%) to scientific vocabulary

Keywords

cnn deep-learning image-segmentation paddlepaddle paddleseg segmentation semantic-segmentation
Last synced: 6 months ago

Repository

:basketball: BasketballDetector training utilities

Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
cnn deep-learning image-segmentation paddlepaddle paddleseg segmentation semantic-segmentation
Created over 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme · License · Citation

README.md

Basketball Detector Trainer

This repository contains all the necessary tools to train the :basketball: BasketballDetector model.

Table Of Contents

  1. Description
  2. Using the PaddleSeg toolbox
  3. Environment setup
  4. Model training
  5. Model evaluation
  6. Results
    1. OHEM Cross-Entropy loss function evaluation results
    2. Weighted Cross-Entropy loss function evaluation results
    3. Cross-test set results
    4. Rancrop model results
  7. Credits

Description

This project uses the PaddleSeg toolkit to train a modified PPLiteSeg real-time semantic segmentation model. The configuration files used during training can be found here. In the following sections, you will find detailed instructions on how to set up a working environment and how to train a model.

Using the PaddleSeg toolbox

The segmentation model has been trained using a customized version of the sample configuration file for the PPLiteSeg model applied to the Cityscapes dataset found on the PaddleSeg repository.

Environment setup

Before being able to train the model, you must install Paddle and PaddleSeg. You can use one of the provided conda environments by running the following command:

```shell
conda create --name myenv-pp --file pp-[cpu|gpu].yml
```

It is recommended to have a CUDA-enabled GPU in order to take advantage of GPU acceleration.
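
Note that the commands in the following sections assume the PaddleSeg sources are available in a PaddleSeg/ subdirectory of the repository. As a sketch, assuming PaddleSeg is tracked as a git submodule (the original instructions do not state how it is provided), a fresh checkout could fetch it with:

```shell
# Clone the trainer repository together with its submodules
# (assumes PaddleSeg is tracked as a git submodule; adjust if it is vendored differently)
git clone --recurse-submodules https://github.com/peiva-git/basketball_trainer.git
```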

In case you're using the GPU version, don't forget to set up the required environment variable as well:

```shell
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH
```

You can have this variable exported automatically each time you activate your conda environment by running the following commands:

```shell
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
```
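
As an optional, complementary step (not part of the original instructions), you can add a matching deactivation hook so the modified LD_LIBRARY_PATH does not leak into other environments. A minimal sketch, assuming the standard conda deactivate.d mechanism:

```shell
# Hypothetical companion hook: strip the conda lib path again on deactivation
mkdir -p $CONDA_PREFIX/etc/conda/deactivate.d
cat > $CONDA_PREFIX/etc/conda/deactivate.d/env_vars.sh << 'EOF'
# Remove the prefix that was prepended by the activation hook
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH#"$CONDA_PREFIX/lib/:"}
EOF
```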

The Paddle GPU release currently available on PyPI requires CUDA Runtime API version 10.2 to be installed in order to run correctly. This dependency is therefore listed in the provided conda environment. If you want to use a different CUDA version, refer to the official documentation.
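
To quickly check that Paddle was installed correctly and can actually use the GPU, you can run Paddle's built-in self-check (an optional verification step, not part of the original instructions):

```shell
# Reports whether PaddlePaddle is installed correctly and whether
# the CUDA runtime and GPU devices are usable from this environment
python -c "import paddle; paddle.utils.run_check()"
```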

Also, to avoid unexpected errors, the PaddleSeg package should be built from source using the provided repository, with the myenv-pp environment active:

```shell
cd PaddleSeg
pip install -v -e .
```
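
A quick way to confirm that the editable install succeeded (again, an optional check not listed in the original README) is to import the package and print its version:

```shell
# Verifies that paddleseg is importable from the active environment
python -c "import paddleseg; print(paddleseg.__version__)"
```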

Model training

To train the BasketballDetector segmentation model, run:

```shell
python PaddleSeg/tools/train.py \
    --configs configs/pp_liteseg_base_stdc1_ohem_10000_1024x512.yml \
    --do_eval \
    --use_vdl \
    --save_interval 2500 \
    --keep_checkpoint_max 20 \
    --save_dir output
```

The trained models will then be available in the output/ directory. More information on what these options do and on how to visualize the training process can be found here.
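
Since training is launched with --use_vdl, you can follow the loss and metric curves in VisualDL while the run is in progress. A minimal sketch, assuming the logs are written under the output/ save directory:

```shell
# Start the VisualDL web UI and point it at the training log directory;
# by default the dashboard is served on http://localhost:8040
visualdl --logdir output
```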

Model evaluation

To evaluate the obtained model, run:

```shell
python PaddleSeg/tools/val.py \
    --configs configs/pp_liteseg_base_stdc1_ohem_10000_1024x512.yml \
    --model_path output/best_model/model.pdparams
```

For additional options refer to the official documentation.
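
Beyond computing the metrics, you may want to run the trained model on individual frames. The following is a hedged sketch using PaddleSeg's predict script, assuming the same --configs flag convention used above; frame.png and predictions/ are placeholder paths:

```shell
# Runs inference on a single image and writes the predicted segmentation
# mask to the predictions/ directory (paths are illustrative)
python PaddleSeg/tools/predict.py \
    --configs configs/pp_liteseg_base_stdc1_ohem_10000_1024x512.yml \
    --model_path output/best_model/model.pdparams \
    --image_path frame.png \
    --save_dir predictions
```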

Results

Various models have been trained using two different loss functions: the OHEM Cross-Entropy loss function and the weighted Cross-Entropy loss function. All the configurations used, with their different parameters, can be found here.

In the following tables you can find the summarized results.

OHEM Cross-Entropy loss function evaluation results

| min_kept parameter value | Ball class IoU | Ball class Precision | Ball class Recall | Kappa | Links |
|--------------------------------|----------------|----------------------|-------------------|--------|--------|
| 40 x 40 x batch_size = 6400 | 0,6590 | 0,8527 | 0,7437 | 0,7944 | config |
| 50 x 50 x batch_size = 10000 | 0,6658 | 0,8770 | 0,7344 | 0,7993 | config |
| 60 x 60 x batch_size = 14400 | 0,6601 | 0,8895 | 0,7191 | 0,7952 | config |
| 70 x 70 x batch_size = 20000 | 0,6542 | 0,9090 | 0,7000 | 0,7979 | config |

Weighted Cross-Entropy loss function evaluation results

| Background class weight | Ball class IoU | Ball class Precision | Ball class Recall | Kappa | Links |
|-------------------------|----------------|----------------------|-------------------|--------|--------|
| 0,001 | 0,2656 | 0,2700 | 0,9422 | 0,4195 | config |
| 0,005 | 0,4703 | 0,4927 | 0,9117 | 0,6396 | config |
| 0,01 | 0,5394 | 0,5729 | 0,9020 | 0,7007 | config |

The test set results were consistently better than the validation set results used during the evaluation phase, even after trying multiple dataset splits, possibly indicating an over-fitting problem due to the limited dataset size.

Therefore, a cross-test set has been evaluated as well, using images from a basketball court not seen during training but still sufficiently similar (similar camera, similar camera angles). The complete report on this issue is available in my master's thesis, here.

Cross-test set results

| Model name | Ball class IoU | Ball class Precision | Ball class Recall | Kappa | Links |
|-----------------|----------------|----------------------|-------------------|--------|--------|
| PP-LiteSeg-OHEM | 0,4555 | 0,6784 | 0,5810 | 0,6258 | config |
| PP-LiteSeg-WCE | 0,3215 | 0,3639 | 0,7339 | 0,4863 | config |

Rancrop model results

The PPLiteSegRandomCrops model was validated using these configurations and these configurations, with the model described in this configuration used as the base model.

Several variants of the PPLiteSegRandomCrops model were tested, all available in this directory. In particular, different padding and aggregation methods were used. All the obtained results were worse than those of the chosen base PPLiteSeg model.

Credits

  • This project uses the PaddleSeg toolbox. All credits go to its authors.
  • This project uses pdoc to generate its documentation. All credits go to its authors.
  • The implemented PPLiteSegRandomCrops model takes inspiration from the paper Real-time CNN-based Segmentation Architecture for Ball Detection in a Single View Setup. All credits go to its authors.

Owner

  • Name: Ivan Pelizon
  • Login: peiva-git
  • Kind: user
  • Location: Italy

Committers

Last synced: about 2 years ago

All Time
  • Total Commits: 113
  • Total Committers: 1
  • Avg Commits per committer: 113.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 113
  • Committers: 1
  • Avg Commits per committer: 113.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Ivan Pelizon (2****t): 113 commits

Issues and Pull Requests

Last synced: about 2 years ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0