Sports2D

Sports2D: Compute 2D human pose and angles from a video or a webcam - Published in JOSS (2024)

https://github.com/davidpagnon/sports2d

Science Score: 100.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 6 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org, zenodo.org
  • Committers with academic emails
    1 of 4 committers (25.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords

biomechanics blazepose joint-angles kinematics kinovea markerless openpose pose-estimation rtmpose sports-analytics

Keywords from Contributors

3d-kinematics blender mocap motion-capture opensim sports

Scientific Fields

Engineering Computer Science - 40% confidence
Last synced: 4 months ago

Repository

Compute 2D human pose and angles from a video or a webcam.

Basic Info
  • Host: GitHub
  • Owner: davidpagnon
  • License: bsd-3-clause
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 32 MB
Statistics
  • Stars: 139
  • Watchers: 3
  • Forks: 23
  • Open Issues: 3
  • Releases: 55
Topics
biomechanics blazepose joint-angles kinematics kinovea markerless openpose pose-estimation rtmpose sports-analytics
Created over 2 years ago · Last pushed 5 months ago
Metadata Files
Readme License Citation

README.md


Sports2D

Sports2D automatically computes 2D joint positions, as well as joint and segment angles from a video or a webcam.


Announcements:
  • New in v0.8: Select only the persons you want to analyze.
  • New in v0.7: Marker augmentation and inverse kinematics for accurate 3D motion with OpenSim.
  • New in v0.6: Any detector and pose estimation model can be used.
  • New in v0.5: Results in meters rather than pixels. Faster and more accurate, works from a webcam, better visualization output, more flexible and easier to run.

Run the following to get the latest version:

```cmd
pip install sports2d pose2sim -U
```

N.B.: As always, I am more than happy to welcome contributions (see How to contribute)!


https://github.com/user-attachments/assets/6a444474-4df1-4134-af0c-e9746fa433ad

Warning: Angle estimation is only as good as the underlying pose estimation algorithm, i.e., it is not perfect.
Warning: Results are acceptable only if the persons move within a 2D plane (sagittal or frontal). They need to be filmed as parallel as possible to the plane of motion.
If you need 3D research-grade markerless joint kinematics, consider using several cameras with Pose2Sim.

Contents

  1. Installation and Demonstration
    1. Installation
      1. Quick install
      2. Full install
    2. Demonstration
      1. Run the demo
      2. Visualize in OpenSim
      3. Visualize in Blender
    3. Play with the parameters
      1. Run on a custom video or on a webcam
      2. Run for a specific time range
      3. Select the persons you are interested in
      4. Get coordinates in meters
      5. Run inverse kinematics
      6. Run on several videos at once
      7. Use the configuration file or run within Python
      8. Get the angles the way you want
      9. Customize your output
      10. Use a custom pose estimation model
      11. All the parameters
  2. Go further
    1. Too slow for you?
    2. Run inverse kinematics
    3. How it works
  3. How to cite and how to contribute


Installation and Demonstration

Installation

Quick install

N.B.: Full install is required for OpenSim inverse kinematics.

Open a terminal. Type python -V to make sure Python >=3.10 and <=3.11 is installed. If not, install it from python.org.

Run:

```cmd
pip install sports2d
```

Alternatively, build from source to test the latest changes:

```cmd
git clone https://github.com/davidpagnon/sports2d.git
cd sports2d
pip install .
```


Full install

N.B.: Only needed if you want to run inverse kinematics (--do_ik True).
N.B.: If you already have a Pose2Sim conda environment, you can skip this step: just run conda activate Pose2Sim and pip install sports2d.

  • Install Anaconda or Miniconda. Open an Anaconda prompt and create a virtual environment:

```cmd
conda create -n Sports2D python=3.10 -y
conda activate Sports2D
```

  • Install OpenSim: install the OpenSim Python API (if you do not want to install via conda, refer to this page):

```cmd
conda install -c opensim-org opensim -y
```

  • Install Sports2D with Pose2Sim:

```cmd
pip install sports2d
```


Demonstration

Run the demo:

Just open a command line and run:

```cmd
sports2d
```

You should see the joint positions and angles being displayed in real time.

Check the folder where you run that command line to find the resulting video, images, TRC pose and MOT angle files (which can be opened with any spreadsheet software), and logs.
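MOT files are plain text: a short header block terminated by an 'endheader' line, followed by tab-separated columns. If you prefer pandas to a spreadsheet, here is a minimal loading sketch (not part of Sports2D; the file name is a placeholder, use whichever .mot file appears in your result folder):

```python
# Minimal .mot loader (sketch). OpenSim .mot files start with a text header
# terminated by an 'endheader' line, then tab-separated columns of data.
from io import StringIO
import pandas as pd

def read_mot(path):
    with open(path) as f:
        lines = f.readlines()
    # Skip the OpenSim header block, which ends with 'endheader'
    start = next(i for i, l in enumerate(lines) if l.strip().lower() == 'endheader') + 1
    return pd.read_csv(StringIO(''.join(lines[start:])), sep='\t')

angles = read_mot('angles.mot')  # placeholder file name
print(angles.head())
```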

Important: If you did the conda install, you first need to activate the environment: run conda activate Sports2D in the Anaconda prompt.

Note: The demo video is deliberately challenging, to demonstrate the robustness of the process after sorting, interpolation, and filtering. It contains:
  • One person walking in the sagittal plane
  • One person doing jumping jacks in the frontal plane. This person then performs a flip while backlit, both of which are challenging for the pose detection algorithm
  • One tiny person flickering in the background, who needs to be ignored


Visualize in Blender

  1. Install the Pose2Sim_Blender add-on. Follow the instructions on the Pose2Sim_Blender add-on page.
  2. Open your point coordinates. Add Markers: open your trc file (e.g., coords_m.trc) from your result_dir folder. This will optionally create an animated rig based on the motion of the captured person.
  3. Open your animated skeleton. Make sure you first set --do_ik True (full install required); see the inverse kinematics section for more details.
    • Add Model: open your scaled model (e.g., Model_Pose2Sim_LSTM.osim).
    • Add Motion: open your motion file (e.g., angles.mot). Make sure the skeleton is selected in the outliner.

The OpenSim skeleton is not rigged yet. Feel free to contribute!


Visualize in OpenSim

  1. Install OpenSim GUI.
  2. Visualize point coordinates: File -> Preview experimental data: open your trc file (e.g., coords_m.trc) from your result_dir folder.
  3. Visualize angles: to open an animated model and run further biomechanical analysis, make sure you first set --do_ik True (full install required); see the inverse kinematics section for more details.
    • File -> Open Model: Open your scaled model (e.g., Model_Pose2Sim_LSTM.osim).
    • File -> Load Motion: Open your motion file (e.g., angles.mot).


Play with the parameters

For a full list of the available parameters, see this section of the documentation, check the Config_Demo.toml file, or type sports2d --help. All unspecified parameters are set to their default values.


Run on a custom video or on a webcam:

```cmd
sports2d --video_input path_to_video.mp4
```

```cmd
sports2d --video_input webcam
```


Run for a specific time range:

```cmd
sports2d --time_range 1.2 2.7
```


Select the persons you are interested in:

If you only want to analyze a subset of the detected persons, you can use the --nb_persons_to_detect and --person_ordering_method parameters. The order matters if you want to convert coordinates to meters or run inverse kinematics.

```cmd
sports2d --nb_persons_to_detect 2 --person_ordering_method highest_likelihood
```

We recommend using the on_click method if you can afford a manual input: it lets you set both the number of persons and their order in a single step. When prompted, select the persons you are interested in, in the desired order. In our case, let's slide to a frame where both people are visible, and select the woman first, then the man.

Otherwise, if you want to run Sports2D automatically for example, you can choose other ordering methods such as 'highest_likelihood', 'largest_size', 'smallest_size', 'greatest_displacement', 'least_displacement', 'first_detected', or 'last_detected'.

```cmd
sports2d --person_ordering_method on_click
```


Get coordinates in meters:

N.B.: Depth is estimated from a neutral pose.

You may need to convert pixel coordinates to meters. Just provide the height of the reference person (and their ID in case of multiple person detection).

You can also specify whether the visible side of the person is left, right, front, or back. Set it to 'auto' if you want it to be detected automatically (this only works for motion in the sagittal plane), or to 'none' if you want to keep 2D instead of 3D coordinates (e.g., if the person goes right, and then left).

The floor angle and the origin of the xy axes are computed automatically from gait. If you analyze another type of motion, you can specify them manually. Note that y points down. Also note that distortions are not taken into account, and that results will be less accurate for motions in the frontal plane.

```cmd
sports2d --to_meters True --first_person_height 1.65 --visible_side auto front none
```

```cmd
sports2d --to_meters True --first_person_height 1.65 --visible_side auto front none `
         --person_ordering_method on_click `
         --floor_angle 0 --xy_origin 0 940
```
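Under the hood, the height-based conversion boils down to a meters-per-pixel scale factor. A minimal sketch, assuming the reference person stands roughly upright in the frame, and ignoring the floor angle and distortion handling that Sports2D performs on top:

```python
# Illustrative height-based pixel-to-meter scaling (sketch, not the library's code)
def meters_per_pixel(head_y_px: float, feet_y_px: float, person_height_m: float) -> float:
    person_height_px = abs(feet_y_px - head_y_px)  # person's height in pixels
    return person_height_m / person_height_px

scale = meters_per_pixel(head_y_px=120.0, feet_y_px=860.0, person_height_m=1.65)
x_m = 500.0 * scale  # convert an x coordinate of 500 px to meters
```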


Run inverse kinematics:

N.B.: Full install required.

N.B.: The person needs to be moving on a single plane for the whole selected time range.

OpenSim inverse kinematics lets you enforce joint constraints and joint angle limits, keep bone lengths constant throughout the motion, and optionally impose equal segment lengths on the left and right sides. In general, it gives more biomechanically accurate results. It also opens the door to computing joint torques, muscle forces, ground reaction forces, and more, with MoCo for example.

This is done via Pose2Sim. Model scaling uses the mean of the segment lengths across a subset of frames. We remove the 10% fastest frames (potential outliers), frames where the speed is zero (person probably out of frame), frames where the average knee and hip flexion angles exceed 45° (pose estimation is imprecise when the person is crouching), and finally the 20% most extreme segment values remaining after the previous operations (potential outliers). All these parameters can be edited in your Config.toml file.
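A condensed sketch of that frame selection, as assumed from the description above (not the actual Pose2Sim code):

```python
# Frame selection before averaging segment lengths for scaling (assumed behavior):
# drop the 10% fastest frames, zero-speed frames, and crouched frames, then trim
# the most extreme remaining segment lengths before taking the mean.
import numpy as np

def usable_frames(speeds, mean_flexion_deg, fastest_pct=0.10, max_flexion=45.0):
    speeds = np.asarray(speeds, dtype=float)
    flexion = np.asarray(mean_flexion_deg, dtype=float)
    speed_cutoff = np.quantile(speeds, 1.0 - fastest_pct)  # 10% fastest frames
    keep = (speeds > 0) & (speeds < speed_cutoff) & (flexion <= max_flexion)
    return np.flatnonzero(keep)

def trimmed_mean(segment_lengths, trim_pct=0.20):
    values = np.sort(np.asarray(segment_lengths, dtype=float))
    n_trim = int(len(values) * trim_pct / 2)  # trim both tails
    return values[n_trim:len(values) - n_trim].mean()
```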

```cmd
sports2d --time_range 1.2 2.7 `
         --do_ik true --first_person_height 1.65 --visible_side auto front
```

You can optionally use the LSTM marker augmentation to improve the quality of the output motion. You can also optionally give the participants their actual masses: mass has no influence on motion, only on forces (if you decide to further pursue kinetics analysis).

```cmd
sports2d --time_range 1.2 2.7 `
         --do_ik true --first_person_height 1.65 --visible_side left front `
         --use_augmentation True --participant_mass 55.0 67.0
```


Run on several videos at once:

```cmd
sports2d --video_input demo.mp4 other_video.mp4
```

All videos analyzed with the same time range:

```cmd
sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7
```

Different time ranges for each video:

```cmd
sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
```


Use the configuration file or run within Python:

  • Run with a configuration file:

```cmd
sports2d --config Config_demo.toml
```

  • Run within Python:

```python
from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
```

  • Run within Python with a dictionary (for example, config_dict = toml.load('Config_demo.toml')):

```python
from Sports2D import Sports2D; Sports2D.process(config_dict)
```
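Putting the two Python entry points together (a sketch; it assumes Config_demo.toml is in the working directory and that the toml package is installed):

```python
# Load the demo configuration into a dictionary and process it,
# mirroring what the CLI does with the same file.
import toml
from Sports2D import Sports2D

config_dict = toml.load('Config_demo.toml')
Sports2D.process(config_dict)  # equivalent to Sports2D.process('Config_demo.toml')
```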


Get the angles the way you want:

  • Choose which angles you need:

```cmd
sports2d --joint_angles 'right knee' 'left knee' --segment_angles None
```

  • Choose where to display the angles: either as a list in the upper-left of the image, near the joint/segment, or both:

```cmd
sports2d --display_angle_values_on body # or "list", or "none"
```

  • You can also decide not to calculate and display angles at all:

```cmd
sports2d --calculate_angles false
```

  • Flip angles when the person faces the other side. N.B.: We consider that the person looks to the right if their toe keypoint is to the right of their heel. This is not always true when the person is sprinting, especially in the swing phase. Set it to false if you want time series to be continuous even when the participant switches their stance.

```cmd
sports2d --flip_left_right true # Default
```

  • Correct segment angles according to the estimated camera tilt angle. N.B.: The camera tilt angle is estimated automatically. Set this to false if it is actually the floor that is tilted rather than the camera.

```cmd
sports2d --correct_segment_angles_with_floor_angle true # Default
```

  • To run inverse kinematics with OpenSim, check this section


Customize your output:

  • Choose whether you want video, images, trc pose file, angle mot file, real-time display, and plots:

```cmd
sports2d --save_vid false --save_img true `
         --save_pose false --save_angles true `
         --show_realtime_results false --show_graphs false
```

  • Save results to a custom directory, and specify the slow-motion factor (see slowmo_factor in the parameter list below):

```cmd
sports2d --result_dir path_to_result_dir
sports2d --slowmo_factor 8 # e.g., video recorded at 240 fps and exported at 30 fps
```


Use a custom pose estimation model:

  • Retrieve hand motion:

```cmd
sports2d --pose_model whole_body
```

  • Use any custom (deployed) MMPose model:

```cmd
sports2d --pose_model BodyWithFeet `
         --mode """{'det_class':'YOLOX', `
         'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip', `
         'det_input_size':[640, 640], `
         'pose_class':'RTMPose', `
         'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-256x192-4d3e73dd_20230605.zip', `
         'pose_input_size':[192,256]}"""
```


All the parameters

For a full list of the available parameters, have a look at the Config_Demo.toml file or type:

```cmd
sports2d --help
```

```
'config': ["C", "path to a toml configuration file"],
'video_input': ["i", "webcam, or video_path.mp4, or video1_path.avi video2_path.mp4 ... Beware that images won't be saved if paths contain non ASCII characters"],
'nb_persons_to_detect': ["n", "number of persons to detect. int or 'all'. 'all' if not specified"],
'person_ordering_method': ["", "'on_click', 'highest_likelihood', 'largest_size', 'smallest_size', 'greatest_displacement', 'least_displacement', 'first_detected', or 'last_detected'. 'on_click' if not specified"],
'first_person_height': ["H", "height of the reference person in meters. 1.65 if not specified. Not used if a calibration file is provided"],
'visible_side': ["", "front, back, left, right, auto, or none. 'auto front none' if not specified. If 'auto', will be either left or right depending on the direction of the motion. If 'none', no IK for this person"],
'load_trc_px': ["", "load trc file to avoid running pose estimation again. false if not specified"],
'compare': ["", "visually compare motion with trc file. false if not specified"],
'webcam_id': ["w", "webcam ID. 0 if not specified"],
'time_range': ["t", "start_time end_time. In seconds. Whole video if not specified. start_time1 end_time1 start_time2 end_time2 ... if multiple videos with different time ranges"],
'video_dir': ["d", "current directory if not specified"],
'result_dir': ["r", "current directory if not specified"],
'show_realtime_results': ["R", "show results in real-time. true if not specified"],
'display_angle_values_on': ["a", '"body", "list", "body" "list", or "none". body list if not specified'],
'show_graphs': ["G", "show plots of raw and processed results. true if not specified"],
'joint_angles': ["j", '"Right ankle" "Left ankle" "Right knee" "Left knee" "Right hip" "Left hip" "Right shoulder" "Left shoulder" "Right elbow" "Left elbow" if not specified'],
'segment_angles': ["s", '"Right foot" "Left foot" "Right shank" "Left shank" "Right thigh" "Left thigh" "Pelvis" "Trunk" "Shoulders" "Head" "Right arm" "Left arm" "Right forearm" "Left forearm" if not specified'],
'save_vid': ["V", "save processed video. true if not specified"],
'save_img': ["I", "save processed images. true if not specified"],
'save_pose': ["P", "save pose as trc files. true if not specified"],
'calculate_angles': ["c", "calculate joint and segment angles. true if not specified"],
'save_angles': ["A", "save angles as mot files. true if not specified"],
'slowmo_factor': ["", "slow-motion factor. For a video recorded at 240 fps and exported to 30 fps, it would be 240/30 = 8. 1 if not specified"],
'pose_model': ["p", "body_with_feet, whole_body_wrist, whole_body, or body. body_with_feet if not specified"],
'mode': ["m", 'light, balanced, performance, or a """{dictionary within triple quotes}""". balanced if not specified. Use a dictionary to specify your own detection and/or pose estimation models (more about it in the documentation).'],
'det_frequency': ["f", "run person detection only every N frames, and in between track previously detected bounding boxes. Keypoint detection is still run on all frames. Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 1 if not specified: detection runs on all frames"],
'backend': ["", "backend for pose estimation can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
'device': ["", "device for pose estimation can be 'auto', 'cpu', 'cuda', 'mps' (for MacOS), or 'rocm' (for AMD GPUs)"],
'to_meters': ["M", "convert pixels to meters. true if not specified"],
'make_c3d': ["", "convert trc to c3d file. true if not specified"],
'floor_angle': ["", "angle of the floor (degrees). 'auto' if not specified"],
'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
'calib_file': ["", "path to calibration file. '' if not specified, i.e. no calibration file"],
'save_calib': ["", "save calibration file. true if not specified"],
'do_ik': ["", "do inverse kinematics. false if not specified"],
'use_augmentation': ["", "use LSTM marker augmentation. false if not specified"],
'feet_on_floor': ["", "offset marker augmentation results so that feet are at floor level. true if not specified"],
'use_simple_model': ["", "IK 10+ times faster, but no muscles or flexible spine. false if not specified"],
'participant_mass': ["", "mass of the participant in kg, or none. Defaults to 70 if not provided. No influence on kinematics (motion), only on kinetics (forces)"],
'close_to_zero_speed_m': ["", "sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
'tracking_mode': ["", "'sports2d' or 'deepsort'. 'deepsort' is slower and harder to parametrize, but can be more robust if correctly tuned"],
'deepsort_params': ["", 'DeepSort tracking parameters: """{dictionary between 3 double quotes}""". Default: max_age:30, n_init:3, nms_max_overlap:0.8, max_cosine_distance:0.3, nn_budget:200, max_iou_distance:0.8, embedder_gpu: True. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51'],
'input_size': ["", "width, height. 1280, 720 if not specified. Lower resolution will be faster but less precise"],
'keypoint_likelihood_threshold': ["", "detected keypoints are not retained if likelihood is below this threshold. 0.3 if not specified"],
'average_likelihood_threshold': ["", "detected persons are not retained if average keypoint likelihood is below this threshold. 0.5 if not specified"],
'keypoint_number_threshold': ["", "detected persons are not retained if number of detected keypoints is below this threshold. 0.3 if not specified, i.e., 30 percent"],
'fastest_frames_to_remove_percent': ["", "frames with high speed are considered as outliers. Defaults to 0.1"],
'close_to_zero_speed_px': ["", "sum for all keypoints: about 50 px/frame or 0.2 m/frame. Defaults to 50"],
'large_hip_knee_angles': ["", "hip and knee angles below this value are considered as imprecise. Defaults to 45"],
'trimmed_extrema_percent': ["", "proportion of the most extreme segment values to remove before calculating their mean. Defaults to 50"],
'fontSize': ["", "font size for angle values. 0.3 if not specified"],
'flip_left_right': ["", "true or false. Flips angles when the person faces the other side. The person looks to the right if their toe keypoint is to the right of their heel. Set it to false if the person is sprinting or if you want time series to be continuous even when the participant switches their stance. true if not specified"],
'correct_segment_angles_with_floor_angle': ["", "true or false. If the camera is tilted, corrects segment angles with regard to the floor angle. Set to false if it is actually the floor that is tilted, not the camera. true if not specified"],
'interpolate': ["", "interpolate missing data. true if not specified"],
'interp_gap_smaller_than': ["", "interpolate sequences of missing data if they are less than N frames long. 10 if not specified"],
'fill_large_gaps_with': ["", "last_value, nan, or zeros. last_value if not specified"],
'sections_to_keep': ["", "all, largest, first, or last. Keep 'all' valid sections even when they are interspersed with undetected chunks, or the 'largest' valid section, or the 'first' one, or the 'last' one"],
'reject_outliers': ["", "reject outliers with a Hampel filter before other filtering methods. true if not specified"],
'filter': ["", "filter results. true if not specified"],
'filter_type': ["", "butterworth, kalman, gcvspline, gaussian, median, or loess. butterworth if not specified"],
'order': ["", "order of the Butterworth filter. 4 if not specified"],
'cut_off_frequency': ["", "cut-off frequency of the Butterworth filter. 3 if not specified"],
'trust_ratio': ["", "trust ratio of the Kalman filter: how much more do you trust triangulation results (measurements) than the assumption of constant acceleration (process)? 500 if not specified"],
'smooth': ["", "dual Kalman smoothing. true if not specified"],
'gcv_cut_off_frequency': ["", "cut-off frequency of the GCV spline filter. 'auto' if not specified"],
'smoothing_factor': ["", "smoothing factor of the GCV spline filter (>=0). Ignored if cut_off_frequency != 'auto'. Biases results towards more smoothing (>1) or more fidelity to data (<1). 0.1 if not specified"],
'sigma_kernel': ["", "sigma of the gaussian filter. 1 if not specified"],
'nb_values_used': ["", "number of values used for the loess filter. 5 if not specified"],
'kernel_size': ["", "kernel size of the median filter. 3 if not specified"],
'butter_speed_order': ["", "order of the Butterworth filter on speed. 4 if not specified"],
'butter_speed_cut_off_frequency': ["", "cut-off frequency of the Butterworth filter on speed. 6 if not specified"],
'osim_setup_path': ["", "path to OpenSim setup. '../OpenSim_setup' if not specified"],
'right_left_symmetry': ["", "right left symmetry. true if not specified"],
'default_height': ["", "default height for scaling. 1.70 if not specified"],
'remove_individual_scaling_setup': ["", "remove individual scaling setup files generated during scaling. true if not specified"],
'remove_individual_ik_setup': ["", "remove individual IK setup files generated during IK. true if not specified"],
'fastest_frames_to_remove_percent': ["", "frames with high speed are considered as outliers. Defaults to 0.1"],
'close_to_zero_speed_m': ["", "sum for all keypoints: about 0.2 m/frame. Defaults to 0.2"],
'close_to_zero_speed_px': ["", "sum for all keypoints: about 50 px/frame. Defaults to 50"],
'large_hip_knee_angles': ["", "hip and knee angles below this value are considered as imprecise and ignored. Defaults to 45"],
'trimmed_extrema_percent': ["", "proportion of the most extreme segment values to remove before calculating their mean. Defaults to 50"],
'use_custom_logging': ["", "use custom logging. false if not specified"]
```


Go further

Too slow for you?

Quick fixes:
  • Use --save_vid false --save_img false --show_realtime_results false: will not save images or videos, and will not display the results in real time.
  • Use --mode lightweight: will use a lighter version of RTMPose, which is faster but less accurate. Note that any detection and pose models can be used (first deploy them with MMPose if you do not have their .onnx or .zip files), with the following formalism:

```cmd
sports2d --mode """{'det_class':'YOLOX', `
         'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip', `
         'det_input_size':[416,416], `
         'pose_class':'RTMPose', `
         'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip', `
         'pose_input_size':[192,256]}"""
```

  • Use --det_frequency 50: will detect persons only every 50 frames, and track keypoints in between, which is faster.
  • Use --load_trc_px <path_to_file_px.trc>: will use pose estimation results from a file. Useful if you want to try different parameters for pixel-to-meter conversion or angle calculation without running detection and pose estimation all over again.
  • Make sure you use --tracking_mode sports2d: will use the default Sports2D tracker. Unlike DeepSort, it is faster, does not require any parametrization, and is just as good in non-crowded scenes.


Use your GPU:
It will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.

  1. Run nvidia-smi in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. Otherwise, note the "CUDA version": it is the latest version your driver is compatible with (more information on this post).
  2. Go to the ONNX Runtime requirements page and note the latest compatible CUDA and cuDNN versions. Then go to the PyTorch website and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:

```cmd
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
```

  3. Finally, install ONNX Runtime with GPU support:

```cmd
pip uninstall onnxruntime
pip install onnxruntime-gpu
```

  4. Check that everything went well within Python:

```cmd
python -c 'import torch; print(torch.cuda.is_available())'
python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
```

    This should print "True" and a list of providers including 'CUDAExecutionProvider'.



How it works

Sports2D:
  • Detects 2D joint centers from a video or a webcam with RTMLib.
  • Converts pixel coordinates to meters.
  • Computes selected joint and segment angles.
  • Optionally performs kinematic optimization via OpenSim.
  • Optionally saves processed image and video files.


Okay, but how does it work, really? Sports2D:

  1. Reads stream from a webcam, from one video, or from a list of videos. Selects the specified time range to process.

  2. Sets up pose estimation with RTMLib. It can be run in lightweight, balanced, or performance mode, and for faster inference, keypoints can be tracked instead of detected for a certain number of frames. Any RTMPose model can be used.

  3. Tracks people so that their IDs are consistent across frames. A person detected in one frame is matched to the closest person in the next frame, provided the distance between them is small; IDs remain consistent even if a person disappears for a few frames. We crafted a 'sports2d' tracker which gives good results and runs in real time, but it is also possible to use DeepSort in particularly challenging situations (a toy version of the distance matching is sketched after this list).

  4. Chooses the right persons to keep. In single-person mode, only keeps the person with the highest average scores over the sequence. In multi-person mode, only retrieves the keypoints with high enough confidence, and only keeps the persons with high enough average confidence over each frame.

  5. Converts the pixel coordinates to meters. The user can provide a calibration file, or simply the size of a specified person. The floor angle and the coordinate origin can either be detected automatically from the gait sequence, or be manually specified. The depth coordinates are set to normative values, depending on whether the person is going left, right, facing the camera, or looking away.

  6. Computes the selected joint and segment angles, and flips them on the left/right side if the respective foot is pointing to the left/right.

  7. Draws the results on the image: draws bounding boxes around each person and writes their IDs; draws the skeleton and the keypoints, with a green-to-red color scale reflecting their confidence; draws joint and segment angles on the body, and writes the values either near the joint/segment or in the upper-left of the image with a progress bar.

  8. Interpolates and filters results: missing pose and angle sequences are interpolated unless gaps are too large; outliers are rejected with a Hampel filter; results are filtered with a 6 Hz Butterworth filter (a minimal SciPy equivalent is sketched after this list). Many other filters are available, and all of the above can be configured or deactivated (see Config_Demo.toml).

  9. Optionally shows processed images, saves them, or saves them as a video; optionally plots pose and angle data before and after processing for comparison; optionally saves poses for each person as a TRC file in pixels and meters, angles as a MOT file, and calibration data as a Pose2Sim TOML file.
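To make step 3 concrete, here is a toy version of distance-based ID assignment (illustrative only; the actual 'sports2d' tracker is more involved):

```python
# Toy distance-based tracker: match each known person to the closest detection
# in the current frame, within a maximum distance (sketch, not Sports2D's code).
import numpy as np

def assign_ids(prev_centers, new_centers, max_dist=100.0):
    """prev_centers: {person_id: (x, y)} from the last frame where each person
    was seen; new_centers: [(x, y), ...] detected in the current frame.
    Returns {person_id: detection_index}."""
    matches, taken = {}, set()
    for pid, (px, py) in prev_centers.items():
        best_j, best_d = None, max_dist
        for j, (nx, ny) in enumerate(new_centers):
            d = float(np.hypot(nx - px, ny - py))
            if j not in taken and d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[pid] = best_j
            taken.add(best_j)
    return matches  # unmatched detections would receive fresh IDs
```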
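And for step 8, a minimal SciPy stand-in for the default smoothing (assuming the order-4 Butterworth with a 6 Hz cut-off mentioned above; not the library's own implementation):

```python
# Zero-phase low-pass Butterworth filtering of a noisy keypoint trajectory.
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0                 # assumed video frame rate
cutoff_hz, order = 6.0, 4  # defaults mentioned in this README
b, a = butter(order, cutoff_hz / (fps / 2.0), btype='low')

t = np.arange(0.0, 2.0, 1.0 / fps)
noisy_x = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
smoothed_x = filtfilt(b, a, noisy_x)  # forward-backward filtering avoids phase lag
```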


Joint angle conventions:
  • Ankle dorsiflexion: between heel and big toe, and ankle and knee. -90° when the foot is aligned with the shank.
  • Knee flexion: between hip, knee, and ankle. 0° when the shank is aligned with the thigh.
  • Hip flexion: between knee, hip, and shoulder. 0° when the trunk is aligned with the thigh.
  • Shoulder flexion: between hip, shoulder, and elbow. 180° when the arm is aligned with the trunk.
  • Elbow flexion: between wrist, elbow, and shoulder. 0° when the forearm is aligned with the arm.

Segment angle conventions:
Angles are measured anticlockwise between the horizontal and the segment.
  • Foot: between heel and big toe
  • Shank: between ankle and knee
  • Thigh: between hip and knee
  • Pelvis: between left and right hip
  • Trunk: between hip midpoint and shoulder midpoint
  • Shoulders: between left and right shoulder
  • Head: between neck and top of the head
  • Arm: between shoulder and elbow
  • Forearm: between elbow and wrist
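To make these conventions concrete, here is a self-contained sketch of computing a joint angle from three keypoints and a segment angle from two (illustrative, not Sports2D's implementation; image y points down, hence the sign flip):

```python
# Joint angle at the middle keypoint, and anticlockwise segment angle from the
# horizontal, consistent with the conventions above (sketch only).
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) between segments b->a and b->c."""
    ba, bc = np.subtract(a, b), np.subtract(c, b)
    cos_ang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def segment_angle(p1, p2):
    """Anticlockwise angle (degrees) between the horizontal and segment p1->p2.
    Image y points down, so dy is negated to make anticlockwise positive."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return np.degrees(np.arctan2(-dy, dx))

hip, knee, ankle = (100.0, 200.0), (110.0, 300.0), (105.0, 400.0)  # pixels, y down
raw = joint_angle(hip, knee, ankle)  # ~171° for this slightly bent leg
knee_flexion = 180.0 - raw           # ~9°: 0° when shank and thigh are aligned
shank = segment_angle(ankle, knee)   # ~87°: shank is nearly vertical here
```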


How to cite and how to contribute

How to cite

If you use Sports2D, please cite Pagnon, 2024.

 @article{Pagnon_Sports2D_Compute_2D_2024,
   author = {Pagnon, David and Kim, HunMin},
   doi = {10.21105/joss.06849},
   journal = {Journal of Open Source Software},
   month = sep,
   number = {101},
   pages = {6849},
   title = {{Sports2D: Compute 2D human pose and angles from a video or a webcam}},
   url = {https://joss.theoj.org/papers/10.21105/joss.06849},
   volume = {9},
   year = {2024}
 }

How to contribute

I would happily welcome any proposal for new features, code improvements, and more! If you want to contribute to Sports2D or Pose2Sim, please see this issue. You will be offered a to-do list, but please feel absolutely free to propose your own ideas and improvements.

Here is a to-do list; feel free to complete it:
  • [x] Compute segment angles.
  • [x] Multi-person detection, consistent over time.
  • [x] Only interpolate small gaps.
  • [x] Filtering and plotting tools.
  • [x] Handle sudden changes of direction.
  • [x] Batch processing for the analysis of multiple videos at once.
  • [x] Option to only save one person (with the highest average score, or with the most frames and fastest speed).
  • [x] Run again without pose estimation with the option --load_trc_px for px .trc files.
  • [x] Convert positions to meters by providing the person height, a calibration file, or 3D points to click on the image.
  • [x] Support any detection and/or pose estimation model.
  • [x] Optionally let the user select the persons of interest.
  • [x] Perform inverse kinematics and dynamics with OpenSim (cf. Pose2Sim, but in 2D). Update this model (add arms, markers, remove muscles and contact spheres). Add a pipeline example.

  • [ ] Run with the option --compare_to to visually compare motion with a trc file. If run with a webcam input, the user can follow the motion of the trc file. Further calculation can then be done to compare specific variables.
  • [ ] Colab version: more user-friendly, usable on a smartphone.
  • [ ] GUI applications for Windows, Mac, and Linux, as well as for Android and iOS.


  • [ ] Track other points and angles with classic tracking methods (cf. Kinovea), or by training a model (cf. DeepLabCut).
  • [ ] Pose refinement. Click and move badly estimated 2D points. See DeepLabCut for inspiration.
  • [ ] Add tools for annotating images, undistort them, take perspective into account, etc. (cf. Kinovea).

Owner

  • Name: David PAGNON
  • Login: davidpagnon
  • Kind: user
  • Location: Grenoble, France
  • Company: CAMERA, University of Bath

Biomechanics and computer vision research Parkour artist

JOSS Publication

Sports2D: Compute 2D human pose and angles from a video or a webcam
Published
September 24, 2024
Volume 9, Issue 101, Page 6849
Authors
David Pagnon ORCID
Centre for the Analysis of Motion, Entertainment Research & Applications (CAMERA), University of Bath, Claverton Down, Bath, BA2 7AY, United Kingdom
HunMin Kim ORCID
Inha University, Yonghyeon Campus, 100 Inha-ro, Michuhol-gu, Incheon 22212, South Korea
Editor
Kevin M. Moerman ORCID
Tags
python markerless kinematics motion capture sports performance analysis rtmpose clinical gait analysis

Citation (CITATION.cff)

cff-version: "1.2.0"
authors:
- family-names: Pagnon
  given-names: David
  orcid: "https://orcid.org/0000-0002-6891-8331"
- family-names: Kim
  given-names: HunMin
  orcid: "https://orcid.org/0009-0007-7710-8051"
contact:
- family-names: Pagnon
  given-names: David
  orcid: "https://orcid.org/0000-0002-6891-8331"
doi: 10.5281/zenodo.7903962
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Pagnon
    given-names: David
    orcid: "https://orcid.org/0000-0002-6891-8331"
  - family-names: Kim
    given-names: HunMin
    orcid: "https://orcid.org/0009-0007-7710-8051"
  date-published: 2024-09-24
  doi: 10.21105/joss.06849
  issn: 2475-9066
  issue: 101
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6849
  title: "Sports2D: Compute 2D human pose and angles from a video or a
    webcam"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06849"
  volume: 9
title: "Sports2D: Compute 2D human pose and angles from a video or a
  webcam"

GitHub Events

Total
  • Create event: 37
  • Commit comment event: 2
  • Issues event: 17
  • Release event: 29
  • Watch event: 67
  • Issue comment event: 69
  • Push event: 131
  • Pull request event: 12
  • Fork event: 12
Last Year
  • Create event: 37
  • Commit comment event: 2
  • Issues event: 17
  • Release event: 29
  • Watch event: 68
  • Issue comment event: 69
  • Push event: 131
  • Pull request event: 12
  • Fork event: 12

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 494
  • Total Committers: 4
  • Avg Commits per committer: 123.5
  • Development Distribution Score (DDS): 0.126
Past Year
  • Commits: 236
  • Committers: 4
  • Avg Commits per committer: 59.0
  • Development Distribution Score (DDS): 0.047
Top Committers
Name Email Commits
David PAGNON c****t@d****m 432
Kim HunMin g****8@i****u 60
Laizo a****e@g****m 1
Joss Gitlin g****s@t****m 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 20
  • Total pull requests: 12
  • Average time to close issues: 3 months
  • Average time to close pull requests: 4 days
  • Total issue authors: 17
  • Total pull request authors: 4
  • Average comments per issue: 7.1
  • Average comments per pull request: 2.42
  • Merged pull requests: 11
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 9
  • Pull requests: 11
  • Average time to close issues: 15 days
  • Average time to close pull requests: 3 days
  • Issue authors: 8
  • Pull request authors: 4
  • Average comments per issue: 6.11
  • Average comments per pull request: 2.64
  • Merged pull requests: 10
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • naveenvenk17 (2)
  • p0mad (2)
  • 24victorl (2)
  • JoanCharmant (1)
  • anu120 (1)
  • tuliofalmeida (1)
  • koiker (1)
  • garrett-tuer (1)
  • ntd1683 (1)
  • st20042066 (1)
  • GUIARRUDA84 (1)
  • 92Acp (1)
  • arghhhhh (1)
  • jeffpagaduan (1)
  • komarjoh (1)
Pull Request Authors
  • hunminkim98 (6)
  • davidpagnon (5)
  • AurelienCoppee (4)
  • arghhhhh (2)
Top Labels
Issue Labels
bug (4) enhancement (4) help wanted (3) question (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 1,193 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 59
  • Total maintainers: 1
pypi.org: sports2d

Compute 2D human pose and angles from a video or a webcam.

  • Versions: 59
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 1,193 Last month
Rankings
Dependent packages count: 7.2%
Average: 21.5%
Downloads: 21.9%
Dependent repos count: 35.4%
Maintainers (1)
Last synced: 4 months ago

Dependencies

.github/workflows/continuous-integration.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
.github/workflows/publish-on-release.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
pyproject.toml pypi
.github/workflows/joss_pdf.yml actions
  • actions/checkout v2 composite
  • actions/upload-artifact v1 composite
  • openjournals/openjournals-draft-action master composite