DL_Track_US

DL_Track_US: a python package to analyse muscle ultrasonography images - Published in JOSS (2023)

https://github.com/PaulRitsche/DL_Track_US

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 6 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, ieee.org, joss.theoj.org
  • Committers with academic emails
    1 of 3 committers (33.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.4%) to scientific vocabulary

Scientific Fields

Artificial Intelligence and Machine Learning Computer Science - 73% confidence
Engineering Computer Science - 40% confidence
Last synced: 4 months ago

Repository

This repository contains code for the automatic analysis of human lower limb ultrasonography images.

Basic Info
Statistics
  • Stars: 7
  • Watchers: 1
  • Forks: 6
  • Open Issues: 1
  • Releases: 4
Created about 3 years ago · Last pushed 5 months ago
Metadata Files
Readme Changelog License Citation

README.md

DL_Track_US


DL_Track_US image

The DL_Track_US package provides an easy-to-use graphical user interface (GUI) for deep-learning-based analysis of muscle architectural parameters from longitudinal ultrasonography images of human lower limb muscles. Please take a look at our documentation for more information (note that aggressive ad blockers may break the rendering of the repository description as well as the online documentation). This code is based on a previously published algorithm, extends its functionality, and replaces it. The previous code will not be updated; all future updates will be included in this repository.

Getting started

For detailed information about the installation of the DL_Track_US python package, we refer you to our documentation. There you will find guidelines not only for the installation procedure of DL_Track_US, but also concerning conda and GPU setup.

Quickstart

Once installed, DL_Track_US can be started from the command prompt with the respective environment activated:

(DL_Track_US0.3.0) C:/User/Desktop/ python -m DL_Track_US

In case you have downloaded the executable, simply double-click the DL_Track_US icon.

Regardless of the method used, the GUI should open. For a detailed description of our GUI as well as usage examples, please take a look at the user instructions. An illustration of our GUI start window is presented below. It is here where users must specify input directories, choose the preferred analysis type, specify the analysis parameters, or train their own neural networks based on their own training data.

GUI

Testing

We have not yet integrated unit testing for DL_Track_US. Nonetheless, we have provided instructions to objectively test whether DL_Track_US, once installed, is functional. To perform the testing procedures yourself, check out the test instructions.

Code documentation

The detailed scope and description of the modules and functions included in the DL_Track_US package can be found either directly in the code or in the Documentation section of our online documentation.

Community guidelines

Whether you want to contribute, report a bug, or have trouble with the DL_Track_US package, take a look at the provided instructions on how best to do so.

Research

v0.3.0

  • Major upgrades and bug fixes!
  • New features: manual scaling tool, resize video tool, crop video length tool & remove video parts tool.
  • Faster model predictions & optional stacked (sequential) predictions.
  • Improved user interface with visualization of model predictions and filtering/plotting of results.
  • Automatic settings.json in the GUI for easy switching of model parameters.
  • Filtering of fascicle length and pennation angle data using Hampel and Savitzky-Golay filters.

Faster model predictions on GPU & CPU

In the new version, we reduced the processing time per frame by 40% relative to version 0.2.1, to 0.6 s on GPU and ... on CPU, respectively.

Improved user interface

In version 0.3.0 we improved the user interface and included real-time visualization of model predictions as well as a results terminal at the end of the analysis. The analysis process is now more transparent and flexible, since we included more analysis options in the settings.

DL_Track_Main

New model with bi-directional long short-term memory for video analysis

We further provide a new model with a new overall approach for fascicle analysis in videos. For the first time, we provide a model with memory and awareness of surrounding frames. The model is taken from Chanti et al. (2021) and is called IFSS-Net.

In our approach, we use a bi-directional long short-term memory (BiLSTM) to capture the temporal context of the video. We excluded the siamese encoder from the original model. Furthermore, we used a hybrid loss combining the Dice loss and the binary cross-entropy loss, both weighted equally.
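As a plain-Python illustration, an equally weighted Dice + binary cross-entropy combination can be sketched as follows (a toy version operating on flat pixel lists; the function name and this exact formulation are assumptions, not the package's actual training code):

```python
import math

def dice_bce_loss(y_true, y_pred, eps=1e-7):
    """Equally weighted sum of soft Dice loss and binary cross-entropy,
    computed over flat lists of ground-truth labels (0/1) and predicted
    probabilities."""
    # Soft Dice loss: 1 - 2*|A ∩ B| / (|A| + |B|), with eps for stability.
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    dice = 1.0 - (2.0 * inter + eps) / (sum(y_true) + sum(y_pred) + eps)
    # Binary cross-entropy, averaged over pixels.
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for t, p in zip(y_true, y_pred)) / len(y_true)
    # Both terms weighted equally.
    return 0.5 * dice + 0.5 * bce
```

A perfect prediction drives both terms toward zero, while the Dice term keeps the loss sensitive to foreground overlap even when fascicle pixels are rare in the frame.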

To reach this decision, we compared different models and their performance against a manual ground truth and a Kalman-filter-based tracking approach (UltraTimTrack) proposed by van der Zee et al. (2025).

Model Training results

We compared our previous VGG16-UNet model (Ritsche et al. (2024)) to the SegFormer, UNet3+ and IFSS-Net architectures. The results on an unseen test set of 120 images, with exemplary predictions, can be seen below.

model_comparison

Moreover, given their similar performance, we compared the models on one of the validation videos from the original paper (Ritsche et al. (2024)). This video was recently used to compare the performance of different methods for fascicle tracking (van der Zee et al. (2025)). We demonstrate an improvement in the DL_Track_US results in terms of RMSD compared to manual annotation, as displayed below. Of all networks, the IFSS-Net model performed best in the trade-off between pennation angle and fascicle length RMSD.

model_comparison_calf_raise

Note that, compared to v0.2.1, we introduced Hampel filtering of the fascicle values in each frame and additionally applied a Savitzky-Golay filter to the median fascicle data to further reduce the root mean squared distance. The results for three different tasks are displayed below.
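The per-frame outlier rejection step can be sketched with a simplified pure-Python Hampel filter (the helper name and parameters are illustrative; in a second stage, the smoothing of the median trace could then be done with e.g. `scipy.signal.savgol_filter`):

```python
from statistics import median

def hampel(series, half_window=3, n_sigmas=3.0):
    """Simplified Hampel filter: replace points that deviate from the local
    median by more than n_sigmas scaled median absolute deviations (MAD)
    with that median."""
    k = 1.4826  # scales MAD to a standard-deviation estimate under normality
    out = list(series)
    for i in range(len(series)):
        lo = max(0, i - half_window)
        hi = min(len(series), i + half_window + 1)
        window = series[lo:hi]
        med = median(window)
        mad = k * median(abs(v - med) for v in window)
        if mad > 0 and abs(series[i] - med) > n_sigmas * mad:
            out[i] = med  # outlier: replace with the local median
    return out
```

Unlike a moving average, this replaces only single-frame outliers and leaves the rest of the fascicle-length trace untouched, which is why a Savitzky-Golay pass afterwards can smooth without chasing spikes.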

Calf Raise

model_comparison_calf_raise_savgol

VL fixed-end maximal knee extension

model_comparison_mvc

🚨 More comparisons will follow in the upcoming publication.

🚨 We are currently working on implementing tracking of fascicles accounting for their curvature.

v0.2.1 and prior

The previously published algorithm was developed to compare the performance of the trained deep learning models with manual analysis of muscle fascicle length, fascicle pennation angle and muscle thickness. The results, presented in a published preprint, demonstrated that the DL_Track_US algorithm is comparable to manual analysis of muscle fascicle length, fascicle pennation angle and muscle thickness in ultrasonography images as well as videos. The results are briefly illustrated in the figures below.

Analysis process

Analysis process from original input image to output result for images of two muscles, the gastrocnemius medialis (GM) and vastus lateralis (VL). After the original images are input into the models, the models generate predictions for the aponeuroses (apo) and fascicles, as displayed in the binary images. Based on the binary images, the output result is calculated by post-processing operations: fascicles and aponeuroses are drawn, and the values for fascicle length, pennation angle and muscle thickness are displayed.
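To make the post-processing geometry concrete, here is a toy computation of fascicle length and pennation angle from two points on a fascicle and two points on the deep aponeurosis (the helper name and coordinates are hypothetical; the package derives the actual points from the binary masks):

```python
import math

def fascicle_geometry(fasc_p1, fasc_p2, apo_p1, apo_p2):
    """Fascicle length and pennation angle (degrees) relative to the
    aponeurosis, from two points on each structure (toy illustration)."""
    # Direction vectors of the fascicle and the aponeurosis.
    fx, fy = fasc_p2[0] - fasc_p1[0], fasc_p2[1] - fasc_p1[1]
    ax, ay = apo_p2[0] - apo_p1[0], apo_p2[1] - apo_p1[1]
    length = math.hypot(fx, fy)
    # Angle between the two directions, folded into [0, 90] degrees.
    angle = abs(math.degrees(math.atan2(fy, fx) - math.atan2(ay, ax)))
    return length, min(angle, 180.0 - angle)
```

For a fascicle running from (0, 0) to (3, 4) over a horizontal aponeurosis, this yields a length of 5 (in pixels) and a pennation angle of about 53°; converting pixels to centimetres would additionally require the image calibration.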

Bland-altman Plot

Bland-Altman plots of the results obtained with our approach versus the results of manual analyses by the authors (mean of all 3). Results are shown for muscle fascicle length (A), pennation angle (B), and muscle thickness (C). For these plots, only the median fascicle values from the deep learning approach were used, and thickness was computed from the centre of the image. Solid and dotted lines depict bias and 95% limits of agreement, respectively.
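The quantities behind such a plot, the bias and the 95% limits of agreement between two paired measurement methods, are straightforward to compute (function name is illustrative):

```python
from statistics import mean, stdev

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between paired measurements
    from two methods (e.g. automatic vs. manual fascicle length)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

The bias is drawn as the solid line and the two limits as the dotted lines; narrow limits around a bias near zero indicate agreement with the manual analyses.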

Video comparison

A comparison of fascicle lengths computed using DL_Track_US with those from UltraTrack (Farris & Lichtwark, 2016, DOI:10.1016/j.cmpb.2016.02.016), a semi-automated method of identifying muscle fascicles. Each row shows trials from a particular task (3 examples per task from different individuals, shown in separate columns). For DL_Track_US, the length of each individual fascicle detected in every frame is denoted by a gray dot. Solid black lines denote the mean length of all fascicles detected by DL_Track_US. Red dashed lines show the results of tracking a single fascicle with UltraTrack.

Related Work

The DL_Track_US package can only be used for the automatic analysis of longitudinal muscle ultrasonography images containing muscle architectural parameters. However, to assess muscle anatomical cross-sectional area (ACSA), panoramic ultrasonography images in the transverse plane are required. We recently published DeepACSA, an open-source algorithm for the automatic analysis of muscle ACSA in panoramic ultrasonography images of the human vastus lateralis, rectus femoris and gastrocnemius medialis. The repository containing the code as well as installation and usage instructions is located here.

Owner

  • Name: Paul Ritsche
  • Login: PaulRitsche
  • Kind: user
  • Location: Basel
  • Company: University of Basel - Department of Sport, Exercise and Health

Citation (CITATION.cff)

cff-version: "1.2.0"
authors:
- family-names: Ritsche
  given-names: Paul
  orcid: "https://orcid.org/0000-0001-9446-7872"
- family-names: Seynnes
  given-names: Olivier
  orcid: "https://orcid.org/0000-0002-1289-246X"
- family-names: Cronin
  given-names: Neil
  orcid: "https://orcid.org/0000-0002-5332-1188"
contact:
- family-names: Ritsche
  given-names: Paul
  orcid: "https://orcid.org/0000-0001-9446-7872"
doi: 10.5281/zenodo.7885378
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Ritsche
    given-names: Paul
    orcid: "https://orcid.org/0000-0001-9446-7872"
  - family-names: Seynnes
    given-names: Olivier
    orcid: "https://orcid.org/0000-0002-1289-246X"
  - family-names: Cronin
    given-names: Neil
    orcid: "https://orcid.org/0000-0002-5332-1188"
  date-published: 2023-05-02
  doi: 10.21105/joss.05206
  issn: 2475-9066
  issue: 85
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5206
  title: "DL_Track_US: a python package to analyse muscle
    ultrasonography images"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05206"
  volume: 8
title: "DL_Track_US: a python package to analyse muscle ultrasonography
  images"

GitHub Events

Total
  • Watch event: 1
  • Delete event: 6
  • Member event: 1
  • Push event: 17
  • Pull request event: 2
  • Fork event: 3
  • Create event: 7
Last Year
  • Watch event: 1
  • Delete event: 6
  • Member event: 1
  • Push event: 17
  • Pull request event: 2
  • Fork event: 3
  • Create event: 7

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 217
  • Total Committers: 3
  • Avg Commits per committer: 72.333
  • Development Distribution Score (DDS): 0.323
Past Year
  • Commits: 26
  • Committers: 3
  • Avg Commits per committer: 8.667
  • Development Distribution Score (DDS): 0.192
Top Committers
Name Email Commits
Paul Ritsche 7****e 147
Carla Zihlmann c****n@b****h 67
Noah Maximilian Bodenmüller n****r@u****h 3
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 4
  • Total pull requests: 4
  • Average time to close issues: 6 months
  • Average time to close pull requests: 2 minutes
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 4
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 2
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 minute
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • PaulRitsche (4)
Pull Request Authors
  • PaulRitsche (6)
Top Labels
Issue Labels
documentation (2) enhancement (2)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 18 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 7
  • Total maintainers: 1
pypi.org: dl-track-us

Automatic analysis of longitudinal muscle ultrasonography images

  • Versions: 7
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 18 Last month
Rankings
Dependent packages count: 6.6%
Average: 26.7%
Forks count: 30.5%
Dependent repos count: 30.6%
Stargazers count: 39.1%
Maintainers (1)
Last synced: 4 months ago

Dependencies

requirements.txt pypi
  • Keras ==2.10.0
  • Pillow ==9.2.0
  • jupyter ==1.0.0
  • matplotlib ==3.6.1
  • numpy ==1.23.4
  • opencv-contrib-python ==4.6.0.66
  • openpyxl ==3.0.10
  • pandas ==1.5.1
  • pre-commit ==2.17.0
  • scikit-image ==0.19.3
  • scikit-learn ==1.1.2
  • sewar ==0.4.5
  • tensorflow ==2.10.0
  • tqdm ==4.64.1
.github/workflows/draft-pdf.yml actions
  • actions/checkout v2 composite
  • actions/upload-artifact v1 composite
  • openjournals/openjournals-draft-action master composite
docs/source/requirements.txt pypi
  • Keras ==2.10.0
  • Pillow ==9.2.0
  • jupyter ==1.0.0
  • matplotlib ==3.6.1
  • numpy ==1.23.4
  • opencv-contrib-python ==4.6.0.66
  • openpyxl ==3.0.10
  • pandas ==1.5.1
  • pre-commit ==2.17.0
  • scikit-image ==0.19.3
  • scikit-learn ==1.1.2
  • sewar ==0.4.5
  • tensorflow ==2.10.0
  • tqdm ==4.64.1
setup.py pypi
  • Keras ==2.10.0
  • Pillow ==9.2.0
  • jupyter ==1.0.0
  • matplotlib ==3.6.1
  • numpy ==1.23.4
  • opencv-contrib-python ==4.6.0.66
  • openpyxl ==3.0.10
  • pandas ==1.5.1
  • pre-commit ==2.17.0
  • scikit-image ==0.19.3
  • scikit-learn ==1.1.2
  • sewar ==0.4.5
  • tensorflow ==2.10.0
  • tqdm ==4.64.1
environment.yml pypi
  • Keras ==2.10.0
  • Pillow ==9.2.0
  • jupyter ==1.0.0
  • matplotlib ==3.6.1
  • numpy ==1.23.4
  • opencv-contrib-python ==4.6.0.66
  • openpyxl ==3.0.10
  • pandas ==1.5.1
  • pre-commit ==2.17.0
  • scikit-image ==0.19.3
  • scikit-learn ==1.1.2
  • sewar ==0.4.5
  • tensorflow ==2.10.0
  • tqdm ==4.64.1
pyproject.toml pypi
  • Keras ==2.10.0
  • Pillow ==9.2.0
  • matplotlib ==3.6.1
  • numpy ==1.23.4
  • opencv-contrib-python ==4.6.0.66
  • openpyxl ==3.0.10
  • pandas ==1.5.1
  • pre-commit ==2.17.0
  • scikit-image ==0.19.3
  • scikit-learn ==1.1.2
  • sewar ==0.4.5
  • tensorflow ==2.10.0
  • tqdm ==4.64.1