freemocap_prealpha
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (13.8%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: aaroncherian
- License: agpl-3.0
- Language: Python
- Default Branch: main
- Size: 1.38 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
https://user-images.githubusercontent.com/15314521/124694557-8069ea00-deaf-11eb-9328-3be27a4b1ea4.mp4
This is all very much a work in progress! More to come!
(As of May 2022) We're currently working on building a proper API with documentation, but for now enjoy this pile of not-quite-spaghetti-but-definitely-lasagna-flavored code and sloppy ReadMe 😅
Prerequisites

Required
- Windows only for now (sorry! Mac and Linux support coming very soon! 😅)
- A Python 3.7 environment
  - We recommend installing Anaconda (https://www.anaconda.com/products/individual#Downloads) to create your Python environment.
- Two or more USB webcams attached to viable USB ports
  - ~~USB hubs typically don't work~~ I think they do now?
  - Note that two cameras is the minimum required for 3d reconstruction. However, with just two views, many points will be occluded/not visible to both cameras. For better performance, use three (or four or more?) cameras.
- Each camera must get a clean, unobstructed view of a Charuco board at some point (see below).

# Installation

Open an Anaconda-enabled command prompt or PowerShell window and perform the following steps:
1) Create a Python3.7 Anaconda environment
$ conda create -n freemocap-env python=3.7
2) Activate that newly created environment
$ conda activate freemocap-env
3) Install freemocap from PyPi using pip
$ pip install freemocap
That should be it!
Basic Usage
## HOW TO CREATE A NEW FreeMoCap RECORDING SESSION
tl;dr- Activate the freemocap Python environment and run the following lines of code (either in a script or in a console):
```python
import freemocap
freemocap.RunMe()
```
But COOL KIDS will install Blender (blender.org) and generate an awesome .blend file animation by setting useBlender=True:

```python
import freemocap
freemocap.RunMe(useBlender=True)
```
This two-line script is a copy of the freemocap_runme_script.py file, which can be run by entering the following command into a command prompt or powershell:
(freemocap-env)$ python freemocap_runme_script.py
In a bit more detail-
### 1) In an Anaconda enabled Command Prompt, PowerShell, or Windows Terminal window
- You will know it's Anaconda-enabled because you will see a little (base) to the left of each line, which denotes that your (base) environment is currently active.
- We recommend Windows Terminal so you can enjoy all the Rich✨ formatted text output, but you'll need to do a bit of work to connect it to Anaconda (e.g. these instructions)
- If that seems intimidating (or just too much work), just press the Windows key, type "Anaconda Prompt", and run everything from there.
### 2) Activate your freemocap environment
- e.g. if your freemocap environment is named freemocap-env, type:
(base)$ conda activate freemocap-env
- If successful, the (base) to the left of each line will change to (freemocap-env), indicating that your freemocap environment is now active (type conda info --envs or conda info -e for a list of all available environments)
### 3) Activate an ipython console
- Activate an instance of an ipython console by typing ipython into the command window and pressing 'Enter':

(freemocap-env)$ ipython

### 4) Within the ipython console, import the freemocap package
```python
[1]: import freemocap
```
### 5) Execute the freemocap.RunMe() command (with default parameters, see #runme-input-parameters for more info)

```python
[2]: freemocap.RunMe() # <- this is where the magic happens!
```
### 6) Follow the instructions in the command window and pop-up GUI windows!
---✨💀✨---
## HOW TO REPROCESS A PREVIOUSLY RECORDED FreeMoCap RECORDING SESSION
You can re-start the processing pipeline from any of the following processing stages (defined below) by specifying the sessionID and desired stage in the call to freemocap.RunMe().
So to process the session named sesh_2021-11-21_19_42_07 starting from stage 3 (aka, skipping the 1- recording and 2- synchronization stages), run:
```python
import freemocap
freemocap.RunMe(sessionID="sesh_2021-11-21_19_42_07", stage=3)
```
Note - if you leave sessionID unspecified but set stage to a number higher than 1, it will attempt to use the last recorded session (but this can be buggy atm)
Processing stages
Stage 1 - Record Videos
- Record raw videos from attached USB webcams and timestamps for each frame
- Raw Videos saved to
FreeMoCap_Data/[Session Folder]/RawVideos
Stage 2 - Synchronize Videos
- Use recorded timestamps to re-save the raw videos as synchronized videos (same start and end, and same number of frames)
- Synchronized videos saved to FreeMoCap_Data/[Session Folder]/SynchedVideos
Stage 3 - Calibrate Capture Volume
- Use Anipose's Charuco-based calibration method to determine the location of each camera during a recording session and calibrate the capture volume
- Calibration info saved to [sessionID]_calibration.toml and [sessionID]_calibration.pickle

Stage 4 - Track 2D points in videos and Reconstruct 3D <- This is where the magic happens ✨
- Apply user-specified tracking algorithms to the synchronized videos (currently supporting MediaPipe, OpenPose, and DeepLabCut) to generate 2d data
- Save 2d data to the FreeMoCap_Data/[Session Folder]/DataArrays/ folder (e.g. mediaPipeData_2d.npy)
- Combine 2d data from each camera with calibration data from Stage 3 to reconstruct the 3d trajectory of each tracked point
- Save 3d data to the same DataArrays folder (e.g. openPoseSkel_3d.npy)
- NOTE - you might think it would make sense to separate the 2d tracking and 3d reconstruction into different stages, but the way the code is currently set up it's cleaner to combine them into the same processing stage ¯\_(ツ)_/¯
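The .npy files saved to the DataArrays folder can be inspected with plain numpy. The layout sketched below (cameras × frames × points, with x, y, and confidence on the last axis) is an assumption for illustration only, not the documented on-disk format; the threshold value mirrors the reconstructionConfidenceThreshold default:

```python
import numpy as np

# Hypothetical layout for illustration: (nCameras, nFrames, nPoints, 3),
# where the last axis holds (x_pixel, y_pixel, confidence). The real
# layout of e.g. mediaPipeData_2d.npy may differ.
rng = np.random.default_rng(0)
data_2d = rng.random((3, 100, 33, 3))  # stand-in for np.load('.../DataArrays/mediaPipeData_2d.npy')

# Points below the reconstruction confidence threshold (default .7) are
# excluded from 3d reconstruction; mark their x/y as NaN here.
threshold = 0.7
confidence = data_2d[:, :, :, 2]
filtered = data_2d.copy()
filtered[confidence < threshold, :2] = np.nan

print('fraction of points kept:', np.mean(confidence >= threshold))
```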
Stage 5 - Use Blender to Generate Output Data Files
- Optional; requires Blender installed. Set freemocap.RunMe(useBlender=True) to use
- Hijack a user-installed version of Blender to format the raw mocap data into a .blend file, including the raw data as keyframed empties with a (sloppily, inexpertly) rigged and meshed armature based on the Rigify Human Metarig
- Save the .blend file to [Session_Folder]/[Session_ID]/[Session_ID].blend
- You can double-click that .blend file to open it in Blender.
- For instructions on how to navigate a Blender scene, try this YouTube tutorial
Stage 6 - Save Skeleton Animation!
- Create a Matplotlib-based output animation video
- Save the animation video to [Session Folder]/[SessionID]_animVid.mp4
- Note - this part takes for-EVER 😅
## freemocap.RunMe() - Specify Recording Session Parameters

The freemocap.RunMe() function takes a number of parameters that can be used to alter its default behavior in important ways. Here are the default parameters, followed by a brief description of each one.
RunMe - Default parameters

```python
# in freemocap/fmc_runme.py
def RunMe(sessionID=None,
          stage=1,
          useOpenPose=False,
          runOpenPose=True,
          useMediaPipe=True,
          runMediaPipe=True,
          useDLC=False,
          dlcConfigPath=None,
          debug=False,
          setDataPath=False,
          userDataPath=None,
          recordVid=True,
          showAnimation=True,
          reconstructionConfidenceThreshold=.7,
          charucoSquareSize=36,  # mm
          calVideoFrameLength=.5,
          startFrame=0,
          useBlender=False,
          resetBlenderExe=False,
          get_synced_unix_timestamps=True,
          good_clean_frame_number=0,
          bundle_adjust_3d_points=False):
```
RunMe input parameters
sessionID
- [Type] - str
- [Default] - None
- Identifying string to use for this session.
- If creating a new session, the default behavior is to autogenerate a sessionID based on the date and time that the session was recorded
- If re-processing a previously recorded session, this value specifies which session to reprocess (must be the name of a folder within the FreeMoCap_Data folder)

stage
- [Type] - int
- [Default] - 1
- Which processing stage to start from. Processing stages are defined in more detail in #processing-stages
stage 1 - Record Raw Videos
stage 2 - Synchronize Videos
stage 3 - Camera Calibration
stage 4 - 2d Tracking and 3d Reconstruction
stage 5 - Create output files (using Blender)
stage 6 - Create output animation (Matplotlib)
useMediaPipe
- [Type] - bool
- [Default] - True
- Whether or not to use the MediaPipe tracking method in stage=4
runMediaPipe
- [Type] - bool
- [Default] - True
- Whether or not to RUN the MediaPipe tracking method in stage=4 (when False, previously processed data will be used; this can save a lot of time when re-processing long videos)
useOpenPose
- [Type] - bool
- [Default] - False
- Whether or not to use the OpenPose tracking method in stage=4
runOpenPose
- [Type] - bool
- [Default] - True
- Whether or not to RUN the OpenPose tracking method in stage=4 (when False, previously processed data will be used; this can save a lot of time when re-processing long videos)
useDLC
- [Type] - bool
- [Default] - False
- Whether or not to use the DeepLabCut model/project specified at dlcConfigPath to track objects in stage=4
setDataPath
- [Type] - bool
- [Default] - False
- Trigger the GUI that prompts the user to specify the location of the FreeMoCap_Data folder
userDataPath
- [Type] - str (path)
- [Default] - None
- Path to the location of the FreeMoCap_Data folder
recordVid
- [Type] - bool
- [Default] - True
- Whether to save the Matplotlib animation to an .mp4 file
showAnimation
- [Type] - bool
- [Default] - True
- Whether to display the Matplotlib animation in stage=6
reconstructionConfidenceThreshold
- [Type] - float in range (0,1)
- [Default] - .7
- Threshold 'confidence' value required to include a point in the 3d reconstruction step
charucoSquareSize
- [Type] - int
- [Default] - 36
- The size (in mm) of a side of a black square in the Charuco board used in this calibration. The default value of 36 is approximately appropriate for a printout on 8 in by 10 in paper (US Letter, approx A4)
calVideoFrameLength
- [Type] - int, float in range (0,1), or [int, int]
- [Default] - .5
- What portion of the videos to use in the Anipose calibration step in stage=3. -1 uses the whole recording, a number between 0 and 1 defines a proportion of the video to use, and a tuple of two numbers defines the start and end frame
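The three accepted forms can be made concrete with a small helper. This function is hypothetical (not part of freemocap); it only illustrates the mapping described above:

```python
# Hypothetical helper showing how the three accepted forms of
# calVideoFrameLength could map to a (start, end) frame range.
def resolve_cal_frames(spec, n_frames):
    """Return (start, end) frame indices for the calibration step.

    spec == -1    -> use the whole recording
    0 < spec < 1  -> use that proportion of the video, from frame 0
    [start, end]  -> use that explicit frame range
    """
    if spec == -1:
        return 0, n_frames
    if isinstance(spec, (list, tuple)):
        start, end = spec
        return start, end
    return 0, int(n_frames * spec)

print(resolve_cal_frames(-1, 1000))          # whole recording
print(resolve_cal_frames(.5, 1000))          # first half (the default)
print(resolve_cal_frames([100, 400], 1000))  # explicit range
```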
startFrame
- [Type] - int
- [Default] - 0
- What frame of the video to start the animation at in stage=6
useBlender
- [Type] - bool
- [Default] - False
- Whether to use Blender to create output .blend, .fbx, .usd, and .gltf files
resetBlenderExe
- [Type] - bool
- [Default] - False
- Whether to launch the GUI to set the Blender .exe path (usually something like C:/Program Files/Blender Foundation/2.95/)
get_synced_unix_timestamps
- [Type] - bool
- [Default] - True
- Whether to save camera timestamps in Unix Epoch Time in addition to the default 'counting up from zero' timestamps. Very helpful for synchronizing FreeMoCap with other software
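To see why epoch timestamps help with cross-software synchronization, here is a small sketch. The timestamp array, frame rate, and event time below are all made up for illustration; only the idea (matching a shared clock) comes from the text above:

```python
import numpy as np

# Hypothetical per-frame Unix timestamps from FreeMoCap: 300 frames at
# ~30 fps, starting at an arbitrary epoch time (values are illustrative).
frame_unix_timestamps = 1_637_500_000.0 + np.arange(300) / 30.0

# A trigger event logged by some other software on the same clock,
# also in seconds since the Unix epoch (illustrative value).
external_event_time = 1_637_500_004.37

# Because both clocks share the same epoch, finding the FreeMoCap frame
# closest in time to the external event is a one-liner.
closest_frame = int(np.argmin(np.abs(frame_unix_timestamps - external_event_time)))
print('closest frame:', closest_frame)
```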
good_clean_frame_number
- [Type] - int
- [Default] - 0
- A frame where the subject is standing in something like a T-pose or an A-pose, which will be used to scale the armature created via the useBlender=True option. If set to the default (0), the software will attempt to locate this frame automatically by looking for a frame where all markers are visible with high confidence values (but this is buggy)
bundle_adjust_3d_points [EXPERIMENTAL as of May 2022]
- [Type] - bool
- [Default] - False
- When set to True, the system will run a bundle adjustment optimization of all recorded 3d points produced in stage=4 using aniposelib's optim_points method. This takes a rather long time, but can significantly clean up the resulting recordings. However, it may also "over-smooth" the data. We're in the process of testing this method out now
use_previous_calibration
- [Type] - bool
- [Default] - False
- Choose whether to use a calibration file from a previous session. When False, FreeMoCap will automatically save out calibration data whenever stage 3 is successfully completed. Only one saved calibration file is stored, so running another session will overwrite the currently saved calibration file. When True, FreeMoCap will instead load the saved calibration data, which allows users to create recordings without needing to show the cameras the Charuco board.
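Putting several of the parameters above together, a hypothetical re-processing call might look like this. The session name is illustrative; the keyword arguments are the ones documented above:

```python
# Keyword arguments for a hypothetical re-processing run: skip the
# recording/synchronization stages (stage=4), reuse previously tracked
# MediaPipe data (runMediaPipe=False), and generate a Blender file.
params = dict(
    sessionID="sesh_2021-11-21_19_42_07",  # illustrative session name
    stage=4,
    useMediaPipe=True,
    runMediaPipe=False,  # reuse 2d data saved by an earlier run
    reconstructionConfidenceThreshold=0.7,
    useBlender=True,
)

# Inside an activated freemocap-env this would be:
# import freemocap
# freemocap.RunMe(**params)
print(sorted(params))
```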
Charuco Board Information
Our calibration method relies on Anipose's Charuco-based calibration method to determine the location of each camera during a recording session. This information is later used to create the 3d reconstruction of the tracked points
IMPORTANT - The Charuco board shown to the camera MUST be generated with the cv2.aruco.DICT_4X4_250 dictionary!

A high-resolution png of this Charuco board is in this repository at /charuco_board_image_highRes.png

To generate your own board, use the following Python commands (or equivalent). DO NOT CHANGE THE PARAMETERS OR THE CALIBRATION WILL NOT WORK:

```python
import cv2  # note - cv2.aruco can be installed via `pip install opencv-contrib-python`

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250)
board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)

# 2000 is the resolution of the resulting image. Increase this number if
# printing a large board (bigger is better! Esp for large spaces!)
charuco_board_image = board.draw((2000, 2000))
cv2.imwrite('charuco_board_image.png', charuco_board_image)
```
Optional
Both Deeplabcut and OpenPose are technically supported, but both are rather under-tested at the moment.
To use DeepLabCut, install it and set freemocap.RunMe(useDLC=True)
- Installation instructions for DeepLabCut may be found on their GitHub - https://github.com/DeepLabCut/DeepLabCut
If you would like to use OpenPose for body tracking, install CUDA and the Windows Portable Demo of OpenPose and set freemocap.RunMe(useOpenPose=True).
- Install CUDA: https://developer.nvidia.com/cuda-downloads
- Install OpenPose (Windows Portable Demo): https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.6.0
Follow the GitHub Repository and/or Join the Discord (https://discord.gg/HX7MTprYsK) for updates!
Stay Tuned for more soon!
✨💀✨
Owner
- Name: Aaron Cherian
- Login: aaroncherian
- Kind: user
- Company: Northeastern University
- Repositories: 17
- Profile: https://github.com/aaroncherian
Currently PhD-ing at Northeastern University as part of the FreeMoCap Project
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: Matthis
given-names: Jonathan Samir
orcid: https://orcid.org/0000-0003-3683-646X
- family-names: Cherian
given-names: Aaron
title: "FreeMoCap: A free, open source markerless motion capture system"
version: 0.0.52
GitHub Events
Total
Last Year
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 0
- Total pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: less than a minute
- Total issue authors: 0
- Total pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: less than a minute
- Issue authors: 0
- Pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- aaroncherian (2)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- python 3.8-slim build
- postgres 14-alpine
- 1216 dependencies
- @testing-library/jest-dom ^5.16.1 development
- @testing-library/react ^12.1.2 development
- @testing-library/user-event ^13.5.0 development
- @types/axios ^0.14.0 development
- @types/debounce ^1.2.1 development
- @types/jest ^27.4.0 development
- @types/lodash ^4.14.165 development
- @types/node ^16.11.21 development
- @types/react ^17.0.38 development
- @types/react-dom ^17.0.11 development
- @types/react-redux ^7.1.11 development
- @types/react-router-dom ^5.1.6 development
- @emotion/react ^11.7.1
- @emotion/styled ^11.6.0
- @mui/icons-material ^5.2.5
- @mui/material ^5.2.8
- @mui/styled-engine ^5.2.6
- @mui/system ^5.2.8
- @reduxjs/toolkit ^1.7.1
- @types/dom-mediacapture-record ^1.0.11
- axios ^0.25.0
- class-transformer 0.5.1
- react ^17.0.2
- react-dom ^17.0.2
- react-redux ^7.2.6
- react-router ^6.2.1
- react-router-dom ^6.2.1
- react-scripts 5.0.0
- react-use ^17.3.2
- react-use-websocket ^2.9.1
- react-webcam ^6.0.0
- reflect-metadata ^0.1.13
- typescript 4.5.5
- codecov * development
- coverage * development
- flake8 * development
- ipython * development
- matplotlib * development
- numpydoc * development
- pytest * development
- sphinx * development
- sphinx-copybutton * development
- sphinx_rtd_theme * development
- twine * development
- aniposelib ==0.4.3
- h5py >=3.1.0
- imutils *
- ipython *
- matplotlib *
- mediapipe ==0.8.8
- moviepy *
- numpy <=1.19
- opencv-contrib-python ==3.4.14.51
- protobuf *
- rich *
- ruamel.yaml >=0.15.0
- tqdm *
- typed-ast <1.5,>=1.4.0