jetson-yolov5-optimization
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.0%) to scientific vocabulary
Last synced: 6 months ago
Repository
Basic Info
- Host: GitHub
- Owner: hk-garg
- License: agpl-3.0
- Language: Python
- Default Branch: main
- Size: 76.4 MB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
- Created: 8 months ago
- Last pushed: 8 months ago
Metadata Files
Readme
Contributing
License
Citation
README.markdown
NVIDIA Jetson Xavier NX Setup Guide
This guide outlines the steps to set up the NVIDIA Jetson Xavier NX Developer Kit, run a YOLOv5 model, and execute an optimized TensorRT inference file.
Prerequisites
- NVIDIA Jetson Xavier NX Developer Kit
- MicroSD card (16GB or larger, UHS-I speed class recommended)
- Laptop with internet connection and SD card reader
- Micro USB cable
- HDMI monitor, USB keyboard, and mouse
- Power supply (12V, 2A or higher, barrel connector)
- NVIDIA Developer account
- Conda installed on Jetson for environment management
Setup Jetson Xavier NX
Download JetPack SD Card Image
- Visit NVIDIA Developer Downloads.
- Select Jetson > Jetson Xavier NX Developer Kit under SD Card Image Method.
- Log in or register for a free NVIDIA Developer account.
- Download the latest Jetson Xavier NX SD Card Image (JetPack).
Flash MicroSD Card
- Insert the microSD card into your laptop.
- Format the card using SD Memory Card Formatter.
- Use Balena Etcher:
- Select the downloaded JetPack image (.zip file).
- Choose the microSD card as the target.
- Click Flash to write the image.
- Alternatively, use command line (Linux/macOS example):
  ```bash
  unzip -p ~/Downloads/jetson-nx-developer-kit-sd-card-image.zip | sudo dd of=/dev/sdx bs=1M status=progress
  ```
  Replace `/dev/sdx` with your microSD card's device name. (`unzip -p` streams the image to stdout so `dd` can write it directly.)
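As an optional sanity check after flashing, you can compare the extracted raw image against the first bytes written to the card. Below is a minimal Python sketch; the file and device paths are placeholders, and reading the raw device requires root:

```python
import hashlib

def same_prefix(image_path, device_path, nbytes=64 * 1024 * 1024, chunk=1 << 20):
    """Hash the first `nbytes` of both paths and compare the digests."""
    digests = []
    for path in (image_path, device_path):
        h = hashlib.sha256()
        remaining = nbytes
        with open(path, "rb") as f:
            while remaining > 0:
                block = f.read(min(chunk, remaining))
                if not block:
                    break
                h.update(block)
                remaining -= len(block)
        digests.append(h.hexdigest())
    return digests[0] == digests[1]

# Hypothetical usage on the host (paths are illustrative):
# same_prefix("jetson-nx-sd-card-image.img", "/dev/sdx")
```

If the digests differ, re-flash the card before booting.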
Set Up Jetson Xavier NX
- Insert the flashed microSD card into the slot on the underside of the Jetson Xavier NX module (label facing up, until it clicks).
- Connect the Jetson to:
- HDMI monitor via HDMI port.
- USB keyboard and mouse via USB ports.
- Power supply via the barrel connector (9-16V; not the included 19V supply).
- Connect the micro USB cable to your laptop for data transfer (not power).
Boot and Initial Configuration
- Power on the Jetson (green LED near micro USB port lights up).
- Follow on-screen prompts to:
- Accept the EULA.
- Select language, keyboard, and time zone.
- Set a username and password (e.g., username: `nvidia`, password: `nvidia`).
- The Jetson boots to the Ubuntu desktop (18.04 or 20.04, depending on JetPack version).
Install JetPack Components
- On your laptop, download and install NVIDIA SDK Manager.
- Run SDK Manager:
  ```bash
  sudo dpkg -i sdkmanager_<version>_amd64.deb
  ```
- Log in with your NVIDIA account.
- Select Jetson Xavier NX as target hardware and the matching JetPack version.
- Keep the micro USB cable connected for virtual Ethernet (IP: 192.168.55.1 for Jetson, 192.168.55.100 for laptop).
- Follow prompts to install libraries and drivers (ensure username/password match Jetson setup).
- Reboot the Jetson after installation.
Verify Setup
- Log in to the Ubuntu desktop.
- Open a terminal and check system status:
  ```bash
  nvcc --version
  ```
  This confirms the CUDA installation.
- Configure WiFi (optional):
  ```bash
  nmcli r wifi on
  nmcli d wifi list
  nmcli d wifi connect <SSID> password <PASSWORD>
  ```
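If you want to check the CUDA version programmatically rather than by eye, a small sketch that parses `nvcc --version` output can help; the sample string below is illustrative, not captured from a real device:

```python
import re

def cuda_release(nvcc_output):
    """Pull the CUDA release number out of `nvcc --version` text."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None

# On the Jetson you would feed it the real command output, e.g.:
#   out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
sample = "Cuda compilation tools, release 10.2, V10.2.89"  # illustrative line
print(cuda_release(sample))  # → 10.2
```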
Running YOLOv5 Model
Set Up Environment
- Create a Conda environment to isolate dependencies:
  ```bash
  conda create -n yolov5 python=3.8
  conda activate yolov5
  ```
- Install PyTorch and other dependencies compatible with Jetson's CUDA:
  ```bash
  pip install torch torchvision --index-url https://download.pytorch.org/whl/cu113
  pip install opencv-python numpy pyyaml tqdm
  ```
  Purpose: Ensures a clean environment with the libraries YOLOv5 requires.
Clone YOLOv5 Repository
- Clone the official YOLOv5 repository from Ultralytics:
  ```bash
  git clone https://github.com/ultralytics/yolov5.git
  cd yolov5
  pip install -r requirements.txt
  ```
  Purpose: Downloads the YOLOv5 code and installs additional dependencies.
Run Inference
- Download a pre-trained YOLOv5 model (e.g., `yolov5s.pt` for the small model) from the YOLOv5 releases page.
- Run inference on a folder of images (e.g., five images in `data/images/`):
  ```bash
  python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
  ```
  Purpose: Performs object detection on the images, saving results with bounding boxes in `runs/detect/exp/`.
  Options:
  - `--weights`: Choose other models like `yolov5m.pt` (medium), `yolov5l.pt` (large), or `yolov5x.pt` (extra-large) for higher accuracy at slower inference speed.
  - `--img`: Adjust the input image size (e.g., 320 for faster inference, 1280 for higher accuracy).
  - `--conf`: Set the confidence threshold (e.g., 0.4 for stricter detections).
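The effect of the `--conf` threshold can be illustrated with a small standalone sketch; the detection tuples below are made-up values in YOLOv5's `x1, y1, x2, y2, conf, class` row layout:

```python
def filter_by_confidence(detections, conf_thres=0.25):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d[4] >= conf_thres]

dets = [
    (10, 10, 50, 50, 0.90, 0),   # high-confidence detection
    (60, 20, 90, 80, 0.30, 2),   # borderline detection
    (5, 5, 15, 15, 0.10, 0),     # low-confidence noise
]
print(len(filter_by_confidence(dets, 0.25)))  # → 2 (noise dropped)
print(len(filter_by_confidence(dets, 0.4)))   # → 1 (stricter threshold)
```

Raising the threshold trades recall for precision, which is why 0.4 yields "stricter" detections than the default 0.25.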
Verify Results
- Check output images in `runs/detect/exp/` for detected objects (e.g., persons, cars) with bounding boxes.
- Average inference time: ~42 ms per image on Jetson Xavier NX.
- Purpose: Confirms the model detects objects correctly.
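The throughput implied by that latency is just its reciprocal:

```python
latency_ms = 42.0                   # average per-image latency reported above
fps = 1000.0 / latency_ms
print(f"~{fps:.1f} images/second")  # → ~23.8 images/second
```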
Running Optimized TensorRT Inference
Install TensorRT
- Verify TensorRT is installed via JetPack:
  ```bash
  dpkg -l | grep tensorrt
  ```
  Purpose: Ensures the TensorRT libraries are available for optimized inference.
Set Up TensorRT Environment
- In the `yolov5` Conda environment:
  ```bash
  pip install tensorrt pycuda
  ```
  Purpose: Installs the TensorRT Python bindings and PyCUDA for engine execution.
Convert YOLOv5 to TensorRT
- Navigate to the YOLOv5 directory:
  ```bash
  cd yolov5
  ```
- Export the YOLOv5 model to ONNX format:
  ```bash
  python export.py --weights yolov5s.pt --include onnx
  ```
  Purpose: Converts the PyTorch model to ONNX for TensorRT compatibility.
- Convert the ONNX model to a TensorRT engine with FP16 precision:
  ```bash
  trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.trt --fp16
  ```
  Purpose: Builds a TensorRT engine optimized for the Jetson Xavier NX GPU.
  Options:
  - `--fp16`: 16-bit floating-point precision for faster inference with minimal accuracy loss.
  - `--int8`: 8-bit integer precision for maximum speed, but it requires calibration data for accuracy (not covered here; see the NVIDIA TensorRT docs).
  - FP32 (the default when no precision flag is given): 32-bit floating-point precision for the highest accuracy but the slowest inference.
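To get a feel for why `--fp16` costs so little accuracy, here is a standalone NumPy sketch of the FP32-to-FP16 rounding that an engine build performs on weights. The value range is illustrative, not taken from actual YOLOv5 weights:

```python
import numpy as np

# Round every FP32 value to the nearest representable FP16 value,
# then measure the worst-case relative rounding error.
w32 = np.linspace(0.5, 3.0, 10001).astype(np.float32)  # typical weight magnitudes
w16 = w32.astype(np.float16).astype(np.float32)        # FP32 -> FP16 -> FP32 round trip

rel_err = np.abs(w32 - w16) / np.abs(w32)
print(f"max relative error: {rel_err.max():.1e}")  # roughly 5e-4 (FP16 keeps ~3 significant digits)
```

A worst-case relative error near 5e-4 per weight is usually negligible for detection accuracy, which is why FP16 is the standard choice on Jetson GPUs.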
Run TensorRT Inference
- Run inference using the TensorRT engine:
  ```bash
  python detect.py --weights yolov5s.trt --img 640 --conf 0.25 --source data/images/
  ```
  Purpose: Executes optimized inference, saving results in `runs/detect/expX/`.
- Fix bounding box scaling issues by updating the postprocessing step:
  ```python
  # In detect.py, rescale boxes from the inference size back to the original image
  det[:, :4] = scale_coords(img.shape[2:], det[:, :4], img0.shape).round()
  ```
  Purpose: Ensures bounding boxes align with the original image dimensions.
- Average inference time: ~30-35 ms per image (faster than PyTorch).
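For reference, the coordinate mapping that `scale_coords` performs can be sketched in plain NumPy, assuming YOLOv5's standard letterbox preprocessing (scale to fit, pad the short side). This is an illustrative reimplementation, not the library function itself:

```python
import numpy as np

def scale_coords_np(infer_shape, boxes, orig_shape):
    """Map boxes from the letterboxed inference frame back to the original image.

    infer_shape: (h, w) of the network input, e.g. (640, 640)
    boxes:       (N, 4) array of x1, y1, x2, y2 in inference coordinates
    orig_shape:  (h, w) of the original image
    """
    gain = min(infer_shape[0] / orig_shape[0], infer_shape[1] / orig_shape[1])
    pad_x = (infer_shape[1] - orig_shape[1] * gain) / 2  # horizontal letterbox padding
    pad_y = (infer_shape[0] - orig_shape[0] * gain) / 2  # vertical letterbox padding
    boxes = boxes.astype(np.float64).copy()
    boxes[:, [0, 2]] = ((boxes[:, [0, 2]] - pad_x) / gain).clip(0, orig_shape[1])
    boxes[:, [1, 3]] = ((boxes[:, [1, 3]] - pad_y) / gain).clip(0, orig_shape[0])
    return boxes.round()

# A 480x640 image letterboxed into 640x640 gets 80 px of padding top and bottom;
# a full-frame box in inference coordinates maps back to the full original image.
boxes = np.array([[0.0, 80.0, 640.0, 560.0]])
print(scale_coords_np((640, 640), boxes, (480, 640)))  # → [[0. 0. 640. 480.]]
```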
Troubleshooting
- If predictions skip large bounding boxes, verify that `--img 640` matches the input image size.
- Rebuild the TensorRT engine if errors occur:
  ```bash
  rm yolov5s.trt
  trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.trt --fp16
  ```
  Purpose: Regenerates the engine to resolve compatibility issues.
- For INT8, prepare a calibration dataset and use `--int8 --calib=<calibration_file>` with `trtexec`.
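To see what INT8 calibration is estimating, here is a toy sketch of simple max-abs calibration, which derives a single scale factor from sample activations and quantizes values into the int8 range. TensorRT's default entropy calibration is more sophisticated, and the values below are made up:

```python
import numpy as np

def int8_scale(calib_activations):
    """Symmetric per-tensor scale from calibration data (max-abs calibration)."""
    return np.abs(calib_activations).max() / 127.0

def quantize(x, scale):
    """Map real values to int8 using the calibrated scale."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

acts = np.array([-2.0, -0.5, 0.0, 1.0, 2.54])  # pretend calibration activations
s = int8_scale(acts)
print(s)                   # scale ≈ 0.02
print(quantize(acts, s))   # → [-100  -25    0   50  127]
```

A poor calibration set yields a poor scale, which is why INT8 needs representative data while FP16 does not.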
Notes
- Use a 12V 2A (or higher, up to 16V) power supply. The included 19V supply may not be compatible.
- For NVMe SSD booting, follow this guide.
- Refer to NVIDIA Jetson Xavier NX Getting Started for setup details.
- For YOLOv5 and TensorRT issues, consult YOLOv5 GitHub and NVIDIA TensorRT Docs.
- Monitor GPU and memory usage with `tegrastats` (Jetson boards do not support `nvidia-smi`) to ensure efficient resource allocation.
Owner
- Name: Harsh Garg
- Login: hk-garg
- Kind: user
- Repositories: 1
- Profile: https://github.com/hk-garg
Citation (CITATION.cff)
cff-version: 1.2.0
preferred-citation:
type: software
message: If you use YOLOv5, please cite it as below.
authors:
- family-names: Jocher
given-names: Glenn
orcid: "https://orcid.org/0000-0001-5950-6979"
title: "YOLOv5 by Ultralytics"
version: 7.0
doi: 10.5281/zenodo.3908559
date-released: 2020-5-29
license: AGPL-3.0
url: "https://github.com/ultralytics/yolov5"
GitHub Events
Total
- Issue comment event: 3
- Push event: 2
- Pull request event: 2
- Create event: 2
Last Year
- Issue comment event: 3
- Push event: 2
- Pull request event: 2
- Create event: 2
Dependencies
.github/workflows/ci-testing.yml
actions
- actions/checkout v4 composite
- actions/setup-python v5 composite
- astral-sh/setup-uv v6 composite
- slackapi/slack-github-action v2.1.0 composite
.github/workflows/cla.yml
actions
- contributor-assistant/github-action v2.6.1 composite
.github/workflows/docker.yml
actions
- actions/checkout v4 composite
- docker/build-push-action v6 composite
- docker/login-action v3 composite
- docker/setup-buildx-action v3 composite
- docker/setup-qemu-action v3 composite
.github/workflows/format.yml
actions
- ultralytics/actions main composite
.github/workflows/links.yml
actions
- actions/checkout v4 composite
- ultralytics/actions/retry main composite
.github/workflows/merge-main-into-prs.yml
actions
- actions/checkout v4 composite
- actions/setup-python v5 composite
.github/workflows/stale.yml
actions
- actions/stale v9 composite
utils/docker/Dockerfile
docker
- pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
utils/google_app_engine/Dockerfile
docker
- gcr.io/google-appengine/python latest build
pyproject.toml
pypi
- matplotlib >=3.3.0
- numpy >=1.22.2
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0
- ultralytics >=8.1.47
requirements.txt
pypi
- PyYAML >=5.3.1
- gitpython >=3.1.30
- matplotlib >=3.3
- numpy >=1.23.5
- opencv-python >=4.1.1
- pandas >=1.1.4
- pillow >=10.3.0
- psutil *
- requests >=2.32.2
- scipy >=1.4.1
- seaborn >=0.11.0
- setuptools >=70.0.0
- thop >=0.1.1
- torchvision >=0.9.0
- tqdm >=4.66.3
utils/google_app_engine/additional_requirements.txt
pypi
- Flask ==2.3.2
- gunicorn ==23.0.0
- pip ==23.3
- werkzeug >=3.0.1
- zipp >=3.19.1