tiny-cuda-nn

Lightning fast C++/CUDA neural network framework

https://github.com/nvlabs/tiny-cuda-nn

Science Score: 64.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    1 of 29 committers (3.4%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (7.1%) to scientific vocabulary

Keywords

cuda deep-learning gpu mlp nerf neural-network pytorch real-time rendering
Last synced: 6 months ago

Repository

Lightning fast C++/CUDA neural network framework

Basic Info
  • Host: GitHub
  • Owner: NVlabs
  • License: other
  • Language: C++
  • Default Branch: master
  • Homepage:
  • Size: 19.1 MB
Statistics
  • Stars: 4,193
  • Watchers: 50
  • Forks: 520
  • Open Issues: 242
  • Releases: 7
Topics
cuda deep-learning gpu mlp nerf neural-network pytorch real-time rendering
Created almost 5 years ago · Last pushed 6 months ago
Metadata Files
Readme License Citation

README.md

Tiny CUDA Neural Networks

This is a small, self-contained framework for training and querying neural networks. Most notably, it contains a lightning fast "fully fused" multi-layer perceptron (technical paper), a versatile multiresolution hash encoding (technical paper), as well as support for various other input encodings, losses, and optimizers.

Performance

Figure: Fully fused networks vs. TensorFlow v2.5.0 w/ XLA. Measured on 64 (solid line) and 128 (dashed line) neurons wide multi-layer perceptrons on an RTX 3090. Generated by `benchmarks/bench_ours.cu` and `benchmarks/bench_tensorflow.py` using `data/config_oneblob.json`.

Usage

Tiny CUDA neural networks have a simple C++/CUDA API:

```cpp

#include <tiny-cuda-nn/config.h>

// Configure the model
nlohmann::json config = {
	{"loss", {
		{"otype", "L2"}
	}},
	{"optimizer", {
		{"otype", "Adam"},
		{"learning_rate", 1e-3},
	}},
	{"encoding", {
		{"otype", "HashGrid"},
		{"n_levels", 16},
		{"n_features_per_level", 2},
		{"log2_hashmap_size", 19},
		{"base_resolution", 16},
		{"per_level_scale", 2.0},
	}},
	{"network", {
		{"otype", "FullyFusedMLP"},
		{"activation", "ReLU"},
		{"output_activation", "None"},
		{"n_neurons", 64},
		{"n_hidden_layers", 2},
	}},
};

using namespace tcnn;

auto model = create_from_config(n_input_dims, n_output_dims, config);
model->set_jit_fusion(supports_jit_fusion()); // Optional: accelerate with JIT fusion

// Train the model (batch_size must be a multiple of tcnn::BATCH_SIZE_GRANULARITY)
GPUMatrix<float> training_batch_inputs(n_input_dims, batch_size);
GPUMatrix<float> training_batch_targets(n_output_dims, batch_size);

for (int i = 0; i < n_training_steps; ++i) {
	generate_training_batch(&training_batch_inputs, &training_batch_targets); // <-- your code

	float loss;
	model.trainer->training_step(training_batch_inputs, training_batch_targets, &loss);
	std::cout << "iteration=" << i << " loss=" << loss << std::endl;
}

// Use the model
GPUMatrix<float> inference_inputs(n_input_dims, batch_size);
generate_inputs(&inference_inputs); // <-- your code

GPUMatrix<float> inference_outputs(n_output_dims, batch_size);
model.network->inference(inference_inputs, inference_outputs);
```

JIT fusion

JIT fusion is a new, optional feature with tiny-cuda-nn v2.0 and later. It is almost always recommended to enable automatic JIT fusion for a performance boost of 1.5x to 2.5x, depending on the model and GPU. Newer GPUs exhibit larger speedups.

If your model has very large hash grids (~20 million+ parameters) or large MLPs (layers wider than 128 neurons), or if your GPU is an RTX 3000 series or earlier, JIT fusion can slow down training and, more rarely, inference. In this case, it is recommended to enable JIT fusion separately for training and inference and to measure which configuration is faster.
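For instance, here is a minimal sketch of how such a measurement could be set up. It builds on the Usage example above and only uses the `set_jit_fusion` / `supports_jit_fusion` calls shown there; `train_loop` and `run_inference` are hypothetical placeholders for your own code:

```cpp
using namespace tcnn;

auto model = create_from_config(n_input_dims, n_output_dims, config);

// Measure training with JIT fusion disabled (sometimes faster for very large hash grids or wide MLPs)...
model->set_jit_fusion(false);
train_loop(model); // <-- your training code

// ...and inference with JIT fusion enabled, if the system supports it.
model->set_jit_fusion(supports_jit_fusion());
run_inference(model); // <-- your inference code
```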

Please open an issue if you encounter a slowdown in a different situation or other problems with JIT fusion enabled.

Automatic JIT fusion

To enable JIT fusion, set the jit_fusion property of your model to true. All future uses of the model, whether inference or training, will then use JIT mode. Note that if there is an error during JIT compilation, a warning will be emitted and JIT compilation mode automatically turned off. Your code will still run using the tiny-cuda-nn 1.X code path.

```cpp
auto model = tcnn::create_from_config(...);
model->set_jit_fusion(tcnn::supports_jit_fusion()); // Enable JIT if the system supports it
```

JIT fusion can also be enabled via the PyTorch bindings but the speed-up will be lower, particularly during training. This is because the JIT compiler does not have access to the whole compute graph and can therefore fuse and optimize less.

```python
import tinycudann as tcnn

model = tcnn.NetworkWithInputEncoding(...)  # Or any other tcnn model
model.jit_fusion = tcnn.supports_jit_fusion()  # Enable JIT if the system supports it
```

Manual JIT fusion

Even larger speed-ups are possible when applications integrate more tightly with JIT fusion. For example, Instant NGP achieves a 5x speedup by fusing the entire NeRF ray marcher into a single kernel.

JIT fusion works by converting a given tiny-cuda-nn model to a CUDA device function and then compiling it into a kernel using CUDA's runtime compilation (RTC) feature.

To integrate a tiny-cuda-nn model with a larger kernel in your app, you need to:
1. turn your kernel into a string,
2. prepend the tiny-cuda-nn model's device function, and
3. pass the result to tiny-cuda-nn's runtime compilation API.

Here is an example that implements a minimal kernel using a tiny-cuda-nn model with 32 input dimensions and 16 output dimensions:

```cpp

#include <tiny-cuda-nn/config.h>

auto model = tcnn::create_from_config(32 /* input dims */, 16 /* output dims */, ...);
auto fused_kernel = tcnn::CudaRtcKernel(
	"your_kernel",
	fmt::format(R"(
		{MODEL_DEVICE_FUNCTION}

		__global__ void your_kernel(...) {{
			// Get input to model from either registers or memory.
			tcnn::hvec<32> input = ...;

			// Call tiny-cuda-nn model. All 32 threads of the warp must be active here.
			tcnn::hvec<16> output = model_fun(input, params);

			// Do something with the model output.
		}}
	)",
		fmt::arg("MODEL_DEVICE_FUNCTION", model->generate_device_function("model_fun"))
	)
);

uint32_t blocks = 1;
uint32_t threads = 128; // Must be a multiple of 32 for neural networks to work.
uint32_t shmem_size = 0; // Can be any size that your_kernel needs.
cudaStream_t stream = nullptr; // Can be any stream.

fused_kernel.launch(blocks, threads, shmem_size, stream, ... /* params of your_kernel */);
```

And here is Instant NGP's NeRF integration with the JIT compiler for reference:
- src/testbed_nerf.cu
- include/neural-graphics-primitives/fused_kernels/render_nerf.cuh

Example: learning a 2D image

We provide a sample application where an image function (x,y) -> (R,G,B) is learned. It can be run via

```sh
tiny-cuda-nn$ ./build/mlp_learning_an_image data/images/albert.jpg data/config_hash.json
```

producing an image every couple of training steps. Each 1000 steps should take a bit over 1 second with the default configuration on an RTX 4090.

Images: the learned result after 10, 100, and 1000 training steps, alongside the reference image.

Requirements

  • An NVIDIA GPU; tensor cores increase performance when available. All shown results come from an RTX 3090.
  • A C++14 capable compiler. The following choices are recommended and have been tested:
    • Windows: Visual Studio 2019 or 2022
    • Linux: GCC/G++ 8 or higher
  • A recent version of CUDA. The following choices are recommended and have been tested:
    • Windows: CUDA 11.5 or higher
    • Linux: CUDA 10.2 or higher
  • CMake v3.21 or higher.
  • The fully fused MLP component of this framework requires a very large amount of shared memory in its default configuration. It will likely only work on an RTX 3090, an RTX 2080 Ti, or higher-end GPUs. Lower-end cards must reduce the n_neurons parameter or use the CutlassMLP (better compatibility but slower) instead; see the configuration sketch after this list.
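For example, a network configuration along the lines of the following sketch (same JSON structure as the "network" object in the Usage section; the values are illustrative) swaps the fully fused MLP for the CUTLASS MLP:

```cpp
// Illustrative sketch: use CutlassMLP on GPUs with insufficient shared memory.
// Alternatively, keep FullyFusedMLP and reduce n_neurons (e.g. to 32).
nlohmann::json network_config = {
	{"otype", "CutlassMLP"}, // Better compatibility than FullyFusedMLP, but slower
	{"activation", "ReLU"},
	{"output_activation", "None"},
	{"n_neurons", 64},
	{"n_hidden_layers", 2},
};
```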

If you are using Linux, install the following packages:

```sh
sudo apt-get install build-essential git
```

We also recommend installing CUDA in /usr/local/ and adding the CUDA installation to your PATH. For example, if you have CUDA 12.6.3, add the following to your ~/.bashrc:

```sh
export PATH="/usr/local/cuda-12.6.3/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-12.6.3/lib64:$LD_LIBRARY_PATH"
```

Compilation (Windows & Linux)

Begin by cloning this repository and all its submodules using the following commands:

```sh
$ git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
$ cd tiny-cuda-nn
```

Then, use CMake to build the project (on Windows, this must be done in a developer command prompt):

```sh
tiny-cuda-nn$ cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
tiny-cuda-nn$ cmake --build build --config RelWithDebInfo -j
```

If compilation fails inexplicably or takes longer than an hour, you might be running out of memory. Try running the above command without -j in that case.

PyTorch extension

tiny-cuda-nn comes with a PyTorch extension that allows using the fast MLPs and input encodings from within a Python context. These bindings can be significantly faster than pure Python implementations, in particular for the multiresolution hash encoding.

The overheads of Python/PyTorch can nonetheless be extensive if the batch size is small. For example, with a batch size of 64k, the bundled mlp_learning_an_image example is ~2x slower through PyTorch than native CUDA. With a batch size of 256k and higher (default), the performance is much closer.

Begin by setting up a Python 3.X environment with a recent, CUDA-enabled version of PyTorch. Then, invoke

```sh
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```

Alternatively, if you would like to install from a local clone of tiny-cuda-nn, invoke

```sh
tiny-cuda-nn$ cd bindings/torch
tiny-cuda-nn/bindings/torch$ python setup.py install
```

Upon success, you can use tiny-cuda-nn models as in the following example:

```py
import commentjson as json
import tinycudann as tcnn
import torch

with open("data/config_hash.json") as f: config = json.load(f)

# Option 1: efficient Encoding+Network combo.

model = tcnn.NetworkWithInputEncoding(
    n_input_dims, n_output_dims,
    config["encoding"], config["network"]
)

# Option 2: separate modules. Slower but more flexible.

encoding = tcnn.Encoding(n_input_dims, config["encoding"])
network = tcnn.Network(encoding.n_output_dims, n_output_dims, config["network"])
model = torch.nn.Sequential(encoding, network)

model.jit_fusion = tcnn.supports_jit_fusion()  # Optional: accelerate with JIT fusion
```

See samples/mlp_learning_an_image_pytorch.py for an example.

Components

Following is a summary of the components of this framework. The JSON documentation lists configuration options.

| Networks | | |
| :--- | :---------- | :----- |
| Fully fused MLP | src/fully_fused_mlp.cu | Lightning fast implementation of small multi-layer perceptrons (MLPs). |
| CUTLASS MLP | src/cutlass_mlp.cu | MLP based on CUTLASS' GEMM routines. Slower than fully-fused, but handles larger networks and still is reasonably fast. |

| Input encodings | | |
| :--- | :---------- | :----- |
| Composite | include/tiny-cuda-nn/encodings/composite.h | Allows composing multiple encodings. Can, for example, be used to assemble the Neural Radiance Caching encoding [Müller et al. 2021]. |
| Frequency | include/tiny-cuda-nn/encodings/frequency.h | NeRF's [Mildenhall et al. 2020] positional encoding applied equally to all dimensions. |
| Grid | include/tiny-cuda-nn/encodings/grid.h | Encoding based on trainable multiresolution grids. Used for Instant Neural Graphics Primitives [Müller et al. 2022]. The grids can be backed by hashtables, dense storage, or tiled storage. |
| Identity | include/tiny-cuda-nn/encodings/identity.h | Leaves values untouched. |
| Oneblob | include/tiny-cuda-nn/encodings/oneblob.h | From Neural Importance Sampling [Müller et al. 2019] and Neural Control Variates [Müller et al. 2020]. |
| SphericalHarmonics | include/tiny-cuda-nn/encodings/spherical_harmonics.h | A frequency-space encoding that is more suitable to direction vectors than component-wise encodings. |
| TriangleWave | include/tiny-cuda-nn/encodings/triangle_wave.h | Low-cost alternative to NeRF's encoding. Used in Neural Radiance Caching [Müller et al. 2021]. |
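As a rough illustration of how the Composite encoding assembles others, a configuration in the style of the Usage example might look as follows. The key names mirror the JSON documentation, but the dimension split and sub-encodings are purely illustrative:

```cpp
// Illustrative sketch: encode the first 3 dimensions with a triangle wave,
// pass the remaining dimensions through unchanged. See the JSON documentation
// for the authoritative options of each encoding.
nlohmann::json encoding_config = {
	{"otype", "Composite"},
	{"nested", nlohmann::json::array({
		{
			{"n_dims_to_encode", 3},
			{"otype", "TriangleWave"},
			{"n_frequencies", 12},
		},
		{
			{"otype", "Identity"}, // Remaining dimensions left untouched
		},
	})},
};
```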

| Losses | | |
| :--- | :---------- | :----- |
| L1 | include/tiny-cuda-nn/losses/l1.h | Standard L1 loss. |
| Relative L1 | include/tiny-cuda-nn/losses/l1.h | Relative L1 loss normalized by the network prediction. |
| MAPE | include/tiny-cuda-nn/losses/mape.h | Mean absolute percentage error (MAPE). The same as Relative L1, but normalized by the target. |
| SMAPE | include/tiny-cuda-nn/losses/smape.h | Symmetric mean absolute percentage error (SMAPE). The same as Relative L1, but normalized by the mean of the prediction and the target. |
| L2 | include/tiny-cuda-nn/losses/l2.h | Standard L2 loss. |
| Relative L2 | include/tiny-cuda-nn/losses/relative_l2.h | Relative L2 loss normalized by the network prediction [Lehtinen et al. 2018]. |
| Relative L2 Luminance | include/tiny-cuda-nn/losses/relative_l2_luminance.h | Same as above, but normalized by the luminance of the network prediction. Only applicable when the network prediction is RGB. Used in Neural Radiance Caching [Müller et al. 2021]. |
| Cross Entropy | include/tiny-cuda-nn/losses/cross_entropy.h | Standard cross entropy loss. Only applicable when the network prediction is a PDF. |
| Variance | include/tiny-cuda-nn/losses/variance_is.h | Standard variance loss. Only applicable when the network prediction is a PDF. |

| Optimizers | | |
| :--- | :---------- | :----- |
| Adam | include/tiny-cuda-nn/optimizers/adam.h | Implementation of Adam [Kingma and Ba 2014], generalized to AdaBound [Luo et al. 2019]. |
| Novograd | include/tiny-cuda-nn/optimizers/novograd.h | Implementation of Novograd [Ginsburg et al. 2019]. |
| SGD | include/tiny-cuda-nn/optimizers/sgd.h | Standard stochastic gradient descent (SGD). |
| Shampoo | include/tiny-cuda-nn/optimizers/shampoo.h | Implementation of the 2nd order Shampoo optimizer [Gupta et al. 2018] with home-grown optimizations as well as those by Anil et al. [2020]. |
| Average | include/tiny-cuda-nn/optimizers/average.h | Wraps another optimizer and computes a linear average of the weights over the last N iterations. The average is used for inference only (does not feed back into training). |
| Batched | include/tiny-cuda-nn/optimizers/batched.h | Wraps another optimizer, invoking the nested optimizer once every N steps on the averaged gradient. Has the same effect as increasing the batch size but requires only a constant amount of memory. |
| Composite | include/tiny-cuda-nn/optimizers/composite.h | Allows using several optimizers on different parameters. |
| EMA | include/tiny-cuda-nn/optimizers/average.h | Wraps another optimizer and computes an exponential moving average of the weights. The average is used for inference only (does not feed back into training). |
| Exponential Decay | include/tiny-cuda-nn/optimizers/exponential_decay.h | Wraps another optimizer and performs piecewise-constant exponential learning-rate decay. |
| Lookahead | include/tiny-cuda-nn/optimizers/lookahead.h | Wraps another optimizer, implementing the lookahead algorithm [Zhang et al. 2019]. |
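The wrapping optimizers above (Average, Batched, EMA, Exponential Decay, Lookahead) take another optimizer as a nested configuration. The following is a hedged sketch of piecewise-constant exponential decay around Adam, written in the style of the Usage example; the decay parameter names and values are assumptions, so consult the JSON documentation for the authoritative options:

```cpp
// Illustrative sketch only: the ExponentialDecay parameter names/values below are
// assumed, not taken from this README. See the JSON documentation for the real options.
nlohmann::json optimizer_config = {
	{"otype", "ExponentialDecay"},
	{"decay_start", 10000},   // Assumed: step at which decay begins
	{"decay_interval", 5000}, // Assumed: steps between decay events
	{"decay_base", 0.33},     // Assumed: multiplicative decay factor
	{"nested", {
		{"otype", "Adam"},
		{"learning_rate", 1e-3},
	}},
};
```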

License and Citation

This framework is licensed under the BSD 3-clause license. Please see LICENSE.txt for details.

If you use it in your research, we would appreciate a citation via BibTeX:

```bibtex
@software{tiny-cuda-nn,
	author = {M\"uller, Thomas},
	license = {BSD-3-Clause},
	month = {4},
	title = {{tiny-cuda-nn}},
	url = {https://github.com/NVlabs/tiny-cuda-nn},
	version = {2.0},
	year = {2021}
}
```

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Publications & Software

Among others, this framework powers the following publications:

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
ACM Transactions on Graphics (SIGGRAPH), July 2022
Website / Paper / Code / Video / BibTeX

Extracting Triangular 3D Models, Materials, and Lighting From Images
Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
CVPR (Oral), June 2022
Website / Paper / Video / BibTeX

Real-time Neural Radiance Caching for Path Tracing
Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
ACM Transactions on Graphics (SIGGRAPH), August 2021
Paper / GTC talk / Video / Interactive results viewer / BibTeX

As well as the following software:

NerfAcc: A General NeRF Acceleration Toolbox
Ruilong Li, Matthew Tancik, Angjoo Kanazawa
https://github.com/KAIR-BAIR/nerfacc

Nerfstudio: A Framework for Neural Radiance Field Development
Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
https://github.com/nerfstudio-project/nerfstudio

Please feel free to make a pull request if your publication or software is not listed.

Acknowledgments

Special thanks go to the NRC authors for helpful discussions and to Nikolaus Binder for providing part of the infrastructure of this framework, as well as for help with utilizing TensorCores from within CUDA.

Owner

  • Name: NVIDIA Research Projects
  • Login: NVlabs
  • Kind: organization

Citation (CITATION.cff)

cff-version: 1.2.0
title: tiny-cuda-nn
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Thomas
    email: thomas94@gmx.net
    family-names: Müller
    affiliation: NVIDIA
repository-code: 'https://github.com/NVlabs/tiny-cuda-nn'
abstract: >-
  This is a small, self-contained framework for training and querying neural
  networks. Most notably, it contains a lightning fast "fully fused" multi-layer
  perceptron (technical paper), a versatile multiresolution hash encoding
  (technical paper), as well as support for various other input encodings,
  losses, and optimizers.
keywords:
  - 'neural network, tiny, tensor cores, cuda'
license: BSD-3-Clause
license-url: https://github.com/NVlabs/tiny-cuda-nn/blob/master/LICENSE.txt
version: 2.0
date-released: '2021-04-21'

GitHub Events

Total
  • Create event: 4
  • Release event: 1
  • Issues event: 40
  • Watch event: 418
  • Issue comment event: 90
  • Push event: 21
  • Pull request review comment event: 2
  • Pull request review event: 2
  • Pull request event: 10
  • Fork event: 74
Last Year
  • Create event: 4
  • Release event: 1
  • Issues event: 40
  • Watch event: 418
  • Issue comment event: 90
  • Push event: 21
  • Pull request review comment event: 2
  • Pull request review event: 2
  • Pull request event: 10
  • Fork event: 74

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 442
  • Total Committers: 29
  • Avg Commits per committer: 15.241
  • Development Distribution Score (DDS): 0.124
Past Year
  • Commits: 16
  • Committers: 5
  • Avg Commits per committer: 3.2
  • Development Distribution Score (DDS): 0.563
Top Committers
Name Email Commits
Thomas Müller t****4@g****t 387
Jianfei (Ventus) Guo f****s@g****m 8
Solonets s****s@g****m 7
Johnny j****4@g****m 6
hturki t****m@g****m 5
JC 2****1 4
Wenda Zhou w****7@n****u 2
James Sharpe j****e@z****m 2
christoph c****0@g****m 1
Aaron Miller a****3@g****m 1
Ana Dodik 5****k 1
Cedric CHEDALEUX c****x@o****m 1
Crazyang i****g@g****m 1
David Holtz 5****z 1
mgt m****t@o****g 1
hiyyg 6****g 1
dingchenjing d****g@s****m 1
Vladyslav Zavadskyi 1****y 1
Vasu Agrawal v****l@f****m 1
Udon u****7@g****m 1
Thierry CANTENOT t****t@g****m 1
Philippe Weier p****r@g****m 1
Matthias Blaicher m****s@b****m 1
Li Chenchen 3****t 1
Kacper Kania k****a@g****m 1
JamesPerlman j****l@g****m 1
Ilya i****v@g****m 1
Ikko Eltociear Ashimine e****r@g****m 1
Francesco Ferroni f****1@g****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 134
  • Total pull requests: 38
  • Average time to close issues: 28 days
  • Average time to close pull requests: about 11 hours
  • Total issue authors: 124
  • Total pull request authors: 13
  • Average comments per issue: 3.04
  • Average comments per pull request: 1.71
  • Merged pull requests: 32
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 34
  • Pull requests: 8
  • Average time to close issues: 4 days
  • Average time to close pull requests: about 15 hours
  • Issue authors: 31
  • Pull request authors: 3
  • Average comments per issue: 1.32
  • Average comments per pull request: 4.13
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ashawkey (3)
  • dumpinfo (3)
  • half-potato (2)
  • jc211 (2)
  • johnnynunez (2)
  • ZJLi2013 (2)
  • BinglunWang (2)
  • julcst (2)
  • theFilipko (2)
  • wzds2015 (2)
  • kommander (2)
  • HLJT (1)
  • evearmadillo (1)
  • mingxuancai (1)
  • lzhnb (1)
Pull Request Authors
  • Tom94 (24)
  • johnnynunez (3)
  • amiller27 (2)
  • dmholtz (2)
  • SusanLiu0709 (1)
  • fedeceola (1)
  • ventusff (1)
  • Enter-tainer (1)
  • JamesPerlman (1)
  • fantasy-fish (1)
  • hiyyg (1)
  • mickeyding (1)
  • lsongx (1)
  • pierre-wilmot (1)
  • VasuAgrawal (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

bindings/torch/setup.py pypi
.github/workflows/main.yml actions
  • actions/checkout v3 composite