Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 15 of 222 committers (6.8%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.8%) to scientific vocabulary
Repository
CUDA Core Compute Libraries
Basic Info
- Host: GitHub
- Owner: NVIDIA
- License: other
- Language: C++
- Default Branch: main
- Homepage: https://nvidia.github.io/cccl/
- Size: 132 MB
Statistics
- Stars: 1,891
- Watchers: 35
- Forks: 263
- Open Issues: 1,199
- Releases: 19
Metadata Files
README.md
|Contributor Guide|Dev Containers|Discord|Godbolt|GitHub Project|Documentation|
|-|-|-|-|-|-|
CUDA Core Compute Libraries (CCCL)
Welcome to the CUDA Core Compute Libraries (CCCL) where our mission is to make CUDA more delightful.
This repository unifies three essential CUDA C++ libraries into a single, convenient repository:
- Thrust
- CUB
- libcudacxx
The goal of CCCL is to provide CUDA C++ developers with building blocks that make it easier to write safe and efficient code. Bringing these libraries together streamlines your development process and broadens your ability to leverage the power of CUDA C++. For more information about the decision to unify these projects, see the announcement here.
Overview
The concept for the CUDA Core Compute Libraries (CCCL) grew organically out of the Thrust, CUB, and libcudacxx projects that were developed independently over the years with a similar goal: to provide high-quality, high-performance, and easy-to-use C++ abstractions for CUDA developers. Naturally, there was a lot of overlap among the three projects, and it became clear the community would be better served by unifying them into a single repository.
Thrust is the C++ parallel algorithms library which inspired the introduction of parallel algorithms to the C++ Standard Library. Thrust's high-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs via configurable backends that allow using multiple parallel programming frameworks (such as CUDA, TBB, and OpenMP).
CUB is a lower-level, CUDA-specific library designed for speed-of-light parallel algorithms across all GPU architectures. In addition to device-wide algorithms, it provides cooperative algorithms like block-wide reduction and warp-wide scan, providing CUDA kernel developers with building blocks to create speed-of-light, custom kernels.
libcudacxx is the CUDA C++ Standard Library. It provides an implementation of the C++ Standard Library that works in both host and device code. Additionally, it provides abstractions for CUDA-specific hardware features like synchronization primitives, cache control, atomics, and more.
The main goal of CCCL is to fill a similar role that the Standard C++ Library fills for Standard C++: provide general-purpose, speed-of-light tools to CUDA C++ developers, allowing them to focus on solving the problems that matter. Unifying these projects is the first step towards realizing that goal.
Example
This is a simple example demonstrating the use of CCCL functionality from Thrust, CUB, and libcudacxx.
It shows how to use Thrust/CUB/libcudacxx to implement a simple parallel reduction kernel.
Each thread block computes the sum of a subset of the array using cub::BlockReduce.
The sum of each block is then reduced to a single value using an atomic add via cuda::atomic_ref from libcudacxx.
It then shows how the same reduction can be done using Thrust's reduce algorithm and compares the results.
```cpp
#include <cub/block/block_reduce.cuh>
#include <cuda/atomic>
#include <cuda/cmath>
#include <cuda/std/span>
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>
#include <thrust/reduce.h>
#include <cassert>
#include <cstdio>
#include <iostream>

template <int block_size>
__global__ void reduce(cuda::std::span<int const> data, cuda::std::span<int> result)
{
  // Each block cooperatively sums its slice of the input with cub::BlockReduce.
  using BlockReduce = cub::BlockReduce<int, block_size>;
  __shared__ typename BlockReduce::TempStorage temp_storage;

  int const index = threadIdx.x + blockIdx.x * blockDim.x;
  int sum = 0;
  if (index < data.size()) { sum += data[index]; }
  sum = BlockReduce(temp_storage).Sum(sum);

  // Thread 0 of each block atomically adds the block's sum to the final result.
  if (threadIdx.x == 0) {
    cuda::atomic_ref<int, cuda::thread_scope_device> atomic_result(result.front());
    atomic_result.fetch_add(sum, cuda::memory_order_relaxed);
  }
}

int main() {
  // Allocate and initialize input data
  int const N = 1000;
  thrust::device_vector<int> data(N, 1);

  // Allocate output data
  thrust::device_vector<int> kernel_result(1);

  // Compute the sum reduction of `data` using a custom kernel
  constexpr int block_size = 256;
  int const num_blocks = cuda::ceil_div(N, block_size);
  reduce<block_size><<<num_blocks, block_size>>>(
    cuda::std::span<int const>(thrust::raw_pointer_cast(data.data()), data.size()),
    cuda::std::span<int>(thrust::raw_pointer_cast(kernel_result.data()), 1));

  auto const err = cudaDeviceSynchronize();
  if (err != cudaSuccess) { std::cout << "Error: " << cudaGetErrorString(err) << std::endl; return -1; }

  int const custom_result = kernel_result[0];

  // Compute the same sum reduction using Thrust
  int const thrust_result = thrust::reduce(thrust::device, data.begin(), data.end(), 0);

  // Ensure the two solutions are identical
  std::printf("Custom kernel sum: %d\n", custom_result);
  std::printf("Thrust reduce sum: %d\n", thrust_result);
  assert(kernel_result[0] == thrust_result);
  return 0;
}
```
Getting Started
Users
Everything in CCCL is header-only. Therefore, users need only concern themselves with how they get the header files and how they incorporate them into their build system.
CUDA Toolkit
The easiest way to get started using CCCL is via the CUDA Toolkit which includes the CCCL headers.
When you compile with nvcc, it automatically adds CCCL headers to your include path so you can simply #include any CCCL header in your code with no additional configuration required.
If compiling with another compiler, you will need to update your build system's include search path to point to the CCCL headers in your CTK install (e.g., /usr/local/cuda/include).
```cpp
#include <thrust/device_vector.h>
#include <cub/cub.cuh>
#include <cuda/std/atomic>
```
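To make that concrete, compilation might look like the following (a minimal sketch; `main.cu`/`main.cpp` are hypothetical file names and the include path assumes a default CTK install location):

```bash
# With nvcc, the CCCL headers bundled with the CUDA Toolkit are found automatically:
nvcc -std=c++17 main.cu -o main

# With another host compiler (host-only code), add the CTK include path explicitly:
g++ -std=c++17 -I/usr/local/cuda/include main.cpp -o main
```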
GitHub
Users who want to stay on the cutting edge of CCCL development are encouraged to use CCCL from GitHub. Using a newer version of CCCL with an older version of the CUDA Toolkit is supported, but not the other way around. For complete information on compatibility between CCCL and the CUDA Toolkit, see our platform support.
Everything in CCCL is header-only, so cloning and including it in a simple project is as easy as the following:
```bash
git clone https://github.com/NVIDIA/cccl.git
nvcc -Icccl/thrust -Icccl/libcudacxx/include -Icccl/cub main.cu -o main
```
Note: Use `-I` and not `-isystem` to avoid collisions with the CCCL headers implicitly included by `nvcc` from the CUDA Toolkit. All CCCL headers use `#pragma system_header` to ensure warnings will still be silenced as if using `-isystem`; see https://github.com/NVIDIA/cccl/issues/527 for more information.
Installation
The default CMake options generate only installation rules, so the familiar `cmake . && make install` workflow just works:
```bash
git clone https://github.com/NVIDIA/cccl.git
cd cccl
cmake . -DCMAKE_INSTALL_PREFIX=/usr/local
make install
```
A convenience script is also provided:
```bash
ci/install_cccl.sh /usr/local
```
Advanced installation using presets
CMake presets are also available with options for including experimental libraries:
```bash
cmake --preset install -DCMAKE_INSTALL_PREFIX=/usr/local
cmake --build --preset install --target install
```
Use the `install-unstable` preset to include experimental libraries, or `install-unstable-only` to install only experimental libraries.
Conda
CCCL also provides conda packages of each release via the conda-forge channel:
```bash
conda config --add channels conda-forge
conda install cccl
```
This will install the latest CCCL to the conda environment's `$CONDA_PREFIX/include/` and `$CONDA_PREFIX/lib/cmake/` directories.
It is discoverable by CMake via `find_package(CCCL)` and can be used by any compilers in the conda environment.
For more information, see this introduction to conda-forge.
If you want to use the same CCCL version that shipped with a particular CUDA Toolkit, e.g. CUDA 12.4, you can install CCCL with:
```bash
conda config --add channels conda-forge
conda install cuda-cccl cuda-version=12.4
```
The `cuda-cccl` metapackage installs the `cccl` version that shipped with the CUDA Toolkit corresponding to `cuda-version`.
If you wish to update to the latest `cccl` after installing `cuda-cccl`, uninstall `cuda-cccl` before updating `cccl`:
```bash
conda uninstall cuda-cccl
conda install -c conda-forge cccl
```
Note: There are also conda packages with names like `cuda-cccl_linux-64`. Those packages contain the CCCL versions shipped as part of the CUDA Toolkit, but are designed for internal use by the CUDA Toolkit. Install `cccl` or `cuda-cccl` instead, for compatibility with conda compilers. For more information, see the cccl conda-forge recipe.
CMake Integration
CCCL uses CMake for all build and installation infrastructure, including tests as well as targets to link against in other CMake projects. Therefore, CMake is the recommended way to integrate CCCL into another project.
For a complete example of how to do this using CMake Package Manager see our basic example project.
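For orientation, a consumer's CMakeLists.txt might look roughly like this (a minimal sketch, assuming an installed CCCL discoverable by `find_package` and the `CCCL::CCCL` target used by the example project linked above):

```cmake
cmake_minimum_required(VERSION 3.21)
project(cccl_example LANGUAGES CXX CUDA)

# Locate an installed CCCL (e.g., from `make install` or the conda package)
find_package(CCCL REQUIRED)

add_executable(example main.cu)

# The CCCL::CCCL target carries the include paths for Thrust, CUB, and libcudacxx
target_link_libraries(example PRIVATE CCCL::CCCL)
```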
Other build systems should work, but only CMake is tested. Contributions to simplify integrating CCCL into other build systems are welcome.
Contributors
Interested in contributing to making CCCL better? Check out our Contributing Guide for a comprehensive overview of everything you need to know to set up your development environment, make changes, run tests, and submit a PR.
Platform Support
Objective: This section describes where users can expect CCCL to compile and run successfully.
In general, CCCL should work everywhere the CUDA Toolkit is supported; however, the devil is in the details. The sections below describe the details of support and testing for different versions of the CUDA Toolkit, host compilers, and C++ dialects.
CUDA Toolkit (CTK) Compatibility
Summary:
- The latest version of CCCL is backward compatible with the current and preceding CTK major version series
- CCCL is never forward compatible with any version of the CTK. Always use the same or a newer CCCL than what is included with your CTK.
- Minor version CCCL upgrades won't break existing code, but new features may not support all CTK versions
CCCL users are encouraged to capitalize on the latest enhancements and "live at head" by always using the newest version of CCCL. For a seamless experience, you can upgrade CCCL independently of the entire CUDA Toolkit. This is possible because CCCL maintains backward compatibility with the latest patch release of every minor CTK release from both the current and previous major version series. In some exceptional cases, the minimum supported minor version of the CUDA Toolkit release may need to be newer than the oldest release within its major version series.
When a new major CTK is released, we drop support for the oldest supported major version.
| CCCL Version | Supports CUDA Toolkit Version |
|--------------|------------------------------------------------|
| 2.x | 11.1 - 11.8, 12.x (only latest patch releases) |
| 3.x | 12.x, 13.x (only latest patch releases) |
Well-behaved code using the latest CCCL should compile and run successfully with any supported CTK version. Exceptions may occur for new features that depend on new CTK features, so those features would not work on older versions of the CTK.
Users can integrate a newer version of CCCL into an older CTK, but not the other way around. This means an older version of CCCL is not compatible with a newer CTK. In other words, CCCL is never forward compatible with the CUDA Toolkit.
The table below summarizes compatibility of the CTK and CCCL:
| CTK Version | Included CCCL Version | Desired CCCL | Supported? | Notes |
|:-----------:|:---------------------:|:--------------------:|:----------:|:--------------------------------------------------------:|
| CTK X.Y | CCCL MAJOR.MINOR | CCCL MAJOR.MINOR+n | ✅ | Some new features might not work |
| CTK X.Y | CCCL MAJOR.MINOR | CCCL MAJOR+1.MINOR | ✅ | Possible breaks; some new features might not be available|
| CTK X.Y | CCCL MAJOR.MINOR | CCCL MAJOR+2.MINOR | ❌ | CCCL supports only two CTK major versions |
| CTK X.Y | CCCL MAJOR.MINOR | CCCL MAJOR.MINOR-n | ❌ | CCCL isn't forward compatible |
| CTK X.Y | CCCL MAJOR.MINOR | CCCL MAJOR-n.MINOR | ❌ | CCCL isn't forward compatible |
For more information on CCCL versioning, API/ABI compatibility, and breaking changes see the Versioning section below.
Operating Systems
Unless otherwise specified, CCCL supports all the same operating systems as the CUDA Toolkit, which are documented here:
- Linux
- Windows
Host Compilers
Unless otherwise specified, CCCL supports the same host compilers as the latest CUDA Toolkit, which are documented here:
- Linux
- Windows
For GCC on Linux, at least 7.x is required.
When using older CUDA Toolkits, we also only support the host compilers of the latest CUDA Toolkit, but at least the most recent host compiler of any supported older CUDA Toolkit.
We may retain support of additional compilers and will accept corresponding patches from the community with reasonable fixes. But we will not invest significant time in triaging or fixing issues for older compilers.
In the spirit of "You only support what you test", see our CI Overview for more information on exactly what we test.
GPU Architectures
While some features may be specific to certain architectures, CCCL generally supports all GPU architectures that are supported by the current major CUDA Toolkit (CTK).
To be clear, while CCCL supports compilation with two CTK major versions, this does not mean we continue to support all architectures from the older CTK.
For example, CCCL 3.0 supports compiling with CTK 12.x and 13.x where
- CUDA Toolkit 13.x supports >=sm_75
- CUDA Toolkit 12.x supports >=sm_50
So even when compiling CCCL 3.0 with CTK 12.x, only the architectures supported by the current CTK (13.x) are available (sm_75+).
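In compiler-flag terms, this plays out roughly as follows (a hedged sketch; `sm_60` is just an example of an architecture dropped by CTK 13.x):

```bash
# CCCL 3.0 follows the architecture floor of the current major CTK (sm_75+),
# even when compiling with CTK 12.x:
nvcc -arch=sm_75 main.cu -o main   # supported
nvcc -arch=sm_60 main.cu -o main   # not supported by CCCL 3.0, even though CTK 12.x accepts sm_60
```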
C++ Dialects
- C++17
- C++20
Testing Strategy
CCCL's testing strategy strikes a balance between testing as many configurations as possible and maintaining reasonable CI times.
For CUDA Toolkit versions, testing is done against both the oldest and the newest supported versions. For instance, if the latest version of the CUDA Toolkit is 12.6, tests are conducted against 11.1 and 12.6. For each CUDA version, builds are completed against all supported host compilers with all supported C++ dialects.
The testing strategy and matrix are constantly evolving.
The matrix defined in the ci/matrix.yaml file is the definitive source of truth.
For more information about our CI pipeline, see here.
Versioning
Objective: This section describes how CCCL is versioned, API/ABI stability guarantees, and compatibility guidelines to minimize upgrade headaches.
Summary
- The entirety of CCCL's API shares a common semantic version across all components
- Only the most recently released version is supported and fixes are not backported to prior releases
- API breaking changes and incrementing CCCL's major version will only coincide with a new major version release of the CUDA Toolkit
- Not all source breaking changes are considered breaking changes of the public API that warrant bumping the major version number
- Do not rely on ABI stability of entities in the cub:: or thrust:: namespaces
- ABI breaking changes for symbols in the cuda:: namespace may happen at any time, but will be reflected by incrementing the ABI version which is embedded in an inline namespace for all cuda:: symbols. Multiple ABI versions may be supported concurrently.
Note: Prior to merging Thrust, CUB, and libcudacxx into this repository, each library was independently versioned according to semantic versioning. Starting with the 2.1 release, all three libraries synchronized their release versions in their separate repositories. Moving forward, CCCL will continue to be released under a single semantic version, with 2.2.0 being the first release from the nvidia/cccl repository.
Breaking Change
A Breaking Change is a change to explicitly supported functionality between released versions that would require a user to do work in order to upgrade to the newer version.
In the limit, any change has the potential to break someone somewhere. As a result, not all possible source breaking changes are considered Breaking Changes to the public API that warrant bumping the major semantic version.
The sections below describe the details of breaking changes to CCCL's API and ABI.
Application Programming Interface (API)
CCCL's public API is the entirety of the functionality intentionally exposed to provide the utility of the library.
In other words, CCCL's public API goes beyond just function signatures and includes (but is not limited to):
- The location and names of headers intended for direct inclusion in user code
- The namespaces intended for direct use in user code
- The declarations and/or definitions of functions, classes, and variables located in headers and intended for direct use in user code
- The semantics of functions, classes, and variables intended for direct use in user code
Moreover, CCCL's public API does not include any of the following:
- Any symbol prefixed with `_` or `__`
- Any symbol whose name contains `detail`, including the `detail::` namespace or a macro
- Any header file contained in a `detail/` directory or sub-directory thereof
- The header files implicitly included by any header that is part of the public API
In general, the goal is to avoid breaking anything in the public API. Such changes are made only if they offer users better performance, easier-to-understand APIs, and/or more consistent APIs.
Any breaking change to the public API will require bumping CCCL's major version number. In keeping with CUDA Minor Version Compatibility, API breaking changes and CCCL major version bumps will only occur coinciding with a new major version release of the CUDA Toolkit.
Anything not part of the public API may change at any time without warning.
API Versioning
The public API of all CCCL's components share a unified semantic version of MAJOR.MINOR.PATCH.
Only the most recently released version is supported. As a rule, features and bug fixes are not backported to previously released versions or branches.
The preferred method for querying the version is to use the `CCCL_MAJOR_VERSION`, `CCCL_MINOR_VERSION`, `CCCL_PATCH_VERSION`, and `CCCL_VERSION` macros as described below.
For backwards compatibility, the Thrust/CUB/libcudacxx version definitions are available and will always be consistent with `CCCL_VERSION`.
Note that Thrust/CUB use an MMMmmmpp scheme whereas CCCL and libcudacxx use MMMmmmppp.
| | CCCL | libcudacxx | Thrust | CUB |
|------------------------|----------------------------------------|-------------------------------------------|------------------------------|---------------------------|
| Header | <cuda/version> | <cuda/std/version> | <thrust/version.h> | <cub/version.h> |
| Major Version | CCCL_MAJOR_VERSION | _LIBCUDACXX_CUDA_API_VERSION_MAJOR | THRUST_MAJOR_VERSION | CUB_MAJOR_VERSION |
| Minor Version | CCCL_MINOR_VERSION | _LIBCUDACXX_CUDA_API_VERSION_MINOR | THRUST_MINOR_VERSION | CUB_MINOR_VERSION |
| Patch/Subminor Version | CCCL_PATCH_VERSION | _LIBCUDACXX_CUDA_API_VERSION_PATCH | THRUST_SUBMINOR_VERSION | CUB_SUBMINOR_VERSION |
| Concatenated Version | CCCL_VERSION (MMMmmmppp) | _LIBCUDACXX_CUDA_API_VERSION (MMMmmmppp)| THRUST_VERSION (MMMmmmpp) | CUB_VERSION (MMMmmmpp) |
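As a quick sanity check, these macros can be queried directly (a minimal sketch, not an official snippet; it assumes only the macros from the table above):

```cpp
#include <cuda/version> // defines CCCL_VERSION and the CCCL_*_VERSION macros
#include <cstdio>

int main()
{
  // CCCL_VERSION uses the MMMmmmppp encoding described above,
  // i.e. MAJOR * 1000000 + MINOR * 1000 + PATCH.
  std::printf("CCCL %d.%d.%d (encoded: %d)\n",
              CCCL_MAJOR_VERSION, CCCL_MINOR_VERSION, CCCL_PATCH_VERSION, CCCL_VERSION);
  return 0;
}
```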
Application Binary Interface (ABI)
The Application Binary Interface (ABI) is a set of rules for:
- How a library's components are represented in machine code
- How those components interact across different translation units

A library's ABI includes, but is not limited to:
- The mangled names of functions and types
- The size and alignment of objects and types
- The semantics of the bytes in the binary representation of an object
An ABI Breaking Change is any change that results in a change to the ABI of a function or type in the public API. For example, adding a new data member to a struct is an ABI Breaking Change as it changes the size of the type.
In CCCL, the guarantees about ABI are as follows:
- Symbols in the `thrust::` and `cub::` namespaces may break ABI at any time without warning.
- The ABI of `thrust::` and `cub::` symbols includes the CUDA architectures used for compilation. Therefore, a `thrust::` or `cub::` symbol may have a different ABI if:
  - compiled with different architectures
  - compiled as a CUDA source file (`-x cu`) vs C++ source (`-x cpp`)
- Symbols in the `cuda::` namespace may also break ABI at any time. However, `cuda::` symbols embed an ABI version number that is incremented whenever an ABI break occurs. Multiple ABI versions may be supported concurrently, and therefore users have the option to revert to a prior ABI version. For more information, see here.
Who should care about ABI?
In general, CCCL users only need to worry about ABI issues when building or using a binary artifact (like a shared library) whose API directly or indirectly includes types provided by CCCL.
For example, consider if libA.so was built using CCCL version X and its public API includes a function like:
```cpp
void foo(cuda::std::optional<int>);
```
If another library, libB.so, is compiled using CCCL version Y and uses foo from libA.so, then this can fail if there was an ABI break between version X and Y.
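To illustrate the failure mode (a hypothetical sketch; `bar` and the layout change are invented for illustration):

```cpp
// libB.cpp -- compiled against CCCL version Y
#include <cuda/std/optional>

void foo(cuda::std::optional<int>); // exported by libA.so, which was built against CCCL version X

void bar()
{
  // If the layout of cuda::std::optional<int> changed between versions X and Y,
  // the object passed here does not match what libA.so expects: undefined behavior.
  foo(cuda::std::optional<int>{42});
}
```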
Unlike with API breaking changes, ABI breaks usually do not require code changes and only require recompiling everything to use the same ABI version.
To learn more about ABI and why it is important, see What is ABI, and What Should C++ Do About It?.
Compatibility Guidelines
As mentioned above, not all possible source breaking changes constitute a Breaking Change that would require incrementing CCCL's API major version number.
Users are encouraged to adhere to the following guidelines in order to minimize the risk of disruptions from accidentally depending on parts of CCCL that are not part of the public API:
- Do not add any declarations to, or specialize any template from, the `thrust::`, `cub::`, `nv::`, or `cuda::` namespaces unless an exception is noted for a specific symbol, e.g., specializing `cuda::std::iterator_traits`.
  - Rationale: This would cause conflicts if a symbol or specialization is added with the same name.
- Do not take the address of any API in the `thrust::`, `cub::`, `cuda::`, or `nv::` namespaces.
  - Rationale: This would prevent adding overloads of these APIs.
- Do not forward declare any API in the `thrust::`, `cub::`, `cuda::`, or `nv::` namespaces.
  - Rationale: This would prevent adding overloads of these APIs.
- Do not directly reference any symbol prefixed with `_`, `__`, or with `detail` anywhere in its name, including a `detail::` namespace or macro.
  - Rationale: These symbols are for internal use only and may change at any time without warning.
- Include what you use. For every CCCL symbol that you use, directly `#include` the header file that declares that symbol. In other words, do not rely on headers implicitly included by other headers (see the sketch below).
  - Rationale: Internal includes may change at any time.
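As a sketch of the last guideline (the headers are real CCCL headers; `sum_all` is a hypothetical helper):

```cpp
// Include the header that declares each symbol you use...
#include <thrust/device_vector.h>    // thrust::device_vector
#include <thrust/reduce.h>           // thrust::reduce
#include <thrust/execution_policy.h> // thrust::device

int sum_all(thrust::device_vector<int> const& v)
{
  // ...rather than relying on one thrust header to implicitly pull in another.
  return thrust::reduce(thrust::device, v.begin(), v.end(), 0);
}
```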
Portions of this section were inspired by Abseil's Compatibility Guidelines.
Deprecation Policy
We will do our best to notify users prior to making any breaking changes to the public API, ABI, or modifying the supported platforms and compilers.
As appropriate, deprecations will come in the form of programmatic warnings which can be disabled.
The deprecation period will depend on the impact of the change, but will usually last at least 2 minor version releases.
Mapping to CTK Versions
| CCCL version | CTK version |
|--------------|-------------|
| 3.1 | 13.1 |
| 3.0 | 13.0 |
| 2.8 | 12.9 |
| 2.7 | 12.8 |
| 2.5 | 12.6 |
| 2.4 | 12.5 |
| 2.3 | 12.4 |
Test yourself: https://cuda.godbolt.org/z/K818M4Y9f
CTKs before 12.4 shipped Thrust, CUB and libcudacxx as individual libraries.
| Thrust/CUB/libcudacxx version | CTK version |
|-------------------------------|-------------|
| 2.2 | 12.3 |
| 2.1 | 12.2 |
| 2.0/2.0/1.9 | 12.1 |
| 2.0/2.0/1.9 | 12.0 |
CI Pipeline Overview
For a detailed overview of the CI pipeline, see ci-overview.md.
Related Projects
Projects that are related to CCCL's mission to make CUDA more delightful:
- cuCollections - GPU accelerated data structures like hash tables
- NVBench - Benchmarking library tailored for CUDA applications
- stdexec - Reference implementation for Senders asynchronous programming model
Projects Using CCCL
Does your project use CCCL? Open a PR to add your project to this list!
- AmgX - Multi-grid linear solver library
- ColossalAI - Tools for writing distributed deep learning models
- cuDF - Algorithms and file readers for ETL data analytics
- cuGraph - Algorithms for graph analytics
- cuML - Machine learning algorithms and primitives
- CuPy - NumPy & SciPy for GPU
- cuSOLVER - Dense and sparse linear solvers
- GooFit - Library for maximum-likelihood fits
- HeavyDB - SQL database engine
- HOOMD - Monte Carlo and molecular dynamics simulations
- HugeCTR - GPU-accelerated recommender framework
- Hydra - High-energy Physics Data Analysis
- Hypre - Multigrid linear solvers
- LightSeq - Training and inference for sequence processing and generation
- MatX - Numerical computing library using expression templates to provide efficient, Python-like syntax
- PyTorch - Tensor and neural network computations
- Qiskit - High performance simulator for quantum circuits
- QUDA - Lattice quantum chromodynamics (QCD) computations
- RAFT - Algorithms and primitives for machine learning
- TensorFlow - End-to-end platform for machine learning
- TensorRT - Deep learning inference
- tsne-cuda - Stochastic Neighborhood Embedding library
- Visualization Toolkit (VTK) - Rendering and visualization library
- XGBoost - Gradient boosting machine learning algorithms
Owner
- Name: NVIDIA Corporation
- Login: NVIDIA
- Kind: organization
- Location: 2788 San Tomas Expressway, Santa Clara, CA, 95051
- Website: https://nvidia.com
- Repositories: 342
- Profile: https://github.com/NVIDIA
Citation (CITATION.md)
# Citation Guide
## To Cite CCCL
If you use CCCL in a publication, please use citations in the following format (BibTeX entry for LaTeX):
```tex
@Manual{,
title = {{CCCL}: {CUDA} {C++} {C}ore {L}ibraries},
author = {{CCCL Development Team}},
year = {2023},
url = {https://github.com/NVIDIA/cccl},
}
```
Committers
Last synced: 8 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Jared Hoberock | j****k@g****m | 2,134 |
| Michael Schellenberger Costa | m****o@n****m | 839 |
| Allison Vacanti | a****6@g****m | 788 |
| Bryce Adelstein Lelbach aka wash | b****h@g****m | 717 |
| dumerrill | d****l@n****m | 611 |
| Jake Hemstad | j****d@n****m | 602 |
| Nathan Bell | w****l@g****m | 593 |
| Georgy Evtushenko | e****y@g****m | 490 |
| Wesley Maxey | w****y@g****m | 488 |
| Bernhard Manfred Gruber | b****r@g****m | 417 |
| Michał 'Griwes' Dominiak | g****s@g****o | 235 |
| Eric Niebler | e****r@n****m | 143 |
| Elias Stehle | 3****e | 132 |
| David Bayer | 4****r | 129 |
| Olivier Giroux | o****x@g****m | 125 |
| dumerrill | d****l@m****t | 122 |
| Federico Busato | 5****o | 111 |
| Evghenii Gaburov | e****v@n****m | 103 |
| Filipe Maia | f****a@g****m | 66 |
| Cédric Augonnet | 1****t | 57 |
| pciolkosz | p****z@n****m | 54 |
| Allard Hendriksen | a****n@n****m | 41 |
| dumerrill | d****l@D****m | 36 |
| Ashwin Srinath | 3****a | 35 |
| Duane Merrill | d****l@n****m | 32 |
| Oleksandr Pavlyk | 2****k | 29 |
| Robert Maynard | r****d@n****m | 29 |
| gonzalobg | 6****g | 26 |
| Giannis Gonidelis | g****s@h****m | 25 |
| Nader Al Awar | n****r@g****m | 24 |
| and 192 more... | | |
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 1,396
- Total pull requests: 4,156
- Average time to close issues: 5 months
- Average time to close pull requests: 7 days
- Total issue authors: 159
- Total pull request authors: 88
- Average comments per issue: 0.75
- Average comments per pull request: 3.83
- Merged pull requests: 2,977
- Bot issues: 1
- Bot pull requests: 86
Past Year
- Issues: 867
- Pull requests: 3,494
- Average time to close issues: 17 days
- Average time to close pull requests: 5 days
- Issue authors: 102
- Pull request authors: 69
- Average comments per issue: 0.55
- Average comments per pull request: 3.94
- Merged pull requests: 2,506
- Bot issues: 1
- Bot pull requests: 85
Top Authors
Issue Authors
- bernhardmgruber (164)
- jrhemstad (148)
- gevtushenko (97)
- miscco (69)
- shwina (66)
- alliepiper (65)
- fbusato (55)
- elstehle (51)
- brycelelbach (51)
- pciolkosz (45)
- NaderAlAwar (44)
- oleksandr-pavlyk (38)
- leofang (34)
- gonidelis (32)
- ahendriksen (31)
Pull Request Authors
- bernhardmgruber (824)
- miscco (709)
- davebayer (345)
- ericniebler (335)
- fbusato (293)
- alliepiper (288)
- pciolkosz (134)
- caugonnet (134)
- wmaxey (121)
- elstehle (108)
- shwina (105)
- github-actions[bot] (81)
- oleksandr-pavlyk (73)
- gonidelis (60)
- NaderAlAwar (60)
Packages
- Total packages: 5
- Total downloads: pypi 36 last-month
- Total dependent packages: 1 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 125
- Total maintainers: 4
proxy.golang.org: github.com/nvidia/cccl
- Documentation: https://pkg.go.dev/github.com/nvidia/cccl#section-documentation
- License: other
- Latest release: v3.0.2+incompatible (published 5 months ago)
proxy.golang.org: github.com/NVIDIA/cccl
- Documentation: https://pkg.go.dev/github.com/NVIDIA/cccl#section-documentation
- License: other
- Latest release: v3.0.2+incompatible (published 5 months ago)
pypi.org: cuda-cooperative
cuda.cooperative: (experimental) CUDA cooperative algorithms for Python
- Documentation: https://cuda-cooperative.readthedocs.io/
- License: Apache Software License
- Latest release: 0.0.1.dev0 (published 10 months ago)
pypi.org: cuda-parallel
cuda.parallel: (experimental) CUDA parallel algorithms for Python
- Documentation: https://cuda-parallel.readthedocs.io/
- License: Apache Software License
- Latest release: 0.0.1.dev0 (published 10 months ago)
anaconda.org: cccl
- Homepage: https://github.com/NVIDIA/cccl
- License: Apache-2.0 AND BSD-3-Clause AND BSD-2-Clause AND BSL-1.0 AND NCSA AND MIT AND LicenseRef-NVIDIA-Software-License
- Latest release: 2.3.2 (published over 1 year ago)
Dependencies
- aws-actions/configure-aws-credentials v2 composite
- actions/checkout v3 composite
- tibdex/github-app-token v1.8.0 composite
- tibdex/github-app-token v1.8.0 composite
- tibdex/github-app-token v1.8.0 composite
- tibdex/github-app-token v1.8.0 composite
- tibdex/github-app-token v1.8.0 composite
- ./cccl/.github/actions/configure_cccl_sccache * composite
- actions/checkout v3 composite
- ericwf/builder-base latest
- ericwf/gcc-5 latest
- ericwf/gcc-tot latest
- ericwf/libcxx-buildbot-base latest
- ericwf/llvm-4 latest
- ericwf/llvm-tot latest
- jekyll/jekyll 4.0 build
- ericwf/builder-base latest
- ericwf/gcc-5 latest
- ericwf/gcc-tot latest
- ericwf/libcxx-buildbot-base latest
- ericwf/llvm-4 latest
- ericwf/llvm-tot latest
- github-pages >= 0 development
- jekyll-default-layout >= 0 development
- jekyll-optional-front-matter >= 0 development
- jekyll-relative-links >= 0 development
- jekyll-remote-theme >= 0 development
- jekyll-titles-from-headings >= 0 development
- webrick >= 0 development
- just-the-docs >= 0
- github-pages >= 0 development
- jekyll-default-layout >= 0 development
- jekyll-include-cache >= 0 development
- jekyll-optional-front-matter >= 0 development
- jekyll-relative-links >= 0 development
- jekyll-titles-from-headings >= 0 development
- just-the-docs >= 0