mscclpp
MSCCL++: A GPU-driven communication stack for scalable AI applications
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.8%) to scientific vocabulary
Keywords from Contributors
Repository
MSCCL++: A GPU-driven communication stack for scalable AI applications
Basic Info
- Host: GitHub
- Owner: microsoft
- License: mit
- Language: C++
- Default Branch: main
- Homepage: https://microsoft.github.io/mscclpp/
- Size: 7.81 MB
Statistics
- Stars: 407
- Watchers: 22
- Forks: 65
- Open Issues: 35
- Releases: 12
Metadata Files
README.md
MSCCL++
| Testing Pipelines        | Build Status |
|--------------------------|--------------|
| Unit Tests (CUDA)        |              |
| Integration Tests (CUDA) |              |
| Integration Tests (ROCm) |              |
A GPU-driven communication stack for scalable AI applications.
| Quick Start | Tutorials | API Reference | Paper |
Overview
MSCCL++ redefines inter-GPU communication interfaces, thereby delivering a highly efficient and customizable communication stack for distributed GPU applications. Its design is specifically tailored to accommodate diverse performance optimization scenarios often encountered in state-of-the-art AI applications. The figure below provides a high-level overview of MSCCL++ abstractions in CUDA, C, and Python.
The following are the key features of MSCCL++.
Light-weight and multi-layer abstractions. MSCCL++ provides communication abstractions at the lowest level, close to the hardware, and at the highest level, close to the application API. The lowest level of abstraction is ultra light-weight, enabling a user to implement the data-movement logic of a collective operation such as AllReduce inside a GPU kernel extremely efficiently, without worrying about the memory ordering of different operations. The modularity of MSCCL++ lets a user construct its building blocks at a high level of abstraction in Python and feed them to a CUDA kernel, improving productivity.
1-sided, 0-copy, synchronous and asynchronous abstractions. MSCCL++ provides fine-grained synchronous and asynchronous 0-copy, 1-sided abstractions for communication primitives such as put(), get(), signal(), flush(), and wait(). The 1-sided abstractions allow a user to asynchronously put() their data on the remote GPU as soon as it is ready, without requiring the remote side to issue any receive instruction. This lets users easily implement flexible communication logic, such as overlapping communication with computation or building customized collective communication algorithms, without worrying about potential deadlocks. Additionally, the 0-copy capability enables MSCCL++ to transfer data directly between user buffers without intermediate internal buffers, which saves GPU bandwidth and memory capacity.
Unified abstractions for different interconnection hardware. MSCCL++ provides consistent abstractions regardless of the location of the remote GPU (on the local node or on a remote node) and regardless of the underlying link (NVLink/xGMI or InfiniBand). This simplifies inter-GPU communication code, which is otherwise often complex and error-prone due to the memory ordering of GPU/CPU reads and writes.
Performance
While the power of MSCCL++ is fully realized with application-specific optimization, it delivers performance benefits even for plain collective communication operations. The following figures compare the AllReduce throughput of MSCCL++ against NCCL 2.19.3, benchmarked on two Azure NDmv4 nodes (8 A100-80G GPUs per node).
The key motivation behind these results is scaling LLM inference using tensor parallelism. LLM requests are usually executed in two phases: prompt processing and token sampling. Prompt processing uses a large batch size, usually equal to the request's context length, so the corresponding AllReduce size is len_context*dim_hidden*sizeof(fp16). For a context length of 2048 with a hidden dimension of 12288 (GPT-3 size), the AllReduce size is 48MB. Token sampling uses a smaller batch size corresponding to the number of concurrent user requests in the system, so the AllReduce size is batch_size*dim_hidden*sizeof(fp16). For a concurrency of 16 users, the AllReduce size is 384KB. As the figures below demonstrate, MSCCL++ provides a significant speedup over NCCL, which is crucial for serving LLMs efficiently at large scale.
Key Concepts
The following highlights key concepts of MSCCL++.
On-GPU Communication Interfaces: Channels
MSCCL++ provides peer-to-peer communication methods between GPUs. A peer-to-peer connection between two GPUs is called a Channel. Channels are constructed by MSCCL++ host-side interfaces and copied to GPUs during initialization. Channels provide GPU-side interfaces, meaning that all communication methods are defined as device functions to be called from GPU kernel code. For example, the put() method in the following example copies 1 KB of data from the local GPU to a remote GPU.
```cpp
// `PortChannel` will be explained in the following section.
__device__ mscclpp::DeviceHandle<mscclpp::PortChannel> channel;

__global__ void gpuKernel() {
  ...
  // Only one thread is needed for this method.
  channel.put(/*dstOffset=*/ 0, /*srcOffset=*/ 0, /*size=*/ 1024);
  ...
}
```
MSCCL++ also provides efficient synchronization methods, signal(), flush(), and wait(). For example, we can implement a simple barrier between two ranks (peer-to-peer connected through channel) as follows. Explanation of each method is inlined.
```cpp
// Only one thread is needed for this function.
__device__ void barrier() {
  // Inform the peer GPU that I have arrived at this point and
  // that all previous memory operations are done.
  channel.signal();
  // One may call flush() to make sure all previous channel operations
  // are complete from the local device's perspective.
  // flush() is unnecessary in this example.
  channel.flush();
  // Wait for the peer GPU to call signal().
  channel.wait();
  // Now this thread is synchronized with the remote GPU's thread.
  // Users may call a local synchronization function (e.g., __syncthreads())
  // to synchronize other local threads with the remote side as well.
}
```
MSCCL++ provides consistent interfaces, i.e., the above interfaces are used regardless of the location of the remote GPU (either on the local node or on a remote node) or the underlying link (either NVLink/xGMI or InfiniBand).
PortChannel and MemoryChannel
MSCCL++ delivers two types of channels, PortChannel and MemoryChannel. PortChannel provides port-mapping-based data copy and synchronization methods. When called, these methods send/receive a signal to/from a host-side proxy, which triggers (R)DMA (such as cudaMemcpy* or ibv_post_send) or issues synchronization calls (such as cudaStreamSynchronize or ibv_poll_cq). Since the key functionality runs on the proxy, PortChannel requires only a single GPU thread to call its methods. See all PortChannel methods here.
MemoryChannel, on the other hand, provides memory-mapping-based copy and synchronization methods. When called, these methods directly use GPU threads to read/write from/to the remote GPU's memory space. Compared with PortChannel, MemoryChannel is especially performant in low-latency scenarios, though it may need many GPU threads calling the copy methods concurrently to achieve high copy bandwidth. See all MemoryChannel methods here.
Host-Side Communication Proxy
MSCCL++ provides a default implementation of a host-side proxy for PortChannels: a background host thread that busy-polls triggers from GPUs and handles them accordingly. For example, the following is typical host-side code for MSCCL++.
```cpp
// Bootstrap: initialize control-plane connections between all ranks
auto bootstrap = std::make_shared<mscclpp::TcpBootstrap>(rank, world_size);
// Create a communicator for connection setup
mscclpp::Communicator comm(bootstrap);
// Set up connections here using `comm`
...
// Construct the default proxy
mscclpp::ProxyService proxyService;
// Start the proxy
proxyService.startProxy();
// Run the user application, i.e., launch GPU kernels here
...
// Stop the proxy after the application is finished
proxyService.stopProxy();
```
While the default implementation already supports any kind of communication, MSCCL++ also lets users easily implement their own customized proxies for further optimization. For example, the following code redefines how triggers from GPUs are interpreted.
```cpp
// Proxy FIFO is obtained from mscclpp::Proxy on the host and copied to the device.
__device__ mscclpp::FifoDeviceHandle fifo;

__global__ void gpuKernel() {
  ...
  // Only one thread is needed for the following.
  mscclpp::ProxyTrigger trigger;
  // Send a custom request: "1"
  trigger.fst = 1;
  fifo.push(trigger);
  // Send a custom request: "2"
  trigger.fst = 2;
  fifo.push(trigger);
  // Send a custom request: "0xdeadbeef"
  trigger.fst = 0xdeadbeef;
  fifo.push(trigger);
  ...
}
```

```cpp
// Host-side custom proxy service
class CustomProxyService {
 private:
  mscclpp::Proxy proxy;

 public:
  CustomProxyService()
      : proxy(
            // Custom trigger handler
            [&](mscclpp::ProxyTrigger trigger) {
              if (trigger.fst == 1) {
                // Handle request "1"
              } else if (trigger.fst == 2) {
                // Handle request "2"
              } else if (trigger.fst == 0xdeadbeef) {
                // Handle request "0xdeadbeef"
              }
            },
            // Empty proxy initializer
            [&]() {}) {}
  void startProxy() { proxy.start(); }
  void stopProxy() { proxy.stop(); }
};
```
Customized proxies can conduct a series of pre-defined data transfers in response to a single trigger from the GPU at runtime. This is more efficient than sending one trigger per data transfer.
Python Interfaces
MSCCL++ provides Python bindings and interfaces, which simplifies integration with Python applications.
Projects using MSCCL++
MSCCL++ is being used in many amazing projects to power their communication needs. Some projects include:
- ARK: A GPU-driven system framework for scalable AI applications [Paper link], Accepted at NSDI 2023
- FlashInfer: A Kernel Library for LLM Serving
- ForestColl: Throughput-Optimal Collective Communications on Heterogeneous Network Fabrics [Paper link]
- LMDeploy: A toolkit for compressing, deploying, and serving LLMs
- Nanoflow: A throughput-oriented high-performance serving framework for LLMs [Paper link]
- ROCm Communication Collectives Library (RCCL)
- Splitwise: Efficient generative LLM inference using phase splitting [Paper link], Accepted at ISCA 2024, Best Paper Nominee
- TVM: Open deep learning compiler stack for cpu, gpu and specialized accelerators
- SGLang: A fast serving framework for large language models and vision language models.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.
Citation
If you use this project for your work, please cite our paper:
```bibtex
@misc{ShahJLRHJMSCZDMY2025,
      title={MSCCL++: Rethinking GPU Communication Abstractions for Cutting-edge AI Applications},
      author={Aashaka Shah and Abhinav Jangda and Binyang Li and Caio Rocha and Changho Hwang and Jithin Jose and Madan Musuvathi and Olli Saarikivi and Peng Cheng and Qinghua Zhou and Roshan Dathathri and Saeed Maleki and Ziyue Yang},
      year={2025},
      eprint={2504.09014},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2504.09014},
}
```
Owner
- Name: Microsoft
- Login: microsoft
- Kind: organization
- Email: opensource@microsoft.com
- Location: Redmond, WA
- Website: https://opensource.microsoft.com
- Twitter: OpenAtMicrosoft
- Repositories: 7,257
- Profile: https://github.com/microsoft
Open source projects and samples from Microsoft
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: >-
  MSCCL++: Rethinking GPU Communication Abstractions for
  Cutting-edge AI Applications
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Aashaka
    family-names: Shah
    affiliation: Microsoft Research
  - given-names: Abhinav
    family-names: Jangda
    affiliation: Microsoft Research
  - given-names: Binyang
    family-names: Li
    affiliation: Microsoft Azure
  - given-names: Caio
    family-names: Rocha
    affiliation: Microsoft Azure
  - given-names: Changho
    family-names: Hwang
    affiliation: Microsoft Research
  - given-names: Jithin
    family-names: Jose
    affiliation: Microsoft Azure
  - given-names: Madan
    family-names: Musuvathi
    affiliation: Microsoft Research
  - given-names: Olli
    family-names: Saarikivi
    affiliation: Microsoft Research
  - given-names: Peng
    family-names: Cheng
    affiliation: Microsoft Research
  - given-names: Qinghua
    family-names: Zhou
    affiliation: Microsoft Azure
  - given-names: Roshan
    family-names: Dathathri
    affiliation: Microsoft Research
  - given-names: Saeed
    family-names: Maleki
    affiliation: Microsoft Research
  - given-names: Ziyue
    family-names: Yang
    affiliation: Microsoft Research
identifiers:
  - type: other
    value: 'arxiv:2504.09014'
repository-code: 'https://github.com/microsoft/mscclpp'
url: 'https://microsoft.github.io/mscclpp/index.html'
abstract: >-
  MSCCL++ redefines the interface for inter-GPU communication, thereby
  delivering a highly efficient and customizable communication stack
  tailored for distributed GPU applications.
license: MIT
license-url: https://github.com/microsoft/mscclpp/blob/main/LICENSE
```
Committers
Last synced: 11 months ago
Top Committers
| Name | Email / Login | Commits |
|---|---|---|
| Changho Hwang | c****g@m****m | 196 |
| Saeed Maleki | s****l@m****m | 146 |
| Binyang Li | b****i@m****m | 117 |
| Olli Saarikivi | o****k@m****m | 56 |
| Caio Rocha | 1****r | 29 |
| Crutcher Dunnavant | c****r@o****m | 23 |
| Ziyue Yang | z****g@m****m | 18 |
| Saeed Maleki | 3****i | 15 |
| Madan Musuvathi | m****m@m****m | 14 |
| Crutcher Dunnavant | c****r@g****m | 8 |
| Qinghua Zhou | q****u@m****m | 7 |
| v-xiaoxshi | v****i@m****m | 6 |
| Roshan Dathathri | r****i@m****m | 5 |
| Felipe Petroski Such | f****e@o****m | 5 |
| Microsoft Open Source | m****e | 5 |
| aashaka | a****6@g****m | 4 |
| Saeed Maleki (saemal) | s****l@a****l | 4 |
| Angelica Moreira | 4****a | 2 |
| Bin Wang | 5****w | 1 |
| David Sidler | d****r@a****m | 1 |
| Jeff Rasley | j****5@g****m | 1 |
| Nusrat Islam | N****m@a****m | 1 |
| Pedram Alizadeh | p****a@a****m | 1 |
| SreevatsaAnantharamu | s****g@g****m | 1 |
| Ubuntu | a****r@c****t | 1 |
| Yang Wang | y****1@m****m | 1 |
| lambda7xx | l****x@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 79
- Total pull requests: 506
- Average time to close issues: 4 months
- Average time to close pull requests: 6 days
- Total issue authors: 34
- Total pull request authors: 26
- Average comments per issue: 1.89
- Average comments per pull request: 1.07
- Merged pull requests: 386
- Bot issues: 1
- Bot pull requests: 0
Past Year
- Issues: 44
- Pull requests: 334
- Average time to close issues: 12 days
- Average time to close pull requests: 5 days
- Issue authors: 24
- Pull request authors: 18
- Average comments per issue: 2.18
- Average comments per pull request: 1.52
- Merged pull requests: 248
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- liangyuRain (13)
- chhwang (12)
- FC-Li (6)
- saeedmaleki (6)
- Binyang2014 (4)
- TonyWu199 (4)
- rajagond (3)
- jhlee508 (2)
- cubele (2)
- chenhongyu2048 (2)
- zyksir (2)
- hidva (1)
- caiomcbr (1)
- corey-derochie-amd (1)
- yq33victor (1)
Pull Request Authors
- chhwang (157)
- Binyang2014 (155)
- caiomcbr (75)
- yzygitzh (21)
- saeedmaleki (19)
- seagater (16)
- SreevatsaAnantharamu (10)
- nusislam (7)
- crutcher (6)
- roshandathathri (5)
- pash-msft (4)
- liangyuRain (4)
- aashaka (3)
- olsaarik (3)
- angelica-moreira (2)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/checkout v4 composite
- github/codeql-action/analyze v2 composite
- github/codeql-action/init v2 composite
- actions/checkout v4 composite
- actions/checkout v4 composite
- actions/setup-python v4 composite
- actions/checkout v4 composite
- cuda-python *
- cupy-cuda11x *
- matplotlib *
- mpi4py *
- netifaces *
- numpy *
- prettytable *
- pytest *
- cuda-python *
- cupy-cuda12x *
- matplotlib *
- mpi4py *
- netifaces *
- numpy *
- prettytable *
- pytest *