composable-kernel

Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators

https://github.com/rocm/composable_kernel

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    3 of 116 committers (2.6%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.0%) to scientific vocabulary

Keywords from Contributors

meshes jax cryptocurrency transformer parallel interpretability sequences generic projection interactive
Last synced: 6 months ago

Repository

Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators

Basic Info
Statistics
  • Stars: 452
  • Watchers: 27
  • Forks: 228
  • Open Issues: 132
  • Releases: 20
Created over 4 years ago · Last pushed 6 months ago
Metadata Files
Readme Changelog Contributing License Citation Codeowners

README.md

Composable Kernel

[!NOTE] The published documentation is available at Composable Kernel in an organized, easy-to-read format, with search and a table of contents. The documentation source files reside in the docs folder of this repository. As with all ROCm projects, the documentation is open source. For more information on contributing to the documentation, see Contribute to ROCm documentation.

The Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for machine learning workloads across multiple architectures (GPUs, CPUs, etc.). The CK library uses general purpose kernel languages, such as HIP C++.

CK uses two concepts to achieve performance portability and code maintainability:

  • A tile-based programming model
  • Algorithm complexity reduction for complex machine learning (ML) operators. This uses an innovative technique called Tensor Coordinate Transformation.


The current CK library is structured into four layers:

  • Templated Tile Operators
  • Templated Kernel and Invoker
  • Instantiated Kernel and Invoker
  • Client API


General information

CK is released under the MIT license.

Building CK

We recommend building CK inside Docker containers, which include all necessary packages. Pre-built Docker images are available on DockerHub.

  1. To build a new Docker image, use the Dockerfile provided with the source code:

    ```bash
    DOCKER_BUILDKIT=1 docker build -t ck:latest -f Dockerfile .
    ```

  2. Launch the Docker container:

    ```bash
    docker run \
      -it \
      --privileged \
      --group-add sudo \
      -w /root/workspace \
      -v ${PATH_TO_LOCAL_WORKSPACE}:/root/workspace \
      ck:latest \
      /bin/bash
    ```

  3. Clone CK source code from the GitHub repository and start the build:

    ```bash
    git clone https://github.com/ROCm/composable_kernel.git && \
    cd composable_kernel && \
    mkdir build && \
    cd build
    ```

    You must set the GPU_TARGETS macro to specify the GPU target architecture(s) you want to run CK on. You can specify single or multiple architectures. If you specify multiple architectures, use a semicolon between each; for example, gfx908;gfx90a;gfx942.

    ```bash
    cmake \
      -D CMAKE_PREFIX_PATH=/opt/rocm \
      -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
      -D CMAKE_BUILD_TYPE=Release \
      -D GPU_TARGETS="gfx908;gfx90a" \
      ..
    ```

    If you don't set GPU_TARGETS on the cmake command line, CK is built for all GPU targets supported by the current compiler, which may take a long time. Tests and examples are only built if GPU_TARGETS is set explicitly on the cmake command line.

    NOTE: If you set GPU_TARGETS to a list of architectures, the build only works if the architectures are similar, e.g., gfx908;gfx90a, or gfx1100;gfx1101;gfx1102. To build the library for a list of dissimilar architectures, use the GPU_ARCHS build argument instead, for example GPU_ARCHS="gfx908;gfx1030;gfx1100;gfx942".
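    If you only want to build for the GPU in the local machine, ROCm's rocm_agent_enumerator tool can report its architecture. A small sketch (the helper function here is hypothetical, not part of CK; it just reshapes the tool's output, dropping the gfx000 placeholder entry emitted for the CPU agent, into the semicolon-separated form GPU_TARGETS expects):

    ```shell
    # Hypothetical helper: convert rocm_agent_enumerator output (one agent
    # per line, including a gfx000 CPU placeholder) into a deduplicated,
    # semicolon-separated list suitable for -D GPU_TARGETS=...
    to_gpu_targets() {
        grep -v '^gfx000$' | LC_ALL=C sort -u | paste -sd ';' -
    }

    # On a machine with ROCm installed (path may differ on your system):
    #   GPU_TARGETS=$(/opt/rocm/bin/rocm_agent_enumerator | to_gpu_targets)
    #   cmake -D GPU_TARGETS="${GPU_TARGETS}" ..
    ```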

  4. Build the entire CK library:

    ```bash
    make -j"$(nproc)"
    ```

  5. Install CK:

    ```bash
    make -j install
    ```

    See the note on -j in the Notes section below.

Optional post-install steps

  • Build examples and tests:

    ```bash
    make -j examples tests
    ```

  • Build and run all examples and tests:

    ```bash
    make -j check
    ```

    You can find instructions for running each individual example in the example directory.

  • Build and run smoke/regression examples and tests:

    ```bash
    make -j smoke        # tests and examples that run for < 30 seconds each
    make -j regression   # tests and examples that run for >= 30 seconds each
    ```

  • Build ckProfiler:

    ```bash
    make -j ckProfiler
    ```

    You can find instructions for running ckProfiler in the profiler directory.

  • Build our documentation locally:

    ```bash
    cd docs
    pip3 install -r sphinx/requirements.txt
    python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
    ```

Notes

The -j option builds with multiple threads in parallel, which speeds up the build significantly. However, -j without an argument launches an unlimited number of threads, which can cause the build to run out of memory and crash. On average, expect each thread to use ~2 GB of RAM. Depending on the number of CPU cores and the amount of RAM on your system, you may want to limit the number of threads. For example, if you have a 128-core CPU and 128 GB of RAM, it's advisable to use -j32.
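The RAM-based cap above can be sketched as a small helper (hypothetical, not part of the CK build system). Dividing total RAM by 4 GB per job leaves headroom over the ~2 GB average estimate and matches the -j32 suggestion for a 128-core, 128 GB machine:

```shell
# Pick a -j value bounded by both RAM (~4 GB per parallel job) and
# the CPU core count; always at least 1.
safe_jobs() {
    mem_gb=$1    # total RAM in GB
    cores=$2     # number of CPU cores
    jobs=$(( mem_gb / 4 ))
    if [ "$jobs" -gt "$cores" ]; then jobs=$cores; fi
    if [ "$jobs" -lt 1 ]; then jobs=1; fi
    echo "$jobs"
}

# Example usage (Linux):
#   make -j"$(safe_jobs "$(awk '/MemTotal/ {print int($2/1048576)}' /proc/meminfo)" "$(nproc)")"
```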

Additional cmake flags can be used to significantly speed up the build:

  • DTYPES (unset by default) can be set to any subset of "fp64;fp32;fp16;fp8;bf16;int8" to build instances for the selected data types only. The main default data types are fp32 and fp16; you can safely skip the others.

  • DISABLE_DL_KERNELS (default is OFF) can be set to ON to skip building instances such as gemm_dl and batched_gemm_multi_d_dl. These instances are mainly useful on architectures like NAVI2x; most other platforms have faster instances available, such as xdl or wmma.

  • DISABLE_DPP_KERNELS (default is OFF) can be set to ON to skip building instances such as gemm_dpp. These instances offer slightly better performance for fp16 GEMMs on NAVI2x, but faster alternatives are available on other architectures.

  • CK_USE_FP8_ON_UNSUPPORTED_ARCH (default is OFF) must be set to ON to build instances such as gemm_universal, gemm_universal_streamk, and gemm_multiply_multiply for the fp8 data type on GPU targets without native fp8 support, such as gfx908 or gfx90a. On architectures like MI100/MI200, these instances are useful for functional support only.
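Putting the flags above together, a reduced-scope configuration might look like this (a sketch for a gfx90a-only, fp32/fp16-only build that skips the DL and DPP instances; adjust targets and data types to your needs):

```shell
cmake \
  -D CMAKE_PREFIX_PATH=/opt/rocm \
  -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
  -D CMAKE_BUILD_TYPE=Release \
  -D GPU_TARGETS="gfx90a" \
  -D DTYPES="fp32;fp16" \
  -D DISABLE_DL_KERNELS=ON \
  -D DISABLE_DPP_KERNELS=ON \
  ..
```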

Using sccache for building

The default CK Docker images come with a pre-installed version of sccache that supports clang used as a HIP compiler (-x hip). Using sccache can reduce the time to rebuild code from hours to 1-2 minutes. To invoke sccache, run:

```bash
sccache --start-server
```

then add the following flags to the cmake command line:

```bash
-DCMAKE_CXX_COMPILER_LAUNCHER=sccache -DCMAKE_C_COMPILER_LAUNCHER=sccache
```

You may need to clean up the build folder and repeat the cmake and make steps in order to take advantage of the sccache during subsequent builds.
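Combined with the configuration step above, a full cmake invocation using sccache might look like this sketch:

```shell
cmake \
  -D CMAKE_PREFIX_PATH=/opt/rocm \
  -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
  -D CMAKE_CXX_COMPILER_LAUNCHER=sccache \
  -D CMAKE_C_COMPILER_LAUNCHER=sccache \
  -D CMAKE_BUILD_TYPE=Release \
  -D GPU_TARGETS="gfx90a" \
  ..
```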

Using CK as pre-built kernel library

You can find instructions for using CK as a pre-built kernel library in the client_example directory.

Contributing to CK

When you contribute to CK, make sure you run clang-format on all changed files. We highly recommend using git hooks that are managed by the pre-commit framework. To install hooks, run:

```bash
sudo script/install_precommit.sh
```

With this approach, pre-commit adds the appropriate hooks to your local repository and automatically runs clang-format (and possibly additional checks) before any commit is created.

If you need to uninstall hooks from the repository, you can do so by running the following command:

```bash
script/uninstall_precommit.sh
```

If you need to temporarily disable pre-commit hooks, you can add the --no-verify option to the git commit command.

Owner

  • Name: ROCm GPU Computing Platform
  • Login: ROCm
  • Kind: organization
  • Location: Austin, Tx

Open Source ROCm GPU Computing Platform

Citation (CITATION.cff)

cff-version: 1.2.0
title: Composable Kernel
message: If you use this software, please cite using the following metadata.
type: software
authors:
  - given-names: Chao
    family-names: Liu
    email: chao.liu2@amd.com
    affiliation: AMD
  - given-names: Jing
    family-names: Zhang
    email: jing.zhang3@amd.com
    affiliation: AMD
  - given-names: Letao
    family-names: Qin
    email: letao.qin@amd.com
    affiliation: AMD
  - given-names: Qianfeng
    family-names: Zhang
    email: qianfeng.zhang@amd.com
    affiliation: AMD
  - given-names: Liang
    family-names: Huang
    email: carlus.huang@amd.com
    affiliation: AMD
  - given-names: Shaojie
    family-names: Wang
    email: shaojie.wang@amd.com
    affiliation: AMD
  - given-names: Anthony
    family-names: Chang
    email: antc@amd.com
    affiliation: AMD
  - given-names: Chunyu
    family-names: Lai
    email: chunyu.lai@amd.com
    affiliation: AMD
  - given-names: Illia
    family-names: Silin
    email: illia.silin@amd.com
    affiliation: AMD
  - given-names: Adam
    family-names: Osewski
    email: adam.osewski@amd.com
    affiliation: AMD
  - given-names: Poyen
    family-names: Chen
    email: poyen.chen@amd.com
    affiliation: AMD
  - given-names: Rosty
    family-names: Geyyer
    email: rosty.geyyer@amd.com
    affiliation: AMD
  - given-names: Hanwen
    family-names: Chen
  - given-names: Tejash
    family-names: Shah
  - given-names: Xiaoyan
    family-names: Zhou
  - given-names: Jianfeng
    family-names: Yan
repository-code: 'https://github.com/ROCm/composable_kernel'
abstract: Composable Kernel (CK) library aims to provide a programming model for writing performance critical kernels for Machine Learning workloads across multiple architectures including GPUs, CPUs, etc, through general purpose kernel programming languages, like HIP C++.
keywords:
  - 'CK, Composable Kernel, Tensor Coordinate Transformation'
license: MIT
license-url: https://github.com/ROCm/composable_kernel/blob/7fc3ed761aa35709d87c8fbbe41dd368648b3541/LICENSE

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 1,911
  • Total Committers: 116
  • Avg Commits per committer: 16.474
  • Development Distribution Score (DDS): 0.812
Past Year
  • Commits: 663
  • Committers: 85
  • Avg Commits per committer: 7.8
  • Development Distribution Score (DDS): 0.811
Top Committers
Name Email Commits
Chao Liu l****6@g****m 360
Illia Silin 9****n 281
Bartłomiej Kocot b****t@a****m 137
Chao Liu c****2@a****m 102
zjing14 z****4@g****m 81
rocking5566 C****i@a****m 68
Rostyslav Geyyer 4****r 66
dependabot[bot] 4****] 60
Po Yen Chen P****n@a****m 60
Adam Osewski 1****i 47
Qianfeng q****g@a****m 46
Anthony Chang a****g@o****m 38
carlushuang c****g@a****m 37
ltqin l****n@a****m 33
Haocong WANG h****g@a****m 29
Max Podkorytov 4****t 26
jakpiase j****i@a****m 22
Thomas Ning T****g@a****m 22
Bartlomiej Wroblewski b****0@g****m 21
zjing14 j****n@a****m 20
Andriy Roshchenko 1****a 19
Jing Zhang j****3@a****m 18
Khushbu Agarwal k****w@a****m 18
arai713 6****3 16
Shaojie WANG s****g@a****m 14
Jun Liu L****n@a****m 13
feli f****i@a****m 13
Jianfeng Yan j****8@g****m 13
aledudek a****k@a****m 12
Mateusz Ozga 1****d 11
and 86 more...
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 161
  • Total pull requests: 1,969
  • Average time to close issues: 10 months
  • Average time to close pull requests: 14 days
  • Total issue authors: 74
  • Total pull request authors: 168
  • Average comments per issue: 2.58
  • Average comments per pull request: 0.48
  • Merged pull requests: 1,295
  • Bot issues: 0
  • Bot pull requests: 64
Past Year
  • Issues: 63
  • Pull requests: 1,516
  • Average time to close issues: 18 days
  • Average time to close pull requests: 6 days
  • Issue authors: 39
  • Pull request authors: 141
  • Average comments per issue: 1.63
  • Average comments per pull request: 0.44
  • Merged pull requests: 974
  • Bot issues: 0
  • Bot pull requests: 37
Top Authors
Issue Authors
  • asroy (18)
  • junliume (14)
  • ZJLi2013 (11)
  • zjing14 (8)
  • poyenc (7)
  • rosenrodt (5)
  • trixirt (5)
  • aosewski (4)
  • cloudhan (4)
  • carlushuang (4)
  • iq136boy (3)
  • j4yan (3)
  • rocking5566 (3)
  • xiabo123 (2)
  • LeiWang1999 (2)
Pull Request Authors
  • illsilin (268)
  • bartekxk (163)
  • poyenc (76)
  • zjing14 (69)
  • carlushuang (68)
  • dependabot[bot] (64)
  • ThomasNing (60)
  • geyyer (50)
  • rocking5566 (46)
  • AviralGoelAMD (44)
  • aska-0096 (44)
  • tenpercent (44)
  • amd-khushbu (41)
  • jakpiase (39)
  • asroy (38)
Top Labels
Issue Labels
Under Investigation (44) code quality (15) bug (8) enhancement (6) good first issue (4) urgency_high (4) question (3) ci:docs-only (2) help wanted (2) quality (2) feature request (2) CI - Failed (2) post-merge (1) documentation (1) Performance Issue (1) urgency_blocker (1)
Pull Request Labels
documentation (104) ci:docs-only (93) dependencies (64) CI - Pass (49) enhancement (17) CI - Testing (10) compilation time (9) bug (8) urgency_high (8) noCI (7) WIP (5) urgency_blocker (4) on hold (4) code quality (3) feature request (2) quality (2) good first issue (1) priority (1) external contribution (1) urgency_low (1)

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 2
  • Total dependent repositories: 0
  • Total versions: 17
  • Total maintainers: 2
spack.io: composable-kernel

Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators.

  • Versions: 17
  • Dependent Packages: 2
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Forks count: 14.3%
Stargazers count: 16.0%
Average: 21.7%
Dependent packages count: 56.4%
Maintainers (2)
Last synced: 6 months ago

Dependencies

Dockerfile docker
  • ubuntu 20.04 build
dev-requirements.txt pypi
  • ROCmSoftwarePlatform * development
  • RadeonOpenCompute * development
  • danmar * development
docs/sphinx/requirements.in pypi
  • rocm-docs-core >=0.20.0
  • sphinxcontrib-bibtex ==2.5.0
docs/sphinx/requirements.txt pypi
  • accessible-pygments ==0.0.3
  • alabaster ==0.7.13
  • babel ==2.12.1
  • beautifulsoup4 ==4.11.2
  • breathe ==4.34.0
  • certifi ==2022.12.7
  • cffi ==1.15.1
  • charset-normalizer ==3.1.0
  • click ==8.1.3
  • cryptography ==40.0.2
  • deprecated ==1.2.13
  • docutils ==0.16
  • fastjsonschema ==2.18.0
  • gitdb ==4.0.10
  • gitpython ==3.1.31
  • idna ==3.4
  • imagesize ==1.4.1
  • jinja2 ==3.1.2
  • latexcodec ==2.0.1
  • markdown-it-py ==2.2.0
  • markupsafe ==2.1.2
  • mdit-py-plugins ==0.3.5
  • mdurl ==0.1.2
  • myst-parser ==1.0.0
  • packaging ==23.0
  • pybtex ==0.24.0
  • pybtex-docutils ==1.0.2
  • pycparser ==2.21
  • pydata-sphinx-theme ==0.13.3
  • pygithub ==1.58.2
  • pygments ==2.14.0
  • pyjwt ==2.6.0
  • pynacl ==1.5.0
  • pyyaml ==6.0
  • requests ==2.28.2
  • rocm-docs-core >=0.20.0
  • six ==1.16.0
  • smmap ==5.0.0
  • snowballstemmer ==2.2.0
  • soupsieve ==2.4
  • sphinx ==5.3.0
  • sphinx-book-theme ==1.0.1
  • sphinx-copybutton ==0.5.1
  • sphinx-design ==0.3.0
  • sphinx-external-toc ==0.3.1
  • sphinx-notfound-page ==0.8.3
  • sphinxcontrib-applehelp ==1.0.4
  • sphinxcontrib-bibtex ==2.5.0
  • sphinxcontrib-devhelp ==1.0.2
  • sphinxcontrib-htmlhelp ==2.0.1
  • sphinxcontrib-jsmath ==1.0.1
  • sphinxcontrib-qthelp ==1.0.3
  • sphinxcontrib-serializinghtml ==1.1.5
  • typing-extensions ==4.5.0
  • urllib3 ==1.26.15
  • wrapt ==1.15.0
requirements.txt pypi