lc0

Open source neural network chess engine with GPU acceleration and broad hardware support.

https://github.com/leelachesszero/lc0

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    4 of 106 committers (3.8%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.8%) to scientific vocabulary

Keywords

alphazero alphazero-inspired chess chess-ai chess-engine cuda deep-learning deep-reinforcement-learning gpu leela-chess-zero neural-networks uci

Keywords from Contributors

deep-neural-networks distributed numerical julialang programming-language embedded
Last synced: 6 months ago

Repository

Open source neural network chess engine with GPU acceleration and broad hardware support.

Basic Info
  • Host: GitHub
  • Owner: LeelaChessZero
  • License: gpl-3.0
  • Language: C++
  • Default Branch: master
  • Homepage: https://lczero.org/
  • Size: 38.8 MB
Statistics
  • Stars: 2,762
  • Watchers: 91
  • Forks: 583
  • Open Issues: 194
  • Releases: 87
Topics
alphazero alphazero-inspired chess chess-ai chess-engine cuda deep-learning deep-reinforcement-learning gpu leela-chess-zero neural-networks uci
Created over 7 years ago · Last pushed 6 months ago
Metadata Files
Readme Changelog Contributing License Citation Authors

README.md

Lc0

Lc0 is a UCI-compliant chess engine designed to play chess via neural networks, specifically those of the LeelaChessZero project.

Downloading source

Lc0 can be acquired either via a git clone or an archive download from GitHub. Be aware that there is a required submodule which isn't included in source archives.

For essentially all purposes, including selfplay game generation and match play, we highly recommend using the latest release/version branch (for example release/0.32), which is equivalent to using the latest version tag.

Versioning follows the Semantic Versioning guidelines, with major, minor and patch sections. The training server enforces game quality using the versions output by the client and engine.
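Semantic versions compare component by component, not character by character; GNU coreutils' sort -V implements this ordering, which is a handy way to pick the newest of several release branches (the version strings below are illustrative):

```shell
# sort -V orders version strings numerically per component,
# so 0.9.0 < 0.10.0 < 0.32.0 (a plain lexical sort would get this wrong)
printf '%s\n' 0.32.0 0.9.0 0.10.0 | sort -V | tail -n1   # prints 0.32.0
```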

Download using git:

```shell
git clone -b release/0.32 --recurse-submodules https://github.com/LeelaChessZero/lc0.git
```

If you have already cloned an old version, fetch the latest changes, list the branches and check out the new one:

```shell
git fetch --all
git branch --all
git checkout -t remotes/origin/release/0.32
```

If you prefer to download an archive, you need to also download and place the submodule:

  • Download the .zip file (a .tar.gz archive is also available)
  • Extract it
  • Download https://github.com/LeelaChessZero/lczero-common/archive/master.zip (also available as .tar.gz)
  • Move the second archive into the first archive's libs/lczero-common/ folder and extract it
  • The final form should look like <TOP>/libs/lczero-common/proto/
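After extracting both archives, a quick check run from the top of the extracted tree confirms the submodule landed where the build expects it (the path is the one given above):

```shell
# The build needs the protobuf definitions from lczero-common;
# this prints a confirmation only if they are in the expected location.
if test -d libs/lczero-common/proto; then
    echo "lczero-common in place"
fi
```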

Having successfully acquired Lc0 via either of these methods, proceed to the build section below and follow the instructions for your OS.

Building and running Lc0

Building should be easier now than it was in the past. Please report any problems you have.

Aside from the git submodule, lc0 requires the Meson build system and at least one backend library for evaluating the neural network, as well as a few libraries. If your system already has these libraries installed, they will be used; otherwise Meson will generate its own copy (a "subproject"), which in turn requires that git is installed (yes, separately from cloning the actual lc0 repository). Meson also requires python and Ninja.

Backend support includes (in theory) any CBLAS-compatible library for CPU usage, but OpenBLAS or Intel's DNNL are the main ones. For GPUs, the following are supported: CUDA (with optional cuDNN), various flavors of onnxruntime, and Apple's Metal Performance Shaders. There is also experimental SYCL support for AMD and Intel GPUs.

Finally, lc0 requires a compiler supporting C++20. Minimal versions tested are g++ v10.0, clang v12.0 and Visual Studio 2019 version 16.11.

Given those basics, the OS and backend specific instructions are below.

Linux

Generic

  1. Install backend (also read the detailed instructions in later sections):
    • If you want to use NVidia graphics cards, install CUDA (and optionally cuDNN).
    • If you want to use AMD or Intel graphics cards, you can try SYCL.
    • If you want BLAS, install either OpenBLAS or DNNL.
  2. Install ninja build (ninja-build), meson, and (optionally) gtest (libgtest-dev).
  3. Go to lc0/
  4. Run ./build.sh
  5. lc0 will be in the lc0/build/release/ directory
  6. Download a neural network into the same directory as the binary (no need to unpack it).

If you want to build with a different compiler, pass the CC and CXX environment variables:

```shell
CC=clang CXX=clang++ ./build.sh
```

Ubuntu 20.04

For Ubuntu 20.04 you need meson, ninja and gcc-10 before performing the steps above. The following should work:

```shell
apt-get update
apt-get -y install git python3-pip gcc-10 g++-10 zlib1g zlib1g-dev
pip3 install meson
pip3 install ninja
CC=gcc-10 CXX=g++-10 INSTALL_PREFIX=~/.local ./build.sh
```

Make sure that ~/.local/bin is in your PATH environment variable. You can now type lc0 --help and start.
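If lc0 isn't found after the install above, the usual fix is to add the pip user bin directory to PATH, for example in ~/.profile (the startup file is an assumption; yours may differ):

```shell
# Prepend the pip --user install location so lc0 (and meson/ninja) are found
export PATH="$HOME/.local/bin:$PATH"
```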

Windows

Here are the brief instructions for CUDA/cuDNN, for details and other options see windows-build.md and the instructions in the following sections.

  1. Install Microsoft Visual Studio (2019 version 16.11 or later)
  2. Install CUDA
  3. (Optionally install cuDNN).
  4. Install Python3 if you didn't install it with Visual Studio.
  5. Install Meson: pip3 install --upgrade meson
  6. If CUDA_PATH is not set (run the set command to see the full list of variables), edit build.cmd and set CUDA_PATH to your CUDA directory
  7. If you also want cuDNN, set CUDNN_PATH to your cuDNN directory (not needed if it is the same as CUDA_PATH).
  8. Run build.cmd. It will ask permission to delete the build directory, then generate an MSVS project and pause.

Then either:

  1. Hit Enter to build it.
  2. The resulting binary will be build/lc0.exe

Or:

  1. Open the generated solution build/lc0.sln in Visual Studio and build it yourself.

Mac

You will need Xcode and python3 installed. Then you need to install some required packages through Terminal:

  1. Install meson: pip3 install meson
  2. Install ninja: pip3 install ninja

Now download the lc0 source, if you haven't already done so, following the instructions earlier on this page. Don't forget the submodule.

  1. Go to the lc0 directory.
  2. Run ./build.sh -Dgtest=false

The compiled Lc0 will be in build/release.

Starting with v0.32.0, we are also offering a pre-compiled version that can be downloaded from the release page.

CUDA

CUDA can be downloaded and installed following the instructions at https://developer.nvidia.com/cuda-downloads. The build in most cases will pick it up with no further action. However, if the CUDA compiler (nvcc) is not found, you can call the build like this: PATH=/usr/local/cuda/bin:$PATH ./build.sh, replacing the path with the correct one for nvcc.

Note that CUDA uses the system compiler and stops if it doesn't recognize the version, even if it is newer than supported. This is more of an issue with new Linux versions, but you can get around this with the nvcc_ccbin build option, which specifies a different compiler just for CUDA. As an example, adding -Dnvcc_ccbin=g++-11 to the build command line will use g++-11 with CUDA instead of the system compiler.

ONNX

Lc0 offers several ONNX based backends, namely onnx-cpu, onnx-cuda, onnx-trt, onnx-rocm and on Windows onnx-dml, utilizing the execution providers offered by onnxruntime.

Some Linux systems are starting to offer onnxruntime packages, so after installing this there is a good chance the Lc0 build will pick it up with no further action required. Otherwise you can set the onnx_libdir and onnx_include build options to point to the onnxruntime libraries and include directories respectively. The same options are used if you unpack a package downloaded from https://github.com/microsoft/onnxruntime/releases.

For Windows, we offer pre-compiled packages for onnx-dml and onnx-trt, see the included README for installation instructions.

SYCL

Note that SYCL support is new in v0.32.0 and as such is still considered experimental.

You will need the Intel "oneAPI DPC++/C++ Compiler", "DPC++ Compatibility Tool" and (for an Intel GPU) "oneAPI Math Kernel Library (oneMKL)" or (for an AMD GPU) hipBLAS.

The Intel tools can be found in either the "oneAPI Base Toolkit" or "C++ Essentials" packages that can be downloaded from https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html, while hipBLAS can be downloaded from https://rocm.docs.amd.com/projects/hipBLAS/en/latest/

The compiler for C code is icx on both platforms; for C++ code it is icx on Windows but icpx on Linux.

To build Lc0 with SYCL you need to set the sycl build option using -Dsycl=l0 (that is el zero) for an Intel GPU or -Dsycl=amd for (you guessed it) an AMD GPU.

You may also have to set the dpct_include option to point to the DPC++ Compatibility Tool includes, the onemkl_include similarly for the oneMKL includes, or hip_libdirs and hip_include to the AMD HIP libraries and includes respectively.

On Linux, a typical session would go like this:

```shell
. /opt/intel/oneapi/setvars.sh --include-intel-llvm
CC=icx CXX=icpx AR=llvm-ar ./build.sh release -Dgtest=false -Dsycl=l0
```

The first line initializes the build environment and is only needed once per session, while the build line may need modification as described above.

On Windows you will have to build using ninja, which is provided by Visual Studio if you install the CMake component. We provide a build-sycl.cmd script that should build just fine for an Intel GPU. This script has not yet been tested with an AMD GPU; some editing will be required.

You can also install the oneAPI DPC++/C++ Compiler Runtime so you can run Lc0 without needing to initialize the build environment every time.

BLAS

Lc0 can also run (somewhat slowly) on the CPU, using matrix multiplication functions from a BLAS library. By default OpenBLAS is used if available, as it seems to offer good performance on a wide range of processors. If your system doesn't offer an OpenBLAS package (e.g. libopenblas-dev), or you have a recent processor, you can get DNNL from here. To use DNNL you have to pass -Ddnnl=true to the build and specify the directory where it was installed using the -Ddnnl_dir= option. On Macs, the Accelerate library will be used.

If the "Intel Implicit SPMD Program Compiler" (ispc) is installed, some performance critical functions will use vectorized code for faster execution.

Note that Lc0 is not able to control the number of threads with all BLAS libraries. Some libraries try to exploit cores aggressively, in which case it may be best to leave the threads set to the default (i.e. automatic) setting.
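With OpenBLAS specifically, the thread count can be capped from the environment before launching the engine; the variable name below is OpenBLAS's own, and other BLAS libraries use different knobs (e.g. OMP_NUM_THREADS):

```shell
# Limit OpenBLAS to one thread per matrix multiplication,
# leaving lc0's own search threads at their default setting
export OPENBLAS_NUM_THREADS=1
# then launch lc0 from the build directory as usual
```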

Getting help

If there is an issue or the above instructions were not clear, you can always ask for help. The fastest way is to ask in the help channel of our Discord chat, but you can also open a GitHub issue (after checking the issue hasn't already been reported).

Python bindings

Python bindings can be built and installed as follows.

```shell
pip install --user git+https://github.com/LeelaChessZero/lc0.git
```

This will build the package lczero-bindings and install it to your Python user install directory. All the lc0 functionality related to position evaluation is now available in the module lczero.backends. An example interactive session can be found here.

License

Leela Chess is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Leela Chess is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with Leela Chess. If not, see http://www.gnu.org/licenses/.

Additional permission under GNU GPL version 3 section 7

The source files of Lc0 with the exception of the BLAS, OpenCL and SYCL backends (all files in the blas, opencl and sycl sub-directories) have the following additional permission, as allowed under GNU GPL version 3 section 7:

If you modify this Program, or any covered work, by linking or combining it with NVIDIA Corporation's libraries from the NVIDIA CUDA Toolkit and the NVIDIA CUDA Deep Neural Network library (or a modified version of those libraries), containing parts covered by the terms of the respective license agreement, the licensors of this Program grant you additional permission to convey the resulting work.

Owner

  • Name: LCZero
  • Login: LeelaChessZero
  • Kind: organization
  • Email: lc0@lc0.org

Citation (CITATION.cff)

cff-version: 1.2.0
title: LeelaChessZero
type: software
authors:
  - name: The LCZero Authors
repository-code: 'https://github.com/LeelaChessZero/lc0'
url: 'https://lczero.org/'
repository-artifact: 'https://github.com/LeelaChessZero/lc0/releases/'
abstract: >-
  Lc0 is a UCI-compliant chess engine designed to play chess
  via neural network, specifically those of the
  LeelaChessZero project.
keywords:
  - chess
  - neural networks (NN)
  - artificial intelligence (AI)
license: GPL-3.0

GitHub Events

Total
  • Create event: 4
  • Issues event: 20
  • Release event: 1
  • Watch event: 304
  • Delete event: 2
  • Issue comment event: 145
  • Push event: 121
  • Pull request review comment event: 161
  • Gollum event: 4
  • Pull request review event: 309
  • Pull request event: 257
  • Fork event: 59
Last Year
  • Create event: 4
  • Issues event: 20
  • Release event: 1
  • Watch event: 304
  • Delete event: 2
  • Issue comment event: 145
  • Push event: 121
  • Pull request review comment event: 161
  • Gollum event: 4
  • Pull request review event: 309
  • Pull request event: 257
  • Fork event: 59

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 1,518
  • Total Committers: 106
  • Avg Commits per committer: 14.321
  • Development Distribution Score (DDS): 0.732
Past Year
  • Commits: 89
  • Committers: 9
  • Avg Commits per committer: 9.889
  • Development Distribution Score (DDS): 0.506
Top Committers
Name Email Commits
Alexander Lyashuk c****m@g****m 407
borg323 3****3 399
Francois f****s@g****m 154
Tilps T****s 117
Ankan Banerjee a****n@g****m 57
Dubslow b****w@g****m 50
Ed Lee e****e@m****m 32
Naphthalin 4****n 24
Tilps t****s 19
Henrik Forstén h****n@g****m 18
almaudoh a****h@y****m 17
gsobala g****a@g****m 12
Francis Li f****i@g****m 10
Dieter Dobbelaere 3****e 10
F. Huizinga f****a@g****m 9
cwbriscoe c****8@y****m 8
Daniel Uranga d****a@g****m 7
Leandro Álvarez González l****o 7
cn4750 c****6@g****m 7
Alexis Olson A****n@g****m 6
nguyenpham p****n@g****m 6
zz4032 a****2@a****e 5
Brandon Lin b****0 5
jjoshua2 j****2@g****m 5
Hace m****g@g****m 5
exa q****5@y****m 5
SunnyWar i****e@h****m 4
Reece H. Dunn m****d@g****m 4
GBeauregard G****d 4
students a****v@a****u 4
and 76 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 81
  • Total pull requests: 419
  • Average time to close issues: 6 months
  • Average time to close pull requests: 2 months
  • Total issue authors: 36
  • Total pull request authors: 48
  • Average comments per issue: 4.17
  • Average comments per pull request: 1.18
  • Merged pull requests: 271
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 18
  • Pull requests: 244
  • Average time to close issues: 6 days
  • Average time to close pull requests: 5 days
  • Issue authors: 13
  • Pull request authors: 22
  • Average comments per issue: 0.78
  • Average comments per pull request: 0.51
  • Merged pull requests: 172
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • mooskagh (22)
  • Tilps (6)
  • dubslow (5)
  • Chess321 (4)
  • Videodr0me (4)
  • dje-dev (3)
  • gsobala (3)
  • jackL999 (2)
  • killerducky (2)
  • ASilver (2)
  • puct9 (2)
  • borg323 (2)
  • Mardak (1)
  • dkappe (1)
  • JamieO53 (1)
Pull Request Authors
  • borg323 (142)
  • mooskagh (133)
  • Tilps (13)
  • dubslow (13)
  • almaudoh (13)
  • Menkib64 (13)
  • KarlKfoury (12)
  • ContradNamiseb (12)
  • Naphthalin (9)
  • frpays (6)
  • hans-ekbrand (5)
  • AlexisOlson (4)
  • JulianHelmsen (2)
  • Sam-Belliveau (2)
  • eltociear (2)
Top Labels
Issue Labels
good first issue (17) enhancement (14) bug (9) help wanted (5) wip (1)
Pull Request Labels
not for merge (9) demo (7) enhancement (6) rfc (5) testing required (4) stale (1) bug fix (1) wip (1)

Dependencies

.circleci/Dockerfile docker
  • floopcz/tensorflow_cc ubuntu-shared-cuda build