https://github.com/cy-suite/paddle-new

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.5%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: cy-suite
  • License: apache-2.0
  • Language: C++
  • Default Branch: 1.8.5
  • Size: 1.57 GB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 11 months ago · Last pushed 11 months ago
Metadata Files
Readme Contributing License Code of conduct Security Authors

README.md


Welcome to the PaddlePaddle GitHub.

PaddlePaddle, China's first independent R&D deep learning platform, has been officially open-sourced to professional communities since 2016. It is an industrial platform with advanced technologies and rich features covering core deep learning frameworks, basic model libraries, end-to-end development kits, tools and components, and service platforms. PaddlePaddle originated in industrial practice and is committed to industrialization. It has been widely adopted across sectors including manufacturing, agriculture, and enterprise services, serving more than 10.7 million developers and 235,000 companies and powering 860,000 models. With these advantages, PaddlePaddle has helped a growing number of partners commercialize AI.

Installation

Latest PaddlePaddle Release: v2.6

Our vision is to enable deep learning for everyone via PaddlePaddle. Please refer to our release announcement to track the latest features of PaddlePaddle.

Install Latest Stable Release

```sh
# CPU
pip install paddlepaddle

# GPU
pip install paddlepaddle-gpu
```

For more information about installation, please see the Quick Install guide.

Developers can now access Tesla V100 online computing resources for free. If you create a program on AI Studio, you get 8 hours per day to train models online. Click here to start.

FOUR LEADING TECHNOLOGIES

  • Agile Framework for Industrial Development of Deep Neural Networks

    The PaddlePaddle deep learning framework eases development and lowers the technical barrier by providing a programmable scheme for architecting neural networks. It supports both declarative and imperative programming, preserving development flexibility while maintaining high runtime performance. Neural architectures can also be designed automatically by algorithms, with better performance than architectures designed by human experts.

  • Support Ultra-Large-Scale Training of Deep Neural Networks

    PaddlePaddle has made breakthroughs in training ultra-large-scale deep neural networks. It launched the world's first large-scale open-source training platform that supports training deep networks with 100 billion features and trillions of parameters, using data sources distributed over hundreds of nodes. PaddlePaddle overcomes the online learning challenges posed by ultra-large-scale models, and has achieved real-time updating of models with more than 1 trillion parameters. Click here to learn more

  • High-Performance Inference Engines for Comprehensive Deployment Environments

    PaddlePaddle is not only compatible with models trained in third-party open-source frameworks, but also offers complete inference products for various production scenarios. The inference product line includes Paddle Inference, a native inference library for high-performance server and cloud inference; Paddle Serving, a service-oriented framework suitable for distributed and pipelined production; Paddle Lite, an ultra-lightweight inference engine for mobile and IoT environments; and Paddle.js, a frontend inference engine for browsers and mini-apps. Furthermore, through extensive optimization for the leading hardware in each scenario, Paddle's inference engines outperform most other mainstream frameworks.

  • Industry-Oriented Models and Libraries with Open Source Repositories

    PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished in industry over a long period. Some of these models have won major prizes in key international competitions. Meanwhile, PaddlePaddle offers more than 200 additional pre-trained models (some with source code) to facilitate the rapid development of industrial applications. Click here to learn more

Documentation

We provide English and Chinese documentation.

You might want to start with how to implement deep learning basics with PaddlePaddle.

Once you are familiar with Fluid, the next step is to build a more efficient model or invent your own operator.

Our new API enables much shorter programs.

We appreciate your contributions!

Open Source Community

Courses

  • Server Deployments: Courses introducing high performance server deployments via local and remote services.
  • Edge Deployments: Courses introducing edge deployments from mobile and IoT to web and applets.

Copyright and License

PaddlePaddle is provided under the Apache-2.0 license.

Owner

  • Name: cy-suite
  • Login: cy-suite
  • Kind: organization

GitHub Events

Total
  • Public event: 1
Last Year
  • Public event: 1

Dependencies

paddle/scripts/musl_build/Dockerfile docker
  • python ${PYTHON_VERSION}-alpine3.11 build
r/Dockerfile docker
  • ubuntu 18.04 build
tools/cinn/docker/Dockerfile docker
  • nvidia/cuda 10.1-cudnn7-devel-ubuntu18.04 build
paddle/fluid/inference/goapi/go.mod go
paddle/scripts/compile_requirements.txt pypi
  • jinja2 *
  • pyyaml *
  • setuptools ==69.0.2
  • setuptools ==57.4.0
  • wget *
  • wheel *
pyproject.toml pypi
python/requirements.txt pypi
  • Pillow *
  • astor *
  • decorator *
  • httpx *
  • numpy >=1.13,<2.0
  • opt_einsum ==3.3.0
  • protobuf >=3.1.0,<=3.20.2
  • protobuf >=3.20.2
python/unittest_py/requirements.txt pypi
  • PyGithub * test
  • autograd ==1.4 test
  • coverage ==5.5 test
  • distro * test
  • gym ==0.26.2 test
  • hypothesis * test
  • librosa ==0.8.1 test
  • mock * test
  • numpy >=1.20 test
  • opencv-python <=4.2.0.32 test
  • paddle2onnx >=0.9.6 test
  • parameterized * test
  • prettytable * test
  • pycrypto * test
  • pygame ==2.5.2 test
  • scipy >=1.6 test
  • ubelt ==1.3.3 test
  • visualdl ==2.5.3 test
  • wandb >=0.13 test
  • xdoctest ==1.1.1 test
  • xlsxwriter ==3.0.9 test
setup.py pypi
tools/cinn/docker/requirements.txt pypi
  • Pillow *
  • astor *
  • decorator ==4.4.2
  • gast >=0.3.3
  • gast ==0.3.3
  • numpy >=1.13,<=1.19.3
  • numpy >=1.13,<=1.16.4
  • numpy >=1.13
  • protobuf >=3.1.0
  • requests >=2.20.0
  • six *
  • xgboost *