Welcome to the PaddlePaddle GitHub.
PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use,
efficient, flexible, and scalable deep learning platform, originally
developed by Baidu scientists and engineers to apply deep learning to
many products at Baidu.
Our vision is to enable deep learning for everyone via PaddlePaddle.
Please refer to our release announcement to track the latest features of PaddlePaddle.
```shell
# Linux CPU
pip install paddlepaddle
# Linux GPU cuda9cudnn7
pip install paddlepaddle-gpu
# Linux GPU cuda8cudnn7
pip install paddlepaddle-gpu==1.0.1.post87
# Linux GPU cuda8cudnn5
pip install paddlepaddle-gpu==1.0.1.post85
```
For installation on other platforms, refer to http://paddlepaddle.org/
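The CUDA/cuDNN-to-package mapping above can be sketched as a small helper. This is purely illustrative (the function name and parameters are made up here); the package names and version suffixes are taken directly from the install commands above.

```python
def paddle_pip_spec(device="cpu", cuda=None, cudnn=None):
    """Return the pip requirement string for a given platform.

    Hypothetical helper: the mapping mirrors the install table in this
    README (Linux only); anything outside it is not covered here.
    """
    if device == "cpu":
        return "paddlepaddle"
    if (cuda, cudnn) == (9, 7):
        return "paddlepaddle-gpu"
    if (cuda, cudnn) == (8, 7):
        return "paddlepaddle-gpu==1.0.1.post87"
    if (cuda, cudnn) == (8, 5):
        return "paddlepaddle-gpu==1.0.1.post85"
    raise ValueError("unsupported platform; see http://paddlepaddle.org/")
```

For example, `paddle_pip_spec("gpu", cuda=8, cudnn=7)` yields the `post87` pinned requirement shown above.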
Flexibility
PaddlePaddle supports a wide range of neural network architectures and
optimization algorithms. It is easy to configure complex models, such as
a neural machine translation model with an attention mechanism or complex
memory connections.
Efficiency
To unleash the power of heterogeneous computing resources, PaddlePaddle
applies optimizations at multiple levels, including computing, memory,
architecture, and communication.
Scalability
With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed
up your training. PaddlePaddle can achieve high throughput and performance
via optimized communication.
Connected to Products
In addition, PaddlePaddle is designed to be easily deployable. At Baidu,
PaddlePaddle has been deployed in products and services with a vast number
of users, including ad click-through rate (CTR) prediction, large-scale image
classification, optical character recognition (OCR), search ranking, computer
virus detection, and recommendation. It is widely used in products at Baidu,
where it has had a significant impact. We hope you will also explore
PaddlePaddle's capabilities to make an impact on your own products.
We recommend reading this document on our website, where we provide both
English and Chinese documentation.
You might want to start from this online interactive book that can run in a Jupyter Notebook.
You can run distributed training jobs on MPI clusters.
Our new API enables much shorter programs.
We appreciate your contributions!
You are welcome to submit questions and bug reports as GitHub Issues.
PaddlePaddle is provided under the Apache-2.0 license.