Openi-octopus is a cluster management and resource scheduling platform jointly designed and developed by Peking University, Xi'an Jiaotong University, Zhejiang University, and the University of Science and Technology of China, and maintained by Peng Cheng Laboratory, Peking University, the University of Science and Technology of China, and AITISA. The platform incorporates mature designs that have performed well in large-scale production environments, and is primarily intended to improve the efficiency of academic research and the reproduction of research results.
OPENI is completely open: it is released under the Open-Intelligence license. OPENI is architected in a modular way: different modules can be plugged in as appropriate. This makes OPENI particularly attractive for evaluating various research ideas, which include but are not limited to the following components:
OPENI operates in an open model: contributions from academia and industry are all highly welcome.
The system runs on a cluster of machines, each equipped with one or more GPUs. The deployment environment must satisfy the following requirements:

- Each machine in the cluster runs Ubuntu 18.04 LTS and has a statically assigned IP address.
- A Docker registry service (e.g., Docker Hub) is available to store the Docker images for the services to be deployed.
- A dev machine runs in the same environment and has full access to the cluster.
- An NTP service is available for clock synchronization.
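As a minimal sketch of a pre-deployment sanity check, the static-IP requirement above can be validated before installation. The inventory format and function name below are illustrative assumptions, not part of the platform:

```python
import ipaddress

def validate_inventory(inventory):
    """Check a hypothetical host inventory (hostname -> IP): every machine
    must declare a valid, unique, statically assigned IPv4 address."""
    seen = set()
    problems = []
    for host, ip in inventory.items():
        try:
            addr = ipaddress.IPv4Address(ip)
        except ipaddress.AddressValueError:
            problems.append(f"{host}: '{ip}' is not a valid IPv4 address")
            continue
        if addr in seen:
            problems.append(f"{host}: address {ip} is assigned more than once")
        seen.add(addr)
    return problems

# Example inventory with one invalid entry and one duplicate address.
inventory = {
    "gpu-node-1": "192.168.1.10",
    "gpu-node-2": "192.168.1.11",
    "gpu-node-3": "192.168.1.10",   # duplicate of gpu-node-1
    "dev-machine": "not-an-ip",
}
for problem in validate_inventory(inventory):
    print(problem)
```

Running such a check once per cluster catches address collisions early, before services are bound to the wrong nodes.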
Deploying and using the system consists of the following steps.
After the system services have been deployed, users can access the web portal, a Web UI, for both cluster management and job management.
Please refer to this tutorial for details about job submission.
The system architecture is illustrated above.
Users submit jobs or monitor cluster status through the Web Portal,
which calls APIs provided by the REST server.
Third-party tools can also call the REST server directly for job management.
Upon receiving API calls, the REST server coordinates with the k8s ApiServer; the k8s Scheduler then schedules the job to a k8s node with the required CPU, GPU, and other resources.
The TaskSetController monitors the job's life cycle in the k8s cluster.
The REST server retrieves the status of jobs from the k8s ApiServer, and that status is displayed on the web portal.
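To illustrate how a third-party tool might talk to the REST server, the sketch below assembles a job description and POSTs it. The endpoint path and every payload field name here are assumptions for illustration only; the REST server's actual API schema should be taken from its documentation:

```python
import json
from urllib import request

def build_job_payload(name, image, gpus=1, command="python train.py"):
    """Assemble a job description. All field names are illustrative
    assumptions, not the REST server's actual schema."""
    return {
        "jobName": name,
        "image": image,
        "resource": {"gpu": gpus, "cpu": 4, "memoryMB": 8192},
        "command": command,
    }

def submit_job(rest_server_url, payload):
    """POST the job to a hypothetical /api/v1/jobs endpoint and
    return the decoded JSON response."""
    req = request.Request(
        f"{rest_server_url}/api/v1/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_job_payload("mnist-demo", "registry.example.com/mnist:latest")
```

The Web Portal performs the same kind of call internally, so any automation built this way sees the same job state the portal displays.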
Other types of CPU-based AI workloads or traditional big-data jobs
can also run on the platform, coexisting with the GPU-based jobs.
The storage of training data and results can be customized according to platform/equipment requirements.
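As one sketch of such customization, training data could be mounted into job pods from an NFS share via a standard Kubernetes volume spec. The helper, server address, and paths below are placeholders, not the platform's configuration format:

```python
def nfs_volume_spec(name, server, path, mount_path):
    """Build the `volumes` and `volumeMounts` entries of a pod spec that
    mount training data from an NFS share. Server and paths are
    placeholders to be adapted to the actual deployment."""
    volume = {"name": name, "nfs": {"server": server, "path": path}}
    mount = {"name": name, "mountPath": mount_path}
    return volume, mount

volume, mount = nfs_volume_spec(
    "training-data", "nfs.example.com", "/export/datasets", "/data")
```

A different backend (Ceph, local SSDs, object storage) would swap only the volume entry; the job's mount path can stay the same, which keeps job definitions portable across storage choices.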