# Benchmarking DGL with Airspeed Velocity
Before beginning, ensure that Airspeed Velocity is installed:

```bash
pip install asv
```

To run all benchmarks locally, build the project first and then run:

```bash
asv run -n -e --python=same --verbose
```
Due to an ASV restriction, `--python=same` will not write any benchmark results to disk. It does not support specifying branches and commits either; those features are only available under ASV's managed environment.
To change the device used for benchmarking, set the `DGL_BENCH_DEVICE` environment variable. Allowed values are `"cpu"` and `"gpu"`:

```bash
export DGL_BENCH_DEVICE=gpu
```
To select which benchmark to run, use the `--bench` flag. For example:

```bash
asv run -n -e --python=same --verbose --bench model_acc.bench_gat
```
Note that OGB datasets need to be downloaded manually to the `/tmp/dataset` folder (e.g., `/tmp/dataset/ogbn-products/`) beforehand. You can do this by running the code below in this folder:

```python
from benchmarks.utils import get_ogb_graph
get_ogb_graph("ogbn-products")
```
DGL runs all benchmarks automatically in a Docker container. To run benchmarks in Docker locally, first check the `"branches"` list in `asv.conf.json`. By default it contains only `"HEAD"`, which is the last commit of the current branch; to benchmark both the current commit and master, for example, change it to `["HEAD", "master"]`. Then invoke the `publish.sh` script. It accepts two arguments: a name specifying the identity of the machine, and the device to benchmark on:

```bash
bash publish.sh dev-machine gpu
```
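For illustration, the relevant field in `asv.conf.json` would then look like this (excerpt only; the file's other fields are omitted here):

```json
{
    "branches": ["HEAD", "master"]
}
```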
The script will output two folders, `results` and `html`. The `html` folder contains the generated static web pages. View them with:

```bash
asv preview
```

Please see `publish.sh` for more information on how it works and how to modify it according to your needs.
The benchmark folder is organized as follows:

```
|-- benchmarks/
    |-- model_acc/         # benchmarks for model accuracy
        |-- bench_gcn.py
        |-- bench_gat.py
        |-- bench_sage.py
        ...
    |-- model_speed/       # benchmarks for model training speed
        |-- bench_gat.py
        |-- bench_sage.py
        ...
    ...                    # other types of benchmarks
|-- html/                  # generated html files
|-- results/               # generated result files
|-- asv.conf.json          # asv config file
|-- build_dgl_asv.sh       # script for building dgl in asv
|-- install_dgl_asv.sh     # script for installing dgl in asv
|-- publish.sh             # script for running benchmarks in docker
|-- README.md              # this readme
|-- run.sh                 # script for calling asv in docker
|-- ...                    # other aux files
```
To add a new benchmark, pick a suitable benchmark type and create a Python script under it. We prefer to have the prefix `bench_` in the name. Here is a toy example:

```python
# bench_range.py
import time

from .. import utils

@utils.benchmark('time')
@utils.parametrize('l', [10, 100, 1000])
@utils.parametrize('u', [10, 100, 1000])
def track_time(l, u):
    t0 = time.time()
    for i in range(l, u):
        pass
    return time.time() - t0
```
ASV collects benchmark results from the `track_*` function. The function must be decorated with `utils.benchmark` and `utils.parametrize`. `utils.benchmark` indicates the type of this benchmark; currently supported types are `'time'` and `'acc'`. The decorator also performs some necessary setup and finalization steps for the `'acc'` type. `utils.parametrize` specifies the parameters to test. For more examples, see `model_acc/bench_gcn.py` and `model_speed/bench_sage.py`.

Tips:

* Pass `-e --verbose` to `asv run` to print stderr and more information.
* When running outside ASV's managed environment (i.e., with `--python=same`), ASV will not write results to disk, and `asv publish` will not generate plots.
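To make the decorator mechanics concrete, here is a minimal, hypothetical sketch of how decorators like `utils.benchmark` and `utils.parametrize` could be implemented. This is an illustration of the pattern only, not DGL's actual `utils` code:

```python
# Hypothetical sketch -- NOT DGL's actual utils implementation.
import itertools
import time

def parametrize(name, values):
    """Attach a parameter axis to the benchmark function.

    ASV discovers parameters through the `params` and `param_names`
    attributes on the function, so the decorator just records them.
    Prepending keeps the axes in the same order as the positional
    arguments, since decorators apply bottom-up.
    """
    def decorator(func):
        func.params = [values] + getattr(func, "params", [])
        func.param_names = [name] + getattr(func, "param_names", [])
        return func
    return decorator

def benchmark(kind):
    """Record the benchmark type ('time' or 'acc', per the README)."""
    def decorator(func):
        func.benchmark_kind = kind
        return func
    return decorator

@benchmark('time')
@parametrize('l', [10, 100])
@parametrize('u', [10, 100])
def track_time(l, u):
    t0 = time.time()
    for _ in range(l, u):
        pass
    return time.time() - t0

# The harness can then sweep the full parameter grid:
results = {args: track_time(*args)
           for args in itertools.product(*track_time.params)}
```

Because the attributes are plain function metadata, the decorated function remains directly callable, which is what lets `asv run --python=same` execute it in your local environment.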