```
pip install -r requirements.txt
```
Also requires PyTorch 1.7.0+.
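A quick way to confirm the PyTorch 1.7.0+ requirement is to compare version strings numerically. The helper below is a hypothetical sketch, not part of this repo; in practice you would pass it `torch.__version__`.

```python
# Hypothetical helper (not from this repo): check an installed version string
# against the PyTorch 1.7.0+ requirement by numeric comparison.

def meets_requirement(installed: str, required: str = "1.7.0") -> bool:
    """Compare 'major.minor.patch' numerically; ignores local tags like '+cu113'."""
    def as_tuple(v: str):
        return tuple(int(part) for part in v.split("+")[0].split(".")[:3])
    return as_tuple(installed) >= as_tuple(required)

# e.g. meets_requirement(torch.__version__) after `import torch`
print(meets_requirement("1.7.0"))  # True
```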
To prepare the datasets:

```
mkdir data
cd data
```
Run with the following (available datasets: "example", "youtube", "amazon"):

```
python src/main.py --input data/example
```
To run on the "twitter" dataset, use:

```
python src/main.py --input data/twitter --eval-type 1 --gpu 0
```
For a large dataset, use the sparse version to avoid CUDA out-of-memory errors in the backward pass:

```
python src/main_sparse.py --input data/example --gpu 0
```
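The sparse variant presumably stores the graph in a sparse format (e.g. COO index/value pairs) rather than as a dense matrix. The toy sketch below (plain Python, not the repo's actual code) illustrates why that saves memory on a sparse graph: storage grows with the number of edges instead of n².

```python
# Toy comparison of dense vs. COO (coordinate-format) sparse storage.
# Hypothetical illustration, not taken from src/main_sparse.py.

def dense_entries(n):
    # A dense n x n adjacency matrix stores every entry, zero or not.
    return n * n

def coo_entries(edges):
    # COO stores only a (row, col, value) triple per nonzero entry.
    return 3 * len(edges)

n = 10_000
edges = [(i, (i + 1) % n) for i in range(n)]  # sparse ring graph: n edges
print(dense_entries(n))    # 100000000 stored entries
print(coo_entries(edges))  # 30000 stored entries
```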
If you have multiple GPUs, you can also accelerate training with DistributedDataParallel:

```
python src/main_sparse_multi_gpus.py --input data/example --gpu 0,1
```
Note that DistributedDataParallel consumes more CUDA memory and incurs a small performance loss.
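Conceptually, each DistributedDataParallel replica computes gradients on its own data shard, then the gradients are all-reduced (averaged) so every replica applies the identical update. A pure-Python toy of that averaging step, not the repo's training code:

```python
# Toy sketch of the gradient averaging DistributedDataParallel performs
# each step via all-reduce. Hypothetical illustration, no torch required.

def all_reduce_mean(per_replica_grads):
    """Average gradients elementwise across replicas."""
    n = len(per_replica_grads)
    return [sum(g) / n for g in zip(*per_replica_grads)]

# Two replicas, each holding gradients for two parameters:
grads = [[1.0, 4.0], [3.0, 2.0]]
print(all_reduce_mean(grads))  # [2.0, 3.0]
```

The extra memory cost noted above comes from each replica holding its own copy of the model, gradients, and communication buckets.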
All results match the official code with the same hyperparameter values, including the "twitter" dataset (auc, pr, and f1 are 76.29, 76.17, and 69.34, respectively).
| | auc | pr | f1 |
|---|---|---|---|
| amazon | 96.88 | 96.31 | 92.12 |
| youtube | 82.29 | 80.35 | 74.63 |
| | 72.40 | 74.40 | 65.89 |
| example | 94.65 | 94.57 | 89.99 |