# Graph Convolutional Matrix Completion (GC-MC)
Paper link: https://arxiv.org/abs/1706.02263
Author's code: https://github.com/riannevdberg/gc-mc
This implementation does not support side-channel features or mini-epoching, and thus achieves slightly worse performance when node features are used.
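Many of the commands below pass `--use_one_hot_fea`, which in the featureless setting uses a one-hot encoding of each node ID as its input feature. A minimal sketch of what that means; the helper name is illustrative, not from the scripts:

```python
import numpy as np

def one_hot_features(num_nodes):
    """Identity matrix: row i is the one-hot feature of node i."""
    return np.eye(num_nodes, dtype=np.float32)

feats = one_hot_features(4)
print(feats[2])  # → [0. 0. 1. 0.]
```

In practice the identity matrix is never materialized for large graphs; multiplying it by a weight matrix is equivalent to a learnable per-node embedding lookup.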
Credit: Jiani Zhang (@jennyzhang0215)
Download the spaCy English model required by the data preprocessing:

```bash
python -m spacy download en_core_web_sm
```
Supported datasets: ml-100k, ml-1m, ml-10m
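All results below report test-set RMSE between predicted and true ratings. For reference, the metric itself (a standalone sketch, not code from this repo):

```python
import numpy as np

def rmse(pred, true):
    """Root-mean-square error between predicted and true ratings."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))

# Predictions each off by exactly 1 star give RMSE = 1.0
print(rmse([4.0, 2.0, 5.0], [3.0, 3.0, 4.0]))  # → 1.0
```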
## Train with full-graph

ml-100k, no feature:

```bash
python3 train.py --data_name=ml-100k --use_one_hot_fea --gcn_agg_accum=stack
```

Results: RMSE=0.9088 (0.910 reported)

ml-100k, with feature:

```bash
python3 train.py --data_name=ml-100k --gcn_agg_accum=stack
```

Results: RMSE=0.9448 (0.905 reported)

ml-1m, no feature:

```bash
python3 train.py --data_name=ml-1m --gcn_agg_accum=sum --use_one_hot_fea
```

Results: RMSE=0.8377 (0.832 reported)

ml-10m, no feature:

```bash
python3 train.py --data_name=ml-10m --gcn_agg_accum=stack --gcn_dropout=0.3 \
    --train_lr=0.001 --train_min_lr=0.0001 --train_max_iter=15000 \
    --use_one_hot_fea --gen_r_num_basis_func=4
```

Results: RMSE=0.7800 (0.777 reported)
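The `--gcn_agg_accum` flag picks between the paper's two ways of accumulating messages from the per-rating-type convolutions: `sum` adds them element-wise, while `stack` concatenates them (so the hidden size grows with the number of rating levels). A small illustration with made-up message vectors:

```python
import numpy as np

# Made-up per-rating-type messages for one node: rating levels 1-5,
# each contributing a hidden vector of size 4.
msgs = {r: np.full(4, float(r)) for r in range(1, 6)}

def accumulate(msgs, mode):
    vecs = [msgs[r] for r in sorted(msgs)]
    if mode == "sum":    # element-wise sum: hidden size stays 4
        return np.sum(vecs, axis=0)
    if mode == "stack":  # concatenation: hidden size becomes 5 * 4 = 20
        return np.concatenate(vecs)
    raise ValueError(f"unknown accumulation mode: {mode}")

print(accumulate(msgs, "sum").shape)    # → (4,)
print(accumulate(msgs, "stack").shape)  # → (20,)
```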
Testbed: EC2 p3.2xlarge instance (Amazon Linux 2)
## Train with sampling on a single GPU

ml-100k, no feature:

```bash
python3 train_sampling.py --data_name=ml-100k \
    --use_one_hot_fea \
    --gcn_agg_accum=stack \
    --gpu 0
```
ml-100k, no feature, with `mix_cpu_gpu`. For a `mix_cpu_gpu` run with no feature, W_r is stored on the CPU by default rather than on the GPU:

```bash
python3 train_sampling.py --data_name=ml-100k \
    --use_one_hot_fea \
    --gcn_agg_accum=stack \
    --mix_cpu_gpu \
    --gpu 0
```

Results: RMSE=0.9380
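The idea behind `mix_cpu_gpu` in the featureless case: the large input weight W_r (effectively an embedding table with one row per node) stays in host memory, and only the rows touched by the current minibatch are copied to the device. A framework-agnostic sketch, with a NumPy copy standing in for the CPU-to-GPU transfer:

```python
import numpy as np

num_nodes, hidden = 10_000, 64
# W_r lives in host (CPU) memory; on a GPU-only run it would live on the device.
W_r = np.zeros((num_nodes, hidden), dtype=np.float32)
W_r[42, 0] = 1.0  # mark one row so the copy below is checkable

def rows_for_batch(table, node_ids):
    """Copy only the rows needed by the current minibatch.

    In the real script this would be a host-to-device transfer
    (e.g. tensor.to('cuda')); here a plain NumPy copy stands in for it.
    """
    return table[node_ids].copy()

batch = np.array([3, 17, 42])
device_rows = rows_for_batch(W_r, batch)
print(device_rows.shape)  # → (3, 64)
print(device_rows[2, 0])  # → 1.0
```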
ml-100k, with feature:

```bash
python3 train_sampling.py --data_name=ml-100k \
    --gcn_agg_accum=stack \
    --train_max_epoch 90 \
    --gpu 0
```

Results: RMSE=0.9574
ml-1m, no feature:

```bash
python3 train_sampling.py --data_name=ml-1m \
    --gcn_agg_accum=sum \
    --use_one_hot_fea \
    --train_max_epoch 160 \
    --gpu 0
```

ml-1m, no feature, with `mix_cpu_gpu`:

```bash
python3 train_sampling.py --data_name=ml-1m \
    --gcn_agg_accum=sum \
    --use_one_hot_fea \
    --train_max_epoch 60 \
    --mix_cpu_gpu \
    --gpu 0
```

Results: RMSE=0.8632
ml-10m, no feature:

```bash
python3 train_sampling.py --data_name=ml-10m \
    --gcn_agg_accum=stack \
    --gcn_dropout=0.3 \
    --train_lr=0.001 \
    --train_min_lr=0.0001 \
    --train_max_epoch=60 \
    --use_one_hot_fea \
    --gen_r_num_basis_func=4 \
    --gpu 0
```

ml-10m, no feature, with `mix_cpu_gpu`:

```bash
python3 train_sampling.py --data_name=ml-10m \
    --gcn_agg_accum=stack \
    --gcn_dropout=0.3 \
    --train_lr=0.001 \
    --train_min_lr=0.0001 \
    --train_max_epoch=60 \
    --use_one_hot_fea \
    --gen_r_num_basis_func=4 \
    --mix_cpu_gpu \
    --gpu 0
```

Results: RMSE=0.8050
Testbed: EC2 p3.2xlarge instance
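The ml-10m commands set both `--train_lr` and `--train_min_lr`. A common pattern behind such a pair of flags (assumed here, not read from the scripts) is multiplicative decay with a floor:

```python
def decayed_lr(lr, min_lr, decay=0.5):
    """Shrink the learning rate by `decay`, but never below `min_lr`."""
    return max(lr * decay, min_lr)

lr = 0.001
for _ in range(5):  # e.g. decay whenever validation RMSE plateaus
    lr = decayed_lr(lr, 0.0001)
print(lr)  # → 0.0001 (clamped at the floor)
```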
## Train with sampling on multiple GPUs

ml-100k, no feature:

```bash
python train_sampling.py --data_name=ml-100k \
    --gcn_agg_accum=stack \
    --train_max_epoch 30 \
    --train_lr 0.02 \
    --use_one_hot_fea \
    --gpu 0,1,2,3,4,5,6,7
```

ml-100k, no feature, with `mix_cpu_gpu`:

```bash
python train_sampling.py --data_name=ml-100k \
    --gcn_agg_accum=stack \
    --train_max_epoch 30 \
    --train_lr 0.02 \
    --use_one_hot_fea \
    --mix_cpu_gpu \
    --gpu 0,1,2,3,4,5,6,7
```

Results: RMSE=0.9397
ml-100k, with feature:

```bash
python train_sampling.py --data_name=ml-100k \
    --gcn_agg_accum=stack \
    --train_max_epoch 30 \
    --gpu 0,1,2,3,4,5,6,7
```

Results: RMSE=0.9655
ml-1m, no feature:

```bash
python train_sampling.py --data_name=ml-1m \
    --gcn_agg_accum=sum \
    --train_max_epoch 40 \
    --use_one_hot_fea \
    --gpu 0,1,2,3,4,5,6,7
```

ml-1m, no feature, with `mix_cpu_gpu`:

```bash
python train_sampling.py --data_name=ml-1m \
    --gcn_agg_accum=sum \
    --train_max_epoch 40 \
    --use_one_hot_fea \
    --mix_cpu_gpu \
    --gpu 0,1,2,3,4,5,6,7
```

Results: RMSE=0.8621
ml-10m, no feature:

```bash
python train_sampling.py --data_name=ml-10m \
    --gcn_agg_accum=stack \
    --gcn_dropout=0.3 \
    --train_lr=0.001 \
    --train_min_lr=0.0001 \
    --train_max_epoch=30 \
    --use_one_hot_fea \
    --gen_r_num_basis_func=4 \
    --gpu 0,1,2,3,4,5,6,7
```

ml-10m, no feature, with `mix_cpu_gpu`:

```bash
python train_sampling.py --data_name=ml-10m \
    --gcn_agg_accum=stack \
    --gcn_dropout=0.3 \
    --train_lr=0.001 \
    --train_min_lr=0.0001 \
    --train_max_epoch=30 \
    --use_one_hot_fea \
    --gen_r_num_basis_func=4 \
    --mix_cpu_gpu \
    --gpu 0,1,2,3,4,5,6,7
```

Results: RMSE=0.8084
Testbed: EC2 p3.16xlarge instance
## Train with sampling on CPU

ml-100k, no feature:

```bash
python3 train_sampling.py --data_name=ml-100k \
    --use_one_hot_fea \
    --gcn_agg_accum=stack \
    --gpu -1
```
Testbed: EC2 r5.xlarge instance
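Throughout, `--gpu` takes a comma-separated list of device IDs, with `-1` selecting CPU-only training (hence the r5 CPU instance above). A hypothetical parser (the helper name is illustrative, not from the scripts) showing the convention:

```python
def parse_gpu_flag(value):
    """Turn a value like '0,1,2' into a list of device ids; '-1' means CPU only."""
    ids = [int(v) for v in value.split(",")]
    if ids == [-1]:
        return []  # empty list: run on CPU
    return ids

print(parse_gpu_flag("0,1,2,3,4,5,6,7"))  # → [0, 1, 2, 3, 4, 5, 6, 7]
print(parse_gpu_flag("-1"))               # → []
```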