This repository contains the source code for the paper "Leveraging Capsule Routing to Associate Knowledge with Medical Literature Hierarchically".
The program takes a piece of medical literature, the RCor text fragment, the KImp text fragment, and the knowledge as input, and predicts a label indicating the degree of relevance between the medical literature and the knowledge.
More details about the underlying model can be found in the paper.
Environment: Ubuntu 16.04, CUDA 10.2, cuDNN 8, 1 × NVIDIA Tesla V100 GPU
Dependencies: Python > 3.5, TensorFlow > 1.10.0, pdb, numpy, tqdm, codecs
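pdb and codecs ship with the Python standard library; the third-party packages can be installed with pip. The version pins below are illustrative, chosen to match the ranges stated above:

```shell
# TensorFlow 1.x line as required above; numpy and tqdm for data handling.
pip install "tensorflow>=1.10,<2.0" numpy tqdm
```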
HiCapsRKL
├── SampleData
│ ├── train.tsv
│ ├── relevance_prediction_test_data
│ │ ├── test.tsv
│ ├── medical_literature_retrieval_test_data
│ │ ├── test.tsv
├── InitModel
│ ├── modellink.txt
├── __init__.py
├── match_utils.py
├── modeling.py
├── optimization.py
├── tokenization.py
├── train_HiCapsRKL.py
├── f1.py
├── ranking_metrics.py
└── README.md
The training data (train.tsv), the relevance prediction test data (relevance_prediction_test_data/test.tsv), and the medical literature retrieval test data (medical_literature_retrieval_test_data/test.tsv) are randomly sampled from the corresponding full sets; they can be used to run the training and testing processes for this code.
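The column layout of the .tsv files is not documented here; as a sketch, assuming each row carries the literature text, the RCor fragment, the KImp fragment, the knowledge text, and a relevance label (these field names and the five-column layout are assumptions, not the files' confirmed schema), the data can be inspected with Python's csv module:

```python
import csv
import io

# Hypothetical five-column row: literature, RCor, KImp, knowledge, label.
# The real train.tsv may differ -- inspect SampleData before relying on this.
sample = "some literature text\tRCor fragment\tKImp fragment\tknowledge text\t1\n"

reader = csv.reader(io.StringIO(sample), delimiter="\t")
for row in reader:
    literature, rcor, kimp, knowledge, label = row
    print(label)  # relevance label for this literature-knowledge pair
```

For the real files, replace the in-memory string with `open("SampleData/train.tsv", encoding="utf-8")`.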
The InitModel directory contains the BERT-Base, Chinese
pre-trained model as the initial checkpoint for training HiCapsRKL. If needed, these parameters can be downloaded from https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip .
* python train_HiCapsRKL.py --task_name=medrkg --do_train=true --data_dir=SampleData \
--vocab_file=InitModel/vocab.txt --bert_config_file=InitModel/bert_config.json \
--init_checkpoint=InitModel/bert_model.ckpt --max_seq_length=256 --train_batch_size=8 \
--learning_rate=2e-5 --num_train_epochs=10.0 --output_dir=output_dir/
* python train_HiCapsRKL.py --task_name=medrkg --do_predict=true \
--data_dir=SampleData/relevance_prediction_test_data --vocab_file=InitModel/vocab.txt \
--bert_config_file=InitModel/bert_config.json --init_checkpoint=output_dir/***.ckpt \
--output_dir=output_dir/
* python f1.py output_dir
* python train_HiCapsRKL.py --task_name=medrkg --do_predict=true \
--data_dir=SampleData/medical_literature_retrieval_test_data \
--vocab_file=InitModel/vocab.txt --bert_config_file=InitModel/bert_config.json \
--init_checkpoint=output_dir/***.ckpt --output_dir=output_dir/
* python ranking_metrics.py output_dir
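The exact input formats of f1.py and ranking_metrics.py are not shown here; as a rough sketch of the two evaluations they perform (the function names and data layouts below are illustrative, not the scripts' actual interfaces), binary F1 for relevance prediction and mean reciprocal rank (MRR) for medical literature retrieval can be computed as:

```python
def f1_score(gold, pred, positive=1):
    """Binary F1 over gold and predicted relevance labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def mean_reciprocal_rank(ranked_lists):
    """MRR: mean of 1/rank of the first relevant item per query (1-based ranks).

    ranked_lists holds one list of 0/1 relevance flags per query, ordered by
    the model's score; a query with no relevant item contributes 0.
    """
    total = 0.0
    for flags in ranked_lists:
        for rank, rel in enumerate(flags, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_lists) if ranked_lists else 0.0


print(f1_score([1, 0, 1, 1], [1, 1, 1, 0]))       # 2 TP, 1 FP, 1 FN -> 0.666...
print(mean_reciprocal_rank([[0, 1, 0], [1, 0]]))  # (1/2 + 1) / 2 = 0.75
```

The bundled scripts read predictions from output_dir; this sketch only illustrates the metrics themselves.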
If you use any of the resources listed here, please cite:
@inproceedings{gdls-2021-HiCapsRKL,
    title = "Leveraging Capsule Routing to Associate Knowledge with Medical Literature Hierarchically",
    author = "Liu, Xin and Chen, Qingcai and Chen, Junying and Zhou, Wenxiu and Liu, Tingyu and Yang, Xinlan and Peng, Weihua",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    publisher = "Association for Computational Linguistics",
}