UNet for 2D image segmentation. This implementation follows the original paper, "U-Net: Convolutional Networks for Biomedical Image Segmentation". U-Net achieved many of the best results in the 2015 ISBI cell tracking challenge. The paper proposes a network model for medical image segmentation, together with a data augmentation method that uses annotated data effectively to address the shortage of annotated data in the medical field. A U-shaped network structure is used to extract both context and location information.
UNet++ is a neural architecture for semantic and instance segmentation with re-designed skip pathways and deep supervision.
U-Net Paper: Olaf Ronneberger, Philipp Fischer, Thomas Brox. "U-Net: Convolutional Networks for Biomedical Image Segmentation." MICCAI 2015.
UNet++ Paper: Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh and J. Liang, "UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation," in IEEE Transactions on Medical Imaging, vol. 39, no. 6, pp. 1856-1867, June 2020, doi: 10.1109/TMI.2019.2959609.
Specifically, UNet proposes the U-shaped network structure, which extracts and fuses high-level features to obtain both context information and spatial location information. The U-shaped structure consists of an encoder and a decoder. The encoder iterates a block of two 3x3 convolutions followed by a 2x2 max pooling; the number of channels is doubled after each down-sampling step. The decoder iterates a block of one 2x2 transposed convolution, a concatenation with the corresponding encoder feature map, and two 3x3 convolutions; the final output is produced by a 1x1 convolution.
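To make the structure concrete, here is a minimal MindSpore sketch of one encoder step and one decoder step. It is illustrative only: the repository's actual model lives in src/unet_medical/unet_model.py and src/unet_medical/unet_parts.py and differs in detail (e.g. padding mode and normalization).

```python
# Minimal sketch of U-Net encoder/decoder building blocks in MindSpore.
# Illustrative only; the actual model is in src/unet_medical/.
import mindspore.nn as nn
import mindspore.ops as ops

class DoubleConv(nn.Cell):
    """Two 3x3 convolutions, each followed by ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.SequentialCell([
            nn.Conv2d(in_ch, out_ch, 3), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3), nn.ReLU(),
        ])

    def construct(self, x):
        return self.block(x)

class Down(nn.Cell):
    """2x2 max pooling followed by DoubleConv; channels double."""
    def __init__(self, ch):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv = DoubleConv(ch, ch * 2)

    def construct(self, x):
        return self.conv(self.pool(x))

class Up(nn.Cell):
    """2x2 transposed conv, concat with the skip feature, then DoubleConv."""
    def __init__(self, ch):
        super().__init__()
        self.up = nn.Conv2dTranspose(ch, ch // 2, kernel_size=2, stride=2)
        self.concat = ops.Concat(axis=1)  # concatenate along the channel axis
        self.conv = DoubleConv(ch, ch // 2)

    def construct(self, x, skip):
        x = self.up(x)
        return self.conv(self.concat((skip, x)))
```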
Dataset used: ISBI Challenge
We also support a multi-class dataset format, which derives image paths and mask paths from a tree of directories. Each sample is one folder containing an image file named "image.png" and a mask file named "mask.png". The directory structure is as follows:
.
└─dataset
└─0001
├─image.png
└─mask.png
└─0002
├─image.png
└─mask.png
...
└─xxxx
├─image.png
└─mask.png
When split in the config is set within (0, 1), all images are split into a training set and a validation set by the split value; the default split is 0.8.
If split=1.0, you should split the training set and validation set by directories, with the following structure:
.
└─dataset
└─train
└─0001
├─image.png
└─mask.png
...
└─xxxx
├─image.png
└─mask.png
└─val
└─0001
├─image.png
└─mask.png
...
└─xxxx
├─image.png
└─mask.png
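As a reference for this layout, here is a minimal, hypothetical sketch of how the tree can be scanned into (image, mask) pairs and split by the split ratio; the repository's actual loader is src/data_loader.py and may differ.

```python
# Minimal sketch: collect (image, mask) pairs from the multi-class
# dataset tree and split them by a ratio. Hypothetical helper; the
# actual loading logic lives in src/data_loader.py.
import os

def collect_pairs(root):
    """Return a sorted list of (image_path, mask_path) pairs."""
    pairs = []
    for name in sorted(os.listdir(root)):
        sample_dir = os.path.join(root, name)
        image = os.path.join(sample_dir, "image.png")
        mask = os.path.join(sample_dir, "mask.png")
        if os.path.isfile(image) and os.path.isfile(mask):
            pairs.append((image, mask))
    return pairs

def split_pairs(pairs, split=0.8):
    """Split pairs into train/val by the `split` ratio (default 0.8)."""
    n_train = int(len(pairs) * split)
    return pairs[:n_train], pairs[n_train:]

train_pairs, val_pairs = split_pairs(collect_pairs("dataset"), split=0.8)
```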
We provide a script to convert the COCO dataset and the Cell_Nuclei dataset used in the UNet++ original paper to the multi-class dataset format. Select the unet_*_cell_config.yaml or unet_*_coco_config.yaml file under unet according to the dataset, modify the parameters as needed, and run the script to convert to the multi-class dataset format:
python preprocess_dataset.py --config_path path/unet/unet_nested_cell_config.yaml --data_path /data/save_data_path
After installing MindSpore via the official website, you can start training and evaluation as follows:
Select the yaml config file in unet/. We support UNet and UNet++, and we provide some parameter configurations (unet/*.yaml) for a quick start. You can set 'model' to 'unet_nested' or 'unet_simple' to select which net to use. We support two datasets, ISBI and Cell_nuclei; you can set 'dataset' to 'Cell_nuclei' to use the Cell_nuclei dataset; the default is ISBI.
# run training example
python train.py --data_path=/path/to/data/ --config_path=/path/to/yaml > train.log 2>&1 &
OR
bash scripts/run_standalone_train.sh [DATASET] [CONFIG_PATH]
# run distributed training example
bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [CONFIG_PATH]
# run evaluation example
python eval.py --data_path=/path/to/data/ --checkpoint_file_path=/path/to/checkpoint/ --config_path=/path/to/yaml > eval.log 2>&1 &
OR
bash scripts/run_standalone_eval.sh [DATASET] [CHECKPOINT] [CONFIG_PATH]
# run training example
python train.py --data_path=/path/to/data/ --config_path=/path/to/yaml --device_target=GPU > train.log 2>&1 &
OR
bash scripts/run_standalone_train_gpu.sh [DATASET] [CONFIG_PATH] [DEVICE_ID](optional)
# run distributed training example
bash scripts/run_distribute_train_gpu.sh [RANKSIZE] [DATASET] [CONFIG_PATH] [CUDA_VISIBLE_DEVICES(0,1,2,3,4,5,6,7)](optional)
# run evaluation example
python eval.py --data_path=/path/to/data/ --checkpoint_file_path=/path/to/checkpoint/ --config_path=/path/to/yaml > eval.log 2>&1 &
OR
bash scripts/run_standalone_eval_gpu.sh [DATASET] [CHECKPOINT] [CONFIG_PATH] [DEVICE_ID](optional)
# run export
python export.py --config_path=[CONFIG_PATH] --checkpoint_file_path=[model_ckpt_path] --file_name=[air_model_name] --file_format=MINDIR --device_target=GPU
Build the docker image (change the version to the one you actually use):
# build docker
docker build -t unet:20.1.0 . --build-arg FROM_IMAGE_NAME=ascend-mindspore-arm:20.1.0
Create a container layer over the created image and start it
# start docker
bash scripts/docker_start.sh unet:20.1.0 [DATA_DIR] [MODEL_DIR]
Then you can run everything just like on Ascend.
If you want to run in ModelArts, please check the official documentation of ModelArts; you can start training and evaluation as follows:
# run distributed training on modelarts example
# (1) First, Perform a or b.
# a. Set "enable_modelarts=True" on yaml file.
# Set other parameters on yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add other parameters on the website UI interface.
# (2) Set the config directory to "config_path=/The path of config in S3/"
# (3) Set the code directory to "/path/unet" on the website UI interface.
# (4) Set the startup file to "train.py" on the website UI interface.
# (5) Set the "Dataset path" and "Output file path" and "Job log path" to your path on the website UI interface.
# (6) Create your job.
# run evaluation on modelarts example
# (1) Copy or upload your trained model to S3 bucket.
# (2) Perform a or b.
# a. Set "enable_modelarts=True" on yaml file.
# Set "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on yaml file.
# Set "checkpoint_url=/The path of checkpoint in S3/" on yaml file.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
# Add "checkpoint_url=/The path of checkpoint in S3/" on the website UI interface.
# (3) Set the config directory to "config_path=/The path of config in S3/"
# (4) Set the code directory to "/path/unet" on the website UI interface.
# (5) Set the startup file to "eval.py" on the website UI interface.
# (6) Set the "Dataset path" and "Output file path" and "Job log path" to your path on the website UI interface.
# (7) Create your job.
├── model_zoo
├── README.md // descriptions about all the models
├── unet
├── README.md // descriptions about Unet
├── README_CN.md // chinese descriptions about Unet
├── ascend310_infer // code for inference on Ascend 310
├── Dockerfile
├── scripts
│ ├──docker_start.sh // shell script for quick docker start
│ ├──run_distribute_train.sh // shell script for distributed training on Ascend
│ ├──run_infer_310.sh // shell script for inference on Ascend 310
│ ├──run_standalone_train.sh // shell script for standalone training on Ascend
│ ├──run_standalone_eval.sh // shell script for evaluation on Ascend
│ ├──run_standalone_train_gpu.sh // shell script for training on GPU
│ ├──run_standalone_eval_gpu.sh // shell script for evaluation on GPU
│ ├──run_distribute_train_gpu.sh // shell script for distributed on GPU
│ ├──run_eval_onnx.sh // shell script for evaluation on ONNX
├── src
│ ├──__init__.py
│ ├──data_loader.py // creating dataset
│ ├──loss.py // loss
│ ├──eval_callback.py // evaluation callback while training
│ ├──utils.py // General components (callback function)
│ ├──unet_medical // Unet medical architecture
├──__init__.py // init file
├──unet_model.py // unet model
└──unet_parts.py // unet part
│ ├──unet_nested // Unet++ architecture
├──__init__.py // init file
├──unet_model.py // unet model
└──unet_parts.py // unet part
│ ├──model_utils
├──__init__.py
├── config.py // parameter configuration
├── device_adapter.py // device adapter
├── local_adapter.py // local adapter
└── moxing_adapter.py // moxing adapter
├── unet_medical_config.yaml // parameter configuration
├── unet_medical_gpu_config.yaml // parameter configuration
├── unet_nested_cell_config.yaml // parameter configuration
├── unet_nested_coco_config.yaml // parameter configuration
├── unet_nested_config.yaml // parameter configuration
├── unet_simple_config.yaml // parameter configuration
├── unet_simple_coco_config.yaml // parameter configuration
├── default_config.yaml // parameter configuration
├── train.py // training script
├── eval.py // evaluation script
├── infer_unet_onnx.py // evaluation script on ONNX
├── export.py // export script
├── mindspore_hub_conf.py // hub config file
├── postprocess.py // unet 310 infer postprocess.
├── preprocess.py // unet 310 infer preprocess dataset
├── preprocess_dataset.py // script to convert datasets to the multi-class format
└── requirements.txt // requirements for third-party packages
Parameters for both training and evaluation can be set in *.yaml
config for Unet, ISBI dataset
'name': 'Unet', # model name
'lr': 0.0001, # learning rate
'epochs': 400, # total training epochs when run 1p
'repeat': 400, # repeat times per epoch
'distribute_epochs': 1600, # total training epochs when run 8p
'batchsize': 16, # training batch size
'cross_valid_ind': 1, # cross valid ind
'num_classes': 2, # the number of classes in the dataset
'num_channels': 1, # the number of channels
'keep_checkpoint_max': 10, # only keep the last keep_checkpoint_max checkpoint
'weight_decay': 0.0005, # weight decay value
'loss_scale': 1024.0, # loss scale
'FixedLossScaleManager': 1024.0, # fixed loss scale
'is_save_on_master': 1, # save checkpoint on master or all rank
'rank': 0, # local rank of distributed(default: 0)
'resume': False, # whether to train from a pretrained model
'resume_ckpt': './', # pretrained model path
'transfer_training': False # whether to do transfer training
'filter_weight': ["final.weight"] # weight names to filter out during transfer training
'run_eval': False # run evaluation while training
'show_eval': False # draw the evaluation result
'eval_activate': softmax # Select output processing method, should be softmax or argmax
'save_best_ckpt': True # Save best checkpoint when run_eval is True
'eval_start_epoch': 0 # Evaluation start epoch when run_eval is True
'eval_interval': 1 # evaluation interval when run_eval is True
'train_augment': True # whether to apply data augmentation during training
config for Unet++, cell nuclei dataset
'model': 'unet_nested', # model name
'dataset': 'Cell_nuclei', # dataset name
'img_size': [96, 96], # image size
'lr': 3e-4, # learning rate
'epochs': 200, # total training epochs when run 1p
'repeat': 10, # repeat times per epoch
'distribute_epochs': 1600, # total training epochs when run 8p
'batchsize': 16, # batch size
'num_classes': 2, # the number of classes in the dataset
'num_channels': 3, # the number of input image channels
'keep_checkpoint_max': 10, # only keep the last keep_checkpoint_max checkpoint
'weight_decay': 0.0005, # weight decay value
'loss_scale': 1024.0, # loss scale
'FixedLossScaleManager': 1024.0, # fixed loss scale
'use_bn': True, # whether to use BN
'use_ds': True, # whether to use deep supervision
'use_deconv': True, # whether to use Conv2dTranspose
'resume': False, # whether to train from a pretrained model
'resume_ckpt': './', # pretrained model path
'transfer_training': False # whether to do transfer training
'filter_weight': ['final1.weight', 'final2.weight', 'final3.weight', 'final4.weight'] # weight names to filter out during transfer training
'run_eval': False # run evaluation while training
'show_eval': False # draw the evaluation result
'eval_activate': softmax # Select output processing method, should be softmax or argmax
'save_best_ckpt': True # Save best checkpoint when run_eval is True
'eval_start_epoch': 0 # Evaluation start epoch when run_eval is True
'eval_interval': 1 # evaluation interval when run_eval is True
'train_augment': False # whether to apply data augmentation during training; augmentation may lead to accuracy fluctuation
Note: the total number of steps per epoch is floor(epochs / repeat). Because the UNet datasets are usually small, we repeat the dataset to avoid dropping too many images when batching.
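To see why repeat matters, here is an illustrative MindSpore sketch of repeating a small dataset before batching; the real pipeline is built in src/data_loader.py, and the toy sample counts below are assumptions.

```python
# Illustrative sketch: repeating a small dataset before batching so that
# drop_remainder discards proportionally fewer images per epoch.
# The real pipeline is constructed in src/data_loader.py.
import numpy as np
import mindspore.dataset as ds

# Toy stand-in for a small segmentation dataset (30 samples).
my_samples = [(np.zeros((1, 96, 96), np.float32),
               np.zeros((96, 96), np.int32)) for _ in range(30)]

repeat, batchsize = 10, 16
dataset = ds.GeneratorDataset(my_samples, column_names=["image", "mask"], shuffle=False)
dataset = dataset.repeat(repeat)                         # 30 -> 300 samples
dataset = dataset.batch(batchsize, drop_remainder=True)  # 18 full batches, 12 of 300 dropped
# Without repeat: only 1 full batch per epoch and 14 of 30 samples dropped.
```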
python train.py --data_path=/path/to/data/ --config_path=/path/to/yaml > train.log 2>&1 &
OR
bash scripts/run_standalone_train.sh [DATASET] [CONFIG_PATH]
The python command above will run in the background; you can view the results through the file train.log.
After training, you'll get some checkpoint files under the script folder by default. The loss values will be reported as follows:
# grep "loss is " train.log
step: 1, loss is 0.7011719, fps is 0.25025035060906264
step: 2, loss is 0.69433594, fps is 56.77693756377044
step: 3, loss is 0.69189453, fps is 57.3293877244179
step: 4, loss is 0.6894531, fps is 57.840651522059716
step: 5, loss is 0.6850586, fps is 57.89903776054361
step: 6, loss is 0.6777344, fps is 58.08073627299014
...
step: 597, loss is 0.19030762, fps is 58.28088370287449
step: 598, loss is 0.19958496, fps is 57.95493929352674
step: 599, loss is 0.18371582, fps is 58.04039977720966
step: 600, loss is 0.22070312, fps is 56.99692546024671
The model checkpoint will be saved in the current directory.
python train.py --data_path=/path/to/data/ --config_path=/path/to/config/ --output ./output --device_target GPU > train.log 2>&1 &
OR
bash scripts/run_standalone_train_gpu.sh [DATASET] [CONFIG_PATH] [DEVICE_ID](optional)
The python command above will run in the background, you can view the results through the file train.log. The model checkpoint will be saved in the current directory.
bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [CONFIG_PATH]
The above shell script will run distributed training in the background. You can view the results through the file LOG[X]/log.log. The loss values will be reported as follows:
# grep "loss is" LOG0/log.log
step: 1, loss is 0.70524895, fps is 0.15914689861221412
step: 2, loss is 0.6925452, fps is 56.43668656967454
...
step: 299, loss is 0.20551169, fps is 58.4039329983891
step: 300, loss is 0.18949677, fps is 57.63118508760329
bash scripts/run_distribute_train_gpu.sh [RANKSIZE] [DATASET] [CONFIG_PATH]
The above shell script will run distributed training in the background. You can view the results through the file train.log.
You can add run_eval to the start shell script and set it to True if you want to evaluate while training. You can also set the options eval_start_epoch, eval_interval, and eval_metrics when training on the COCO dataset.
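For intuition, here is a hedged sketch of what an evaluation-while-training callback can look like; the repository's actual implementation is src/eval_callback.py, and the names below are illustrative.

```python
# Sketch of an eval-while-training callback; the repository's real
# version is src/eval_callback.py. Names here are illustrative.
from mindspore.train.callback import Callback

class EvalCallBack(Callback):
    def __init__(self, model, eval_dataset, eval_start_epoch=0, eval_interval=1):
        super().__init__()
        self.model = model
        self.eval_dataset = eval_dataset
        self.eval_start_epoch = eval_start_epoch
        self.eval_interval = eval_interval
        self.best_metric = 0.0

    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        cur_epoch = cb_params.cur_epoch_num
        if cur_epoch >= self.eval_start_epoch and \
                (cur_epoch - self.eval_start_epoch) % self.eval_interval == 0:
            # Run evaluation and track the best metric seen so far.
            metrics = self.model.eval(self.eval_dataset, dataset_sink_mode=False)
            print(f"epoch {cur_epoch}: {metrics}")
            self.best_metric = max(self.best_metric, max(metrics.values()))
```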
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path, e.g., "username/unet/ckpt_unet_medical_adam-48_600.ckpt".
python eval.py --data_path=/path/to/data/ --checkpoint_file_path=/path/to/checkpoint/ --config_path=/path/to/yaml > eval.log 2>&1 &
OR
bash scripts/run_standalone_eval.sh [DATASET] [CHECKPOINT] [CONFIG_PATH]
The above python command will run in the background. You can view the results through the file "eval.log". The accuracy of the test dataset will be as follows:
# grep "Cross valid dice coeff is:" eval.log
============== Cross valid dice coeff is: {'dice_coeff': 0.9111}
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path, e.g., "username/unet/ckpt_unet_medical_adam-2_400.ckpt".
python eval.py --data_path=/path/to/data/ --checkpoint_file_path=/path/to/checkpoint/ --config_path=/path/to/config/ > eval.log 2>&1 &
OR
bash scripts/run_standalone_eval_gpu.sh [DATASET] [CHECKPOINT] [CONFIG_PATH]
The above python command will run in the background. You can view the results through the file "eval.log". The accuracy of the test dataset will be as follows:
# grep "Cross valid dice coeff is:" eval.log
============== Cross valid dice coeff is: {'dice_coeff': 0.9089390969777261}
python export.py --config_path=[CONFIG_PATH] --checkpoint_file_path=[model_ckpt_path] --file_name=[model_name] --file_format=[EXPORT_FORMAT]
The checkpoint_file_path parameter is required, and EXPORT_FORMAT should be in ["AIR", "MINDIR", "ONNX"]. Currently batch_size can only be set to 1.
(If you want to run in ModelArts, please check the official documentation of ModelArts; you can start as follows.)
# Export on ModelArts
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file.
# Set "file_name='./unet'" on default_config.yaml file.
# Set "file_format='MINDIR'" on default_config.yaml file.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
# Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface.
# Add "file_name='./unet'" on the website UI interface.
# Add "file_format='MINDIR'" on the website UI interface.
# Add other parameters on the website UI interface.
# (2) Set the config_path="/path/yaml file" on the website UI interface.
# (3) Set the code directory to "/path/unet" on the website UI interface.
# (4) Set the startup file to "export.py" on the website UI interface.
# (5) Set the "Output file path" and "Job log path" to your path on the website UI interface.
# (6) Create your job.
Before performing inference, the MINDIR file must be exported by the export.py script. We only provide an example of inference using a MINDIR model.
bash run_infer_cpp.sh [NETWORK] [MINDIR_PATH] [NEED_PREPROCESS] [DEVICE_TYPE] [DEVICE_ID]
NETWORK: currently the supported networks are unet and unet++. The corresponding configuration files are unet_simple_config.yaml and unet_nested_cell_config.yaml, respectively.
NEED_PREPROCESS: indicates whether the data needs to be preprocessed; y means yes. If y is selected, the unet dataset is processed into the numpy format, the unet++ validation dataset is separated from the raw dataset for inference, and the values in the yaml configuration file need to be set accordingly; for example, change batch_size to 1 and set the dataset path.
DEVICE_ID: optional; the default value is 0.
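As an illustration of the numpy-format preprocessing, images can be dumped to raw .bin files that the C++ inference program reads back. A hedged sketch follows; the actual logic is in preprocess.py, and the 96x96 RGB shape here is an assumption matching the Cell_nuclei config.

```python
# Hedged sketch of dumping preprocessed images to raw .bin files for the
# C++ inference program. The actual logic is in preprocess.py; the
# 96x96 RGB shape is an assumption matching the Cell_nuclei config.
import os
import numpy as np
from PIL import Image

def dump_bins(image_paths, out_dir, size=(96, 96)):
    os.makedirs(out_dir, exist_ok=True)
    for i, path in enumerate(image_paths):
        img = Image.open(path).convert("RGB").resize(size)
        arr = np.asarray(img, np.float32) / 255.0      # HWC, normalized
        arr = arr.transpose(2, 0, 1)[None]             # NCHW, batch_size=1
        arr.tofile(os.path.join(out_dir, f"{i:04d}.bin"))
```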
The inference result is saved in the current path; you can find the result in the acc.log file.
Cross valid dice coeff is: 0.9054352151297033
Before performing inference, the ONNX file must be exported by the export.py script. We only provide an example of inference using an ONNX model. Only ONNX models with a batch size of 1 are supported for inference accuracy verification, so --batch_size=1 is set in the ONNX export.
python export.py --config_path=[CONFIG_PATH] --checkpoint_file_path=[model_ckpt_path] --file_name=[model_name] --file_format=ONNX --batch_size=1
Command
python infer_unet_onnx.py [CONFIG_PATH] [ONNX_MODEL] [DATASET_PATH] [DEVICE_TARGET]
CONFIG_PATH: the relative path to the YAML config file.
ONNX_MODEL: the relative path to the ONNX file.
DATASET_PATH: the relative path to the dataset.
DEVICE_TARGET: device name, e.g. GPU or Ascend.
Script
bash ./scripts/run_eval_onnx.sh [DATASET_PATH] [ONNX_MODEL] [DEVICE_TARGET] [CONFIG_PATH]
Result on Checkpoint
============== Cross valid dice coeff is: 0.9138285672951554
============== Cross valid IOU is: 0.8414870790389525
Result on ONNX
============== Cross valid dice coeff is: 0.9138280814519938
============== Cross valid IOU is: 0.8414862558573599
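For a quick sanity check outside MindSpore, the exported model can also be run directly with onnxruntime. This is a minimal hedged sketch; infer_unet_onnx.py is the repository's full evaluation script, and the 1x1x572x572 input shape is an assumption matching the ISBI/medical configuration.

```python
# Minimal onnxruntime sanity check for the exported model. The full
# evaluation lives in infer_unet_onnx.py; the input shape below is an
# assumption for the ISBI/medical configuration.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("unet.onnx",
                            providers=["CUDAExecutionProvider",
                                       "CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 1, 572, 572).astype(np.float32)
(logits,) = sess.run(None, {input_name: dummy})
print(logits.shape)  # e.g. (1, 2, 388, 388) for a valid-padding U-Net
```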
Parameters | Ascend | GPU |
---|---|---|
Model Version | U-Net | U-Net |
Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler 2.8 | NV SMX2 V100-32G |
Uploaded Date | 09/15/2020 (month/day/year) | 01/20/2021 (month/day/year) |
MindSpore Version | 1.2.0 | 1.1.0 |
Dataset | ISBI | ISBI |
Training Parameters | 1pc: epoch=400, total steps=600, batch_size=16, lr=0.0001 | 1pc: epoch=400, total steps=800, batch_size=12, lr=0.0001 |
Optimizer | Adam | Adam |
Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
Outputs | probability | probability |
Loss | 0.22070312 | 0.21425568 |
Speed | 1pc: 267 ms/step | 1pc: 423 ms/step |
Total time | 1pc: 2.67 mins | 1pc: 5.64 mins |
Accuracy | IOU 90% | IOU 90% |
Parameters (M) | 93M | 93M |
Checkpoint for Fine tuning | 355.11M (.ckpt file) | 355.11M (.ckpt file) |
Configuration | unet_medical_config.yaml | unet_medical_gpu_config.yaml |
Scripts | unet script | unet script |
Parameters | Ascend | GPU |
---|---|---|
Model Version | U-Net nested (UNet++) | U-Net nested (UNet++) |
Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler 2.8 | NV SMX2 V100-32G |
Uploaded Date | 2021-8-20 | 2021-8-20 |
MindSpore Version | 1.3.0 | 1.3.0 |
Dataset | Cell_nuclei | Cell_nuclei |
Training Parameters | 1pc: epoch=200, total steps=6700, batch_size=16, lr=0.0003; 8pc: epoch=1600, total steps=6560, batch_size=16*8, lr=0.0003 | 1pc: epoch=200, total steps=6700, batch_size=16, lr=0.0003; 8pc: epoch=1600, total steps=6560, batch_size=16*8, lr=0.0003 |
Optimizer | Adam | Adam |
Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
Outputs | probability | probability |
Validation accuracy | cross-valid dice coeff is 0.966, cross-valid IOU is 0.936 | cross-valid dice coeff is 0.976, cross-valid IOU is 0.955 |
Loss | <0.1 | <0.1 |
Speed | 1pc: 150~200 fps | 1pc: 230~280 fps; 8pc: (170~210)*8 fps |
Accuracy | IOU 93% | IOU 92% |
Total time | 1pc: 10.8 min | 1pc: 8 min |
Parameters (M) | 27M | 27M |
Checkpoint for Fine tuning | 103.4M (.ckpt file) | 103.4M (.ckpt file) |
Configuration | unet_nested_cell_config.yaml | unet_nested_cell_config.yaml |
Scripts | unet script | unet script |
Before inference, please refer to MindSpore Inference with C++ Deployment Guide to set environment variables.
If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this Link. Below is a simple example of the steps to follow:
Set the option resume to True in *.yaml, and set resume_ckpt to the path of your checkpoint. e.g.
'resume': True,
'resume_ckpt': 'ckpt_unet_sample_adam_1-1_600.ckpt',
'transfer_training': False,
'filter_weight': ["final.weight"]
Do the same as for resuming training above. In addition, set transfer_training to True. filter_weight lists the weights that will be filtered out for a different dataset; usually the default value of filter_weight does not need to be changed, since the defaults include the weights that depend on the class number. e.g.
'resume': True,
'resume_ckpt': 'ckpt_unet_sample_adam_1-1_600.ckpt',
'transfer_training': True,
'filter_weight': ["final.weight"]
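Here is a hedged sketch of how such filtering can be done when loading the checkpoint; the repository handles this inside its training code, and load_filtered below is an illustrative name.

```python
# Hedged sketch of transfer-training weight filtering: load a checkpoint,
# drop the parameters named in filter_weight (they depend on the class
# number), and load the rest into the network. Illustrative only.
from mindspore import load_checkpoint, load_param_into_net

def load_filtered(net, ckpt_path, filter_weight):
    param_dict = load_checkpoint(ckpt_path)
    for name in list(param_dict.keys()):
        if any(key in name for key in filter_weight):
            del param_dict[name]  # e.g. drop "final.weight"
    load_param_into_net(net, param_dict)

# usage: load_filtered(net, 'ckpt_unet_sample_adam_1-1_600.ckpt', ["final.weight"])
```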
In data_loader.py, we set the seed inside the "_get_val_train_indices" function. We also use a random seed in train.py.
Please check the official homepage.