Model Innovations
1. ReLU is used as the activation function instead of the traditional sigmoid and tanh. ReLU is a non-saturating function; the paper verifies that it outperforms sigmoid in deeper networks and avoids the vanishing-gradient (gradient dispersion) problem that sigmoid suffers from there (see the gradient sketch after this list).
2. The model is trained on multiple GPUs, which speeds up training and allows a larger scale of data to be used.
3. Random dropout is applied: individual neurons are randomly ignored during training, which helps prevent the model from overfitting.
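As a quick illustration of point 1, the snippet below compares the gradient that ReLU and sigmoid pass back for a large input. This is a minimal PyTorch sketch for illustration only (PyTorch is an assumption here and is independent of the code in this repository):

```python
import torch

# ReLU is non-saturating: for positive inputs its gradient is exactly 1,
# while sigmoid's gradient shrinks toward 0 as |x| grows (gradient dispersion).
x = torch.tensor(10.0, requires_grad=True)
torch.relu(x).backward()
print(x.grad)  # tensor(1.) -- gradient passes through at full strength

y = torch.tensor(10.0, requires_grad=True)
torch.sigmoid(y).backward()
print(y.grad)  # ~tensor(4.5396e-05) -- gradient has almost vanished
```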
Model Architecture
AlexNet consists of five convolutional stages (each convolution followed by ReLU, with max pooling after stages 1, 2, and 5) followed by three fully connected layers.
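The sketch below shows this layout in PyTorch for the classic single-stream AlexNet on 3×227×227 inputs. It is a minimal sketch under those assumptions; the actual net.py in this repository may choose different layer sizes or framework:

```python
import torch
import torch.nn as nn

class AlexNet(nn.Module):
    """Classic AlexNet layout: 5 conv stages + 3 fully connected layers."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Five convolution stages; max pooling follows stages 1, 2, and 5.
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Three fully connected layers; dropout (innovation 3) fights overfitting.
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # 3x227x227 -> 256x6x6
        x = torch.flatten(x, 1)    # flatten to a 9216-dim vector per sample
        return self.classifier(x)
```

For point 2, PyTorch offers, e.g., `model = torch.nn.DataParallel(AlexNet())` to split each batch across the available GPUs; the training strategy actually used in train.py may differ.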