CosFace proposes a novel loss function, the large margin cosine loss (LMCL), which reformulates the softmax loss as a cosine loss by L2-normalizing both the features and the weight vectors to remove radial variations. On top of this, a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine-margin maximization.
Paper: Wang, Hao, et al. "CosFace: Large Margin Cosine Loss for Deep Face Recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
This implementation uses MobileFaceNet as the backbone network.
Consider a binary-class scenario as an example, and let θi denote the angle between a learned feature vector and the weight vector of class Ci (i = 1, 2). The normalized softmax loss (NSL) forces cos(θ1) > cos(θ2) for C1, and similarly cos(θ2) > cos(θ1) for C2, so that features from different classes are correctly classified. To develop a large margin classifier, we further require cos(θ1) − m > cos(θ2) for C1 and cos(θ2) − m > cos(θ1) for C2, where m ≥ 0 is a fixed parameter that controls the magnitude of the cosine margin. Since cos(θi) − m is lower than cos(θi), this constraint is more stringent for classification. The analysis generalizes directly to the multi-class scenario. The altered loss therefore reinforces the discrimination of learned features by enforcing an extra margin in the cosine space.
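The margin mechanism above can be sketched in a few lines of numpy. This is an illustrative sketch, not the code in CosFace.py: the function names and the typical scale/margin values (s = 30, m = 0.35) are assumptions, not taken from this repository.

```python
import numpy as np

def lmcl_logits(features, weights, labels, s=30.0, m=0.35):
    """Illustrative LMCL logits: scaled cosines with a margin m
    subtracted from the target-class entry only."""
    # L2-normalize features and class weight vectors so the
    # dot product is exactly cos(theta) for each class.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos_theta = f @ w                                # shape (N, C)
    margin = np.zeros_like(cos_theta)
    margin[np.arange(len(labels)), labels] = m       # margin on target class
    return s * (cos_theta - margin)

def softmax_cross_entropy(logits, labels):
    """Standard softmax cross-entropy on the margin-adjusted logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the margin is subtracted only from the target-class cosine, the classifier must satisfy cos(θ_target) − m > cos(θ_other) to score the target class highest, which is exactly the stricter constraint described above.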
Note that you can run the scripts with the dataset mentioned in the original paper or with any dataset widely used in this domain and network architecture. The following sections describe how to run the scripts using the datasets below.
Train Dataset used: Align-CASIA-WebFace
Test Dataset used: Align-LFW
After installing MindSpore via the official website, you can start training and evaluation as follows:
# enter script dir, train net
python train.py
# enter script dir, evaluate net
python eval_lfw.py
Major parameters in train.py and config.py are as follows:
--batch_size: training batch size.
--init_lr: initial learning rate; default is 0.1.
--epoch_size: number of training epochs; default is 70.
--lr_strategy: learning-rate strategy; options are keeping the initial lr constant, cosine decay, and multistep decay.
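The three lr strategies can be sketched as a per-epoch schedule like the following. This is a hypothetical illustration: the strategy names, milestone epochs, and decay factor are assumptions, not necessarily what lr_schedule.py implements.

```python
import math

def build_lr(init_lr=0.1, epochs=70, strategy="cosine",
             milestones=(36, 54), gamma=0.1):
    """Return one learning rate per epoch for the chosen strategy
    (illustrative values; see lr_schedule.py for the actual schedule)."""
    lrs = []
    for e in range(epochs):
        if strategy == "constant":
            lr = init_lr                                   # keep the initial lr
        elif strategy == "cosine":
            lr = init_lr * 0.5 * (1 + math.cos(math.pi * e / epochs))
        elif strategy == "multistep":
            # multiply by gamma at each milestone epoch passed so far
            lr = init_lr * gamma ** sum(e >= ms for ms in milestones)
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        lrs.append(lr)
    return lrs
```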
The input images are configured with width 96 and height 112.
You must set the training image root in train.py (line 70).
You must set the weight file path in eval_lfw.py (line 161) and the test dataset root in eval_lfw.py (line 163).
running on Ascend
DEVICES_ID=0 nohup python train.py > log &
After training, loss values like the following will be printed:
epoch: 1 step: 1, loss is 29.115448
epoch: 1 step: 2, loss is 29.015577
epoch: 1 step: 3, loss is 30.393188
epoch: 1 step: 4, loss is 29.896229
epoch: 1 step: 5, loss is 31.129133
epoch: 1 step: 6, loss is 31.261982
epoch: 1 step: 7, loss is 31.596119
epoch: 1 step: 8, loss is 31.839394
epoch: 1 step: 9, loss is 32.113644
epoch: 1 step: 10, loss is 32.441643
...
The model checkpoint will be saved in the ./checkpoint directory.
Before running the command below, please check the checkpoint path used for evaluation.
running on Ascend
DEVICES_ID=0 python eval_lfw.py
After evaluation you can obtain the accuracy:
'Accuracy': 98.82%
Please check the official homepage.