This section uses a simple model, two fully connected layers with an activation function, as an example to show how HiStar turns a deep learning algorithm into a federated learning algorithm for privacy-preserving training. After working through this section, readers will know how to use HiStar in practice.
To fine-tune the model in the non-federated setting, each user can simply run the original code on their own data:
python toymodel.py
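For reference, below is a minimal sketch of what toymodel.py might contain, assuming the two-fully-connected-layer model described above; the layer sizes, data, and training loop are illustrative, and the actual file in the repository may differ.

import torch
import torch.nn as nn

# Hypothetical toy model: two fully connected layers with a ReLU in between.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random placeholder data; in practice each user loads their own local dataset.
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()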
On the aggregation server, modify the code in server.py as follows:
server = HiStar.ServerWorker(None, host, port, device='cpu', verbose=False)
After making the change, run the server:
python server.py
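Put together, a minimal sketch of the relevant part of server.py might look like the following; the host and port values are placeholders, and whatever code starts the aggregation loop in the original file stays unchanged.

import HiStar

host = "0.0.0.0"  # placeholder: listen on all interfaces so clients can connect
port = 9001       # placeholder: must match the port the clients connect to

# The aggregation server collects and combines model updates from the clients.
server = HiStar.ServerWorker(None, host, port, device='cpu', verbose=False)
# ... the rest of server.py (the aggregation loop) is left as in the original file.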
Each user participating in federated learning modifies their code as follows:
import HiStar
client = HiStar.ClientWorker(None, host, port, rank=rank, client_num=client_num, device=device)
Replace "host" with the aggregation server's IP address;
Replace "port" with the aggregation server's port number;
Replace "rank" with the user's pre-agreed index;
Replace "client_num" with the number of users participating in the federation.
After completing the above modifications, each participating user inserts the modified code before the model is constructed in their original code, and adds the following line after the optimizer is created:
opt = HiStar.FedOptim(opt, client)
Each user then runs one of the following three modified fine-tuning scripts on their own local data to start the federated fine-tuning:
python client1.py
python client2.py
python client3.py
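The three client scripts are identical except for the rank argument. Assuming ranks 0 through 2 were agreed in advance and the server listens at 127.0.0.1:9001 (both placeholders), the setup lines would be:

client = HiStar.ClientWorker(None, "127.0.0.1", 9001, rank=0, client_num=3, device=device)  # client1.py
client = HiStar.ClientWorker(None, "127.0.0.1", 9001, rank=1, client_num=3, device=device)  # client2.py
client = HiStar.ClientWorker(None, "127.0.0.1", 9001, rank=2, client_num=3, device=device)  # client3.py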
In this example, user 1's code before and after federation changes as follows:
Before:

optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, 500, eta_min=1e-6)

After:

import HiStar  # import our module
client = HiStar.ClientWorker(None, "127.0.0.1", 9001, rank=0, client_num=3, device=device, verbose=False)  # set up the client

optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, 500, eta_min=1e-6)

optimizer = HiStar.FedOptim(optimizer, client)  # wrap the optimizer
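No other changes are needed. Since FedOptim wraps the original optimizer, the existing training loop presumably runs unmodified, with each optimizer step now exchanging updates with the aggregation server; below is a sketch under that assumption (loss_fn and dataloader are illustrative names).

for x, y in dataloader:
    optimizer.zero_grad()  # assuming FedOptim exposes the usual optimizer interface
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()       # presumably also synchronizes updates through the server
    scheduler.step()       # the scheduler still drives the underlying AdamW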