【PyTorch】Training on Multiple GPUs in PyTorch and Loading the Model for Testing
Code for Training on Multiple GPUs
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
model = getModel(args)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3]) # device_ids=[0, 1, 2, 3]
model.to(device)
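For context, the wrapped model is then used exactly like a single-GPU model inside the training loop. The following is only a minimal sketch: train_loader, num_epochs, the CrossEntropyLoss criterion, and the Adam optimizer are hypothetical placeholders that do not appear in the original post; substitute your own data pipeline and loss.

import torch.optim as optim

criterion = nn.CrossEntropyLoss()                     # hypothetical loss for illustration
optimizer = optim.Adam(model.parameters(), lr=1e-4)   # hypothetical optimizer settings
num_epochs = 10                                       # hypothetical epoch count

for epoch in range(num_epochs):
    model.train()
    for inputs, targets in train_loader:              # train_loader: your own DataLoader
        # Move the whole batch to the primary device; DataParallel scatters it
        # across the GPUs in device_ids and gathers the outputs back on cuda:0.
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()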
Notes
1. To change the primary GPU, only the following needs to be modified:
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"
device_ids=[0, 1, 2]  # indices are relative to the visible list: index 0 now maps to physical GPU 1, and three GPUs are used
2. When using multiple GPUs, batch_size must be greater than 1 for all of the GPUs to do work, because DataParallel splits each input batch along dimension 0 and sends one chunk to each device (see the sketch below).
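One way to see this splitting behaviour is to wrap a tiny module whose forward prints the size of the chunk it receives. This is only an illustrative sketch: the Toy module and the tensor sizes are made up. On a 4-GPU machine, a batch of 8 shows up as four chunks of 2; on a single GPU or CPU, the full batch passes through unchanged.

import torch
import torch.nn as nn

class Toy(nn.Module):
    # Hypothetical toy module, used only to show how DataParallel splits a batch.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, x):
        print("chunk seen by this replica:", x.size())  # e.g. [2, 10] on each of 4 GPUs
        return self.fc(x)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
toy = nn.DataParallel(Toy()).to(device)
out = toy(torch.randn(8, 10).to(device))   # a batch of 8 is scattered along dim 0
print("gathered output:", out.size())      # [8, 10], gathered back on cuda:0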
Loading the Model for Testing
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
G_model, _ = getModel(args)
if torch.cuda.device_count() > 1:
    # The checkpoint was saved from a DataParallel model, so its state-dict keys
    # carry a "module." prefix; wrap the model first so the keys match, then unwrap.
    G_model = nn.DataParallel(G_model)
    checkpoint = torch.load(r'./Gmodel.pth', map_location='cuda')
    G_model.load_state_dict(checkpoint)
    if isinstance(G_model, torch.nn.DataParallel):
        G_model = G_model.module
else:
    checkpoint = torch.load(r'./Gmodel.pth', map_location='cuda')
    G_model.load_state_dict(checkpoint)
G_model = G_model.to(device)
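After loading, a typical test-time call looks like the following sketch. The input shape is purely hypothetical, since the post does not say what G_model expects; the point is only the eval() / no_grad() pattern around the forward pass.

G_model.eval()                         # disable dropout, use running batch-norm statistics
with torch.no_grad():                  # gradients are not needed at test time
    x = torch.randn(1, 3, 256, 256, device=device)  # hypothetical input shape
    y = G_model(x)
print(y.shape)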