The error:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
This error usually appears when the training step computes more than one loss (two or more) on the same computation graph and calls backward() on each of them. The fix is to pass retain_graph=True to the first backward call,
i.e. loss.backward(retain_graph=True), as in the sketch below.
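A minimal sketch of that case (assuming a toy linear model and two made-up losses that share one forward pass, none of which come from the original post): the first backward keeps the intermediate results alive so the second backward can reuse the same graph.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(4, 10)
out = model(x)            # one forward pass ...

loss1 = out.mean()        # ... shared by two losses
loss2 = out.pow(2).mean()

optimizer.zero_grad()
loss1.backward(retain_graph=True)  # keep the saved intermediates for the next backward
loss2.backward()                   # second backward through the same graph; it can be freed now
optimizer.step()
```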
In my case, however, the model is GAN-like, so even with retain_graph=True the following error still appears:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2048]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
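As the hint suggests, anomaly detection can be switched on to get a traceback pointing at the forward operation whose saved tensor was later modified in place (expect noticeably slower iterations while it is enabled):

```python
import torch

# With anomaly detection on, the backward error includes a traceback of the
# forward operation that produced the tensor whose version no longer matches.
torch.autograd.set_detect_anomaly(True)
```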
Solution:
Run the discriminator one more time between the discriminator update and the generator update, i.e. recompute the discriminator's output before building the generator loss instead of reusing the output saved earlier (see the sketch below).
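A minimal sketch of that fix, assuming a hypothetical generator G, discriminator D (returning logits), their optimizers, and a batch of real samples; none of these names are from the original post. The key point: after opt_d.step() has modified the discriminator's parameters in place, D is called again on the fake batch before the generator loss is computed.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def train_step(G, D, opt_g, opt_d, real, noise):
    # ---- discriminator update ----
    fake = G(noise)
    d_real = D(real)
    d_fake = D(fake.detach())          # detach so the D update does not backprop into G
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()                       # D's parameters are modified in place here

    # ---- generator update ----
    d_fake_for_g = D(fake)             # run the discriminator again on the fake batch
    loss_g = bce(d_fake_for_g, torch.ones_like(d_fake_for_g))
    opt_g.zero_grad()
    loss_g.backward()                  # fresh forward pass, so no stale saved tensors
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Reusing d_fake from the discriminator step for the generator loss is exactly what triggers the "version 2; expected version 1" complaint, because the saved tensors belong to parameter values that opt_d.step() has already overwritten.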
References:
python - One of the variables modified by an inplace operation - Stack Overflow
loss.backward(retain_graph=True) error during backpropagation in PyTorch - StarZhai - cnblogs