Cause:

While training a deep learning model, the following error appeared: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
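
The error can be reproduced with a minimal loop like the one below (the names w, x and y_pred are illustrative, not the original code): feeding a prediction that is still attached to its computation graph back into the next iteration makes the second call to backward() run through a graph whose saved tensors were already freed by the first call.

import torch

w = torch.randn(3, 3, requires_grad=True)
x = torch.randn(3)

for step in range(2):
    y_pred = w @ x      # builds a fresh graph on top of x
    loss = y_pred.sum()
    loss.backward()     # frees the saved tensors of this iteration's graph
    x = y_pred          # BUG: x still references the freed graph, so the
                        # next backward() raises the RuntimeError above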

Solution:

Setting retain_graph=True turned out not to be the right fix either. The real cause was that I wanted to feed each iteration's prediction back into the next training iteration, and assigning the prediction y_pred directly to x kept the tensor attached to its old computation graph (its grad history). Only the data attribute needs to be carried over. The incorrect code was:

xx = y_pred

The correct modification is:

xx = y_pred.data
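
Put together, the corrected loop looks roughly like the sketch below (the linear model, optimizer and tensor shapes are illustrative assumptions, not the original code). Reusing y_pred.data, or the equivalent and nowadays preferred y_pred.detach(), hands the next iteration a tensor with the same values but no attached computation graph, so each backward() only runs through the graph built in its own iteration.

import torch

model = torch.nn.Linear(4, 4)        # illustrative model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

x = torch.randn(1, 4)
target = torch.randn(1, 4)

for step in range(10):
    y_pred = model(x)
    loss = loss_fn(y_pred, target)

    optimizer.zero_grad()
    loss.backward()                  # frees this iteration's graph, as intended
    optimizer.step()

    # x = y_pred                     # wrong: keeps the old graph attached
    x = y_pred.data                  # right: same values, no graph (detach() also works)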
