Embedding layer in Python: how to use it correctly with Torchsummary?

Problem description

This is a minimally working/reproducible example:

import torch
import torch.nn as nn
from torchsummary import summary


class Network(nn.Module):
    def __init__(self, channels_img, features_d, num_classes, img_size):
        super(Network, self).__init__()
        self.img_size = img_size
        self.disc = nn.Conv2d(
            in_channels=channels_img + 1,
            out_channels=features_d,
            kernel_size=(4, 4)
        )

        # ConditionalGan:
        self.embed = nn.Embedding(
            num_embeddings=num_classes,
            embedding_dim=img_size * img_size
        )

    def forward(self, x, labels):
        embedding = self.embed(labels).view(labels.shape[0], 1, self.img_size, self.img_size)
        x = torch.cat([x, embedding], dim=1)
        return self.disc(x)


# device:
device = torch.device("cpu")

# hyperparameter:
batch_size = 64

# Initialize model:
model = Network(
    channels_img=1,
    features_d=16,
    num_classes=10,
    img_size=28
).to(device)

# Print model summary:
summary(
    model,
    input_size=[(1, 28, 28), (1, 28, 28)],  # MNIST
    batch_size=batch_size
)

The error message I get is (for the line with summary(...)):

Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)

I saw in this post that .to(torch.int64) is supposed to help, but I honestly don't know where to write it.

Thanks!

Answer

The problem is here:

self.embed(labels)...

An embedding layer is a kind of mapping between discrete indices and continuous values, as stated here. That is, its inputs should be integers, and it will give you back floats. In your case, for example, you are embedding the class labels of MNIST (which range from 0 to 9) into a continuum (for some reason that I don't know, as I'm not familiar with GANs :)). But in short, that embedding layer will give you a transformation of 10 -> 784, and PyTorch insists that those 10 numbers be integers.
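To make this concrete, here is a minimal standalone sketch (the variable names are illustrative, not from the question) showing that nn.Embedding wants Long indices and returns floats:

import torch
import torch.nn as nn

# 10 discrete classes (MNIST labels 0-9), each mapped to 784 continuous values:
embed = nn.Embedding(num_embeddings=10, embedding_dim=784)

labels = torch.tensor([3, 7, 1])   # integer labels; dtype defaults to torch.int64 (Long)
out = embed(labels)                # lookup succeeds with Long indices
print(out.shape, out.dtype)        # torch.Size([3, 784]) torch.float32

# Passing floats instead reproduces the error from the question:
# embed(labels.float())  # RuntimeError: Expected tensor ... to have scalar type Long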

A fancy name for an integer type is "long", so you need to make sure the data type of what goes into self.embed is of that type. There are some ways to do that:

self.embed(labels.long())

self.embed(labels.to(torch.long))

self.embed(labels.to(torch.int64))

The Long datatype is really a 64-bit integer (you may see here), so all of these work.
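As for where to write it: the cast belongs inside forward, right before the embedding lookup. A minimal sketch of the adjusted method for the Network class above (only the first line changes; note that torchsummary feeds randomly generated FloatTensors to the model, which is why the labels arrive as floats there, and this cast addresses exactly the dtype error reported in the question):

def forward(self, x, labels):
    # Cast to Long (= torch.int64) so the embedding lookup accepts the indices;
    # labels.to(torch.long) or labels.to(torch.int64) would work equally well.
    embedding = self.embed(labels.long()).view(
        labels.shape[0], 1, self.img_size, self.img_size
    )
    x = torch.cat([x, embedding], dim=1)
    return self.disc(x)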
