{"id":284,"hash":"b1348b877b1cfb25cedd9863f41b455777e29c4cecec16dcd3be14cb1e96f81c","pattern":"RuntimeError: expected scalar type Long but found Float","full_message":"I can't get the dtypes to match: either the loss wants long, or the model wants float if I change my tensors to long. The shapes of the tensors are (42000, 1, 28, 28) and (42000,). I'm not sure where I can change which dtypes are required by the model or the loss.\n\nI'm not sure if a DataLoader is required; using Variable didn't work either.\n\ndataloaders_train = torch.utils.data.DataLoader(Xt_train, batch_size=64)\n\ndataloaders_test = torch.utils.data.DataLoader(Yt_train, batch_size=64)\n\nclass Network(nn.Module):\n    def __init__(self):\n        super().__init__()\n\n        self.hidden = nn.Linear(42000, 256)\n\n        self.output = nn.Linear(256, 10)\n\n        self.sigmoid = nn.Sigmoid()\n        self.softmax = nn.Softmax(dim=1)\n\n    def forward(self, x):\n\n        x = self.hidden(x)\n        x = self.sigmoid(x)\n        x = self.output(x)\n        x = self.softmax(x)\n\n        return x\n\nmodel = Network()\n\ninput_size = 784\nhidden_sizes = [28, 64]\noutput_size = 10\nmodel = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),\n                      nn.ReLU(),\n                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),\n                      nn.ReLU(),\n                      nn.Linear(hidden_sizes[1], output_size),\n                      nn.Softmax(dim=1))\nprint(model)\n\ncriterion = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.003)\n\nepochs = 5\n\nfor e in range(epochs):\n    running_loss = 0\n    for images, labels in zip(dataloaders_train, dataloaders_test):\n\n        images = images.view(images.shape[0], -1)\n        #images, labels = Variable(images), Variable(labels)\n        print(images.dtype)\n        print(labels.dtype)\n\n        optimizer.zero_grad()\n\n        output = model(images)\n        loss = criterion(output, labels)\n        loss.backward()\n        optimizer.step()\n\n        running_loss += loss.item()\n    else:\n        print(f\"Training loss: {running_loss}\")\n\nWhich gives\n\nRuntimeError                              Traceback (most recent call last)\n<ipython-input-128-68109c274f8f> in <module>\n     11 \n     12         output = model(images)\n---> 13         loss = criterion(output, labels)\n     14         loss.backward()\n     15         optimizer.step()\n\n/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\n    530             result = self._slow_forward(*input, **kwargs)\n    531         else:\n--> 532             result = self.forward(*input, **kwargs)\n    533         for hook in self._forward_hooks.values():\n    534             hook_result = hook(self, input, result)\n\n/opt/conda/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)\n    202 \n    203     def forward(self, input, target):\n--> 204         return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)\n    205 \n    206 \n\n/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)\n   1836                          .format(input.size(0), target.size(0)))\n   1837     if dim == 2:\n-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\n   1839     elif dim == 4:\n   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\n\nRuntimeError: expected scalar type Long but found Float","ecosystem":"pypi","package_name":"machine-learning","package_version":null,"solution":"LongTensor is synonymous with 64-bit integer (torch.int64). PyTorch won't accept a FloatTensor as a categorical target, so it's telling you to cast your target tensor to a LongTensor. This is how you change your target dtype:\n\nYt_train = Yt_train.type(torch.LongTensor)\n\n# or, equivalently\nYt_train = Yt_train.long()\n\nAs a side note: nn.NLLLoss expects log-probabilities as input, so once the dtype error is fixed you'll also want to end the model with nn.LogSoftmax(dim=1) rather than nn.Softmax(dim=1), or use nn.CrossEntropyLoss, which combines the two.\n\nThis is very well documented on the PyTorch website; you won't regret spending a minute or two reading the torch.Tensor documentation. PyTorch defines nine CPU tensor types and nine GPU tensor types:\n\n╔══════════════════════════╦═══════════════════════════════╦════════════════════╦═════════════════════════╗\n║        Data type         ║             dtype             ║     CPU tensor     ║       GPU tensor        ║\n╠══════════════════════════╬═══════════════════════════════╬════════════════════╬═════════════════════════╣\n║ 32-bit floating point    ║ torch.float32 or torch.float  ║ torch.FloatTensor  ║ torch.cuda.FloatTensor  ║\n║ 64-bit floating point    ║ torch.float64 or torch.double ║ torch.DoubleTensor ║ torch.cuda.DoubleTensor ║\n║ 16-bit floating point    ║ torch.float16 or torch.half   ║ torch.HalfTensor   ║ torch.cuda.HalfTensor   ║\n║ 8-bit integer (unsigned) ║ torch.uint8                   ║ torch.ByteTensor   ║ torch.cuda.ByteTensor   ║\n║ 8-bit integer (signed)   ║ torch.int8                    ║ torch.CharTensor   ║ torch.cuda.CharTensor   ║\n║ 16-bit integer (signed)  ║ torch.int16 or torch.short    ║ torch.ShortTensor  ║ torch.cuda.ShortTensor  ║\n║ 32-bit integer (signed)  ║ torch.int32 or torch.int      ║ torch.IntTensor    ║ torch.cuda.IntTensor    ║\n║ 64-bit integer (signed)  ║ torch.int64 or torch.long     ║ torch.LongTensor   ║ torch.cuda.LongTensor   ║\n║ Boolean                  ║ torch.bool                    ║ torch.BoolTensor   ║ torch.cuda.BoolTensor   ║\n╚══════════════════════════╩═══════════════════════════════╩════════════════════╩═════════════════════════╝","confidence":0.95,"source":"stackoverflow","source_url":"https://stackoverflow.com/questions/60440292/runtimeerror-expected-scalar-type-long-but-found-float","votes":57,"created_at":"2026-04-19T04:41:44.174008+00:00","updated_at":"2026-04-19T04:51:56.205108+00:00"}