
My input tensor is of type torch.DoubleTensor, but I get the RuntimeError below:

RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight'

I didn't specify the type of the weights explicitly (i.e., I did not initialize the weights myself; they are created by PyTorch). What determines the type of the weights in the forward pass?
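
For reference, a minimal sketch of the kind of setup that triggers this (the nn.Linear layer and shapes are just illustrative, not my actual model, and the exact message varies between PyTorch versions):

import torch
import torch.nn as nn

layer = nn.Linear(3, 4)            # weights and biases are created as FloatTensor by default
x = torch.randn(2, 3).double()     # input is a DoubleTensor

out = layer(x)                     # raises a dtype-mismatch RuntimeError like the one above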

Thanks a lot!!

    After converting the input to FloatTensor with .float(), the code runs correctly. But I still don't know what to do if I want DoubleTensor input... Commented Mar 21, 2018 at 13:19

2 Answers


The default type for weights and biases is torch.FloatTensor. So you'll need to either cast your model to torch.DoubleTensor or cast your inputs to torch.FloatTensor. To cast your inputs you can do

X = X.float()

or cast your complete model to DoubleTensor as

model = model.double()

You can also set the default type for all tensors using

torch.set_default_tensor_type('torch.DoubleTensor')

It is better to convert your inputs to float rather than converting your model to double, because mathematical computations on the double datatype are considerably slower on the GPU.
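
Putting the two options together, a minimal sketch (assuming PyTorch ≥ 0.4; the nn.Linear model is just a stand-in for whatever model you are using):

import torch
import torch.nn as nn

model = nn.Linear(3, 4)                 # parameters default to torch.FloatTensor
x = torch.randn(2, 3).double()          # DoubleTensor input

# Option 1 (preferred): cast the input to float to match the model
out = model(x.float())

# Option 2: cast the whole model to double to match the input
model = model.double()
out = model(x)

print(next(model.parameters()).dtype)   # torch.float64 after the .double() cast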


6 Comments

Thanks a lot for your answer! BTW, I ran into another problem when loading a model: I get UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 918: ordinal not in range(128) when I use model = torch.load("file.pth") (ps: the .pth file comes from somewhere else and was not trained by myself; a .pth file trained by myself can be loaded with torch.load). Thanks again!
That is probably because the model was created (and saved) using Python 3, whose default encoding is utf-8, but you are using Python 2. Add # -*- coding: utf-8 -*- at the top of your Python file. Also, it is not good practice to save and load the complete model; this can break in a number of ways. For proper serialization of models see the official post (a short sketch of the state_dict approach follows these comments).
Thank you very much. But I am using Python 3... and I have added # -*- coding: utf-8 -*- at the top of my Python file. It doesn't work...
Then you can try loading the model in Python 2. For more details you can refer to this issue on GitHub.
For me, with torch 0.4.1, the call is torch.set_default_tensor_type(...), not pytorch.set_default_tensor_type(...).
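
On the serialization point above, a minimal sketch of the state_dict pattern (the file name and nn.Linear model are just examples):

import torch
import torch.nn as nn

model = nn.Linear(3, 4)

# Save only the parameters, not the whole pickled model object
torch.save(model.state_dict(), "model_weights.pth")

# Later: rebuild the same architecture and load the saved parameters into it
model = nn.Linear(3, 4)
model.load_state_dict(torch.load("model_weights.pth"))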

I was also receiving the exact same error. The root cause turned out to be this statement in my data-loading code:

t = t.astype(np.float)

Here np.float is an alias for Python's built-in float, i.e. a 64-bit float, which maps to DoubleTensor (newer NumPy versions have removed this alias, so use np.float64 explicitly if you actually want doubles). So changing this to

t = t.astype(np.float32)

solved the issue.
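
For illustration, a small sketch of how the NumPy dtype carries over into the tensor (the array contents are arbitrary):

import numpy as np
import torch

t = np.arange(6).reshape(2, 3)

a = torch.from_numpy(t.astype(np.float64))   # torch.float64 -> DoubleTensor
b = torch.from_numpy(t.astype(np.float32))   # torch.float32 -> FloatTensor, matches the default weights

print(a.dtype, b.dtype)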
