
I don't understand what my problem is. It should work, if only because it's the standard autoencoder from the TensorFlow documentation. This is the error:

line 64, in call
    decoded = self.decoder(encoded)
ValueError: Exception encountered when calling Autoencoder.call().

Invalid dtype: <property object at 0x7fb471cc1c60>

Arguments received by Autoencoder.call():
  • x=tf.Tensor(shape=(32, 28, 28), dtype=float32)

And this is my code:

import tensorflow as tf
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model

(x_train, _), (x_test, _) = fashion_mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

print (x_train.shape)
print (x_test.shape)

class Autoencoder(Model):
  def __init__(self, latent_dim, shape):
    super(Autoencoder, self).__init__()
    self.latent_dim = latent_dim
    self.shape = shape
    self.encoder = tf.keras.Sequential([
      layers.Flatten(),
      layers.Dense(latent_dim, activation='relu'),
    ])
    self.decoder = tf.keras.Sequential([
      layers.Dense(tf.math.reduce_prod(shape), activation='sigmoid'),
      layers.Reshape(shape)
    ])

  def call(self, x):
    encoded = self.encoder(x)
    print(encoded)
    decoded = self.decoder(encoded)
    print(decoded)
    return decoded


shape = x_test.shape[1:]
latent_dim = 64
autoencoder = Autoencoder(latent_dim, shape)

autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())

autoencoder.fit(x_train, x_train,
                epochs=10,
                shuffle=True,
                validation_data=(x_test, x_test))

I tried changing the dataset and also tried different shapes.

  • This code works fine for me. Commented Mar 15, 2024 at 8:08
  • @xdurch0 so why do I get this error? Do you have any idea? Commented Mar 17, 2024 at 16:44
  • Not really. You may have a corrupt installation, or incompatible libraries (e.g. Keras vs Tensorflow). Is this the whole error traceback? It's unfortunately not very useful. Commented Mar 17, 2024 at 18:38

1 Answer


I ran into the same error when trying to get this example working with Keras 3. The invalid-dtype error occurs because the units argument of the decoder's Dense layer expects a plain positive integer, but tf.math.reduce_prod returns a scalar tensor. You need to extract the scalar value, e.g. with .numpy():

layers.Dense(tf.math.reduce_prod(shape).numpy(), activation='sigmoid')
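A quick way to see the difference (a minimal check, assuming the Fashion-MNIST per-sample shape of (28, 28)):

import tensorflow as tf

shape = (28, 28)
print(tf.math.reduce_prod(shape))          # tf.Tensor(784, shape=(), dtype=int32) -- a scalar tensor
print(tf.math.reduce_prod(shape).numpy())  # 784 -- a plain integer that Dense accepts as units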

After fixing that error, I ran into a problem with batch sizes (the model in the example doesn't expect a batch dimension), which I fixed with an initial Input layer in the encoder. Here is my Autoencoder model converted to Keras 3:

class Autoencoder(keras.Model):

  def __init__(self, latent_dim, shape):
    super().__init__()

    self.latent_dim = latent_dim
    self.shape = shape

    self.encoder = keras.Sequential([
      keras.Input(shape),  # declare the per-sample input shape; Keras handles the batch dimension
      keras.layers.Flatten(),
      keras.layers.Dense(latent_dim, activation='relu'),
    ])

    self.decoder = keras.Sequential([
      # .numpy() converts the scalar tensor from keras.ops.prod into a plain int for Dense
      keras.layers.Dense(keras.ops.prod(shape).numpy(), activation='sigmoid'),
      keras.layers.Reshape(shape)
    ])

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)

    return decoded

shape = x_test.shape[1:]
latent_dim = 64
autoencoder = Autoencoder(latent_dim, shape)
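
With the model above, the compile and fit calls from the question should work unchanged; a minimal usage sketch, assuming x_train and x_test are prepared as in the question:

autoencoder.compile(optimizer='adam', loss=keras.losses.MeanSquaredError())

autoencoder.fit(x_train, x_train,
                epochs=10,
                shuffle=True,
                validation_data=(x_test, x_test))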