Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 208, 1]

Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 208, 1]

Scenario: when using convolution on structured data, the Conv2D call inside a custom Keras class raises the error above.

Cause of the error: the input data is 3-dimensional, but Conv2D requires a 4-dimensional input.

Conv2D requires the input to have 4 dimensions: dimension 0 is the batch size (how many samples are fed at a time), dimensions 1 and 2 are the spatial resolution (height and width) of the input image, and dimension 3 is the number of channels. For a detailed explanation of this part, see: https://baijiahao.baidu.com/s?id=1647365166670429226&wfr=spider&for=pc

  • You always have to feed a 4-D array of shape (batch_size, height, width, depth) into the CNN.
  • The output of the CNN is also a 4-D array of shape (batch_size, height, width, depth).
  • To add a Dense layer on top of the CNN, use Keras's Flatten layer to turn the CNN's 4-D output into 2-D (a minimal sketch follows this list).
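For concreteness, here is a minimal sketch (not from the original post) that reproduces the error with a 3-D tensor of the shape reported above and then satisfies all three points; the filter count and kernel size are chosen arbitrarily for illustration:

import tensorflow as tf
from tensorflow.keras import layers

# 3-D input of the shape from the error message: [batch, 208, 1]
x_bad = tf.random.normal([32, 208, 1])
conv = layers.Conv2D(filters=8, kernel_size=(3, 1), activation='relu')
# conv(x_bad)  # would raise: expected ndim=4, found ndim=3

# 4-D input of shape (batch_size, height, width, depth) is what Conv2D expects
x_ok = tf.expand_dims(x_bad, axis=-1)   # [32, 208, 1, 1]
feat = conv(x_ok)                       # [32, 206, 1, 8], still 4-D
flat = layers.Flatten()(feat)           # flatten the 4-D output to 2-D before Dense
out = layers.Dense(1)(flat)             # [32, 1]
print(x_ok.shape, feat.shape, flat.shape, out.shape)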

Code modification (only the key code is shown):

# call() of the model: build the [None, 26, k] embedding tensor
def call(self, inputs, training=None, mask=None):
    dense_inputs, sparse_inputs = inputs[:, :13], inputs[:, 13:]

    # 26 embedding lookups, each of shape [None, k]
    sparse_embed = [layer(sparse_inputs[:, i]) for i, layer in enumerate(self.emb_layers)]
    # stack and transpose so the batch dimension comes first: [None, 26, k]
    sparse_embed = tf.transpose(tf.convert_to_tensor(sparse_embed), [1, 0, 2])
    print(sparse_embed.shape)

    fgcnn_out = self.fgcnn_layers(sparse_embed)

# call() of the custom FGCNN layer: convolution requires 4-dimensional input,
# so add a channel dimension before Conv2D
def call(self, inputs, **kwargs):
    x = tf.expand_dims(inputs, axis=-1)  # [None, n, k, 1]; the last dimension is the channel
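
As a quick standalone check (not part of the original code), the shape transformation in the layer's call can be verified with random data; the field count, embedding size, and Conv2D parameters below are assumed purely for illustration:

import tensorflow as tf

batch, n_fields, k = 4, 26, 8                          # hypothetical sizes
sparse_embed = tf.random.normal([batch, n_fields, k])  # [None, 26, k], still 3-D

x = tf.expand_dims(sparse_embed, axis=-1)              # [None, 26, k, 1], channel added
conv_out = tf.keras.layers.Conv2D(filters=3, kernel_size=(7, 1),
                                  padding='same', activation='relu')(x)
print(x.shape, conv_out.shape)                         # (4, 26, 8, 1) (4, 26, 8, 3)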

Reference link:
https://blog.csdn.net/tushuguan_sun/article/details/105914661

Origin: blog.csdn.net/qq_42363032/article/details/122823556