RuntimeError: expected scalar type Half but found Float

Cause: transplanting CCNet's criss-cross attention module into YOLOv5.

What happened: the attention module introduces more matrix operations. During training this already led to conflicts between CUDA and CPU tensor types (covered in another article of mine), and during validation the error above appears.
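
To make the cause concrete, here is a minimal, self-contained sketch (not the actual YOLOv5 or CCNet code; TinyAttention and its shapes are made up for illustration). YOLOv5 typically runs validation in FP16 on GPU, so the module's inputs arrive as torch.half, while a tensor created inside forward() with the default dtype stays torch.float32, and the first bmm() that mixes the two raises the error:

        import torch
        import torch.nn as nn

        # Hypothetical stand-in for the transplanted attention block.
        class TinyAttention(nn.Module):
            def forward(self, x):
                b, c, h, w = x.shape
                value = x.view(b, c, h * w)    # torch.half when the model runs in FP16
                # Created with the default dtype, so this stays torch.float32:
                att = torch.softmax(torch.randn(b, h * w, h * w, device=x.device), dim=-1)
                return torch.bmm(value, att)   # Half x Float -> RuntimeError

        if torch.cuda.is_available():
            model = TinyAttention().cuda().half()   # FP16 inference, like YOLOv5 validation on GPU
            x = torch.randn(1, 8, 4, 4, device="cuda").half()
            model(x)   # RuntimeError: expected scalar type Half but found Float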

The code that raises the error is as follows:

        # [b1*w1, c1, h1] -> [b1, w1, c1, h1] -> [b1, c1, h1, w1]
        out_H = torch.bmm(value_H, att_H.permute(0, 2, 1)).view(b1, w1, -1, h1).permute(0, 2, 3, 1)
        # [b1 * h1, c1, w1] -> [b1, h1, c1, w1] -> [b1, c1, h1, w1]
        out_W = torch.bmm(value_W, att_W.permute(0, 2, 1)).view(b1, h1, -1, w1).permute(0, 2, 1, 3)

The error is raised by torch.bmm(), which performs a batched matrix multiplication; the call fails because its two operands have different data types (here torch.half and torch.float).
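
The mismatch is easy to reproduce in isolation; a minimal sketch, assuming a CUDA device (the tensor names and shapes below are arbitrary):

        import torch

        a = torch.randn(2, 3, 4, device="cuda").half()   # scalar type Half
        b = torch.randn(2, 4, 5, device="cuda").float()  # scalar type Float

        try:
            torch.bmm(a, b)                   # mixed dtypes are not promoted automatically
        except RuntimeError as e:
            print(e)                          # expected scalar type Half but found Float

        out = torch.bmm(a, b.to(a.dtype))     # cast one operand -> works, result shape [2, 3, 5]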

Solution: as before, use the to() method to cast one operand to the other operand's dtype:

        # [b1*w1, c1, h1] -> [b1, w1, c1, h1] -> [b1, c1, h1, w1]
        out_H = torch.bmm(value_H, att_H.permute(0, 2, 1).to(value_H.dtype)).view(b1, w1, -1, h1).permute(0, 2, 3, 1)
        # [b1 * h1, c1, w1] -> [b1, h1, c1, w1] -> [b1, c1, h1, w1]
        out_W = torch.bmm(value_W, att_W.permute(0, 2, 1).to(value_W.dtype)).view(b1, h1, -1, w1).permute(0, 2, 1, 3)
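
As an alternative (my own variant, not part of the original fix), the attention maps can be cast once with type_as() before the two multiplications, which leaves the bmm lines themselves untouched; a sketch, assuming att_H and att_W are the tensors that end up as float32:

        # Alternative sketch: cast the attention maps up front so every later
        # operation sees a single dtype (type_as matches value_H's dtype).
        att_H = att_H.type_as(value_H)
        att_W = att_W.type_as(value_W)
        out_H = torch.bmm(value_H, att_H.permute(0, 2, 1)).view(b1, w1, -1, h1).permute(0, 2, 3, 1)
        out_W = torch.bmm(value_W, att_W.permute(0, 2, 1)).view(b1, h1, -1, w1).permute(0, 2, 1, 3)

Casting the attention maps to the value's dtype (rather than the other way around) keeps the multiplication in FP16 during validation, so the half-precision speed and memory benefit is preserved.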

Source: blog.csdn.net/adminHD/article/details/127766616