Check failed: error == cudaSuccess (77 vs. 0) an illegal memory access was encountered

  1. When a layer's bottom and top blobs have the same dimensions, it is fine for them to share the same name (in-place computation, e.g. relu), but when the dimensions differ they must not share a name. Check the layers in the log whose bottom and top have the same name and verify that their data dimensions are consistent (see the pycaffe sketch after this list).
  2. I just encountered this issue; in my case it seems to be caused by the use of rectangular kernels in conv layers.
    Reshaping the kernels into square ones resolved the issue.
    Note that I also used rectangular kernels for the pooling layers, and those work properly.
  3. I have a similar problem:
    F0309 11:30:48.307298 892 syncedmem.hpp:19] Check failed: error == cudaSuccess (77 vs. 0) an illegal memory access was encountered
    *** Check failure stack trace: ***
    Aborted (core dumped)

    When I run demo.py with gpu_id = 0, it is OK, but when I set gpu_id to 1, 2, or 3 (I have 4 GPUs), the problem arises (see the device-selection sketch after this list).

  4. With CUDA 8, the problem seems to be gone on my side.
  5. In my case this happened even though the environment and data were actually fine. I had removed the RPN part from the original 2D object detection network, and gt_boxes replaced rois in the roi pooling layer. First, note that the field order of gt_boxes (xmin, ymin, xmax, ymax, label) differs from that of rois (label, xmin, ymin, xmax, ymax); see the conversion sketch after this list. Second, note that the label column of rois is set directly to 0 in the proposal target layer, so if the training txt contains 5 categories labelled 0-4, this leads to an error. Make sure the data is correct at every step.
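As a concrete way to perform the check in answer 1, here is a minimal pycaffe sketch that lists every in-place layer, i.e. every layer whose bottom and top reuse a blob name. The file name `deploy.prototxt` is a placeholder, and it assumes a pycaffe build that exposes `net.bottom_names`/`net.top_names`:

```python
import caffe

caffe.set_mode_cpu()  # shape inspection does not need the GPU

# 'deploy.prototxt' is a placeholder; substitute your own network definition.
net = caffe.Net('deploy.prototxt', caffe.TEST)

# top_names / bottom_names map each layer name to its list of blob names.
for layer_name in net.top_names:
    bottoms = net.bottom_names[layer_name]
    tops = net.top_names[layer_name]
    shared = set(bottoms) & set(tops)
    if shared:
        # In-place layer: bottom and top reuse the same blob, which is only
        # safe when the layer preserves the blob's shape (ReLU, Dropout, ...).
        shapes = {b: net.blobs[b].data.shape for b in shared}
        print('in-place layer:', layer_name, shapes)
```

Any layer printed here that does not preserve its input shape (for example a convolution or pooling layer that changes the spatial size) is a candidate for the crash described in answer 1 and should be given distinct bottom and top names.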
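For the multi-GPU symptom in answer 3, one common workaround is to pin the process to the desired card before caffe initialises CUDA. This is only a sketch for a demo.py-style script; the prototxt/caffemodel names and the gpu_id value are placeholders:

```python
import os

# Must be set before caffe (and therefore the CUDA driver) is loaded;
# gpu_id = 1 is only an example value.
gpu_id = 1
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)

import caffe

caffe.set_mode_gpu()
caffe.set_device(0)  # the masked card appears as device 0 inside this process

# Placeholder file names; use the ones from your demo.py.
net = caffe.Net('test.prototxt', 'model.caffemodel', caffe.TEST)
```

If the crash only appears with gpu_id != 0, that often suggests some part of the code is implicitly addressing device 0; masking the other cards with CUDA_VISIBLE_DEVICES sidesteps that.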
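To make the field-order difference in answer 5 concrete, here is a small NumPy sketch (the array contents are invented for illustration) that converts gt_boxes rows (xmin, ymin, xmax, ymax, label) into the 5-column rois layout expected by the roi pooling layer, with the first rois column set to 0 as described above:

```python
import numpy as np

# gt_boxes: one row per box, (xmin, ymin, xmax, ymax, label); values are illustrative.
gt_boxes = np.array([[10., 20., 100., 200., 3.],
                     [50., 60., 150., 260., 1.]], dtype=np.float32)

# rois: (index, xmin, ymin, xmax, ymax); the first column is set to 0,
# as in the proposal target layer when there is a single image per batch.
rois = np.zeros((gt_boxes.shape[0], 5), dtype=np.float32)
rois[:, 1:5] = gt_boxes[:, 0:4]  # copy the coordinates, drop the label column

print(rois)
```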
