[Solved] RuntimeError: Queue objects should only be shared between processes through inheritance

Error message

    raise RuntimeError(
RuntimeError: Queue objects should only be shared between processes through inheritance

Cause analysis

When multiprocessing creates a process pool and a Queue is used for inter-process communication, the queue must come from multiprocessing.Manager().Queue() rather than multiprocessing.Queue(), otherwise this error is raised. A plain Queue is built on an OS pipe and can only reach a child process through inheritance (i.e. by existing before the child is forked); it cannot be pickled and passed to pool workers as an argument. A Manager queue, by contrast, is a proxy to a queue living in a separate server process, so it pickles cleanly and can be handed to pool workers.

How to fix it

Change:

multiprocessing.Queue()

to:

multiprocessing.Manager().Queue(-1)
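
For clarity, here is a minimal sketch of the fix in context. It is an illustrative example, not the original article's code; the worker function and pool size are made up:

    import multiprocessing

    def worker(q):
        # Each pool worker pushes a result through the shared queue.
        q.put("done")

    if __name__ == "__main__":
        # Fails: a plain multiprocessing.Queue() cannot be pickled and
        # passed to pool workers as an argument, raising the RuntimeError above.
        # q = multiprocessing.Queue()

        # Works: a Manager-backed queue is a proxy object that can be
        # safely passed to pool workers as an argument.
        q = multiprocessing.Manager().Queue(-1)  # maxsize <= 0 means unbounded

        with multiprocessing.Pool(processes=4) as pool:
            pool.apply_async(worker, (q,))
            pool.close()
            pool.join()

        print(q.get())  # -> "done"

Note that a Manager queue is slower than a plain Queue, since every operation goes through a separate server process; the trade-off is that it can be passed to pool workers freely.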

Article directory

Summary

This column explains how to improve YoloV8, with improvements drawn from the latest papers. They include adding attention mechanisms, replacing convolutions, replacing blocks, replacing backbones, replacing heads, replacing optimizers, and more; each article provides one to N improvement methods.

The dataset used for evaluation is one I annotated myself and contains 32 types of aircraft. I have evaluated every improvement method and compared it with the official model.

The code and a PDF version of each article will be uploaded to Baidu Netdisk once I have verified they are correct, so that everyone can download and use them.

This column strives for quality over quantity, and we put our heart and soul into making it a high-quality column!

Thank you for your support!

YoloV8 improvement strategy: hierarchical-attention-based FasterViT lets YoloV8 achieve a leap in performance

This article shows how to use FasterViT to improve YoloV8. I tried several methods and selected the three that worked best; all are recommended.
FasterViT combines the fast local representation learning of CNNs with the global modeling properties of ViTs. Its newly proposed Hierarchical Attention (HAT) approach decomposes global self-attention, which has quadratic complexity, into multi-level attention with reduced computational cost, benefiting from efficient window-based self-attention. Each window has access to dedicated carrier tokens that participate in local and global representation learning. At a high level, global self-attention enables efficient cross-window communication at low cost. FasterViT achieves a SOTA Pareto front in terms of accuracy and image throughput.

YoloV8 improvement strategy: the InceptionNeXt backbone replaces the backbones of YoloV8 and YoloV5


This article explains how to replace the backbones of YoloV8 and YoloV5 with the InceptionNeXt backbone network, modifying the InceptionNeXt network structure as well as the YoloV5 and YoloV8 architectures.

YoloV8 improvement strategy: lightweight CloFormer helps Yolov8 achieve both speed and accuracy improvements


CloFormer is a lightweight backbone network published by Tsinghua University this year. It introduces AttnConv, an attention-style convolution operator. AttnConv uses shared weights to aggregate local information and employs carefully designed context-aware weights to enhance local features. By combining AttnConv with vanilla attention that uses pooling to reduce FLOPs, CloFormer can perceive both high-frequency and low-frequency information.

YoloV8 improvement strategy: the perfect combination of InceptionNeXt and YoloV8 makes YoloV8 shine


InceptionNeXt is a paper released by Yan Shuicheng's team this year that combines the ideas of ConvNeXt and Inception, hence the name InceptionNeXt. InceptionNeXt-T achieves 1.6x higher training throughput than ConvNeXt-T and a 0.2% top-1 accuracy improvement on ImageNet-1K. A sketch of the core idea follows.
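
To make the "Inception idea" concrete, here is a hedged sketch of the InceptionNeXt-style depthwise token mixer as I read the paper: channels are split into an identity branch, a small square depthwise convolution, and two orthogonal band-shaped depthwise convolutions. The branch ratio and kernel sizes are my assumptions from the paper, not the linked article's code:

    import torch
    import torch.nn as nn

    class InceptionDWConv2d(nn.Module):
        """Sketch of an InceptionNeXt-style depthwise token mixer."""
        def __init__(self, dim, square_k=3, band_k=11, branch_ratio=0.125):
            super().__init__()
            gc = int(dim * branch_ratio)  # channels per convolutional branch
            self.dw_hw = nn.Conv2d(gc, gc, square_k, padding=square_k // 2, groups=gc)
            self.dw_w = nn.Conv2d(gc, gc, (1, band_k), padding=(0, band_k // 2), groups=gc)
            self.dw_h = nn.Conv2d(gc, gc, (band_k, 1), padding=(band_k // 2, 0), groups=gc)
            self.splits = (dim - 3 * gc, gc, gc, gc)  # identity branch gets the rest

        def forward(self, x):  # x: (N, C, H, W)
            x_id, x_hw, x_w, x_h = torch.split(x, self.splits, dim=1)
            return torch.cat(
                (x_id, self.dw_hw(x_hw), self.dw_w(x_w), self.dw_h(x_h)), dim=1)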


YoloV8 improvement strategy: The newly released EMA attention mechanism helps YoloV8 become more powerful


The EMA attention mechanism is a new efficient multi-scale attention module from this year. To preserve the information in each channel while reducing computational overhead, it reshapes some channels into the batch dimension and groups the channel dimension into multiple sub-features, so that spatial semantic features are evenly distributed within each feature group. Specifically, besides encoding global information to recalibrate the channel weights in each parallel branch, the output features of the two parallel branches are further aggregated through cross-dimensional interaction to capture pixel-level pairwise relationships.

YoloV8 improvement strategy: VanillaNet minimalist network, greatly reducing the parameters of YoloV8


VanillaNet is a neural network architecture that embraces elegant design. By avoiding great depth, shortcuts, and complex operations such as self-attention, VanillaNet is refreshingly concise yet remarkably powerful. Each layer is carefully crafted to be compact and straightforward, and non-linear activation functions are pruned after training to restore the original architecture. VanillaNet overcomes the challenges of inherent complexity, making it ideal for resource-constrained environments. Its easy-to-understand and highly simplified architecture opens up new possibilities for efficient deployment. Extensive experiments show that VanillaNet delivers performance comparable to well-known deep neural networks and vision transformers, demonstrating the power of minimalism in deep learning. This visionary journey has great potential to redefine the landscape and challenge the status quo of foundation models, opening a new path for elegant and effective model design.


YoloV8 improvement strategy: the plug-and-play RFAConv module gives YoloV8 a silky-smooth accuracy boost

RFAConv is built on a new attention mechanism called Receptive Field Attention (RFA). The Convolutional Block Attention Module (CBAM) and Coordinate Attention (CA) focus only on spatial features and cannot fully solve the problem of convolution-kernel parameter sharing. In RFA, the receptive-field spatial features are not only brought into focus, but large-size convolution kernels are also given effective attention weights. The receptive field attention convolution operation (RFAConv) derived from RFA can be considered a new way to replace standard convolution, at an almost negligible cost in computation and parameters. Since the authors did not open-source the code, I reproduced a version myself and tried adding it to the YoloV8 network.

YoloV8 improvement strategy: bring SeaFormer into YoloV8's field of view, a lightweight and efficient attention module with unparalleled charm


SeaFormer designs a generic attention block built on axial squeezing and detail enhancement, which can be used to create a family of backbone architectures with superior cost-effectiveness. Coupled with a light segmentation head, it achieves the best trade-off between segmentation accuracy and latency on the ADE20K and Cityscapes datasets on ARM-based mobile devices. Crucially, it beats both mobile-friendly rivals and transformer-based counterparts with better performance and lower latency, without bells and whistles.

YoloV8 improvement strategy: apply DCN v1 and v2 to YoloV8 and turn it into a high-scoring dark horse


Try replacing ordinary convolution with DCNv1 and DCNv2! A sketch of the idea is shown below.
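
The article's own modules are not reproduced here, but as a hedged sketch of the idea, torchvision's DeformConv2d can stand in for an ordinary convolution, with a plain convolution predicting the sampling offsets (DCNv1) and, optionally, a modulation mask (DCNv2). The module and parameter names below are illustrative:

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class DeformableConvBlock(nn.Module):
        """Drop-in stand-in for a k x k nn.Conv2d using deformable convolution.

        modulated=False roughly corresponds to DCNv1; modulated=True adds
        the DCNv2-style per-location modulation mask.
        """
        def __init__(self, c_in, c_out, k=3, stride=1, modulated=True):
            super().__init__()
            pad = k // 2
            self.modulated = modulated
            # One (dy, dx) offset pair per kernel position.
            self.offset = nn.Conv2d(c_in, 2 * k * k, k, stride, pad)
            # DCNv2 additionally predicts a 0..1 modulation mask.
            self.mask = nn.Conv2d(c_in, k * k, k, stride, pad) if modulated else None
            self.conv = DeformConv2d(c_in, c_out, k, stride, pad)

        def forward(self, x):
            offset = self.offset(x)
            mask = torch.sigmoid(self.mask(x)) if self.modulated else None
            return self.conv(x, offset, mask)

In practice the offset (and mask) convolutions are usually zero-initialized so the layer starts out behaving like an ordinary convolution.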

YoloV8 improvement strategy: Vision Transformer based on Bi-Level Routing Attention improves YoloV8's detection ability

Bi-level routing attention enables more flexible, content-aware allocation of computation. It exploits sparsity to save computation and memory while involving only dense matrix multiplications, which are GPU-friendly. A new general vision transformer, BiFormer, is built on the proposed bi-level routing attention.

YoloV8 improvement strategy: Google's latest optimizer, Lion, improves both speed and accuracy; Adam complains that young people have no martial ethics


Lion improves ViT's accuracy on ImageNet by 2% and saves up to 5x the pre-training compute on JFT. For vision-language contrastive learning, it achieves 88.3% zero-shot and 91.1% fine-tuning accuracy on ImageNet, surpassing the previous best results by 2% and 0.1%, respectively. On diffusion models, Lion outperforms Adam, achieving better FID scores and reducing training compute by 2.3x. Lion shows similar or better performance than Adam on autoregressive modeling, masked language modeling, and fine-tuning. Analysis shows that Lion's performance gain grows with training batch size, and that it requires a smaller learning rate than Adam because the sign function produces updates with a larger norm.
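
The Lion update rule itself is compact enough to sketch. Below is a minimal, hedged PyTorch implementation as I read the paper (interpolate momentum and gradient, take the sign, apply decoupled weight decay); it is not the linked article's code:

    import torch

    class Lion(torch.optim.Optimizer):
        """Minimal sketch of the Lion update rule."""
        def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
            super().__init__(params, dict(lr=lr, betas=betas, weight_decay=weight_decay))

        @torch.no_grad()
        def step(self, closure=None):
            for group in self.param_groups:
                lr, (b1, b2), wd = group["lr"], group["betas"], group["weight_decay"]
                for p in group["params"]:
                    if p.grad is None:
                        continue
                    state = self.state[p]
                    if "m" not in state:
                        state["m"] = torch.zeros_like(p)
                    m = state["m"]
                    # Update direction: sign of interpolated momentum and gradient.
                    update = m.mul(b1).add_(p.grad, alpha=1 - b1).sign_()
                    # Decoupled weight decay, then the signed step.
                    p.mul_(1 - lr * wd)
                    p.add_(update, alpha=-lr)
                    # Momentum update.
                    m.mul_(b2).add_(p.grad, alpha=1 - b2)

Because sign() gives every coordinate the same magnitude, the effective update norm is larger than Adam's, which is why a smaller learning rate (and correspondingly larger weight decay) is recommended.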

YoloV8 improvement strategy: deep integration of Conv2Former and YoloV8, minimalist network, extremely high performance

Conv2Former builds on ConvNeXt and optimizes it further, improving its performance.

YoloV8 improvement strategy: What kind of sparks can be produced by the passionate collision of ConvNextV2 and YoloV8?


ConvNeXt V2 incorporates a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance feature competition between channels, significantly improving the performance of pure ConvNets on several recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation.
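
The GRN layer itself is tiny; here is a hedged sketch following the paper's pseudocode. It operates on channels-last tensors, and gamma/beta are zero-initialized so the layer starts as an identity mapping:

    import torch
    import torch.nn as nn

    class GRN(nn.Module):
        """Global Response Normalization (ConvNeXt V2 style), NHWC tensors."""
        def __init__(self, dim, eps=1e-6):
            super().__init__()
            self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
            self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
            self.eps = eps

        def forward(self, x):  # x: (N, H, W, C)
            # Global feature aggregation: per-channel L2 norm over space.
            gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)  # (N, 1, 1, C)
            # Divisive normalization across channels.
            nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)
            # Recalibrate the input, with a residual so training starts stable.
            return self.gamma * (x * nx) + self.beta + x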


YoloV8 improvement strategy: replace CIoU with Wise-IoU for easy accuracy gains, well worth having; EIoU, GIoU, DIoU, and SIoU can also be swapped in seamlessly.

This article describes how to use Wise-IoU to gain accuracy in YoloV8. First, I translated the paper so everyone can understand what Wise-IoU is and what its three versions are. Then I explain how to add Wise-IoU to YoloV8.
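
As a taste of what the article implements, here is a hedged sketch of Wise-IoU v1 as I read the paper: the IoU loss is scaled by a distance-based focusing factor whose denominator (the squared diagonal of the smallest enclosing box) is detached from the gradient. The v2/v3 variants and the YoloV8 wiring are covered in the article itself:

    import torch

    def wiou_v1_loss(pred, target, eps=1e-7):
        """Wise-IoU v1 sketch; boxes are (x1, y1, x2, y2) tensors of shape (N, 4)."""
        # Intersection and IoU.
        x1 = torch.max(pred[:, 0], target[:, 0])
        y1 = torch.max(pred[:, 1], target[:, 1])
        x2 = torch.min(pred[:, 2], target[:, 2])
        y2 = torch.min(pred[:, 3], target[:, 3])
        inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        iou = inter / (area_p + area_t - inter + eps)

        # Smallest enclosing box and squared center distance.
        wg = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
        hg = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
        d2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
              + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4

        # Focusing factor with detached denominator, per the paper.
        r_wiou = torch.exp(d2 / (wg ** 2 + hg ** 2 + eps).detach())
        return r_wiou * (1 - iou)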


YoloV8 improvement strategy: add a detection branch to reduce missed detections


Improve the detection of small targets by adding a detection branch.

YoloV8 improvement strategy: deep integration of FasterNet and YoloV8 to create a faster and stronger detection network

FasterNet is a new family of neural networks that achieves higher running speed than other networks on various devices, without compromising accuracy across a range of vision tasks.


Detailed explanation and hands-on practice with the YoloV8 network (dataset included)


Origin blog.csdn.net/hhhhhhhhhhwwwwwwwwww/article/details/131780176