Softmax in ncnn

In ncnn you can create and call the Softmax layer directly, as shown below:

ncnn::Layer* softmax = ncnn::create_layer("Softmax");

ncnn::ParamDict pd;
pd.set(0, 1); // param 0: axis
pd.set(1, 1); // param 1: fixbug0, opts in to the fixed axis handling
softmax->load_param(pd);

ncnn::Option opt;
opt.num_threads = 1;
opt.use_packing_layout = false;

softmax->create_pipeline(opt);

softmax->forward_inplace(pred, opt); // pred is an ncnn::Mat holding the logits

softmax->destroy_pipeline(opt);
delete softmax;
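
For reference, here is a complete, compilable version of the snippet above. This is a minimal sketch, assuming ncnn is built and its headers are on the include path; the input logits are made-up example values, and axis is set to 0 here since this blob is one-dimensional.

#include "layer.h"

#include <stdio.h>

int main()
{
    ncnn::Mat pred(4); // a 1D blob of 4 logits
    pred[0] = 1.f;
    pred[1] = 2.f;
    pred[2] = 3.f;
    pred[3] = 4.f;

    ncnn::Layer* softmax = ncnn::create_layer("Softmax");

    ncnn::ParamDict pd;
    pd.set(0, 0); // axis = 0 for a 1D blob
    pd.set(1, 1); // fixbug0
    softmax->load_param(pd);

    ncnn::Option opt;
    opt.num_threads = 1;
    opt.use_packing_layout = false;

    softmax->create_pipeline(opt);
    softmax->forward_inplace(pred, opt);
    softmax->destroy_pipeline(opt);
    delete softmax;

    for (int i = 0; i < pred.w; i++)
        printf("%f ", pred[i]);
    printf("\n");
    return 0;
}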

The main processing happens in the forward_inplace function. Let's look at its source; here we focus on the one-dimensional case.
The standard softmax formula is exp(value_i) / sum_j(exp(value_j)).

For numerical stability, ncnn first computes value = expf(value - max), where max is the global maximum (value is updated in place), then sums the updated values, and finally divides each updated value by the sum. Subtracting the max keeps expf from overflowing on large inputs, and it does not change the result, since exp(v_i - max) / sum_j(exp(v_j - max)) = exp(v_i) / sum_j(exp(v_j)).
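
To see why the shift matters, here is a small standalone illustration (plain C++, independent of ncnn; the large example values are my own): the naive formula overflows expf and yields NaN, while the max-shifted version stays in range.

#include <algorithm>
#include <math.h>
#include <stdio.h>

int main()
{
    float v[3] = {1000.f, 1001.f, 1002.f};

    // naive: expf(1000.f) overflows to inf, and inf / inf is NaN
    float naive_sum = 0.f;
    for (int i = 0; i < 3; i++)
        naive_sum += expf(v[i]);
    printf("naive:   %f\n", expf(v[2]) / naive_sum);

    // max-shifted: the largest exponent becomes expf(0) = 1
    float max = v[0];
    for (int i = 1; i < 3; i++)
        max = std::max(max, v[i]);
    float sum = 0.f;
    for (int i = 0; i < 3; i++)
        sum += expf(v[i] - max);
    printf("shifted: %f\n", expf(v[2] - max) / sum);
    return 0;
}

With that in mind, here is the forward_inplace source for the dims == 1 case: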

int Softmax::forward_inplace(Mat& bottom_top_blob, const Option& opt) const
{
    // value = expf( value - global max value )
    // sum all value
    // value = value / sum

    int dims = bottom_top_blob.dims;
    size_t elemsize = bottom_top_blob.elemsize;
    int positive_axis = axis < 0 ? dims + axis : axis;

    if (dims == 1) // positive_axis == 0
    {
        int w = bottom_top_blob.w;

        float* ptr = bottom_top_blob;

        // pass 1: find the global max
        float max = -FLT_MAX;
        for (int i = 0; i < w; i++)
        {
            max = std::max(max, ptr[i]);
        }

        // pass 2: exponentiate (shifted by max) and accumulate the sum
        float sum = 0.f;
        for (int i = 0; i < w; i++)
        {
            ptr[i] = expf(ptr[i] - max);
            sum += ptr[i];
        }

        // pass 3: divide by the sum
        for (int i = 0; i < w; i++)
        {
            ptr[i] /= sum;
        }
    }
    // ... (the higher-dimensional cases are omitted)

Verify with a 1x16 vector:

-0.910332, -0.126514, 0.419731, 1.550650, 4.828178, 7.582278, 5.889471, 3.399679, 2.319962, 1.346125, 0.744986, 0.158084, -0.525536, -3.267861, -4.301273, -3.085058
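
A minimal standalone sketch that feeds this vector through the same three passes as the source above (plain C++; the printing format is my own choice):

#include <algorithm>
#include <float.h>
#include <math.h>
#include <stdio.h>

int main()
{
    float v[16] = {
        -0.910332f, -0.126514f, 0.419731f, 1.550650f,
        4.828178f, 7.582278f, 5.889471f, 3.399679f,
        2.319962f, 1.346125f, 0.744986f, 0.158084f,
        -0.525536f, -3.267861f, -4.301273f, -3.085058f
    };
    const int w = 16;

    // pass 1: global max
    float max = -FLT_MAX;
    for (int i = 0; i < w; i++)
        max = std::max(max, v[i]);

    // pass 2: shifted exp and sum
    float sum = 0.f;
    for (int i = 0; i < w; i++)
    {
        v[i] = expf(v[i] - max);
        sum += v[i];
    }

    // pass 3: normalize
    for (int i = 0; i < w; i++)
        v[i] /= sum;

    for (int i = 0; i < w; i++)
        printf("%f, ", v[i]);
    printf("\n");
    return 0;
}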

Running this simulation of the source produces:

0.000161, 0.000352, 0.000607, 0.001882, 0.049898, 0.783749, 0.144212, 0.011959, 0.004062, 0.001534, 0.000841, 0.000468, 0.000236, 0.000015, 0.000005, 0.000018

Calling ncnn's Softmax directly gives the same values:

0.000161 0.000352 0.000607 0.001882 0.049898 0.783749 0.144212 0.011959 0.004062 0.001534 0.000841 0.000468 0.000236 0.000015 0.000005 0.000018

Reposted from blog.csdn.net/level_code/article/details/132064858