Ceres Solver: From Getting Started to Practical Use


Ceres Solver is an open-source C++ nonlinear optimization library from Google that solves constrained and unconstrained nonlinear least-squares problems. Since 2010 it has been used extensively in Google products, most notably in Google's open-source cartographer.
Ceres can be installed and used on Linux, Windows, macOS, Android and iOS; see the official website linked below for details.

Aside: the library is named after the dwarf planet Ceres, whose position Gauss successfully predicted, after it had passed behind the Sun, using the method of least squares.

Ceres Solver website: http://ceres-solver.org/
GitHub: https://github.com/ceres-solver/ceres-solver

1. Introduction

Ceres can solve bound-constrained nonlinear least-squares problems of the form:

$$\min_{x}\ \sum_{i} \frac{1}{2}\,\rho_i\!\left(\left\|f_i(x_{i_1},\dots,x_{i_k})\right\|^2\right) \qquad \text{s.t.}\quad l_j \le x_j \le u_j$$

  • $\rho_i\!\left(\left\|f_i(x_{i_1},\dots,x_{i_k})\right\|^2\right)$: a residual block (ResidualBlock).
  • $f_i(\cdot)$: a cost function (CostFunction), called an error term in SLAM; its parameter blocks are $[x_{i_1},\dots,x_{i_k}]$.
  • $l_j$ and $u_j$: the lower and upper bounds of parameter block $x_j$.
  • $\rho_i$: a loss function (LossFunction), also known as a robust kernel; it is a scalar function used to reduce the influence of outliers on the optimization.

Special case: when the loss function is the identity, $\rho_i(s)=s$, and $l_j=-\infty$, $u_j=\infty$, we recover the familiar unconstrained nonlinear least-squares problem:

$$\frac{1}{2}\sum_{i} \left\|f_i(x_{i_1},\dots,x_{i_k})\right\|^2$$
The general steps for solving a problem with Ceres:

  1. Define the cost function model, i.e. the objective we want to minimize. This is done with a functor (function object): write a struct or class and overload operator() in it.
  2. Use the cost function to build the optimization problem, i.e. call AddResidualBlock to add each error term to the objective. Since the optimizer needs derivatives, there are several choices: 1) automatic differentiation (Auto Diff); 2) numeric differentiation (Numeric Diff); 3) deriving the analytic derivatives yourself and supplying them to Ceres.
  3. Configure the solver options and solve the problem. The options are quite extensive; see the definition of Solver::Options.

2. Hello World

This example comes from examples/helloworld.cc.

To get started quickly, here is a simple example. The function to minimize is:
$$\frac{1}{2}(10-x)^2$$
The function is simple enough to see that it attains its minimum at $x = 10$. The Ceres code and comments follow.

#include<iostream>
#include<ceres/ceres.h>
using namespace ceres;

// Step 1: define the cost function (as a functor).
struct CostFunctor {
   template <typename T>
   bool operator()(const T* const x, T* residual) const {
     residual[0] = 10.0 - x[0];
     return true;
   }
};

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);

  // The variable to solve for with its initial value.
  double initial_x = 5.0;
  double x = initial_x;

  // Build the problem.
  Problem problem;

  // Step 2: build the optimization problem.
  // Set up the only cost function (also known as residual). This uses
  // auto-differentiation to obtain the derivative (jacobian).
  CostFunction* cost_function =
      new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
      // Automatic differentiation.
      // The first 1 is the residual (output) dimension; the second 1 is the
      // dimension of the parameter block x (input).
  problem.AddResidualBlock(cost_function, nullptr, &x);  // nullptr: no loss function (robust kernel); &x: the parameter to optimize

  // Step 3: configure the solver and run.
  // Run the solver!
  Solver::Options options;
  options.linear_solver_type = ceres::DENSE_QR;
  options.minimizer_progress_to_stdout = true;
  Solver::Summary summary;
  Solve(options, &problem, &summary);

  std::cout << summary.BriefReport() << "\n";
  std::cout << "x : " << initial_x
            << " -> " << x << "\n";
  return 0;
}

Step 1: construct the cost function.
A functor is used, i.e. operator() is overloaded. The input and output share the same template type T. Once such a struct is defined, Ceres can evaluate it by calling CostFunctor::operator(). When T = double, the operator returns the residual as a double; when T = Jet, the Jacobian is propagated as well.
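To make this concrete, here is a minimal standalone sketch (not part of the example program) that evaluates the same functor directly, once with double and once with ceres::Jet; it assumes the CostFunctor struct from the listing above is in scope.

#include <iostream>
#include <ceres/jet.h>

int main() {
  CostFunctor functor;

  // T = double: plain residual value.
  double x = 0.5, r = 0.0;
  functor(&x, &r);                    // r == 9.5

  // T = ceres::Jet: the residual also carries d(residual)/dx.
  ceres::Jet<double, 1> xj(0.5, 0);   // value 0.5, derivative tracked in slot 0
  ceres::Jet<double, 1> rj;
  functor(&xj, &rj);
  std::cout << "residual = " << rj.a
            << ", d(residual)/dx = " << rj.v[0] << "\n";  // prints 9.5 and -1
  return 0;
}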
Step 2: build the optimization problem.
With the residual functor in place, it can be used to construct a nonlinear least-squares problem and solve it with Ceres. Automatic differentiation is used here; AutoDiffCostFunction takes three template arguments: the cost functor type (CostFunctor), the residual (output) dimension and the dimension of the parameter block x (input).
The error term is then added to the problem; AddResidualBlock takes three arguments: the cost function (cost_function), the loss function (nullptr, i.e. none) and the parameter block (&x).
Step 3: configure and solve.
This part configures the options and prints the solver's report.
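The options cover linear solvers, trust-region settings, termination tolerances and more. A few commonly adjusted fields are shown below as a sketch; the values are only illustrative, not recommendations.

Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;  // linear solver used inside each iteration
options.minimizer_progress_to_stdout = true;   // print the per-iteration table shown below
options.max_num_iterations = 50;               // hard cap on iterations
options.function_tolerance = 1e-10;            // stop when the relative cost change is tiny
options.num_threads = 4;                       // parallel residual/Jacobian evaluation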

I ran the program above from Qt, which required adding the following to the .pro file:

INCLUDEPATH += /usr/include/eigen3
INCLUDEPATH += /usr/include/ceres
LIBS +=/usr/lib/libceres.so
LIBS +=/usr/lib/x86_64-linux-gnu/libglog.so

The output is:

iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  4.512500e+01    0.00e+00    9.50e+00   0.00e+00   0.00e+00  1.00e+04       0    5.33e-04    3.46e-03
   1  4.511598e-07    4.51e+01    9.50e-04   9.50e+00   1.00e+00  3.00e+04       1    5.00e-04    4.05e-03
   2  5.012552e-16    4.51e-07    3.17e-08   9.50e-04   1.00e+00  9.00e+04       1    1.60e-05    4.09e-03
Ceres Solver Report: Iterations: 2, Initial cost: 4.512500e+01, Final cost: 5.012552e-16, Termination: CONVERGENCE
x : 0.5 -> 10

Starting from the initial value, the solver reaches the optimum x = 10 after two iterations. This example is essentially a linear problem, but that does not affect the analysis.

3. Derivatives

These examples come from:
examples/helloworld_numeric_diff.cc
examples/helloworld_analytic_diff.cc

Like most optimization packages, Ceres relies on being able to evaluate the value and the derivatives of every term in the objective at arbitrary parameter values. The Hello World example already used automatic differentiation; below we look at analytic and numeric derivatives.

  • Automatic differentiation (Automatic Differentiation): AutoDiffCostFunction
  • Numeric differentiation (Numeric Derivatives): NumericDiffCostFunction

3.1 Numeric Derivatives

In some cases it is not possible to define a templated cost functor, for example when the residual involves calls into a library we have no control over. In such situations numeric differentiation comes in handy. Using $f(x)=10-x$ again as the example, the corresponding functor is:

struct NumericDiffCostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

It is added to the problem like this:

CostFunction* cost_function =
  new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
      new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, nullptr, &x);

Compared with automatic differentiation, the only difference is the extra template argument. That said, Ceres recommends automatic differentiation: C++ templates make it efficient, while numeric differentiation is more expensive to evaluate, prone to numerical errors, and tends to converge more slowly.
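As a side note, the ceres::CENTRAL template argument selects the finite-difference scheme; Ceres also provides FORWARD and RIDDERS variants. A sketch of switching the scheme (everything else stays the same):

// FORWARD differences are cheaper per evaluation; RIDDERS is more accurate
// but needs more residual evaluations than CENTRAL.
CostFunction* cost_function =
    new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::FORWARD, 1, 1>(
        new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, nullptr, &x);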

3.2 Analytic Derivatives

In some cases analytic derivatives are more efficient than automatic differentiation, for example when a closed-form expression for the derivative is cheaper than the chain-rule evaluation done by autodiff. Again using $f(x)=10-x$ as the example, instead of a functor we now subclass ceres::SizedCostFunction:

class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
 public:
  virtual ~QuadraticCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;

    // Compute the Jacobian if asked for.
    if (jacobians != nullptr && jacobians[0] != nullptr) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};

Evaluate is given the parameter values and must fill in the residuals; it also checks whether jacobians is non-null and, if so, fills in the derivative of the residual with respect to the parameters. As the snippet shows, implementing CostFunction by hand is somewhat tedious, so unless there is a good reason to compute the Jacobians yourself, it is better to use automatic or numeric differentiation to create residual blocks.
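For completeness, here is a minimal sketch of using the analytic cost function above; because the derivatives are supplied by Evaluate, no AutoDiff or NumericDiff wrapper is involved.

double x = 0.5;
ceres::Problem problem;
// SizedCostFunction is already a CostFunction, so it is added directly.
ceres::CostFunction* cost_function = new QuadraticCostFunction;
problem.AddResidualBlock(cost_function, nullptr, &x);

ceres::Solver::Options options;
options.minimizer_progress_to_stdout = true;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);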

3.3 Other Differentiation Methods

Computing derivatives is the most involved part of using Ceres, and depending on the circumstances users may need more sophisticated approaches. This section only covered the basics; once you are comfortable with NumericDiffCostFunction and AutoDiffCostFunction, it is worth exploring DynamicAutoDiffCostFunction, CostFunctionToFunctor, NumericDiffFunctor and ConditionedCostFunction. For the methods above, examples/helloworld_numeric_diff.cc and examples/helloworld_analytic_diff.cc are provided for reference; a rough sketch of the dynamic variant follows.
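The sketch below shows roughly how DynamicAutoDiffCostFunction is set up when the number and sizes of parameter blocks are only known at run time; the functor name, block sizes and residual formula are made up purely for illustration, so check the Ceres documentation before relying on the details.

// The functor receives all parameter blocks through one array of pointers.
struct DynamicResidual {
  template <typename T>
  bool operator()(T const* const* parameters, T* residuals) const {
    // parameters[0] has 3 entries, parameters[1] has 2 (declared below).
    residuals[0] = parameters[0][0] + 2.0 * parameters[1][0];
    return true;
  }
};

// Block sizes and residual count are declared at run time instead of as
// template arguments; 4 is the differentiation stride.
auto* cost_function =
    new ceres::DynamicAutoDiffCostFunction<DynamicResidual, 4>(new DynamicResidual);
cost_function->AddParameterBlock(3);
cost_function->AddParameterBlock(2);
cost_function->SetNumResiduals(1);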

4. Powell's Function

This example comes from examples/powell.cc.

Now consider a more involved example: minimizing Powell's function. Let $x=[x_1,x_2,x_3,x_4]$ and define
$$f_1(x)=x_1+10x_2$$
$$f_2(x)=\sqrt{5}\,(x_3-x_4)$$
$$f_3(x)=(x_2-2x_3)^2$$
$$f_4(x)=\sqrt{10}\,(x_1-x_4)^2$$
$$F(x)=[f_1(x),\,f_2(x),\,f_3(x),\,f_4(x)]$$
$F(x)$ has four parameter blocks and four residuals; we want the $x$ that minimizes $\frac{1}{2}\|F(x)\|^2$.
Step 1: create the functors. Here is the one for $f_4(x_1,x_4)$:

struct F4 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x4, T* residual) const {
    residual[0] = sqrt(10.0) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
    return true;
  }
};

In the same way we create $F_1$, $F_2$ and $F_3$, and then add these residual blocks (functors) to the problem:

double x1 =  3.0; double x2 = -1.0; double x3 =  0.0; double x4 = 1.0;

Problem problem;

// Add residual terms to the problem using the using the autodiff
// wrapper to get the derivatives automatically.
problem.AddResidualBlock(
  new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
problem.AddResidualBlock(
  new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x3, &x4);
problem.AddResidualBlock(
  new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x2, &x3);
problem.AddResidualBlock(
  new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x1, &x4);

The complete C++ code with comments:

// An example program that minimizes Powell's singular function.
//
//   F = 1/2 (f1^2 + f2^2 + f3^2 + f4^2)
//
//   f1 = x1 + 10*x2;
//   f2 = sqrt(5) * (x3 - x4)
//   f3 = (x2 - 2*x3)^2
//   f4 = sqrt(10) * (x1 - x4)^2
//
// The starting values are x1 = 3, x2 = -1, x3 = 0, x4 = 1.
// The minimum is 0 at (x1, x2, x3, x4) = 0.
#include <vector>
#include "ceres/ceres.h"
#include "gflags/gflags.h"
#include "glog/logging.h"

using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solve;
using ceres::Solver;
// Create the four residual blocks, i.e. the four functors.
struct F1 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x2, T* residual) const {
    // f1 = x1 + 10 * x2;
    residual[0] = x1[0] + 10.0 * x2[0];
    return true;
  }
};

struct F2 {
  template <typename T>
  bool operator()(const T* const x3, const T* const x4, T* residual) const {
    // f2 = sqrt(5) (x3 - x4)
    residual[0] = sqrt(5.0) * (x3[0] - x4[0]);
    return true;
  }
};

struct F3 {
  template <typename T>
  bool operator()(const T* const x2, const T* const x3, T* residual) const {
    // f3 = (x2 - 2 x3)^2
    residual[0] = (x2[0] - 2.0 * x3[0]) * (x2[0] - 2.0 * x3[0]);
    return true;
  }
};

struct F4 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x4, T* residual) const {
    // f4 = sqrt(10) (x1 - x4)^2
    residual[0] = sqrt(10.0) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
    return true;
  }
};

DEFINE_string(minimizer,
              "trust_region",
              "Minimizer type to use, choices are: line_search & trust_region");

int main(int argc, char** argv) {
  GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
  google::InitGoogleLogging(argv[0]);
  // Set the initial values.
  double x1 = 3.0;
  double x2 = -1.0;
  double x3 = 0.0;
  double x4 = 1.0;
  // Add the residual blocks to the problem.
  Problem problem;
  // Add residual terms to the problem using the using the autodiff
  // wrapper to get the derivatives automatically. The parameters, x1 through
  // x4, are modified in place.
  problem.AddResidualBlock(
      new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x1, &x2);
  problem.AddResidualBlock(
      new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x3, &x4);
  problem.AddResidualBlock(
      new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x2, &x3);
  problem.AddResidualBlock(
      new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x1, &x4);

  Solver::Options options;
  LOG_IF(
      FATAL,
      !ceres::StringToMinimizerType(FLAGS_minimizer, &options.minimizer_type))
      << "Invalid minimizer: " << FLAGS_minimizer
      << ", valid options are: trust_region and line_search.";
  // Configure the solver options.
  options.max_num_iterations = 100;  // maximum number of iterations
  options.linear_solver_type = ceres::DENSE_QR;
  options.minimizer_progress_to_stdout = true;

  // clang-format off
  std::cout << "Initial x1 = " << x1
            << ", x2 = " << x2
            << ", x3 = " << x3
            << ", x4 = " << x4
            << "\n";
  // clang-format on

  // Run the solver!
  Solver::Summary summary;
  Solve(options, &problem, &summary);

  std::cout << summary.FullReport() << "\n";
  // clang-format off
  std::cout << "Final x1 = " << x1
            << ", x2 = " << x2
            << ", x3 = " << x3
            << ", x4 = " << x4
            << "\n";
  // clang-format on
  return 0;
}

Output:

Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  1.075000e+02    0.00e+00    1.55e+02   0.00e+00   0.00e+00  1.00e+04       0    4.95e-04    2.30e-03
   1  5.036190e+00    1.02e+02    2.00e+01   2.16e+00   9.53e-01  3.00e+04       1    4.39e-05    2.40e-03
   2  3.148168e-01    4.72e+00    2.50e+00   6.23e-01   9.37e-01  9.00e+04       1    9.06e-06    2.43e-03
   3  1.967760e-02    2.95e-01    3.13e-01   3.08e-01   9.37e-01  2.70e+05       1    8.11e-06    2.45e-03
   4  1.229900e-03    1.84e-02    3.91e-02   1.54e-01   9.37e-01  8.10e+05       1    6.91e-06    2.48e-03
   5  7.687123e-05    1.15e-03    4.89e-03   7.69e-02   9.37e-01  2.43e+06       1    7.87e-06    2.50e-03
   6  4.804625e-06    7.21e-05    6.11e-04   3.85e-02   9.37e-01  7.29e+06       1    5.96e-06    2.52e-03
   7  3.003028e-07    4.50e-06    7.64e-05   1.92e-02   9.37e-01  2.19e+07       1    5.96e-06    2.55e-03
   8  1.877006e-08    2.82e-07    9.54e-06   9.62e-03   9.37e-01  6.56e+07       1    5.96e-06    2.57e-03
   9  1.173223e-09    1.76e-08    1.19e-06   4.81e-03   9.37e-01  1.97e+08       1    7.87e-06    2.60e-03
  10  7.333425e-11    1.10e-09    1.49e-07   2.40e-03   9.37e-01  5.90e+08       1    6.20e-06    2.63e-03
  11  4.584044e-12    6.88e-11    1.86e-08   1.20e-03   9.37e-01  1.77e+09       1    6.91e-06    2.65e-03
  12  2.865573e-13    4.30e-12    2.33e-09   6.02e-04   9.37e-01  5.31e+09       1    5.96e-06    2.67e-03
  13  1.791438e-14    2.69e-13    2.91e-10   3.01e-04   9.37e-01  1.59e+10       1    7.15e-06    2.69e-03

Ceres Solver v1.12.0 Solve Report
----------------------------------
                                     Original                  Reduced
Parameter blocks                            4                        4
Parameters                                  4                        4
Residual blocks                             4                        4
Residual                                    4                        4

Minimizer                        TRUST_REGION

Dense linear algebra library            EIGEN
Trust region strategy     LEVENBERG_MARQUARDT

                                        Given                     Used
Linear solver                        DENSE_QR                 DENSE_QR
Threads                                     1                        1
Linear solver threads                       1                        1

Cost:
Initial                          1.075000e+02
Final                            1.791438e-14
Change                           1.075000e+02

Minimizer iterations                       14
Successful steps                           14
Unsuccessful steps                          0

Time (in seconds):
Preprocessor                            0.002

  Residual evaluation                   0.000
  Jacobian evaluation                   0.000
  Linear solver                         0.000
Minimizer                               0.001

Postprocessor                           0.000
Total                                   0.005

Termination:                      CONVERGENCE (Gradient tolerance reached. Gradient max norm: 3.642190e-11 <= 1.000000e-10)

Final x1 = 0.000292189, x2 = -2.92189e-05, x3 = 4.79511e-05, x4 = 4.79511e-05

Clearly the objective attains its minimum value of 0 at $x_1=x_2=x_3=x_4=0$. Although the maximum number of iterations was set to 100, the solver converged after only 14 iterations.

5. Curve Fitting

This example comes from examples/curve_fitting.cc.

So far the examples have been simple optimization problems with no data, but the original motivation for least squares and nonlinear least squares is curve fitting. Here we sample data from the curve $y=e^{0.3x+0.1}$ with additive Gaussian noise of standard deviation $\sigma=0.2$, and then fit a curve of the form
$$y=e^{mx+c}$$
The complete code with comments:

#include "ceres/ceres.h"
#include "glog/logging.h"
using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solve;
using ceres::Solver;
// Data generated using the following octave code.
//   randn('seed', 23497);
//   m = 0.3;
//   c = 0.1;
//   x=[0:0.075:5];
//   y = exp(m * x + c);
//   noise = randn(size(x)) * 0.2;
//   y_observed = y + noise;
//   data = [x', y_observed'];

const int kNumObservations = 67;
// clang-format off
const double data[] = {
  0.000000e+00, 1.133898e+00,
  7.500000e-02, 1.334902e+00,
  1.500000e-01, 1.213546e+00,
  2.250000e-01, 1.252016e+00,
  3.000000e-01, 1.392265e+00,
  3.750000e-01, 1.314458e+00,
  4.500000e-01, 1.472541e+00,
  5.250000e-01, 1.536218e+00,
  6.000000e-01, 1.355679e+00,
  6.750000e-01, 1.463566e+00,
  7.500000e-01, 1.490201e+00,
  8.250000e-01, 1.658699e+00,
  9.000000e-01, 1.067574e+00,
  9.750000e-01, 1.464629e+00,
  1.050000e+00, 1.402653e+00,
  1.125000e+00, 1.713141e+00,
  1.200000e+00, 1.527021e+00,
  1.275000e+00, 1.702632e+00,
  1.350000e+00, 1.423899e+00,
  1.425000e+00, 1.543078e+00,
  1.500000e+00, 1.664015e+00,
  1.575000e+00, 1.732484e+00,
  1.650000e+00, 1.543296e+00,
  1.725000e+00, 1.959523e+00,
  1.800000e+00, 1.685132e+00,
  1.875000e+00, 1.951791e+00,
  1.950000e+00, 2.095346e+00,
  2.025000e+00, 2.361460e+00,
  2.100000e+00, 2.169119e+00,
  2.175000e+00, 2.061745e+00,
  2.250000e+00, 2.178641e+00,
  2.325000e+00, 2.104346e+00,
  2.400000e+00, 2.584470e+00,
  2.475000e+00, 1.914158e+00,
  2.550000e+00, 2.368375e+00,
  2.625000e+00, 2.686125e+00,
  2.700000e+00, 2.712395e+00,
  2.775000e+00, 2.499511e+00,
  2.850000e+00, 2.558897e+00,
  2.925000e+00, 2.309154e+00,
  3.000000e+00, 2.869503e+00,
  3.075000e+00, 3.116645e+00,
  3.150000e+00, 3.094907e+00,
  3.225000e+00, 2.471759e+00,
  3.300000e+00, 3.017131e+00,
  3.375000e+00, 3.232381e+00,
  3.450000e+00, 2.944596e+00,
  3.525000e+00, 3.385343e+00,
  3.600000e+00, 3.199826e+00,
  3.675000e+00, 3.423039e+00,
  3.750000e+00, 3.621552e+00,
  3.825000e+00, 3.559255e+00,
  3.900000e+00, 3.530713e+00,
  3.975000e+00, 3.561766e+00,
  4.050000e+00, 3.544574e+00,
  4.125000e+00, 3.867945e+00,
  4.200000e+00, 4.049776e+00,
  4.275000e+00, 3.885601e+00,
  4.350000e+00, 4.110505e+00,
  4.425000e+00, 4.345320e+00,
  4.500000e+00, 4.161241e+00,
  4.575000e+00, 4.363407e+00,
  4.650000e+00, 4.161576e+00,
  4.725000e+00, 4.619728e+00,
  4.800000e+00, 4.737410e+00,
  4.875000e+00, 4.727863e+00,
  4.950000e+00, 4.669206e+00,
};
// clang-format on
// Step 1: define the residual struct.
struct ExponentialResidual {
  ExponentialResidual(double x, double y) : x_(x), y_(y) {}

  template <typename T>
  bool operator()(const T* const m, const T* const c, T* residual) const {
    residual[0] = y_ - exp(m[0] * x_ + c[0]);
    return true;
  }

 private:
  const double x_;
  const double y_;
};

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);
  // Set the initial values.
  double m = 0.0;
  double c = 0.0;
  // Add a residual block to the problem for every observation.
  Problem problem;
  for (int i = 0; i < kNumObservations; ++i) {
    problem.AddResidualBlock(
        new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
            new ExponentialResidual(data[2 * i], data[2 * i + 1])),
        NULL,
        &m,
        &c);
  }

  Solver::Options options;
  options.max_num_iterations = 25;
  options.linear_solver_type = ceres::DENSE_QR;
  options.minimizer_progress_to_stdout = true;

  Solver::Summary summary;
  Solve(options, &problem, &summary);
  std::cout << summary.BriefReport() << "\n";
  std::cout << "Initial m: " << 0.0 << " c: " << 0.0 << "\n";
  std::cout << "Final   m: " << m << " c: " << c << "\n";
  return 0;
}

Note how adding the residuals to the problem differs from Hello World. Here is the relevant part of Hello World again:

struct CostFunctor {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

Adding it to the problem:

CostFunction* cost_function =
    new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);

In Hello World the functor took no constructor arguments; here each residual is built from one observation, so the sampled (x, y) values from data must be passed in when the residual blocks are added:

Problem problem;
for (int i = 0; i < kNumObservations; ++i) {
  CostFunction* cost_function =
       new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
           new ExponentialResidual(data[2 * i], data[2 * i + 1]));
  problem.AddResidualBlock(cost_function, nullptr, &m, &c);
}

Output:

   0  1.211734e+02    0.00e+00    3.61e+02   0.00e+00   0.00e+00  1.00e+04       0    5.34e-04    2.56e-03
   1  1.211734e+02   -2.21e+03    0.00e+00   7.52e-01  -1.87e+01  5.00e+03       1    4.29e-05    3.25e-03
   2  1.211734e+02   -2.21e+03    0.00e+00   7.51e-01  -1.86e+01  1.25e+03       1    1.10e-05    3.28e-03
   3  1.211734e+02   -2.19e+03    0.00e+00   7.48e-01  -1.85e+01  1.56e+02       1    1.41e-05    3.31e-03
   4  1.211734e+02   -2.02e+03    0.00e+00   7.22e-01  -1.70e+01  9.77e+00       1    1.00e-05    3.34e-03
   5  1.211734e+02   -7.34e+02    0.00e+00   5.78e-01  -6.32e+00  3.05e-01       1    1.00e-05    3.36e-03
   6  3.306595e+01    8.81e+01    4.10e+02   3.18e-01   1.37e+00  9.16e-01       1    2.79e-05    3.41e-03
   7  6.426770e+00    2.66e+01    1.81e+02   1.29e-01   1.10e+00  2.75e+00       1    2.10e-05    3.45e-03
   8  3.344546e+00    3.08e+00    5.51e+01   3.05e-02   1.03e+00  8.24e+00       1    2.10e-05    3.48e-03
   9  1.987485e+00    1.36e+00    2.33e+01   8.87e-02   9.94e-01  2.47e+01       1    2.10e-05    3.52e-03
  10  1.211585e+00    7.76e-01    8.22e+00   1.05e-01   9.89e-01  7.42e+01       1    2.10e-05    3.56e-03
  11  1.063265e+00    1.48e-01    1.44e+00   6.06e-02   9.97e-01  2.22e+02       1    2.60e-05    3.61e-03
  12  1.056795e+00    6.47e-03    1.18e-01   1.47e-02   1.00e+00  6.67e+02       1    2.10e-05    3.64e-03
  13  1.056751e+00    4.39e-05    3.79e-03   1.28e-03   1.00e+00  2.00e+03       1    2.10e-05    3.68e-03
Ceres Solver Report: Iterations: 13, Initial cost: 1.211734e+02, Final cost: 1.056751e+00, Termination: CONVERGENCE
Initial m: 0 c: 0
Final   m: 0.291861 c: 0.131439

Starting from m = 0, c = 0, the solver arrives at m = 0.291861, c = 0.131439, close to the true values m = 0.3, c = 0.1 used to generate the data; the gap is due to the sampling noise. The resulting least-squares fit is shown in the figure below.
[Figure: least-squares fit of the noisy samples against the fitted curve y = exp(mx + c)]

6. Robust Curve Fitting

This example comes from examples/robust_curve_fitting.cc.

Now suppose the data contains some outliers, i.e. points that do not follow the noise model. As mentioned at the beginning, a loss function (robust kernel) can be used to handle outliers. Only the AddResidualBlock call needs to change:

problem.AddResidualBlock(cost_function, nullptr , &m, &c);
// is changed to:
problem.AddResidualBlock(cost_function, new CauchyLoss(0.5) , &m, &c);

Here CauchyLoss is one of the loss functions (robust kernels) shipped with Ceres; its argument controls the scale at which residuals start to be down-weighted.
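Other built-in loss functions can be swapped in exactly the same way; for instance HuberLoss (the scale 1.0 below is just an illustrative value):

// Any ceres::LossFunction can replace the nullptr; the constructor argument
// is the scale at which large residuals begin to be down-weighted.
problem.AddResidualBlock(cost_function, new ceres::HuberLoss(1.0), &m, &c);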

7. Bundle Adjustment

This example comes from examples/bundle_adjuster.cc.

I have not started using the bundle adjustment part yet, so this section will be filled in later.

8. Other Examples

  1. bundle_adjuster.cc: see section 7 above
  2. circle_fit.cc: fits a circle
  3. ellipse_approximation.cc: fits an ellipse
  4. denoising.cc: image denoising using the Fields of Experts model
  5. nist.cc: implements and attempts to solve the NIST nonlinear regression problems
  6. more_garbow_hillstrom.cc: test problems from the paper by Moré, Garbow and Hillstrom
  7. libmv_bundle_adjuster.cc: the bundle adjustment algorithm used by Blender/libmv
  8. libmv_homography.cc: demonstrates solving for the homography between two sets of points, with a custom termination criterion implemented as a callback that checks the image-space error
  9. robot_pose_mle.cc: uses a DynamicAutoDiffCostFunction cost function
  10. slam/pose_graph_2d/pose_graph_2d.cc: 2D pose-graph SLAM
  11. slam/pose_graph_3d/pose_graph_3d.cc: 3D pose-graph SLAM


Reposted from blog.csdn.net/QLeelq/article/details/112070796