[Apollo Study Notes] - Planning module TASK PIECEWISE_JERK_NONLINEAR_SPEED_OPTIMIZER (2)

TASK series analysis articles

1. [Apollo Study Notes] - Planning module TASK: LANE_CHANGE_DECIDER
2. [Apollo Study Notes] - Planning module TASK: PATH_REUSE_DECIDER
3. [Apollo Study Notes] - Planning module TASK: PATH_BORROW_DECIDER
4. [Apollo Study Notes] - Planning module TASK: PATH_BOUNDS_DECIDER
5. [Apollo Study Notes] - Planning module TASK: PIECEWISE_JERK_PATH_OPTIMIZER
6. [Apollo Study Notes] - Planning module TASK: PATH_ASSESSMENT_DECIDER
7. [Apollo Study Notes] - Planning module TASK: PATH_DECIDER
8. [Apollo Study Notes] - Planning module TASK: RULE_BASED_STOP_DECIDER
9. [Apollo Study Notes] - Planning module TASK: SPEED_BOUNDS_PRIORI_DECIDER && SPEED_BOUNDS_FINAL_DECIDER
10. [Apollo Study Notes] - Planning module TASK: SPEED_HEURISTIC_OPTIMIZER
11. [Apollo Study Notes] - Planning module TASK: SPEED_DECIDER
12. [Apollo Study Notes] - Planning module TASK: PIECEWISE_JERK_SPEED_OPTIMIZER
13. [Apollo Study Notes] - Planning module TASK: PIECEWISE_JERK_NONLINEAR_SPEED_OPTIMIZER (1)
14. [Apollo Study Notes] - Planning module TASK: PIECEWISE_JERK_NONLINEAR_SPEED_OPTIMIZER (2)

Continuing from the previous article: [Apollo Study Notes] - Planning module TASK: PIECEWISE_JERK_NONLINEAR_SPEED_OPTIMIZER (1)

OptimizeByNLP

This section covers the concrete implementation of the nonlinear programming code.

The code first constructs an object of the class PiecewiseJerkSpeedNonlinearIpoptInterface. Below are the parameters taken by this class's constructor.

PiecewiseJerkSpeedNonlinearIpoptInterface::
    PiecewiseJerkSpeedNonlinearIpoptInterface(
        const double s_init, const double s_dot_init, const double s_ddot_init,
        const double delta_t, const int num_of_points, const double s_max,
        const double s_dot_max, const double s_ddot_min,
        const double s_ddot_max, const double s_dddot_min,
        const double s_dddot_max)
    : curvature_curve_(0.0, 0.0, 0.0),
      v_bound_func_(0.0, 0.0, 0.0),
      s_init_(s_init),
      s_dot_init_(s_dot_init),
      s_ddot_init_(s_ddot_init),
      delta_t_(delta_t),
      num_of_points_(num_of_points),
      s_max_(s_max),
      s_dot_max_(s_dot_max),
      s_ddot_min_(-std::abs(s_ddot_min)),
      s_ddot_max_(s_ddot_max),
      s_dddot_min_(-std::abs(s_dddot_min)),
      s_dddot_max_(s_dddot_max),
      v_offset_(num_of_points),
      a_offset_(num_of_points * 2) {}

PiecewiseJerkSpeedNonlinearIpoptInterface inherits from the base class Ipopt::TNLP. To solve the problem through IPOPT's TNLP interface, an interface class must be built that overrides the functions TNLP requires and provides their implementations.

IPOPT (Interior Point OPTimizer) is an open-source software package for large-scale nonlinear optimization. It solves nonlinear programs of the form:

$$\begin{aligned}\min_{x\in\mathbb{R}^{n}}\quad&f(x)\\\text{s.t.}\quad&g^{L}\leq g(x)\leq g^{U}\\&x^{L}\leq x\leq x^{U}\end{aligned}$$

where $g^L$ and $g^U$ are the lower and upper bounds of the constraint functions, and $x^L$ and $x^U$ are the lower and upper bounds of the optimization variables.

Solving with IPOPT is built around the following functions:

1. get_nlp_info() defines the problem size
/** Method to return some info about the nlp */
  bool get_nlp_info(int &n, int &m, int &nnz_jac_g, int &nnz_h_lag,
                    IndexStyleEnum &index_style) override;

• Number of optimization variables: n
• Number of constraint functions: m
• Number of non-zero entries in the Jacobian matrix: nnz_jac_g
• Number of non-zero entries in the Hessian matrix: nnz_h_lag

2. get_bounds_info() defines the variable and constraint bounds
  /** Method to return the bounds for my problem */
  bool get_bounds_info(int n, double *x_l, double *x_u, int m, double *g_l,
                       double *g_u) override;

• Lower bound of the variables: x_l
• Upper bound of the variables: x_u
• Lower bound of the constraint functions: g_l
• Upper bound of the constraint functions: g_u

From the code, the bounds on the variable $x$ ($5n\times 1$) are:
$$x^L=\begin{bmatrix}s_{init}\\s_{(1)-lower}\\\vdots\\s_{(n-1)-lower}\\\dot s_{init}\\0\\\vdots\\0\\\ddot s_{init}\\\ddot s_{\min}\\\vdots\\\ddot s_{\min}\\0\\\vdots\\0\\0\\\vdots\\0\end{bmatrix},\qquad x^U=\begin{bmatrix}s_{init}\\s_{(1)-upper}\\\vdots\\s_{(n-1)-upper}\\\dot s_{init}\\\dot s_{\max}\\\vdots\\\dot s_{\max}\\\ddot s_{init}\\\ddot s_{\max}\\\vdots\\\ddot s_{\max}\\\inf\\\vdots\\\inf\\\inf\\\vdots\\\inf\end{bmatrix}$$

The bounds on $g(x)$ ($[4(n-1)+3n]\times 1$) are:
$$g^L=\begin{bmatrix}0\\\vdots\\0\\\dddot s_{\min}\\\vdots\\\dddot s_{\min}\\0\\\vdots\\0\\0\\\vdots\\0\\-\inf\\\vdots\\-\inf\\0\\\vdots\\0\\-\inf\\\vdots\\-\inf\end{bmatrix},\qquad g^U=\begin{bmatrix}\dot s_{\max}\cdot\Delta t\\\vdots\\\dot s_{\max}\cdot\Delta t\\\dddot s_{\max}\\\vdots\\\dddot s_{\max}\\0\\\vdots\\0\\0\\\vdots\\0\\0\\\vdots\\0\\\inf\\\vdots\\\inf\\0\\\vdots\\0\end{bmatrix}$$

3. get_starting_point() defines the initial values
  /** Method to return the starting point for the algorithm */
  bool get_starting_point(int n, bool init_x, double *x, bool init_z,
                          double *z_L, double *z_U, int m, bool init_lambda,
                          double *lambda) override;

This part is described in detail in OptimizeByQP. Depending on whether warm start is enabled, the previous QP result is used as the NLP initial value. $X_0$ is a $5n\times 1$ vector:

$$X_0=\begin{bmatrix}s_{QP}^0\\\vdots\\s_{QP}^{n-1}\\\dot s_{QP}^0\\\vdots\\\dot s_{QP}^{n-1}\\\ddot s_{QP}^0\\\vdots\\\ddot s_{QP}^{n-1}\\0\\\vdots\\0\\0\\\vdots\\0\end{bmatrix}$$

4. eval_f() evaluates the objective function
  /** Method to return the objective value */
  bool eval_f(int n, const double *x, bool new_x, double &obj_value) override;

• Variable values: x
• Objective function value: obj_value

$$\begin{aligned}\min f=&\sum_{i=0}^{n-1}\Big[w_{s\text{-}ref}(s_i-s\_ref_i)^2+w_{\dot s\text{-}ref}(\dot s_i-\dot s\_ref)^2+w_{\ddot s}\ddot s_i^2\Big]+\sum_{i=0}^{n-2}w_{s'''}\Big(\frac{\ddot s_{i+1}-\ddot s_i}{\Delta t}\Big)^2\\&+\sum_{i=0}^{n-1}\Big[w_{lat\_acc}\,lat\_acc_i^2+w_{soft}\,s\_slack\_lower_i+w_{soft}\,s\_slack\_upper_i\Big]\\&+w_{target\text{-}s}(s_{n-1}-s_{target})^2+w_{target\text{-}\dot s}(\dot s_{n-1}-\dot s_{target})^2+w_{target\text{-}\ddot s}(\ddot s_{n-1}-\ddot s_{target})^2\end{aligned}$$

5. eval_grad_f() computes the gradient
  /** Method to return the gradient of the objective */
  bool eval_grad_f(int n, const double *x, bool new_x, double *grad_f) override;

• Variable values: x
• Gradient values: grad_f

The gradient is defined as:

$$\nabla_x\overset{\mathrm{def}}{=}\left[\frac{\partial}{\partial x_1},\frac{\partial}{\partial x_2},\cdots,\frac{\partial}{\partial x_n}\right]^T=\frac{\partial}{\partial\boldsymbol{x}}$$

Taking partial derivatives of the objective function gives (the three target terms apply only at the last knot, $i=n-1$):

$$\begin{aligned}\frac{\partial f}{\partial s_i}&=2w_{s\text{-}ref}(s_i-s\_ref_i)+2w_{lat\_acc}\,\dot s_i^4\,\kappa(s_i)\,\kappa'(s_i)+2w_{target\text{-}s}(s_{n-1}-s_{target})\\\frac{\partial f}{\partial \dot s_i}&=2w_{\dot s\text{-}ref}(\dot s_i-\dot s\_ref)+4w_{lat\_acc}\,\dot s_i^3\,\kappa^2(s_i)+2w_{target\text{-}\dot s}(\dot s_{n-1}-\dot s_{target})\\\frac{\partial f}{\partial \ddot s_i}&=2w_{\ddot s}\ddot s_i+\begin{cases}-\dfrac{2w_{s'''}(\ddot s_1-\ddot s_0)}{\Delta t^2}&i=0\\[4pt]\dfrac{2w_{s'''}(2\ddot s_i-\ddot s_{i+1}-\ddot s_{i-1})}{\Delta t^2}&0<i<n-1\\[4pt]\dfrac{2w_{s'''}(\ddot s_{n-1}-\ddot s_{n-2})}{\Delta t^2}+2w_{target\text{-}\ddot s}(\ddot s_{n-1}-\ddot s_{target})&i=n-1\end{cases}\\\frac{\partial f}{\partial s\_slack\_lower_i}&=w_{soft}\\\frac{\partial f}{\partial s\_slack\_upper_i}&=w_{soft}\end{aligned}$$

The gradient of the objective is then the $5n\times 1$ vector obtained by stacking these components:

$$\frac{\partial f}{\partial X}=\left[\frac{\partial f}{\partial s_0},\cdots,\frac{\partial f}{\partial s_{n-1}},\frac{\partial f}{\partial \dot s_0},\cdots,\frac{\partial f}{\partial \dot s_{n-1}},\frac{\partial f}{\partial \ddot s_0},\cdots,\frac{\partial f}{\partial \ddot s_{n-1}},w_{soft},\cdots,w_{soft}\right]^T$$

6. eval_g() evaluates the constraint functions
  /** Method to return the constraint residuals */
  bool eval_g(int n, const double *x, bool new_x, int m, double *g) override;

• Variable values: x
• Constraint function values: g

$$g(x)=\begin{bmatrix}s_1-s_0\\\vdots\\s_{n-1}-s_{n-2}\\\frac{\ddot s_1-\ddot s_0}{\Delta t}\\\vdots\\\frac{\ddot s_{n-1}-\ddot s_{n-2}}{\Delta t}\\s_1-\big(s_0+\dot s_0\Delta t+\frac{1}{2}\ddot s_0\Delta t^2+\frac{1}{6}\frac{\ddot s_1-\ddot s_0}{\Delta t}\Delta t^3\big)\\\vdots\\s_{n-1}-\big(s_{n-2}+\dot s_{n-2}\Delta t+\frac{1}{2}\ddot s_{n-2}\Delta t^2+\frac{1}{6}\frac{\ddot s_{n-1}-\ddot s_{n-2}}{\Delta t}\Delta t^3\big)\\\dot s_1-\big(\dot s_0+\ddot s_0\Delta t+\frac{1}{2}\frac{\ddot s_1-\ddot s_0}{\Delta t}\Delta t^2\big)\\\vdots\\\dot s_{n-1}-\big(\dot s_{n-2}+\ddot s_{n-2}\Delta t+\frac{1}{2}\frac{\ddot s_{n-1}-\ddot s_{n-2}}{\Delta t}\Delta t^2\big)\\\dot s_0-speed\_limit(s_0)\\\vdots\\\dot s_{n-1}-speed\_limit(s_{n-1})\\s_0-s\_soft\_lower_0+s\_slack\_lower_0\\\vdots\\s_{n-1}-s\_soft\_lower_{n-1}+s\_slack\_lower_{n-1}\\s_0-s\_soft\_upper_0-s\_slack\_upper_0\\\vdots\\s_{n-1}-s\_soft\_upper_{n-1}-s\_slack\_upper_{n-1}\end{bmatrix}$$

7. eval_jac_g() computes the constraint Jacobian matrix
  /** Method to return:
   *   1) The structure of the jacobian (if "values" is nullptr)
   *   2) The values of the jacobian (if "values" is not nullptr)
   */
  bool eval_jac_g(int n, const double *x, bool new_x, int m, int nele_jac,
                  int *iRow, int *jCol, double *values) override;

• Variable values: x
• Number of non-zero elements of the Jacobian matrix: nele_jac
• Jacobian matrix values: values

If $f:\mathbb{R}^n\to\mathbb{R}^n$ is a differentiable function, the Jacobian matrix of $f$ at the point $\boldsymbol{x}=(x_1,x_2,\cdots,x_n)^T$ is:

$$\boldsymbol{J}=\begin{pmatrix}\frac{\partial f_1}{\partial x_1}&\frac{\partial f_1}{\partial x_2}&\cdots&\frac{\partial f_1}{\partial x_n}\\\frac{\partial f_2}{\partial x_1}&\frac{\partial f_2}{\partial x_2}&\cdots&\frac{\partial f_2}{\partial x_n}\\\vdots&\vdots&\ddots&\vdots\\\frac{\partial f_n}{\partial x_1}&\frac{\partial f_n}{\partial x_2}&\cdots&\frac{\partial f_n}{\partial x_n}\end{pmatrix}$$

where $f_i$ denotes the $i$-th component function of $f$, and $\frac{\partial f_i}{\partial x_j}$ denotes the partial derivative of $f_i$ with respect to the $j$-th variable $x_j$.

The solver stores the Jacobian values in a sparse matrix. (Figure: an example of a sparse matrix.)

The constraint vector contains seven types of constraint functions; taking partial derivatives of each gives:

  • Monotonicity constraint: $g_1=s_{i+1}-s_i$, with $\frac{\partial g_1}{\partial s_i}=-1,\ \frac{\partial g_1}{\partial s_{i+1}}=1$

  • Jerk constraint: $g_2=\frac{\ddot s_{i+1}-\ddot s_i}{\Delta t}$, with $\frac{\partial g_2}{\partial \ddot s_i}=-\frac{1}{\Delta t},\ \frac{\partial g_2}{\partial \ddot s_{i+1}}=\frac{1}{\Delta t}$

  • Position equality constraint: $g_3=s_{i+1}-(s_i+\dot s_i\Delta t+\frac{1}{3}\ddot s_i\Delta t^2+\frac{1}{6}\ddot s_{i+1}\Delta t^2)$
    $$\frac{\partial g_3}{\partial s_i}=-1,\quad\frac{\partial g_3}{\partial s_{i+1}}=1,\quad\frac{\partial g_3}{\partial \dot s_i}=-\Delta t,\quad\frac{\partial g_3}{\partial \ddot s_i}=-\frac{\Delta t^2}{3},\quad\frac{\partial g_3}{\partial \ddot s_{i+1}}=-\frac{\Delta t^2}{6}$$

  • Velocity equality constraint: $g_4=\dot s_{i+1}-(\dot s_i+\frac{1}{2}\ddot s_i\Delta t+\frac{1}{2}\ddot s_{i+1}\Delta t)$
    $$\frac{\partial g_4}{\partial \dot s_i}=-1,\quad\frac{\partial g_4}{\partial \dot s_{i+1}}=1,\quad\frac{\partial g_4}{\partial \ddot s_i}=-\frac{\Delta t}{2},\quad\frac{\partial g_4}{\partial \ddot s_{i+1}}=-\frac{\Delta t}{2}$$

  • Speed-limit constraint: $g_5=\dot s_i-speed\_limit(s_i)$
    $$\frac{\partial g_5}{\partial \dot s_i}=1,\quad\frac{\partial g_5}{\partial s_i}=-\frac{d\,speed\_limit(s_i)}{ds_i}$$

  • Soft lower-bound constraint: $g_6=s_i-s\_soft\_lower_i+s\_slack\_lower_i$
    $$\frac{\partial g_6}{\partial s_i}=1,\quad\frac{\partial g_6}{\partial s\_slack\_lower_i}=1$$

  • Soft upper-bound constraint: $g_7=s_i-s\_soft\_upper_i-s\_slack\_upper_i$
    $$\frac{\partial g_7}{\partial s_i}=1,\quad\frac{\partial g_7}{\partial s\_slack\_upper_i}=-1$$

The resulting Jacobian matrix ($[4(n-1)+3n]\times 5n$, columns ordered as $s,\dot s,\ddot s,s\_slack\_lower,s\_slack\_upper$) is, in block form:

$$J=\begin{bmatrix}A&0&0&0&0\\0&0&\frac{1}{\Delta t}A&0&0\\A&-\Delta t\,\tilde I&B&0&0\\0&A&C&0&0\\-\mathrm{diag}\big(speed\_limit'(s_i)\big)&I&0&0&0\\I&0&0&I&0\\I&0&0&0&-I\end{bmatrix}$$

where $A\in\mathbb{R}^{(n-1)\times n}$ is bidiagonal with $-1$ on the main diagonal and $1$ on the superdiagonal; $B\in\mathbb{R}^{(n-1)\times n}$ is bidiagonal with $-\frac{\Delta t^2}{3}$ and $-\frac{\Delta t^2}{6}$; $C\in\mathbb{R}^{(n-1)\times n}$ is bidiagonal with $-\frac{\Delta t}{2}$ in both positions; $\tilde I\in\mathbb{R}^{(n-1)\times n}$ carries ones on its main diagonal; and $I$ is the $n\times n$ identity.

8. eval_h() computes the Hessian of the Lagrangian
  /** Method to return:
   *   1) The structure of the hessian of the lagrangian (if "values" is
   * nullptr) 2) The values of the hessian of the lagrangian (if "values" is not
   * nullptr)
   */
  bool eval_h(int n, const double *x, bool new_x, double obj_factor, int m,
              const double *lambda, bool new_lambda, int nele_hess, int *iRow,
              int *jCol, double *values) override;

• Variable values: x
• Lagrange multipliers: lambda
• Hessian matrix values: values
• Objective function factor: obj_factor

The basic form of the Hessian matrix is as follows:

$$H=\begin{pmatrix}\frac{\partial^2 f}{\partial x_1^2}&\frac{\partial^2 f}{\partial x_1\partial x_2}&\cdots&\frac{\partial^2 f}{\partial x_1\partial x_n}\\\frac{\partial^2 f}{\partial x_2\partial x_1}&\frac{\partial^2 f}{\partial x_2^2}&\cdots&\frac{\partial^2 f}{\partial x_2\partial x_n}\\\vdots&\vdots&\ddots&\vdots\\\frac{\partial^2 f}{\partial x_n\partial x_1}&\frac{\partial^2 f}{\partial x_n\partial x_2}&\cdots&\frac{\partial^2 f}{\partial x_n^2}\end{pmatrix}$$

where $f(\boldsymbol{x})$ is a twice-differentiable function, $\frac{\partial^2 f}{\partial x_i\partial x_j}$ denotes the second partial derivative of $f(\boldsymbol{x})$ with respect to $x_i$ and $x_j$, and $H$ denotes the Hessian matrix.

Only a few of the second-order partial derivatives of the objective function are non-zero (the target terms apply only at $i=n-1$; at the two end knots the jerk term contributes $\frac{2w_{s'''}}{\Delta t^2}$ instead of $\frac{4w_{s'''}}{\Delta t^2}$):

$$\begin{aligned}\frac{\partial^2 f}{\partial s_i^2}&=2w_{s\text{-}ref}+2w_{lat\_acc}\big[\dot s_i^4(\kappa'(s_i))^2+\dot s_i^4\,\kappa(s_i)\,\kappa''(s_i)\big]+2w_{target\text{-}s}\\\frac{\partial^2 f}{\partial s_i\,\partial\dot s_i}&=8w_{lat\_acc}\,\dot s_i^3\,\kappa(s_i)\,\kappa'(s_i)\\\frac{\partial^2 f}{\partial\dot s_i^2}&=2w_{\dot s\text{-}ref}+12w_{lat\_acc}\,\dot s_i^2\,\kappa^2(s_i)+2w_{target\text{-}\dot s}\\\frac{\partial^2 f}{\partial\ddot s_i^2}&=2w_{\ddot s}+\frac{4w_{s'''}}{\Delta t^2}+2w_{target\text{-}\ddot s}\\\frac{\partial^2 f}{\partial\ddot s_i\,\partial\ddot s_{i+1}}&=-\frac{2w_{s'''}}{\Delta t^2}\end{aligned}$$

The only non-zero second-order partial derivative of the constraint functions comes from the speed-limit row:

$$\frac{\partial^2 g_5}{\partial s_i^2}=-speed\_limit''(s_i)$$

9. finalize_solution()
  /** @name Solution Methods */
  /** This method is called when the algorithm is complete so the TNLP can
   * store/write the solution */
  void finalize_solution(Ipopt::SolverReturn status, int n, const double *x,
                         const double *z_L, const double *z_U, int m,
                         const double *g, const double *lambda,
                         double obj_value, const Ipopt::IpoptData *ip_data,
                         Ipopt::IpoptCalculatedQuantities *ip_cq) override;

• Variable values at the optimum: x
• Minimum value of the objective function: obj_value

Finally, review the process again:

  • Input. The input includes PathData and the planning start TrajectoryPoint.
  • Process.
    • Sanity Check. Ensures that speed_data is not empty, so the speed optimizer never receives empty data.
    • const auto problem_setups_status = SetUpStatesAndBounds(path_data, *speed_data); initializes the QP problem. If it fails, the data in speed_data is cleared.
    • const auto qp_smooth_status = OptimizeByQP(speed_data, &distance, &velocity, &acceleration); solves the QP problem to obtain the distance/velocity/acceleration data. If it fails, the data in speed_data is cleared. This step smooths the dynamic-programming result with quadratic programming and provides the initial solution for the nonlinear problem.
    • const bool speed_limit_check_status = CheckSpeedLimitFeasibility(); checks the feasibility of the speed limits. If the check passes, the following four steps are performed: 1) smooth the path curvature, 2) SmoothSpeedLimit, 3) OptimizeByNLP, 4) record the speed_constraint.
    • The s/t/v/a/jerk information is written into speed_data, which is padded with zeros to prevent fallback.
  • Output. SpeedData, containing the s/t/v/a/jerk of the trajectory.

References

[1] Planning Piecewise Jerk Nonlinear Speed Optimizer Introduction
[2] Planning: speed planning based on nonlinear programming
[3] Apollo Spark Plan study notes: Apollo speed planning algorithm principles and practice
[4] Apollo planning and control study notes

Origin blog.csdn.net/sinat_52032317/article/details/132744663