[Machine Learning Linear Regression] The inner product and L2 norm in the loss function

Properties: the inner product of two vectors is written $\langle \mathbf{u}, \mathbf{v} \rangle$.
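As a quick sketch (the vectors here are made up for illustration), the inner product $\langle \mathbf{u}, \mathbf{v} \rangle$ is the sum of elementwise products, and it is symmetric and linear in each argument:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Inner product <u, v>: sum of elementwise products.
ip = np.dot(u, v)  # 1*4 + 2*5 + 3*6 = 32

# Two basic properties: symmetry and linearity in the first argument.
assert ip == np.dot(v, u)
assert np.dot(2 * u, v) == 2 * ip
```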



L2 norm: $\| \mathbf{y} - \mathbf{Xw} - b \|$, or with the exponent written explicitly, $\| \mathbf{y} - \mathbf{Xw} - b \|_2$.
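A minimal sketch of this residual norm, using a small made-up dataset (the values of `X`, `w`, `b`, and `y` are assumptions for illustration):

```python
import numpy as np

# Hypothetical data: 3 samples, 2 features.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
w = np.array([0.5, -0.5])
b = 1.0
y = np.array([0.0, 1.0, 2.0])

residual = y - (X @ w + b)       # the vector y - Xw - b
loss = np.linalg.norm(residual)  # L2 norm ||y - Xw - b||_2

# The L2 norm is the square root of the sum of squared residuals.
assert np.isclose(loss, np.sqrt(np.sum(residual ** 2)))
```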

In mathematics, it is usually understood that when $p = 2$, the $L_p$ norm is the Euclidean norm (also known as the $L_2$ norm), because in this case the $L_p$ norm is computed in the same way as the Euclidean distance. Therefore, when writing the $L_2$ norm, the exponent 2 is usually omitted: $\| \mathbf{x} \|$ and $\| \mathbf{x} \|_2$ denote the same quantity. When $p \neq 2$, the exponent of the $L_p$ norm must be specified explicitly.
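NumPy follows the same convention: with the `ord` argument omitted, `np.linalg.norm` returns the $L_2$ (Euclidean) norm of a vector, while other $L_p$ norms require an explicit exponent. A small sketch (the vector is an assumption for illustration):

```python
import numpy as np

x = np.array([3.0, 4.0])

# With ord omitted, np.linalg.norm gives the L2 (Euclidean) norm,
# mirroring the convention that the exponent 2 is left implicit.
assert np.linalg.norm(x) == np.linalg.norm(x, ord=2) == 5.0

# For p != 2 the exponent must be given explicitly, e.g. the L1 norm:
assert np.linalg.norm(x, ord=1) == 7.0
```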

Origin blog.csdn.net/m0_60641871/article/details/129266337