0A03 Supervised Learning: Regression (1) Linear Regression: Lasso

To address the problem that Ridge regression produces (many small but nonzero weights), Lasso offers a good solution.

  Ridge's regularization (penalty) term is α Σ w²

  Lasso's regularization term is α Σ |w|

But Lasso's penalty is much more severe than Ridge's: it can drive many of the weights w exactly to 0.
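To illustrate this (a minimal sketch of my own, not from the original post; the synthetic data and the choice alpha=0.5 are arbitrary), fitting Ridge and Lasso with the same alpha shows that only Lasso produces exactly-zero coefficients:

```python
# Sketch: compare exact-zero coefficients under Ridge vs Lasso
# on synthetic data where only the first two features matter.
import numpy as np
from sklearn import linear_model

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 12))          # 50 samples, 12 features
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=50)

ridge = linear_model.Ridge(alpha=0.5).fit(X, y)
lasso = linear_model.Lasso(alpha=0.5).fit(X, y)

ridge_zeros = int(np.sum(ridge.coef_ == 0))
lasso_zeros = int(np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", ridge_zeros)   # Ridge shrinks but rarely hits 0
print("Lasso zero coefficients:", lasso_zeros)   # Lasso zeroes the irrelevant features
```

Ridge shrinks all weights toward zero but keeps them nonzero; Lasso's absolute-value penalty snaps the small ones exactly to zero.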

Hands-on example:

import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model     # linear models, including Lasso

# initialize data
def make_data(nDim):
    x0 = np.linspace(1, np.pi, 50)
    x = np.vstack([[x0], [x0 ** i for i in range(2, nDim + 1)]])  # keep stacking rows of powers
    y = np.sin(x0) + np.random.normal(0, 0.15, len(x0))
    return x.transpose(), y          # note: x is returned transposed (samples as rows)

x, y = make_data(12)

def lasso_regression():
    alphas = [1e-10, 1e-3, 1, 10]            # list of alpha values to try
    for idx, i in enumerate(alphas):
        plt.subplot(2, len(alphas) // 2, idx + 1)
        reg = linear_model.Lasso(alpha=i)    # initialize the Lasso object
        sub_x = x[:, 0:12]                   # take all 12 feature dimensions
        reg.fit(sub_x, y)                    # training
        plt.plot(x[:, 0], reg.predict(sub_x))
        plt.plot(x[:, 0], y, '.')
        plt.title("dim=12, alpha=%e" % i)

        print("alpha %e" % i)
        print("intercept_: %s" % reg.intercept_)
        print("coef_: %s" % reg.coef_)
    plt.show()

lasso_regression()

Out:

alpha 1.000000e-10
intercept_:
coef_: [ 2.04299371e+00 -8.51843390e-01 -4.24962643e-02 -1.84973826e-03
  1.23358346e-03  1.04820524e-03  6.83173993e-04  4.30557304e-04
  2.74201438e-04  1.78402076e-04  1.18808043e-04  8.09076056e-05]

alpha 1.000000e-03
intercept_:
coef_: [ 1.34709178e+00 -2.25101225e-01 -1.24268682e-01 -1.06427929e-02
 -2.54148588e-04  5.07251667e-04  7.93022205e-04  5.44370622e-04
  3.66845898e-04  2.49812709e-04  1.73236411e-04  1.22468296e-04]

alpha 1.000000e+00
intercept_:
coef_: [-0.         -0.         -0.         -0.         -0.         -0.
 -0.         -0.         -0.         -0.0005254  -0.00026335 -0.        ]

alpha 1.000000e+01
intercept_:
coef_: [-0.         -0.         -0.         -0.         -0.         -0.
 -0.         -0.         -0.         -0.         -0.         -0.00047827]

We can see that as alpha gets bigger, more and more regression coefficients are set to 0.
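This trend is easy to check directly (a quick sketch of my own, reusing the same sin(x0) and polynomial-features setup as above; max_iter is raised only to help convergence at tiny alpha):

```python
# Sketch: count exact-zero Lasso coefficients as alpha grows.
import numpy as np
from sklearn import linear_model

np.random.seed(0)
x0 = np.linspace(1, np.pi, 50)
X = np.vstack([x0 ** i for i in range(1, 13)]).T   # 12 polynomial features
y = np.sin(x0) + np.random.normal(0, 0.15, len(x0))

zero_counts = []
for alpha in [1e-10, 1e-3, 1, 10]:
    reg = linear_model.Lasso(alpha=alpha, max_iter=200000).fit(X, y)
    zero_counts.append(int(np.sum(reg.coef_ == 0)))
    print("alpha=%e  zero coefficients=%d" % (alpha, zero_counts[-1]))
```

The count of zeroed coefficients grows with alpha, matching the output above.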


Source: www.cnblogs.com/liu247/p/11068538.html