sklearn Logistic Regression / Ridge / LASSO

1. Logistic Regression

sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0,
    fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None,
    solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)

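A minimal usage sketch (the synthetic data from make_classification and the specific parameter values below are illustrative assumptions, not part of the original post):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # Smaller C means stronger L2 regularization (C is the inverse of the penalty strength).
    clf = LogisticRegression(penalty='l2', C=1.0, solver='liblinear')
    clf.fit(X, y)
    print(clf.coef_, clf.intercept_)
    print(clf.predict(X[:5]))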

2. Ridge

sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True,
    max_iter=None, tol=0.001, solver='auto', random_state=None)

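A minimal usage sketch for Ridge (the synthetic regression data via make_regression is an illustrative assumption):

    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge

    X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

    # alpha is the regularization strength; larger alpha shrinks the coefficients more.
    reg = Ridge(alpha=1.0)
    reg.fit(X, y)
    print(reg.coef_)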

3. LASSO

sklearn.linear_model.Lasso(alpha=1.0, fit_intercept=True, normalize=False, precompute=False,
    copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')

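A minimal usage sketch for Lasso (again with illustrative synthetic data); with a large enough alpha the L1 penalty typically drives some coefficients exactly to zero:

    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=200, n_features=10, n_informative=3, noise=5.0, random_state=0)

    lasso = Lasso(alpha=1.0, selection='cyclic')
    lasso.fit(X, y)
    print(lasso.coef_)  # expect a sparse coefficient vector with several exact zeros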

Note that in LogisticRegression the penalty parameter is C, whereas in Ridge and Lasso it is alpha. alpha corresponds to the regularization penalty parameter lambda commonly written in the loss function, while C is analogous to the parameter C in soft-margin SVMs (see Zhou Zhihua, Machine Learning, p. 130).

In sklearn, the relationship between C and lambda is alpha = C^(-1), i.e. C is the inverse of the regularization strength.
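A small sketch illustrating this inverse relationship (the data and the specific C values are illustrative): as C decreases, the effective penalty lambda = 1/C grows and the fitted coefficients shrink.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    # Smaller C  <=>  larger lambda = 1/C  <=>  stronger regularization,
    # so the L2 norm of the coefficients decreases as C decreases.
    for C in [100.0, 1.0, 0.01]:
        clf = LogisticRegression(penalty='l2', C=C, solver='liblinear').fit(X, y)
        print(C, np.linalg.norm(clf.coef_))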

Reposted from blog.csdn.net/xxy0118/article/details/80566100