Machine Learning Interview Preparation Questions

1. Counting the parameters of a convolutional layer
https://www.cnblogs.com/hejunlin1992/p/7624807.html
http://blog.csdn.net/dcxhun3/article/details/46878999
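
A minimal sketch of the calculation (the AlexNet conv1 numbers below are my own worked example, not from the links): each filter has kernel_h × kernel_w × in_channels weights plus one bias, and there is one filter per output channel.

```python
def conv_params(kernel_size, in_channels, out_channels, bias=True):
    """Number of learnable parameters in a square-kernel conv layer."""
    weights = kernel_size * kernel_size * in_channels * out_channels
    biases = out_channels if bias else 0
    return weights + biases

# AlexNet conv1: 11x11 kernels, 3 input channels, 96 filters
print(conv_params(11, 3, 96))  # 11*11*3*96 + 96 = 34944
```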

2. OneClassSVM — unsupervised anomaly/outlier detection (one-class classification)
http://blog.csdn.net/sinat_26917383/article/details/76647272
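
A quick sketch of the idea with scikit-learn (assuming sklearn/NumPy are available; the data and the nu/gamma choices are illustrative): fit on "normal" data only, then points far from that distribution are predicted as -1.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(200, 2))   # "normal" data around the origin

# nu (roughly) bounds the fraction of training points treated as outliers
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

print(clf.predict([[0.0, 0.0]]))   # near the training cloud -> inlier [1]
print(clf.predict([[8.0, 8.0]]))   # far from it            -> outlier [-1]
```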

3. Draw a convolutional network and implement it in code; understand the meaning of each layer's parameters in Caffe
http://www.cnblogs.com/denny402/tag/caffe/default.html?page=2
http://blog.csdn.net/liyuan123zhouhui/article/details/70858472 (the explanation of the group parameter here is still unclear to me)
Pick a CNN such as AlexNet and, from the input layer to the output layer, draw the network structure and the intermediate convolution computations.
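
Tracing the spatial sizes layer by layer is most of the drawing exercise. A small helper using the standard formula (W - K + 2P) / S + 1, with AlexNet's early layers as a worked example (227×227 input variant):

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a conv/pool layer: (W - K + 2P) // S + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# AlexNet, 227x227x3 input:
s = conv_out(227, 11, 4)      # conv1: 55
s = conv_out(s, 3, 2)         # pool1: 27
s = conv_out(s, 5, 1, pad=2)  # conv2: 27
s = conv_out(s, 3, 2)         # pool2: 13
print(s)  # 13
```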

4. Meaning of the ROC curve (x-axis: false positive rate; y-axis: true positive rate; the closer the curve is to the top-left corner, the better)
http://blog.csdn.net/pipisorry/article/details/51788927
http://blog.csdn.net/abcjennifer/article/details/7359370
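
Each point on the ROC curve is the (FPR, TPR) pair you get at one score threshold. A small self-contained sketch (the toy labels/scores are my own example):

```python
def roc_point(labels, scores, threshold):
    """FPR and TPR when predicting positive for score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    p = sum(labels)
    n = len(labels) - p
    return fp / n, tp / p

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_point(labels, scores, 0.5))   # (0.0, 0.5)
print(roc_point(labels, scores, 0.3))   # (0.5, 1.0)
```

Sweeping the threshold from high to low traces the whole curve.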

5. AUC (the area under the ROC curve), precision, recall, and the F1 score
http://blog.csdn.net/pzy20062141/article/details/48711355
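
A sketch of the definitions (the numbers are my own toy example). AUC can be computed without drawing the curve at all, as the probability that a random positive is scored above a random negative:

```python
def auc(labels, scores):
    """AUC via the rank interpretation: P(score_pos > score_neg), ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def prf1(tp, fp, fn):
    precision = tp / (tp + fp)          # of predicted positives, how many are right
    recall = tp / (tp + fn)             # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
print(prf1(tp=8, fp=2, fn=4))
```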

6. Why do overfitting and underfitting occur, and how can they be addressed?
https://www.zhihu.com/question/59201590/answer/167392763
https://zhuanlan.zhihu.com/p/29707029
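
A tiny demonstration of the mechanism (assuming NumPy; the synthetic data is my own): extra model capacity always drives the *training* error down, even when the true relationship is simple — the gap to held-out error is what reveals overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 2 * x + rng.normal(0, 0.3, size=x.shape)   # truly linear, plus noise

def train_mse(degree):
    """Training error of a degree-`degree` polynomial least-squares fit."""
    coefs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coefs, x) - y) ** 2))

# The degree-9 model fits the noise, so its training error is lower,
# but it would generalize worse than the degree-1 model.
print(train_mse(1), train_mse(9))
```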

7. How to handle class imbalance in machine learning
http://blog.csdn.net/lujiandong1/article/details/52658675
https://www.nowcoder.com/questionTerminal/f0edfb5a59a84f10bf57af0548e3ec02?toCommentId=78036
(If a 10:1 ratio still counts as roughly balanced, you can split the majority class into, say, 1000 parts, pair each part with the minority-class samples to train one classifier, and then combine the 1000 classifiers into a single model with an ensemble method.)
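
The data-splitting part of that scheme can be sketched as follows (a sketch only — the chunk counts are illustrative, and the base classifiers/ensembling are left out):

```python
import random

def balanced_subsets(majority_idx, minority_idx, n_parts, seed=0):
    """Split the majority class into n_parts chunks; pair each chunk with
    the full minority class to get one balanced training set per base model."""
    idx = list(majority_idx)
    random.Random(seed).shuffle(idx)
    chunks = [idx[i::n_parts] for i in range(n_parts)]
    return [chunk + list(minority_idx) for chunk in chunks]

# 1000 majority samples vs. 100 minority samples, 10 base models:
subsets = balanced_subsets(range(1000), range(100), n_parts=10)
print(len(subsets), len(subsets[0]))  # 10 balanced subsets of 100 + 100 samples
```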

8. My model: the final layer uses softmax
http://www.jianshu.com/p/dcf5a0f63597
https://www.zhihu.com/question/23765351/answer/139826397
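
Softmax itself is short enough to write out (a standard numerically stable version; the logits are an arbitrary example): it turns a vector of logits into a probability distribution.

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)   # sums to 1; the largest logit gets the largest probability
```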

9. Understanding logistic regression, and the differences between logistic regression and SVM
http://www.jianshu.com/p/19ca7eb549a7
https://www.zhihu.com/question/24904422/answer/92164679
http://blog.csdn.net/u010976453/article/details/78488279

10. Interview-level summary of RF, GBDT, and XGBoost: principles and differences (Bagging vs. Boosting)
https://www.cnblogs.com/ModifyRong/p/7744987.html
http://blog.csdn.net/qq_28031525/article/details/70207918
http://blog.csdn.net/xlinsist/article/details/51475345
http://blog.csdn.net/abcjennifer/article/details/8164315
http://blog.csdn.net/qccc_dm/article/details/63684453
https://www.zhihu.com/question/54626685?from=profile_question_card
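
The core contrast is that RF bags independently trained trees while GBDT fits each new tree to the current residuals. A toy regression booster with depth-1 stumps (entirely my own sketch, not from the links — real GBDT/XGBoost add shrinkage schedules, deeper trees, and regularization):

```python
def fit_stump(x, r):
    """Best single-split stump (threshold + two leaf means) for residuals r."""
    best = (float("inf"), None)
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - ml) ** 2 for ri in left) + sum((ri - mr) ** 2 for ri in right)
        if sse < best[0]:
            best = (sse, (t, ml, mr))
    return best[1]

def boost(x, y, rounds=20, lr=0.5):
    pred = [sum(y) / len(y)] * len(y)              # start from the mean
    for _ in range(rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]   # fit the *residuals*
        stump = fit_stump(x, r)
        if stump is None:
            break
        t, ml, mr = stump
        pred = [pi + lr * (ml if xi <= t else mr) for xi, pi in zip(x, pred)]
    return pred

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 3.1, 3.0, 3.2]
pred = boost(x, y)
mse = sum((p - yi) ** 2 for p, yi in zip(pred, y)) / len(y)
print(round(mse, 4))   # training error shrinks as rounds accumulate
```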

11. Activation functions in neural networks
https://zhuanlan.zhihu.com/p/32610035
https://www.jianshu.com/p/22d9720dbf1a
https://www.v2ex.com/t/340003
https://zhuanlan.zhihu.com/p/22142013
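
The common ones side by side (standard definitions; the comparison values are just for illustration):

```python
import math

def sigmoid(x): return 1 / (1 + math.exp(-x))            # range (0, 1); saturates at both ends
def tanh(x):    return math.tanh(x)                      # range (-1, 1); zero-centered
def relu(x):    return max(0.0, x)                       # no saturation for x > 0; cheap
def leaky_relu(x, a=0.01): return x if x > 0 else a * x  # keeps a small gradient for x < 0

for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, round(f(-2.0), 4), round(f(2.0), 4))
```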

12. Why does logistic regression use the sigmoid function, and what is the mathematical reasoning behind it?
https://www.zhihu.com/question/35322351
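
One way to see it (a standard argument, stated here as a sketch): LR models the log-odds log(p/(1-p)) as a linear function of the features, and solving that equation for p gives exactly the sigmoid — i.e. sigmoid is the inverse of the log-odds transform.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def logit(p):
    """Log-odds: the quantity logistic regression models as linear in x."""
    return math.log(p / (1 - p))

# sigmoid inverts the log-odds transform exactly:
z = 2.0
print(abs(logit(sigmoid(z)) - z) < 1e-9)   # True
```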

13. Cost functions / loss functions
https://blog.csdn.net/qq547276542/article/details/7798004
https://blog.csdn.net/google19890102/article/details/50522945
https://blog.csdn.net/u013527419/article/details/60322106 (cross-entropy cost function)
https://blog.csdn.net/u012162613/article/details/44239919
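
The cross-entropy cost in particular is short enough to write down (standard binary form; the clipping constant and examples are my own choices):

```python
import math

def binary_cross_entropy(y, p, eps=1e-12):
    """Average cross-entropy between labels y in {0,1} and predicted probabilities p."""
    total = 0.0
    for yi, pi in zip(y, p):
        pi = min(max(pi, eps), 1 - eps)   # clip to avoid log(0)
        total += -(yi * math.log(pi) + (1 - yi) * math.log(1 - pi))
    return total / len(y)

print(binary_cross_entropy([1, 0], [0.9, 0.1]))   # confident and correct: low loss
print(binary_cross_entropy([1, 0], [0.5, 0.5]))   # maximally uncertain: ln 2 ≈ 0.693
```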

14. The difference between L1 and L2 regularization
https://blog.csdn.net/cs24k1993/article/details/79683042
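
One concrete way to see the difference (a sketch using the standard proximal/shrinkage operators of each penalty): the L1 step subtracts a constant and sets small weights exactly to zero (sparsity), while the L2 step only scales weights down and never reaches zero.

```python
def l1_shrink(w, lam):
    """Soft-thresholding, the prox of lam*|w|: small weights go exactly to 0."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def l2_shrink(w, lam):
    """Prox of (lam/2)*w^2: proportional shrinkage, never exactly 0."""
    return w / (1 + lam)

weights = [0.05, -0.3, 1.2]
print([l1_shrink(w, 0.1) for w in weights])  # the small weight is zeroed out
print([l2_shrink(w, 0.1) for w in weights])  # all shrunk, none exactly zero
```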

15. What does column subsampling mean in RF and XGBoost?
https://www.cnblogs.com/SpeakSoftlyLove/p/5256131.html
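
The mechanism itself is simple (a sketch; real implementations may resample per split rather than per tree): each tree only sees a random subset of feature columns, which decorrelates the trees and acts as a regularizer.

```python
import random

def sample_features(n_features, max_features, seed):
    """Column subsampling: pick a random subset of feature indices for one tree."""
    return sorted(random.Random(seed).sample(range(n_features), max_features))

# e.g. 3 trees over 10 features, each trained on only 4 columns
for seed in range(3):
    print(sample_features(10, 4, seed))
```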

16. Common SVM kernel functions
https://blog.csdn.net/batuwuhanpei/article/details/52354822
https://www.zhihu.com/question/21883548
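
The standard kernels written out (textbook definitions; the gamma/degree values and test vectors are arbitrary examples):

```python
import math

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly(x, y, degree=3, c=1.0):
    """Polynomial kernel: (x·y + c)^degree."""
    return (linear(x, y) + c) ** degree

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

a, b = [1.0, 2.0], [2.0, 0.0]
print(rbf(a, a))              # 1.0 — a point is maximally similar to itself
print(rbf(a, b), rbf(b, a))   # symmetric, in (0, 1)
```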

17. SVM question roundup
https://blog.csdn.net/gao1440156051/article/details/61435358


Reposted from blog.csdn.net/cs24k1993/article/details/79502854