Letter A
Activation Function | 激活函数 |
Adversarial Networks | 对抗网络 |
Affine Layer | 仿射层 |
Affinity matrix | 亲和矩阵 |
Area Under ROC Curve/AUC | ROC 曲线下面积 |
Artificial General Intelligence/AGI | 通用人工智能 |
Artificial Intelligence/AI | 人工智能 |
Association analysis | 关联分析 |
Attention mechanism | 注意力机制 |
Autoencoder | 自编码器 |
Average-Pooling | 平均池化 |
Letter B
Backpropagation/BP | 反向传播 |
Baseline | 基准方法 |
Batch Normalization/BN | 批量归一化 |
Between-class scatter matrix | 类间散度矩阵 |
Bias | 偏置/偏差 |
Bi-directional Long Short-Term Memory/Bi-LSTM | 双向长短期记忆 |
Binary classification | 二分类 |
Boltzmann machine | 玻尔兹曼机 |
Letter C
Calibration | 校准 |
Channel | 通道 |
Classifier | 分类器 |
Class-imbalance | 类别不平衡 |
Cluster | 簇/类/集群 |
Clustering | 聚类 |
Coding matrix | 编码矩阵 |
Comprehensibility | 可解释性 |
Computation Cost | 计算成本 |
Computer vision | 计算机视觉 |
Conditional Probability Table/CPT | 条件概率表 |
Conditional random field/CRF | 条件随机场 |
Confidence | 置信度 |
Confusion matrix | 混淆矩阵 |
Consistency | 一致性/相合性 |
Convergence | 收敛 |
Convexity | 凸性 |
Convolutional neural network/CNN | 卷积神经网络 |
Correlation coefficient | 相关系数 |
Cross entropy | 交叉熵 |
Cross-validation | 交叉验证 |
Curse of dimensionality | 维数灾难 |
Letter D
Data mining | 数据挖掘 |
Data set | 数据集 |
Decision tree | 决策树/判定树 |
Deep Belief Network | 深度信念网络 |
Deep learning | 深度学习 |
Discriminative model | 判别模型 |
Discriminator | 判别器 |
Distance measure | 距离度量 |
Distance metric learning | 距离度量学习 |
Distribution | 分布 |
Divergence | 散度 |
Domain adaptation | 领域自适应 |
Downsampling | 下采样 |
Dynamic programming | 动态规划 |
Letter E
Eigenvalue decomposition | 特征值分解 |
Embedding | 嵌入 |
End-to-End | 端到端 |
Ensemble learning | 集成学习 |
Euclidean distance | 欧氏距离 |
Evolutionary computation | 演化计算 |
Expectation-Maximization | 期望最大化 |
Exploding Gradient Problem | 梯度爆炸问题 |
Exponential loss function | 指数损失函数 |
Letter F
False-negative | 假负类 |
False-positive | 假正类 |
False Positive Rate/FPR | 假正例率 |
Feature engineering | 特征工程 |
Feature selection | 特征选择 |
Feature vector | 特征向量 |
Feature Learning | 特征学习 |
Feedforward Neural Networks/FNN | 前馈神经网络 |
Fine-tuning | 微调 |
Flipping output | 翻转法 |
Fluctuation | 震荡 |
Letter G
Gaussian kernel function | 高斯核函数 |
Generalization | 泛化 |
Generalization error | 泛化误差 |
Generative Adversarial Networks/GAN | 生成对抗网络 |
Generative Model | 生成模型 |
Generator | 生成器 |
Genetic Algorithm/GA | 遗传算法 |
Global minimum | 全局最小 |
Global Optimization | 全局优化 |
Gradient boosting | 梯度提升 |
Gradient Descent | 梯度下降 |
Gradient Vanishing | 梯度消失/梯度弥散 |
Ground-truth | 真值/真实标记 |
Letter H
Hard margin | 硬间隔 |
Harmonic mean | 调和平均 |
Hidden layer | 隐藏层 |
Hidden Markov Model/HMM | 隐马尔可夫模型 |
Hyperparameter | 超参数 |
Hypothesis | 假设 |
Letter I
Incremental learning | 增量学习 |
Input layer | 输入层 |
Letter K
Kernel method | 核方法 |
Kernelized Linear Discriminant Analysis/KLDA | 核线性判别分析 |
K-fold cross validation | k 折交叉验证/k 倍交叉验证 |
K-Means Clustering | K-均值聚类 |
K-Nearest Neighbours Algorithm/KNN | K 近邻算法 |
Letter L
Learning rate | 学习率 |
Linear Discriminant Analysis/LDA | 线性判别分析 |
Linear model | 线性模型 |
Linear Regression | 线性回归 |
Local minimum | 局部最小 |
Log likelihood | 对数似然 |
Logistic Regression | Logistic 回归 |
Long Short-Term Memory/LSTM | 长短期记忆 |
Long tail effect | 长尾效应 |
Loss function | 损失函数 |
Letter M
Markov Chain Monte Carlo/MCMC | 马尔可夫链蒙特卡罗方法 |
Markov Random Field | 马尔可夫随机场 |
Maximum Likelihood Estimation/MLE | 极大似然估计/极大似然法 |
Max-Pooling | 最大池化 |
Mean squared error | 均方误差 |
Meta-learner | 元学习器 |
Metric learning | 度量学习 |
Momentum | 动量 |
Multi-class classification | 多分类 |
Multi-layer feedforward neural networks | 多层前馈神经网络 |
Multilayer Perceptron/MLP | 多层感知器 |
Multimodal learning | 多模态学习 |
Letter N
Naive Bayes | 朴素贝叶斯 |
Non-convex optimization | 非凸优化 |
Nonlinear model | 非线性模型 |
Norm | 范数 |
Normalization | 归一化 |
Letter O
Objective function | 目标函数 |
Oblique decision tree | 斜决策树 |
Occam’s razor | 奥卡姆剃刀 |
One-shot learning | 一次性学习 |
Output layer | 输出层 |
Overfitting | 过拟合/过配 |
Oversampling | 过采样 |
Letter P
Parameter | 参数 |
Parameter estimation | 参数估计 |
Parameter tuning | 调参 |
Particle Swarm Optimization/PSO | 粒子群优化算法 |
Perceptron | 感知机 |
Plug and Play Generative Network | 即插即用生成网络 |
Pooling | 池化 |
Positive class | 正类 |
Positive definite matrix | 正定矩阵 |
Precision | 查准率/准确率 |
Principal component analysis/PCA | 主成分分析 |
Prior | 先验 |
Probabilistic Graphical Model | 概率图模型 |
Pruning | 剪枝 |
Letter Q
Quantized Neural Network | 量化神经网络 |
Quantum computer | 量子计算机 |
Letter R
Radial Basis Function/RBF | 径向基函数 |
Random Forest Algorithm | 随机森林算法 |
Random walk | 随机漫步 |
Recall | 查全率/召回率 |
Rectified Linear Unit/ReLU | 修正线性单元 |
Recurrent Neural Network | 循环神经网络 |
Recursive neural network | 递归神经网络 |
Regression | 回归 |
Regularization | 正则化 |
Reinforcement learning/RL | 强化学习 |
Representation learning | 表征学习 |
Re-sampling | 重采样法 |
Rescaling | 再缩放 |
Residual Mapping | 残差映射 |
Residual Network | 残差网络 |
Restricted Boltzmann Machine/RBM | 受限玻尔兹曼机 |
Robustness | 稳健性/鲁棒性 |
Letter S
Saddle point | 鞍点 |
Sampling | 采样 |
Score function | 评分函数 |
Self-Driving | 自动驾驶 |
Sigmoid function | Sigmoid 函数 |
Similarity measure | 相似度度量 |
Simulated annealing | 模拟退火 |
Singular Value Decomposition | 奇异值分解 |
Slack variables | 松弛变量 |
Smoothing | 平滑 |
Sparsity | 稀疏性 |
Specialization | 特化 |
State-of-the-art/SOTA | 最先进的 |
Statistical learning | 统计学习 |
Stochastic gradient descent | 随机梯度下降 |
Supervised learning | 监督学习 |
Support Vector Machine/SVM | 支持向量机 |
Letter T
Tensor | 张量 |
The least squares method | 最小二乘法 |
Threshold | 阈值 |
Transfer learning | 迁移学习 |
True negative | 真负类 |
True positive | 真正类 |
Turing Machine | 图灵机 |
Letter U
Underfitting | 欠拟合/欠配 |
Undersampling | 欠采样 |
Unsupervised learning | 无监督学习/无导师学习 |
Upsampling | 上采样 |
Letter V
Vanilla | 原始的 |
Letter W
Wasserstein GAN/WGAN | Wasserstein 生成对抗网络 |
Weak learner | 弱学习器 |
Weight | 权重 |
Weight sharing | 权共享 |
Weighted voting | 加权投票法 |
Letter Z
Zero-shot learning | 零次学习 |