Zhou Zhihua's *Machine Learning*, affectionately known as the "watermelon book," is undoubtedly a must-read in the field. Written as introductory material, it covers the fundamentals of machine learning as comprehensively as possible. To make the book accessible to as many readers as possible, the author uses as little mathematics as he can, though a modest amount of probability, statistics, linear algebra, optimization, and logic proves unavoidable.
I highly recommend this book. But for many beginners, or for students without a particularly solid mathematical foundation, reading it from cover to cover and understanding it clearly is not easy. A good set of notes is therefore a great help when studying it.
Today I'd like to recommend a refined set of notes on the watermelon book. Their author, Vay-keen of Shenzhen University, has organized his complete notes and posted them on GitHub. Here is the address:
https://github.com/Vay-keen/Machine-learning-learning-notes
About these notes, the author says:
Zhou Zhihua's *Machine Learning*, also known as the watermelon book, is a fairly comprehensive text that describes the different branches of machine learning in detail (e.g. supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, ensemble learning, dimensionality reduction, feature selection, etc.). These notes record the ideas and extended knowledge points I picked up while studying, and I hope they help others reading the watermelon book!
Notes directory
The notes consist of 17 .md documents that combine screenshots of the book's figures with typeset formulas, and they are clear and well organized. Together, these 17 documents cover the complete contents of the book's 16 chapters. The specific table of contents is as follows:
Introduction
Performance Metrics
Hypothesis Testing, Bias & Variance
Linear Models
Decision Trees
Neural Networks
Support Vector Machines
Bayesian Classifiers
The EM Algorithm
Ensemble Learning
Clustering
Dimensionality Reduction and Metric Learning
Feature Selection and Sparse Learning
Computational Learning Theory
Semi-supervised Learning
Probabilistic Graphical Models
Reinforcement Learning
Notes content
Below we excerpt a few of the notes for your reference.
1. Linear Regression
Linear regression predicts a continuous value from the input attributes. Building on it, a generalized linear model asks: can we pass the prediction through a link function to turn the continuous value into a discrete one, and thereby perform classification? Logistic regression (log-odds regression) studies exactly this problem: by introducing the logistic function, the prediction is mapped into the interval (0, 1), turning the linear regression problem into a binary classification problem.
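As a minimal sketch of this idea (not taken from the notes; the weights below are hypothetical, purely for illustration), the logistic function and the resulting binary classifier look like this:

```python
import math

def sigmoid(z):
    """Logistic (log-odds) function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Binary classification: linear score passed through the logistic function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)                 # probability of the positive class
    return 1 if p >= 0.5 else 0

# Hypothetical weights, for illustration only
w, b = [2.0, -1.0], 0.5
print(predict(w, b, [1.0, 0.0]))   # score 2.5 → probability ≈ 0.92 → class 1
```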
2. Neural Networks
In machine learning, "neural network" usually means "neural network learning," the intersection of machine learning and neural networks. The most widely used definition of a neural network is: "a neural network is a broadly parallel, interconnected network of simple adaptive units, whose organization can simulate the way a biological nervous system reacts to real-world objects."
The structure abstracted from this, in use ever since, is the "M-P neuron model," also called a "threshold logic unit." Its input part corresponds to the dendrites: each neuron receives input signals from n other neurons, transmitted to the cell body through weighted connections (the connection weights). The cell body does two things: it first computes the total input (a weighted sum of the inputs, i.e. the accumulated signal level), then takes the difference between this total input and the neuron's threshold and passes it through an activation function, producing an output that is sent along the axon to other neurons. The M-P neuron model is shown below:
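In code, a single M-P neuron with a step activation can be sketched as follows (a toy illustration, not excerpted from the notes):

```python
def mp_neuron(inputs, weights, threshold):
    """M-P neuron: weighted sum of inputs minus the threshold, passed
    through a step activation (output 1 iff the sum reaches the threshold)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total - threshold >= 0 else 0

# With suitable weights and threshold, one neuron implements logical AND:
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron([x1, x2], [1, 1], 2))
```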
The note author also derives the weight-update rules of the BP (error back-propagation) algorithm, including the adjustment of the hidden-to-output layer weights:
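To give a flavor of those update rules, here is a minimal sketch (my own, not from the notes) of one BP step on a tiny 2-input, 2-hidden, 1-output network with sigmoid activations and squared-error loss; `g` and `e` follow the watermelon book's notation for the output- and hidden-layer gradients, and the thresholds are omitted to keep the sketch short:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bp_step(x, y, v, w, eta=0.5):
    """One BP update. v: 2x2 input-to-hidden weights, w: 2 hidden-to-output
    weights, eta: learning rate. Returns updated weights and the prediction
    made before the update."""
    # Forward pass
    b = [sigmoid(sum(v[i][h] * x[i] for i in range(2))) for h in range(2)]
    y_hat = sigmoid(sum(w[h] * b[h] for h in range(2)))
    # Backward pass: gradients from the squared-error loss
    g = y_hat * (1 - y_hat) * (y - y_hat)                  # output layer
    e = [b[h] * (1 - b[h]) * w[h] * g for h in range(2)]   # hidden layer
    # Gradient-descent weight updates
    w = [w[h] + eta * g * b[h] for h in range(2)]
    v = [[v[i][h] + eta * e[h] * x[i] for h in range(2)] for i in range(2)]
    return v, w, y_hat
```

Iterating `bp_step` on a single sample drives the prediction toward its target, which is exactly what the derived update rules guarantee locally.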
3. Support Vector Machine
SVM is a classic binary classification model. Its basic form is defined as the linear classifier with the maximum margin in feature space: the learning objective is to maximize that margin, so training a support vector machine can be transformed into solving a convex quadratic programming problem.
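For reference, the standard hard-margin primal problem behind that statement (textbook formulation, not excerpted from these notes) is:

```latex
\min_{\boldsymbol{w},\,b} \;\; \frac{1}{2}\lVert\boldsymbol{w}\rVert^{2}
\quad \text{s.t.} \;\; y_i\left(\boldsymbol{w}^{\top}\boldsymbol{x}_i + b\right) \ge 1,
\qquad i = 1, \dots, m
```

Minimizing \(\lVert\boldsymbol{w}\rVert^{2}\) subject to these constraints is exactly the convex quadratic program mentioned above; maximizing the margin \(2/\lVert\boldsymbol{w}\rVert\) is equivalent to it.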
4. Integrated Learning
As the name suggests, ensemble learning combines multiple learners into a "committee of learners," in which each learner is a member holding a vote. The committee's collective decision benefits everyone: its generalization performance is better than that of any single learner.
The most famous, and most widely used, member of the Boosting family is AdaBoost. AdaBoost uses the exponential loss function: both its learner weights and its sample-distribution updates are derived by minimizing the exponential loss.
The full AdaBoost procedure is as follows:
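As a compact sketch of that procedure (my own toy implementation on 1-D data with decision stumps as weak learners, not code from the notes):

```python
import math

def stump(threshold, polarity):
    """Decision stump on 1-D data: predicts +1 or -1 around a threshold."""
    return lambda x: polarity * (1 if x > threshold else -1)

def adaboost(xs, ys, rounds=3):
    """Toy AdaBoost: reweights samples with the exponential-loss-derived
    update and combines weak stumps with weights alpha_t."""
    m = len(xs)
    D = [1.0 / m] * m                        # initial sample distribution
    ensemble = []                            # list of (alpha, stump)
    candidates = [stump(t, p) for t in xs for p in (1, -1)]
    for _ in range(rounds):
        # Pick the stump with the smallest weighted error on distribution D
        h, err = min(
            ((c, sum(d for d, x, y in zip(D, xs, ys) if c(x) != y))
             for c in candidates),
            key=lambda pair: pair[1])
        if err >= 0.5:                       # weak learner no better than chance
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, h))
        # Exponential-loss distribution update, then renormalize
        D = [d * math.exp(-alpha * y * h(x)) for d, x, y in zip(D, xs, ys)]
        z = sum(D)
        D = [d / z for d in D]
    def predict(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return predict
```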
The above is only a brief excerpt; please browse the complete notes yourself. I hope you find them helpful.
Finally, here once more is the link to the open-source watermelon-book notes:
https://github.com/Vay-keen/Machine-learning-learning-notes