Machine Learning: Introduction

Definition

A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

Example: playing checkers.

E = the experience of playing many games of checkers.

T = the task of playing checkers.

P = the probability that the program will win the next game.

Machine Learning Algorithms

  • Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.

           Regression Problem (e.g., predicting housing prices)

               In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function (a short code sketch of both problem types follows this list).

           Classification Problem (e.g., predicting whether a breast cancer is malignant or benign)

               In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
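
To make the distinction concrete, here is a minimal scikit-learn sketch (an addition to these notes, not part of the original course material); the house sizes, prices, tumor sizes, and labels are made-up toy numbers used purely for illustration.

# Regression vs. classification: a minimal sketch with toy data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: map a continuous input (house size in square feet)
# to a continuous output (price in dollars).
sizes = np.array([[850], [1200], [1500], [2100], [2600]])
prices = np.array([180000, 245000, 300000, 410000, 500000])
reg = LinearRegression().fit(sizes, prices)
print("Predicted price for a 1800 sq ft house:", reg.predict([[1800]])[0])

# Classification: map an input (tumor size in cm) to a discrete label
# (0 = benign, 1 = malignant).
tumor_sizes = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
labels = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(tumor_sizes, labels)
print("Predicted class for a 2.8 cm tumor:", clf.predict([[2.8]])[0])

The regression model returns a number on a continuous scale, while the classifier returns one of a fixed set of categories; that difference is exactly what separates the two problem types above.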

Example:

Problem 1: You’re running a company, and you want to develop learning algorithms to address each of two problems. You have a large inventory of identical items. You want to predict how many of these items will sell over the next 3 months.

Problem 2: You’d like software to examine individual customer accounts, and for each account decide if it has been hacked/compromised. Should you treat these as classification or as regression problems?

Treat Problem 1 as a regression problem and Problem 2 as a classification problem: the number of items sold over the next 3 months is a continuous quantity, while hacked/not hacked is a discrete label.

  • Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.

We can derive this structure by clustering the data based on relationships among the variables in the data.

With unsupervised learning there is no feedback based on the prediction results.

Example:

Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.
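
A minimal clustering sketch (again an added illustration, not from the course itself): k-means applied to a random stand-in feature matrix, where each row plays the role of a gene described by a few numeric variables.

# Group similar rows with k-means; the data here is random and only
# stands in for real gene measurements.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genes = rng.normal(size=(1000, 3))  # 1000 "genes", 3 numeric variables each

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(genes)
print("Cluster assigned to the first 10 genes:", kmeans.labels_[:10])

Note that no labels are given to the algorithm; the groups emerge only from similarities among the rows, which is what makes this unsupervised.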

Non-clustering: The "Cocktail Party Algorithm", allows you to find structure in a chaotic environment. (i.e. identifying individual voices and music from a mesh of sounds at a cocktail party).
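
As a rough illustration of the same idea (an added sketch, not the original algorithm from the lecture), independent component analysis can separate two synthetic mixed signals, loosely mimicking two microphones recording two overlapping sources.

# Blind source separation with FastICA on two synthetic signals.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                    # one source, e.g. a voice
s2 = np.sign(np.sin(3 * t))           # another source, e.g. music
sources = np.c_[s1, s2]

mixing = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
mixed = sources @ mixing.T            # what the two "microphones" record

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)  # estimated separated signals
print("Shape of the recovered signals:", recovered.shape)

Real cocktail-party audio separation is considerably harder, but the sketch shows the key point: structure is recovered from the mixture without any labeled examples.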

  • Others: Reinforcement Learning, Recommender Systems
