How Linux made my machine learning path even more powerful

I first came into contact with Linux in high school. Back then I barely understood anything; whenever I had free time I went to the Internet cafe to play games. Since my pocket money was limited, I came up with all sorts of tricks to save money, and browsing hacker forums became a habit. That is how I stumbled onto Linux. It felt amazing at the time: a black window, me typing frantically on the keyboard. It was also then that I learned how to crack the Internet cafe's management system. But at that stage I had only brushed against Linux; I never studied it in any depth.

After I went to college, I kept learning Linux intermittently: I looked things up when I needed them, but never studied the system properly. My real engagement with Linux began when I started learning machine learning. Machine learning has a fairly high barrier to entry; it cannot be picked up in a day or two. I read a lot of material at the time but could not understand it well, especially the linear algebra and probability theory, which I had forgotten soon after finishing the courses. Most of the texts were hard going, so I studied irregularly, and after a while I realized how inefficient that was: I had read a great deal and still did not understand machine learning. So I decided to patch up the foundations, bit by bit, from the beginning.

I started with the most basic mathematics. I often asked myself: if I want to learn machine learning properly, what exactly should I study? Someone with experience gave me an answer I still follow: work through the textbooks consistently. So I reviewed linear algebra and probability theory little by little. This was not a shallow re-reading of the simple versions taught in college, but the linear algebra and probability theory that we programmers actually need. Along the way I came to understand why: in recent years, to process data better, especially large-scale data, people have applied probability and statistics across many disciplines. Data mining, automatic document classification, fraud detection, spam filtering, speech recognition, machine vision: all of these rest on the theory of probability. After a period of study I understood the relationships among multiple random variables, probability distributions of discrete and continuous values, the covariance matrix, the multivariate normal distribution, estimation and hypothesis testing, and pseudo-random numbers, both in theory and in working code.
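Several of the topics in that list become much more concrete once you try them in code. Here is a minimal sketch, using NumPy; the height/weight numbers are made up for illustration. It computes a covariance matrix from data, then uses pseudo-random numbers (with a fixed seed) to sample from the fitted multivariate normal distribution:

```python
import numpy as np

rng = np.random.default_rng(42)  # pseudo-random numbers, reproducible via the seed

# Made-up 2-D data: height (cm) and weight (kg) for five people
data = np.array([[170.0, 65.0],
                 [180.0, 80.0],
                 [165.0, 60.0],
                 [175.0, 72.0],
                 [160.0, 55.0]])

mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)  # 2x2 covariance matrix (columns are variables)

# Draw new samples from the multivariate normal fitted to the data
samples = rng.multivariate_normal(mean, cov, size=1000)

print(cov.shape)      # (2, 2)
print(samples.shape)  # (1000, 2)
```

The off-diagonal entries of `cov` capture how height and weight vary together, which is exactly what the multivariate normal needs beyond the two individual means and variances.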

Once I had more or less caught up on linear algebra and probability theory, I moved on to statistics. Learning statistics took a relatively long time, and there was quite a lot of material; it was only after I became comfortable with SPSS that I felt I had genuinely gotten started. Statistics is something you keep learning over time, not something you finish in one pass.

After learning all of this, I went back and re-read the books I had not understood before. The theoretical parts no longer scared me, and I could derive each step myself until the result held. Later I worked on an actual project, which deepened my earlier understanding. Take an ordinary capstone project as an example: given a dataset, predict food ratings from all the other attributes, using three different machine learning techniques and justifying which one you prefer. Also build a classifier that predicts whether a review is "good" or "bad", using a reasonable good/bad threshold. A project like this tests your data-driven thinking, your strategies for analyzing larger datasets, your knowledge of machine learning techniques, and your ability to write analytical code in R. Compared with merely reading code in a book, doing it myself finally made me understand why each step is done. Finally, there is deployment: to deploy your code, you first have to understand how the Linux system operates. I had never studied it seriously, so I took this opportunity to look at Linux properly. I searched widely online, compared books and videos in detail, and finally chose the book "Linux should be learned like this". It turned me from a novice into someone who can operate Linux proficiently.
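The "good/bad" classifier part of that capstone can be sketched in a few lines. The project called for R, but to keep all examples here in one language this sketch is in Python, and the data is synthetic: I invent star ratings, treat two noisy numeric columns as stand-ins for "all other attributes", and pick "at least 4 stars" as my own assumed good/bad threshold. The classifier is plain logistic regression trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical review data: star rating (1-5) is the target attribute
n = 200
ratings = rng.integers(1, 6, size=n).astype(float)

# Two made-up features: one loosely tracks the rating, one is pure noise
X = np.column_stack([ratings + rng.normal(0, 1, n),
                     rng.normal(0, 1, n)])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize for faster convergence

# "Good/bad" labels using an assumed threshold of >= 4 stars
y = (ratings >= 4).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression via batch gradient descent
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted probability of "good"
    w -= lr * (X.T @ (p - y)) / n   # gradient of the log-loss w.r.t. weights
    b -= lr * (p - y).mean()        # gradient w.r.t. the intercept

pred = sigmoid(X @ w + b) >= 0.5
accuracy = (pred == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the exercise is not the model itself but the workflow: choose a threshold, turn the raw attribute into labels, fit, and check the result against a baseline.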

Now let me say a few words about the Linux operating system itself, from the perspective of having learned it.
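To make "how the Linux system operates" a bit more concrete first, these are the kind of everyday commands a beginner has to get comfortable with; the `scp` target at the end is a placeholder of my own, not something from the original project:

```shell
# Which kernel version am I running?
uname -r

# How much disk space is left on the root filesystem?
df -h /

# Long listing with human-readable sizes
ls -lh /tmp

# Deploying a trained model to a server would look like this
# (hostname and paths are placeholders, do not run as-is):
# scp model.pkl user@example-server:/opt/models/
```

None of this is advanced, but being fluent in it is the difference between code that runs on your laptop and code that is actually deployed.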

Linux came out in 1991, before I was born. Compared with Windows, Linux is powerful: once you get past the initial learning curve, operating it is very convenient and its structure is very clear. Its biggest feature is that it is open source, which is why it developed so rapidly in just a few years. It has become an object of study for many research institutions, students, and teachers, and schools now offer Linux courses, which shows the system's influence. Linux has become one of the most popular operating systems today. Windows is quick to pick up and easy to learn, but for researchers many operations are inconvenient, and a lot of graphics and simulation software runs only on Linux or UNIX. As a result, many companies have released Linux software, pushing Linux toward commercialization.

Having said all this, let me summarize. To learn something new, it is not enough to skim around and compare what others have written. You have to solve the problem at the root: find a good book and study it from beginning to end. That is the right way.

The latest Linux technology tutorial books are provided for free, striving to do more and better for open source technology enthusiasts: http://www.linuxprobe.com/
