Artificial intelligence system TensorFlow

What is deep learning?

Before machine learning became popular, systems were all rule-based: you needed phonetics expertise to do speech recognition, a great deal of linguistic knowledge to do NLP, and a lot of chess masters to build Deep Blue. Later, after statistical methods became mainstream, domain knowledge was no longer as important, but we still needed some domain knowledge or experience to extract appropriate features, and the quality of those features often determined the success or failure of a machine learning algorithm. For NLP, features are relatively easy to extract because language itself is highly abstract; for speech or images, it is hard for us humans to describe how we extract features. When we recognize a cat, for example, we vaguely feel that a cat has two eyes, a nose, and a long tail, with certain spatial constraints between them, such as the distance from each eye to the nose being roughly the same. But how do you define "eyes" in terms of pixels? Once you think about it, it turns out to be hard.

Of course, we have many feature extraction methods, such as extracting edge contours, but human learning does not seem to need anything so complicated. Show a child a few pictures of cats and he can learn what a cat is; people seem to "learn" features automatically. Show someone a few photos of cats and then ask what a cat looks like, and he can roughly tell you what a cat has, perhaps even the features that distinguish a cat from a leopard or a tiger.

 

 

One of the important reasons deep learning has become so popular recently is that it does not require (much) manual feature extraction.

From the perspective of machine learning users, most of what we did before was feature engineering, plus tuning a few parameters, usually to prevent overfitting. With deep learning, unless we need to implement a CNN or LSTM ourselves, there hardly seems to be anything left for us to do. (Machines make workers unemployed, and machine learning makes machine learning people unemployed! Is the ultimate purpose of artificial intelligence to make humans unemployed?)
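To make this concrete, here is a rough sketch (using the later tf.keras API, which was not part of the 2015 release; the layer sizes and dataset are illustrative placeholders) of a small image classifier that consumes raw pixels directly, so there is no hand-crafted edge or contour extraction step:

    import tensorflow as tf

    # A small CNN that works on raw 28x28 grayscale pixels; the convolutional
    # layers learn their own features from the data.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(images, labels, epochs=5)  # images: raw pixels, labels: class ids

The point of the sketch is only that the user supplies raw pixels and labels; the feature engineering step has disappeared into the model itself.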

 

On November 9, 2015, Google released the artificial intelligence system TensorFlow and announced that it would be open source. On the same day, Geek Academy organized an online Chinese translation of the TensorFlow documentation.

Machine learning, a branch of artificial intelligence, allows software to interpret data or predict future situations from large amounts of data. Today, leading tech giants are investing heavily in machine learning: Facebook, Apple, Microsoft, and even Baidu in China, and Google is naturally among them. TensorFlow is the machine learning system Google has used internally for many years. Now Google is open-sourcing it, handing the system over to industry engineers, academics, and technicians with strong programming skills. What does this mean?

To make a rough analogy, Google's treatment of TensorFlow today is somewhat similar to its treatment of its mobile operating system, Android. If more data scientists start using Google's system for machine learning research, it will help Google gain more influence over the growing machine learning industry.

To allow domestic technicians to master this world-leading AI system as quickly as possible, the Wiki team at Geek Academy organized a collaborative Chinese translation of the official TensorFlow documentation. The translation and proofreading of all 30 chapters are complete, and the result has been published on the Geek Academy Wiki platform and is available for download.

Jeff Dean, head of Google's TensorFlow project, wrote back about the Chinese translation project: "I am very excited to see TensorFlow translated into Chinese. One of the main reasons we open-sourced TensorFlow is to enable people all over the world to benefit from machine learning and artificial intelligence. Collaborative translation like this makes it easier for more people to access the TensorFlow project, and I look forward to seeing this project applied around the world in the future!"


TensorFlow is Google's second-generation machine learning system, developed on the basis of DistBelief. Its name comes from its operating principle: Tensor means an N-dimensional array, Flow means computation based on data flow graphs, and TensorFlow describes the process of tensors flowing from one end of the computation graph to the other. TensorFlow is a system that feeds complex data structures into artificial neural networks for analysis and processing.
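As a minimal sketch of this idea, using the 1.x-style graph-and-session API from that era, the code below builds a tiny data flow graph in which two constant tensors flow into a matrix-multiplication node:

    import tensorflow as tf  # 1.x-style graph API

    # Nodes are operations; edges carry tensors.
    a = tf.constant([[1.0, 2.0]])     # a 1x2 tensor
    b = tf.constant([[3.0], [4.0]])   # a 2x1 tensor
    product = tf.matmul(a, b)         # the tensors "flow" into this node

    with tf.Session() as sess:        # executing the graph yields [[11.]]
        print(sess.run(product))

Nothing is computed while the graph is being described; the tensors only flow through it when the session runs the graph.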

TensorFlow can be used in many deep learning fields such as speech recognition and image recognition. It improves in many respects on DistBelief, the deep learning infrastructure Google built in 2011, and it can run on devices ranging from a single smartphone to thousands of data center servers. TensorFlow is fully open source and available to anyone.

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code. TensorFlow also includes TensorBoard, a data visualization toolkit.
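As a rough illustration of the TensorBoard side (again in the 1.x-style API, with a made-up log directory), a graph definition can be written to disk so TensorBoard can display its nodes and edges:

    import tensorflow as tf  # 1.x-style API

    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    w = tf.Variable(tf.zeros([3, 1]), name="w")
    y = tf.matmul(x, w, name="y")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Writing the graph definition lets TensorBoard visualize it.
        writer = tf.summary.FileWriter("./logs", sess.graph)
        writer.close()
    # Then run: tensorboard --logdir ./logs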

 

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.


TensorFlow expresses high-level machine learning computations, greatly simplifies what the first-generation system required, and offers better flexibility and scalability. One of its highlights is support for distributed computing across heterogeneous devices: it can automatically run models on a variety of platforms, from mobile phones and single CPU/GPU machines to distributed systems made up of hundreds or thousands of GPU cards.
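A small sketch of what heterogeneous placement looks like in user code (1.x-style API; it assumes a GPU is present, and soft placement lets TensorFlow fall back to the CPU otherwise):

    import tensorflow as tf  # 1.x-style API

    # The same graph definition can be pinned to different devices without
    # otherwise rewriting the model code.
    with tf.device("/cpu:0"):
        a = tf.random_normal([1000, 1000])
    with tf.device("/gpu:0"):          # assumes a GPU is available
        b = tf.matmul(a, a)

    config = tf.ConfigProto(allow_soft_placement=True,   # fall back if no GPU
                            log_device_placement=True)   # print where ops run
    with tf.Session(config=config) as sess:
        sess.run(b)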

Judging from the current documentation, TensorFlow supports CNN, RNN, and LSTM algorithms, which are currently the most popular deep neural network models for image, speech, and NLP tasks.
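As an illustration of the RNN/LSTM side, here is a minimal sketch (1.x-style API; the batch size, sequence length, and feature sizes are arbitrary) of an LSTM that reads a batch of sequences and produces logits from each sequence's final hidden state:

    import tensorflow as tf  # 1.x-style API

    # Toy sequences: unknown batch size, 5 time steps, 8 features per step.
    inputs = tf.placeholder(tf.float32, [None, 5, 8])
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=16)
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    # Use the output at the last time step to score two classes.
    logits = tf.layers.dense(outputs[:, -1, :], units=2)

A softmax over these logits, trained with a cross-entropy loss, would turn this into a simple sequence classifier.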
