Notes on Deep Learning with PyTorch

Chapter 1 PyTorch and Deep Learning

Applications of deep learning

  • Image classification approaching human-level accuracy
  • Speech recognition approaching human-level accuracy
  • Machine translation
  • Autonomous vehicles
  • Siri, Google Voice, and Alexa, which have become more accurate in recent years
  • Intelligent cucumber sorting by Japanese farmers
  • Lung cancer detection
  • Language translation with better-than-human accuracy
  • Image captioning (describing the meaning of a picture)

The most popular deep learning techniques today, and when each first appeared:

Technology                         Year
Neural networks                    1943
Backpropagation                    early 1960s
Convolutional neural networks      1979
Recurrent neural networks          1980
Long short-term memory networks    1997

What deep learning was called in the past

In the 1970s it was called cybernetics; in the 1980s, connectionism; today it is called deep learning or neural networks.

Why deep learning is popular now

  • Hardware availability
  • Data and algorithms
  • Deep learning frameworks

Hardware availability

A graphics processing unit (Graphics Processing Unit, GPU) can complete large-scale mathematical operations (such as matrix multiplication) over millions or even billions of parameters several orders of magnitude faster than a CPU.
GPU memory matters: NVIDIA's 1080 Ti, for example, has about 11 GB of memory and costs around $700.
There are also a variety of cloud services such as AWS, Google Cloud, and Floyd (a company that offers GPU machines optimized for deep learning).
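As a minimal illustration of GPU acceleration in PyTorch (a sketch, assuming PyTorch is installed; the matrix size is arbitrary), the snippet below runs a large matrix multiplication on the GPU when one is available and falls back to the CPU otherwise:

```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A large matrix multiplication, the kind of operation GPUs excel at.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b

print(c.shape)   # torch.Size([1024, 1024])
print(c.device)  # cuda:0 on a GPU machine, cpu otherwise
```

The same code runs unchanged on either device; only the `device` selection differs.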

Data and algorithms

Data comes in the form of text, images, video, and audio.
In computer vision, the ImageNet competition has played a significant role; its dataset contains 1.4 million images across 1,000 categories.

Algorithms that have succeeded in past competitions include VGG, ResNet, Inception, and DenseNet; these are now applied in industry to solve a variety of computer vision problems.

Some other popular datasets:

  • MNIST
  • COCO dataset
  • CIFAR
  • Street View House Numbers (SVHN)
  • PASCAL VOC
  • Wikipedia dump
  • 20 Newsgroups
  • Penn Treebank
  • Kaggle

The development of a variety of algorithmic techniques has also helped:
batch normalization, activation functions, skip connections, long short-term memory (LSTM) networks, dropout, etc.
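Most of these building blocks ship as ready-made layers in torch.nn. A rough sketch, assuming PyTorch is installed; the batch and feature sizes are arbitrary:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16)  # a batch of 8 samples with 16 features each

bn = nn.BatchNorm1d(16)   # batch normalization
act = nn.ReLU()           # an activation function
drop = nn.Dropout(p=0.5)  # dropout regularization

h = drop(act(bn(x)))

# A skip connection just adds a layer's input back to its output.
skip_out = x + act(nn.Linear(16, 16)(x))

# An LSTM consumes sequences shaped (batch, seq_len, features).
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
seq = torch.randn(8, 5, 16)
out, (h_n, c_n) = lstm(seq)
print(out.shape)  # torch.Size([8, 5, 32])
```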

Deep learning frameworks

Early on, expertise in C++ and CUDA was required to implement deep learning algorithms.
Now, thanks to open-source deep learning frameworks, knowledge of a scripting language such as Python is enough.
Popular deep learning frameworks in industry: TensorFlow, Caffe2, Keras, Theano, PyTorch, Chainer, DyNet, MXNet, and CNTK.

PyTorch, like most other deep learning frameworks, is mainly used in two ways:

  • as a GPU-accelerated replacement for NumPy-like operations;
  • to build deep neural networks.
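The first use, tensors as a GPU-capable stand-in for NumPy arrays, can be sketched as follows (a minimal example, assuming PyTorch and NumPy are installed):

```python
import numpy as np
import torch

arr = np.arange(6.0).reshape(2, 3)  # a plain NumPy array
t = torch.from_numpy(arr)           # zero-copy conversion to a tensor

# Familiar NumPy-style operations work on tensors.
total = t.sum().item()
print(total)          # 15.0
print((t * 2).shape)  # torch.Size([2, 3])

# Unlike NumPy arrays, tensors can be moved to a GPU.
if torch.cuda.is_available():
    t = t.cuda()
```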

PyTorch is simple to use. Unlike most other popular deep learning frameworks, which use static computation graphs, PyTorch uses dynamic computation graphs, which gives you greater flexibility when building complex architectures.
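A small sketch of what a dynamic graph means in practice (the numbers are arbitrary): ordinary Python control flow decides the computation at run time, and autograd still differentiates through whatever actually ran.

```python
import torch

x = torch.tensor(3.0, requires_grad=True)

# The graph is built on the fly as this Python loop executes,
# so the number of operations depends on runtime values.
y = x * x
while y < 100:
    y = y * 2

y.backward()
print(x.grad)  # gradient through however many doublings actually ran
```

With a static graph, data-dependent loops like this would have to be expressed with special graph operations instead of plain Python.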

PyTorch makes heavy use of Python concepts such as classes, data structures, conditionals, and loops, allowing users to build deep learning algorithms in a purely object-oriented fashion.
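For example, a network is written as an ordinary Python class by subclassing nn.Module; TinyNet and its layer sizes below are made up for illustration:

```python
import torch
import torch.nn as nn

# A hypothetical two-layer network, written as an ordinary Python class.
class TinyNet(nn.Module):
    def __init__(self, in_features, hidden, out_features):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, out_features)

    def forward(self, x):
        # Plain Python: call layers like functions and compose them freely.
        return self.fc2(torch.relu(self.fc1(x)))

net = TinyNet(in_features=4, hidden=8, out_features=2)
out = net(torch.randn(3, 4))  # a batch of 3 samples
print(out.shape)              # torch.Size([3, 2])
```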

Because PyTorch was initially built mainly for research, it is not recommended for production environments with very strict latency requirements. However, this is changing with a new project named Open Neural Network Exchange (ONNX), which focuses on deploying models developed in PyTorch onto production platforms such as Caffe2. The project is backed by Facebook and Microsoft.


Origin www.cnblogs.com/donoho/p/11074981.html