Machine Learning Notes - Deep Learning with C++: 1. Vector Operations

Thanks to their scalability and flexibility, it is now rare to find a project that does not use TensorFlow, PyTorch, Paddle, or one of the other mature deep learning libraries.

Spending time writing machine learning algorithms from scratch, without any underlying framework, can seem like reinventing the wheel. It is not: implementing the algorithms ourselves gives us a clear, solid understanding of how each algorithm works and what the model is really doing.

C++ is an old language, but it has evolved tremendously over the past decade. One of the major changes is support for functional programming, and many other improvements help us write better, faster, and safer machine learning code. After all, the underlying layers of the deep learning libraries mentioned above are essentially written in C/C++.
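
For example, lambdas and the standard fold/transform algorithms let us pass behaviour around as values. A minimal sketch of this style (the gradient values and names below are made up purely for illustration, not taken from the original post):

#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<double> grads {0.5, -1.0, 2.0};

    // A lambda stored in a variable acts as a first-class function.
    auto add_square = [](double acc, double g) { return acc + g * g; };

    // Fold the vector with the lambda: sum of squared gradients.
    double sum_sq = std::accumulate(grads.begin(), grads.end(), 0.0, add_square);
    std::cout << "Sum of squared gradients: " << sum_sq << '\n';  // prints 5.25

    return 0;
}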

Our goal here is to implement the must-know deep learning algorithms in C++: convolution, backpropagation, activation functions, optimizers, deep neural networks, and more.

1. Obtain the inner product of two vectors

C++ provides a handy set of generic functions in the <numeric> and <algorithm> headers. Let's take a look at the basic vector operations built on them.

#include <numeric>
#include <vector>
#include <iostream>

int main()
{
    std::vector<double> X {1., 2., 3., 4., 5., 6.};
    std::vector<double> Y {1., 1., 0., 1., 0., 1.};

    // std::inner_product accumulates 0.0 + X[0]*Y[0] + X[1]*Y[1] + ...
    auto result = std::inner_product(X.begin(), X.end(), Y.begin(), 0.0);
    std::cout << "Inner product of X and Y is " << result << std::endl;

    return 0;
}
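
The program prints "Inner product of X and Y is 13", since 1*1 + 2*1 + 3*0 + 4*1 + 5*0 + 6*1 = 13. The fourth argument of std::inner_product is the initial value of the accumulator, which is why we pass 0.0 rather than 0 to keep the accumulation in double precision.

The <algorithm> header is just as handy for element-wise vector operations. As a minimal sketch (the vector Z and the lambda are illustrative additions, not part of the original example), std::transform with a binary lambda multiplies two vectors element by element:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<double> X {1., 2., 3., 4., 5., 6.};
    std::vector<double> Y {1., 1., 0., 1., 0., 1.};
    std::vector<double> Z(X.size());

    // Element-wise product: Z[i] = X[i] * Y[i]
    std::transform(X.begin(), X.end(), Y.begin(), Z.begin(),
                   [](double x, double y) { return x * y; });

    for (double z : Z) std::cout << z << ' ';
    std::cout << '\n';  // prints: 1 2 0 4 0 6

    return 0;
}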


Origin blog.csdn.net/bashendixie5/article/details/132129243