Neural Network Primer, Part 4

4.4 CNeuralNet.h

(The neural network class header file)


In CNeuralNet.h we define the structure of an artificial neuron, the structure of a layer of artificial neurons, and the artificial neural network itself. Let us first examine the artificial neuron structure.

4.4.1 SNeuron (the neuron structure)

This is a very simple structure. An artificial neuron needs a positive integer to record how many inputs feed into it, and a std::vector to hold its weights. Remember, each input to a neuron must have a corresponding weight.

struct SNeuron
{
    // the number of inputs into the neuron
    int m_NumInputs;

    // the weight for each input
    vector<double> m_vecWeight;

    // constructor
    SNeuron(int NumInputs);
};

The following is the constructor for the SNeuron structure:

 

SNeuron::SNeuron(int NumInputs): m_NumInputs(NumInputs+1)
{
    // we need an additional weight for the bias, hence the +1
    for (int i=0; i<NumInputs+1; ++i)
    {
        // set up the weights with an initial random value
        m_vecWeight.push_back(RandomClamped());
    }
}


As you can see, the constructor takes the number of inputs feeding the neuron, NumInputs, as an argument and creates a random weight for each input. The weight values lie between -1 and 1.
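The weights come from RandomClamped, a utility defined elsewhere in the project rather than in this header. A minimal sketch, assuming it does nothing more than return a random double in the range -1 to 1:

```cpp
#include <cstdlib>

// hypothetical sketch of RandomClamped: a random double in [-1, 1]
double RandomClamped()
{
    // rand()/RAND_MAX lies in [0, 1]; scale and shift it into [-1, 1]
    return 2.0 * (static_cast<double>(rand()) / RAND_MAX) - 1.0;
}
```

The real project presumably seeds the generator elsewhere; in modern C++ the `<random>` facilities would be a better choice than `rand()`.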

"What?" I hear you say. "That's one weight too many!" Yes, I'm glad you noticed, because that additional weight is very important. To explain why it is there, though, I need to introduce a little more math. Recall that a neuron's activation is the sum of all its inputs multiplied by their weights, and that the neuron's output depends on whether this activation exceeds a threshold value (t). This can be expressed by the following equation:

w1x1 + w2x2 + w3x3 +...+ wnxn >= t

This is the condition for the neuron to output a 1. Because all the weights of the network need to evolve, it would be very useful if the threshold could evolve along with them. Doing so is not difficult: a simple trick turns the threshold into the form of a weight. Subtract t from both sides of the equation to obtain:

w1x1 + w2x2 + w3x3 +...+ wnxn –t >= 0

This equation can be rewritten in the equivalent form:

w1x1 + w2x2 + w3x3 +...+ wnxn + t *(–1) >= 0

Now I hope you can see why the threshold t can always be thought of as a weight multiplying an input that is fixed at -1. This particular weight is usually called the bias, and it is why each neuron is initialized with one extra weight. When you evolve the network you no longer need to treat the threshold as a special case, because it is built into the weight vector. Neat, right? To fix in your mind exactly what the new artificial neuron looks like, please look again at Figure 12.


Figure 12: An artificial neuron with bias.

4.4.2 SNeuronLayer (the neuron layer structure)

The neuron layer structure, SNeuronLayer, is very simple: it defines a layer of SNeurons, as shown by the dotted line surrounding the neurons in Figure 13.

 

Figure 13: A neuron layer shown as a dotted box.

Here is the definition; it should not need any further explanation:

struct SNeuronLayer
{
    // the number of neurons in this layer
    int m_NumNeurons;

    // the layer of neurons
    vector<SNeuron> m_vecNeurons;

    SNeuronLayer(int NumNeurons, int NumInputsPerNeuron);
};

4.4.3 CNeuralNet (the neural network class)

This is the class that creates the neural network object. Let's take a look at its definition:

class CNeuralNet
{
private:
    int m_NumInputs;

    int m_NumOutputs;

    int m_NumHiddenLayers;

    int m_NeuronsPerHiddenLyr;

    // storage for each layer of neurons (including the output layer)
    vector<SNeuronLayer> m_vecLayers;

All the private members should be easily understood from their names. The class needs to know the number of inputs and outputs, the number of hidden layers, and the number of neurons in each hidden layer; these are the parameters its definition requires.

public:

    CNeuralNet();

The constructor uses the ini file to initialize all of the private member variables and then calls CreateNet to create the network.

    // creates the network of SNeurons
    void CreateNet();

I will show you the code for this function shortly.

    // returns (reads out) the weights from the neural network
    vector<double> GetWeights() const;

Because the network weights need to evolve, we must create a method that returns all of them. The weights are real numbers stored in vector form inside the network, and we will encode these real numbers into a genome. When I start discussing the genetic algorithm for this project, I will give you exact instructions on how the weights are encoded.
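GetWeights itself is defined in the .cpp file, which is not shown here. A hedged standalone sketch of the flattening it performs, with plain nested vectors standing in for the m_vecLayers / m_vecNeurons / m_vecWeight members:

```cpp
#include <vector>

// standalone sketch: flatten a network's weights, layer by layer and
// neuron by neuron, into a single vector ready for encoding as a genome.
// The nested vectors stand in for m_vecLayers / m_vecNeurons / m_vecWeight.
std::vector<double> FlattenWeights(
    const std::vector<std::vector<std::vector<double>>> &layers)
{
    std::vector<double> weights;

    for (const auto &layer : layers)
        for (const auto &neuron : layer)
            for (double w : neuron)
                weights.push_back(w);

    return weights;
}
```

The important property is simply that the read-out order is fixed, so the same order can be used to put evolved weights back in.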

    // returns the total number of weights in the network
    int GetNumberOfWeights() const;

    // replaces the weights with new ones
    void PutWeights(vector<double> &weights);

This function does the opposite of GetWeights. After a generation of the genetic algorithm has run, the new generation of weights must be inserted back into the neural network. That is the job of the PutWeights method.

    // sigmoid response curve
    inline double Sigmoid(double activation, double response);

Once the sum of all of a neuron's inputs multiplied by their weights is known, this method feeds that value through the sigmoid activation function.
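The body of Sigmoid is not shown in this excerpt. A plausible sketch, assuming the standard logistic function with the response parameter controlling the steepness of the curve (the two parameters match the declaration above, but the exact formula here is my assumption):

```cpp
#include <cmath>

// hedged sketch: logistic sigmoid; a smaller response gives a
// steeper S-curve, a larger response a flatter one
double Sigmoid(double activation, double response)
{
    return 1.0 / (1.0 + std::exp(-activation / response));
}
```

At an activation of zero the output is exactly 0.5, and large positive or negative activations push it toward 1 or 0.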

    // calculates the outputs from a set of inputs
    vector<double> Update(vector<double> &inputs);

I will annotate the Update function shortly as well.

}; // end class definition

4.4.3.1 CNeuralNet::CreateNet (the method that creates the network)

I did not comment on two of the CNeuralNet methods above because I intend to show you their code in full. The first of these is CreateNet, whose job is to gather the neurons (SNeurons) into layers (SNeuronLayers) that together make up the whole neural network:

void CNeuralNet::CreateNet()
{
    // create the layers of the network
    if (m_NumHiddenLayers > 0)
    {
        // create the first hidden layer [see annotation]
        m_vecLayers.push_back(SNeuronLayer(m_NeuronsPerHiddenLyr,
                                           m_NumInputs));

        for (int i=0; i<m_NumHiddenLayers-1; ++i)
        {
            m_vecLayers.push_back(SNeuronLayer(m_NeuronsPerHiddenLyr,
                                               m_NeuronsPerHiddenLyr));
        }

        // create the output layer
        m_vecLayers.push_back(SNeuronLayer(m_NumOutputs, m_NeuronsPerHiddenLyr));
    }
    else // no hidden layers, so simply create the output layer
    {
        // create the output layer
        m_vecLayers.push_back(SNeuronLayer(m_NumOutputs, m_NumInputs));
    }
}
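As a sanity check on the topology CreateNet builds, the total number of weights follows directly from the layer sizes, remembering the one extra bias weight per neuron. A small standalone helper (not part of the original class) that computes it:

```cpp
// total number of weights in the network built by CreateNet:
// each neuron holds one weight per input plus one bias weight
int TotalWeights(int numInputs, int numOutputs,
                 int numHiddenLayers, int neuronsPerHiddenLyr)
{
    if (numHiddenLayers == 0)
    {
        // output layer only: each output neuron sees every input, plus bias
        return numOutputs * (numInputs + 1);
    }

    // first hidden layer is fed by the network inputs
    int total = neuronsPerHiddenLyr * (numInputs + 1);

    // each remaining hidden layer is fed by the previous hidden layer
    total += (numHiddenLayers - 1) * neuronsPerHiddenLyr * (neuronsPerHiddenLyr + 1);

    // output layer is fed by the last hidden layer
    total += numOutputs * (neuronsPerHiddenLyr + 1);

    return total;
}
```

This is the number GetNumberOfWeights would be expected to report for the same topology.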


4.4.3.2 CNeuralNet::Update (the method that updates the network)

The Update function can be regarded as the main workhorse of the neural network. Here, the input data is passed into the network as a std::vector of doubles. Update loops through each layer, computing for every neuron the sum of its inputs multiplied by the corresponding weights; that sum is then used as the activation and passed through the sigmoid function to calculate the neuron's output, as we discussed a few pages back. Update returns a std::vector of doubles corresponding to all the outputs of the neural network.

Please take a minute or two to familiarize yourself with the code of the Update function, so that you can correctly follow the rest of the discussion:

vector<double> CNeuralNet::Update(vector<double> &inputs)
{
    // stores the resultant outputs from each layer
    vector<double> outputs;

    int cWeight = 0;

    // first check that we have the correct number of inputs
    if (inputs.size() != m_NumInputs)
    {
        // just return an empty vector if incorrect
        return outputs;
    }

    // for each layer...
    for (int i=0; i<m_NumHiddenLayers + 1; ++i)
    {
        if (i > 0)
        {
            inputs = outputs;
        }

        outputs.clear();

        cWeight = 0;

        // for each neuron, sum the (inputs * corresponding weights). Throw
        // the total at our sigmoid function to get the output
        for (int j=0; j<m_vecLayers[i].m_NumNeurons; ++j)
        {
            double netinput = 0;

            int NumInputs = m_vecLayers[i].m_vecNeurons[j].m_NumInputs;

            // for each weight
            for (int k=0; k<NumInputs-1; ++k)
            {
                // sum the weights * inputs
                netinput += m_vecLayers[i].m_vecNeurons[j].m_vecWeight[k] *
                            inputs[cWeight++];
            }

            // add in the bias
            netinput += m_vecLayers[i].m_vecNeurons[j].m_vecWeight[NumInputs-1] *
                        CParams::dBias;

Do not forget that the last weight in each neuron's weight vector is actually the weight for the bias which, as we have already explained, is always multiplied by an input of -1. I have included the bias value in the ini file, so you can play around with it and examine what effect it has on the networks you create; normally, though, this value should not be changed.

            // store the outputs from each layer as we generate them.
            // The combined activation is first filtered through the
            // sigmoid function to get this neuron's output
            outputs.push_back(Sigmoid(netinput, CParams::dActivationResponse));

            cWeight = 0;
        }
    }

    return outputs;
}
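To make the arithmetic inside those loops concrete, here is a standalone single-layer version of the same feedforward step, with plain nested vectors in place of the class members; the bias of -1, the response of 1, and the logistic form of the sigmoid are assumed stand-ins for the CParams values and the Sigmoid method:

```cpp
#include <cmath>
#include <vector>

// standalone sketch of one layer of Update: for each neuron, sum
// inputs * weights, add the bias term (last weight * -1), then
// squash the total with the sigmoid. Each inner vector holds one
// neuron's weights, bias weight last.
std::vector<double> LayerOutput(const std::vector<std::vector<double>> &neurons,
                                const std::vector<double> &inputs)
{
    const double dBias = -1.0;               // assumed CParams::dBias
    const double dActivationResponse = 1.0;  // assumed CParams::dActivationResponse

    std::vector<double> outputs;

    for (const std::vector<double> &w : neurons)
    {
        double netinput = 0;

        // every weight except the last is paired with an input
        for (size_t k = 0; k < w.size() - 1; ++k)
            netinput += w[k] * inputs[k];

        // the final weight is the bias weight, multiplied by -1
        netinput += w.back() * dBias;

        outputs.push_back(1.0 / (1.0 + std::exp(-netinput / dActivationResponse)));
    }

    return outputs;
}
```

Chaining calls to a function like this, feeding each layer's outputs in as the next layer's inputs, is exactly what Update's outer loop does.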

----------------
[Annotation] If more than one hidden layer is required, the for loop that follows this line creates the remaining hidden layers.

Reproduced from: https://my.oschina.net/dake/blog/196825


Origin: blog.csdn.net/weixin_34290000/article/details/91586224