2019/09/05


It's over, something went wrong: only today did I discover that when I normalized the data, the line further down had already normalized it once, so everything was normalized twice. No wonder I kept thinking it was strange that the loss was so small. Heartbroken; it all has to be redone.
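As context for the bug: if the normalization was a divide-by-constant scaling (an assumption on my part; the post shows no preprocessing code), applying it twice squashes the targets toward zero, and the loss shrinks with them. A minimal numpy sketch:

```python
import numpy as np

# Hypothetical sentence lengths in years; the post does not show the real data.
y = np.array([0.5, 2.0, 4.0, 8.0, 12.0])

Y_MAX = 15.0  # assumed fixed scaling constant used for normalization

y_once = y / Y_MAX        # intended: targets scaled into roughly [0, 1]
y_twice = y_once / Y_MAX  # the accidental second normalization

print(y_once.max())   # 0.8    -> sensible target scale
print(y_twice.max())  # ~0.053 -> everything squashed toward zero

# With targets this small, even a trivial constant predictor gets a tiny MSE,
# which is exactly the "why is the loss so small?" symptom.
print(np.mean((y_twice - y_twice.mean()) ** 2))  # ~3.5e-4
```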

 

Completed today

 

Today I fiddled with the loss function for a while and then changed it back, switching back and forth between MAE and MSE.
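A minimal Keras-style sketch of swapping between the two losses (the framework, input size, and optimizer are my assumptions; only the 64 and 32 hidden widths appear later in the post):

```python
from tensorflow import keras
from tensorflow.keras import layers

# 10 input features is an assumption; the post only mentions 64x32 hidden layers.
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(10,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # single regression output: predicted sentence length
])

# MSE penalizes large errors quadratically, MAE linearly; recompiling swaps them.
model.compile(optimizer="adam", loss="mse")
# model.compile(optimizer="adam", loss="mae")
```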

 

Then I added L2 regularization to every layer.
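And a sketch of attaching L2 regularization to every layer, again assuming Keras; the coefficient 1e-3 is a placeholder, since the post does not give one:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

reg = regularizers.l2(1e-3)  # placeholder coefficient, not from the post

model = keras.Sequential([
    layers.Dense(64, activation="relu", kernel_regularizer=reg, input_shape=(10,)),
    layers.Dense(32, activation="relu", kernel_regularizer=reg),
    layers.Dense(1, kernel_regularizer=reg),
])
model.compile(optimizer="adam", loss="mse")
```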

 

Most importantly, I revised the formula used to compute the accuracy.

 

At first I used:

 

\[\frac{\left| y_{\text{predict}} - y_{\text{actual}} \right|}{y_{\text{actual}}}\]

 

But later I noticed that the denominator, the actual sentence length, can be less than 1 (values like 0.x years), so the computed error can blow up and drag the average way up. For example, predicting 0.5 years when the true sentence is 0.2 years gives a relative error of 150%, even though the absolute error is only 0.3 years.

 

So I changed it to:

 

\[\frac{\left| y_{\text{predict}} - y_{\text{actual}} \right|}{y_{\text{max}} - y_{\text{min}}}\]

 

Here y_max and y_min are the upper and lower bounds of the sentencing bracket for the conviction: a "large" amount is sentenced to 0-3 years, a "huge" amount to 3-10 years, and an "especially huge" amount to 10 years or more.
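A small numpy sketch contrasting the two metrics (the function names and the 0-3 year bracket example are mine, not the author's code):

```python
import numpy as np

def relative_error(y_pred, y_true):
    """Original metric: blows up when y_true is small (e.g. 0.x years)."""
    return np.abs(y_pred - y_true) / y_true

def range_error(y_pred, y_true, y_min, y_max):
    """Revised metric: error normalized by the width of the sentencing bracket."""
    return np.abs(y_pred - y_true) / (y_max - y_min)

y_true = 0.2   # actual sentence: 0.2 years
y_pred = 0.5   # predicted sentence

print(relative_error(y_pred, y_true))         # 1.5 -> 150% "error"
print(range_error(y_pred, y_true, 0.0, 3.0))  # 0.1 -> 10% of the 0-3 year bracket
```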

 

The rest was mainly evaluating the model under different parameter settings.

 

Below are the loss and acc plots obtained after training each model, without regularization, for one million iterations.

 

64X32

[figures: loss and acc curves for the 64×32 model]

32X32

[figures: loss and acc curves for the 32×32 model]

Of course, these are just two of the models; you can see that they all overfit.

 

Tomorrow's plan

 

Test different model parameters, and add the L2 regularization term to see how it performs.

 

Impressions today

 

It's hopeless; these past few days of testing were all in vain.


Source: www.cnblogs.com/I-AM-DUMBASS/p/11469056.html