How to determine the number of VMD decomposition layers (Part 2): using sample entropy (SE)

I won't introduce VMD itself here; it was covered in detail in an earlier post. If you still have questions, articles by other CSDN bloggers may help, since everyone approaches the topic with a different understanding.

This article explains how to determine the number of VMD decomposition layers using sample entropy, so let me first briefly introduce sample entropy.

Entropy was originally a thermodynamic concept, a measure used to describe the degree of chaos (disorder) of a thermodynamic system.

In the early 1990s, Pincus proposed approximate entropy (ApEn, Approximate Entropy), which measures the complexity of a time series through the probability that new patterns appear in the signal: the greater that probability, the more complex the sequence and the larger the ApEn value. ApEn has been applied successfully to the analysis of physiological time series such as heart rate signals, blood pressure signals, and male sex hormone secretion curves.

Sample entropy (SampEn) is a newer measure of time-series complexity proposed by Richman and Moorman [12]. Algorithmically it improves on ApEn: it excludes self-matches when counting template matches and takes the logarithm only after summing the match counts, which reduces ApEn's bias and agrees more closely with theory for known random processes. Compared with ApEn, sample entropy has two advantages. First, because self-matches are excluded, SampEn is the exact negative average natural logarithm of a conditional probability, so its estimate depends far less on the data length. Second, SampEn has better consistency: if one time series has a higher SampEn than another for one choice of the parameters m and r, it will also be higher for other values of m and r.
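To make the definition concrete, here is a minimal NumPy sketch of SampEn(m, r): count pairs of length-m templates whose Chebyshev distance is within r (excluding self-matches), repeat for length m+1, and take the negative log of the ratio. The default r = 0.2·std is a common convention, not something prescribed by this post.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series.

    r defaults to 0.2 * std(x), a common choice in the literature.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def match_count(length):
        # All templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to strictly later templates only,
            # so self-matches are never counted.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = match_count(m)       # matches of length m
    a = match_count(m + 1)   # matches of length m + 1
    if a == 0 or b == 0:
        return float("inf")  # too few matches to estimate the ratio
    return -np.log(a / b)
```

A regular signal (e.g. a sine wave) yields a small SampEn, while white noise yields a much larger one, which is exactly the property the K-selection method below relies on.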

That is a brief overview of sample entropy; a quick read is enough.

The following describes how SE determines the number of VMD decomposition layers.

The more complex a time series, the larger its SE value, and vice versa. Therefore, after decomposing the signal with VMD, calculate the SE of each subsequence; the subsequence with the smallest SE is the trend item of the decomposition.
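Picking the trend item is then a one-liner over the decomposed modes. In this sketch, `modes` stands for the subsequences returned by VMD and `se_fn` for any sample-entropy implementation; both names are illustrative, not from the original code.

```python
import numpy as np

def trend_index(modes, se_fn):
    """Index of the decomposed mode (IMF) with the smallest sample entropy.

    `modes` is the list of subsequences produced by VMD; `se_fn` is any
    sample-entropy function (names are illustrative placeholders).
    """
    se = [se_fn(np.asarray(m, dtype=float)) for m in modes]
    return int(np.argmin(se))
```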

When the number of modes K is too small, the decomposition may be insufficient and interference components may be mixed into the trend item, producing a larger SE value. With an appropriate K, the SE of the trend item decreases; as K increases further, the SE gradually stabilizes. Therefore, the turning point at which SE becomes stable is taken as the number of VMD decomposition layers, which avoids over-decomposition.
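The stopping rule can be sketched as follows. This is my own minimal formalization of "SE becomes stable", not the paper's exact criterion: `se_values[i]` is the trend-item SE obtained with `k_values[i]` modes, and we pick the first K after which the relative change in SE drops below a tolerance `tol` (the threshold value is an assumption).

```python
import numpy as np

def select_k(se_values, k_values, tol=0.05):
    """First K after which the trend-item SE is stable.

    se_values[i]: minimum (trend-item) sample entropy when the signal is
    decomposed into k_values[i] modes. We return the first K whose SE
    changes by less than `tol` (relative) at the next step; `tol` is an
    illustrative assumption, not a value from the original paper.
    """
    se = np.asarray(se_values, dtype=float)
    for i in range(len(se) - 1):
        if abs(se[i + 1] - se[i]) / max(abs(se[i]), 1e-12) < tol:
            return k_values[i]
    return k_values[-1]  # no clear plateau: fall back to the largest K tried
```

In practice you would run VMD for each candidate K, record the smallest SE among the modes, and feed those values to a rule like this one.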

The code follows the method in "A hybrid prediction model for forecasting wind energy resources" (https://link.springer.com/article/10.1007/s11356-020-08452-6). Straight to the code:

Using my own data, I get the following results:

In fact, the result is not satisfactory, because there is no clearly stable point to use as a reference. Let's look at the calculation results in the original paper.

 

These are the results the authors obtained by applying their proposed method to their own data. The figure shows a relatively stable trend, which makes it straightforward to select the number of decomposition layers.

Let's investigate the reason. The authors' principle is sound, and our implementation is correct; the key is that everyone's data are different. The graph I produced shows a clear downward trend, and the last few points flatten out, which is roughly similar to the figure in the original paper. My maximum number of decomposition layers is 15; in other SCI-indexed papers, some authors go up to 20 layers for VMD decomposition, although their decomposition settings differ.

So, in summary, the point I want to make is that results may differ because the data differ, but the principle and the code are sound, and you can use them as a reference for study. After all, this method was published in an SCI-indexed journal.

In addition, some readers may wonder whether the subsequence with the smallest SE is the first IMF or the last IMF. This is not fixed: it may be the first, the last, or an intermediate IMF, so don't assume a fixed position.

These are the code files included with this post:

 


Source: blog.csdn.net/weixin_46062179/article/details/124776537