Rambling on Granger Causality - Chapter One: Wildfire and Spring Breeze


At 6:10 on July 9, 2017, my first mentor, Comrade Hu Sanqing (proposer of the New Causality, a researcher of implanted brain electrodes for epilepsy therapy, and an IEEE Senior Member), passed away at the Hangzhou Cancer Hospital at the age of 50, after treatment for lung cancer failed. I received his teaching and kindness for several years, and the news of his passing left me grief-stricken. With a heavy heart, I have resolved to pass on what my mentor taught, to comfort his spirit in heaven. May my mentor rest in peace!


 

As a way to measure directed interactions between time series, Granger causality has become widely accepted over the past decades. It is used extensively in economics [1], climate science [2], and neuroscience [3], even though, for the latter two fields, Granger himself opposed applying the method outside economics (his opposition is the "wildfire" in this chapter's title) [4]. Since the author has not the slightest accomplishment in meteorology, no comment is offered on that field. Fortunately, after decades of hard "whitewashing", neuroscientists have found a "legitimate" justification for their use of Granger causality [5] (Anil Seth of the Royal Society, in the author's eyes the strongest of these defenders, is the "spring breeze" of the title). Besides the method proposed by Clive Granger himself, a number of variants have grown up around Granger causality, and this article will also give a brief introduction to these variants by category. And it was precisely through his close reading of Sir Clive Granger's papers that Mr. Hu created the "New Causality", specifically to address the problems of applying causal analysis in neuroscience; but that is a story for a later chapter. This article focuses on the following aspects:
Framework of this article:
Chapter One: Wildfire and spring breeze
One: What is Granger causality?
Two: What is the principle behind Granger causality?
Three: What is the standardized procedure for computing Granger causality?
Chapter Two: Sparks of causality can start a prairie fire
Four: What are the variants of Granger causality, their principles and applications?
Five: How is authentic Granger causality implemented in code? (MATLAB version)
Chapter Three: Hu's causality (New Causality, by Sanqing Hu)
One: What is Granger causality?
     Colloquially, Granger causality usually refers to the "Granger causality test". Seeing this, you may let out a great sigh: "Ah! So Granger causality is a method of statistical hypothesis testing!" Then what about the words "cause and effect"? That begins with the relationship between time series, for it is on time series that Granger defined "causality". Assume a time series X composed of samples at different time points, {x_1, x_2, x_3, ..., x_n}. Likewise assume a time series Y of the same form, {y_1, y_2, y_3, ..., y_n}. Now we use the past of X to predict the future of X: for example, we use x_1 ~ x_{n-j} (the past of X) to predict x_{n-j+1} ~ x_n (the future of X), and this process produces a prediction error δ_1 (please ignore for now how the prediction is made and how the error is computed). We treat this error as our first result. Then we use the pasts of X and Y together to predict the future of X: for example, we use {x_1 ~ x_{n-j} (the past of X) | y_1 ~ y_{n-j} (the past of Y)} to predict x_{n-j+1} ~ x_n, producing a prediction error δ_2 (please also ignore for now the question of how X and Y predict "jointly"). This error is our second result. If δ_2 is smaller than δ_1, that is, if the joint prediction error of X and Y is smaller than the error of X predicting itself, then Y must have helped the prediction of X, since it reduced the prediction error. In this case, we say that Y Granger-causes X.
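Before moving on, here is a minimal MATLAB sketch of this idea on invented toy data (the series, the coefficients, and lag = 1 are all assumptions made purely for illustration; the rigorous procedure, including the F-test, comes later):

rng(0);
n = 500;
Y = randn(1, n);
X = [0, 0.8*Y(1:n-1)] + 0.1*randn(1, n);    % toy model: X_t depends on Y_{t-1}
% restricted model: predict X from its own past only
a1     = X(1:n-1)' \ X(2:n)';
delta1 = var(X(2:n)' - X(1:n-1)'*a1);
% unrestricted model: predict X from the pasts of X and Y together
M      = [X(1:n-1)', Y(1:n-1)'];
a2     = M \ X(2:n)';
delta2 = var(X(2:n)' - M*a2);
fprintf('delta1 = %.4f, delta2 = %.4f\n', delta1, delta2);

Because X is built so that Y leads it by one step, delta2 comes out clearly smaller than delta1.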
Two: What is the principle behind Granger causality?
     In this section we focus on resolving several issues raised in Section One, namely: 1) How is the prediction actually carried out? 2) How is the error produced? 3) How do X and Y predict jointly? Before explaining these issues, one very important concept, the regression problem, must be laid out first; otherwise these three questions cannot even be approached. To aid understanding, we state the problem in two-dimensional space; of course, it extends to higher-dimensional spaces as well. And if you are sure you already understand the regression problem, feel free to skip ahead to the part after the regression explanation.

    Start of the autoregression explanation:
     Assume there is a set of points S = {x_1, x_2, x_3, ..., x_n} in two-dimensional space, as shown in Figure 1. Now we want to find a line that passes as well as possible through every point of S. These points also involve a hidden variable T. Thus, when we find this line, we have in fact found the function x = f(t) that this line represents in two-dimensional space.
Figure 1
So how do we find this line?
STEP1: you need to assume that this line comes from a function of order m. If you assume the order is 3, it is a function of the form x = f(t) = a_0 + a_1*t + a_2*t^2 + a_3*t^3. Every point in the set S must satisfy (or approximately fit) the equation x = f(t). Take the first point [t_1, x_1] as an example: x_1 = f(t_1) = a_0 + a_1*t_1 + a_2*t_1^2 + a_3*t_1^3. Of course, we know that a line x = f(t) passing perfectly through every point is almost impossible, so we allow an error to exist: x = f(t) + ε is enough, where ε denotes the error term, and ε follows a normal distribution (such noise is sometimes called white Gaussian noise).
STEP2: try to determine the coefficients a_0 ~ a_m required by STEP1, so that every point of the set S satisfies x = f(t) + ε. In general, the most common method for solving the coefficients a_0 ~ a_m is least squares. The detailed steps of least squares are not described here; in MATLAB the fit can be obtained with the function polyfit, as in the code below. The result is shown in Figure 2.
X = [4,3.5,3.6,2.1,4,6,5.7,5,4];   % the point set S (sampled at T = 1..9)
a = polyfit(1:9, X, 3);            % least-squares cubic fit; returns a3..a0
T_new = 0:0.1:10;                  % a denser time axis, extended beyond T = 9
X_new = polyval(a, T_new);         % evaluate the fitted polynomial on T_new
plot(1:9, X, 'r*'); hold on;       % the original points (red stars)
plot(T_new, X_new, 'b-');          % the fitted curve (blue line)

In the code above, polyfit does the fitting: it solves for the parameter a from the time axis 1:9 and the data X. Note that a is an array consisting of the coefficients a0 ~ a3. The function polyval then computes X_new from the parameter array a and the point set T_new. The curve [T_new, X_new] is the blue line in Figure 2.

 
Figure 2
Observant readers may notice that the range of the time axis T is not the same in Figures 1 and 2. In Figure 1 the T axis runs from 1 to 9; in Figure 2 it runs from 0 to 10. The extra point x_10 can in fact be regarded as future data [x_10] predicted from our existing data set [x_1 ~ x_9].
In short, the general form of the autoregression problem is: given a series of points [T, X], find a function X = f(T). We look for a set of parameters a such that the line the function represents passes through the point set as closely as possible.
End of the autoregression explanation.

With the autoregression problem covered, the following two questions can now be answered properly:
As at the beginning of this article, we now assume a time series X: {x_1, x_2, x_3, ..., x_n}, where the subscripts denote time T running from 1 to n, and n is a positive integer.
1) How exactly is the prediction carried out?
By this point, I assume you have understood the regression problem described above. The regression inside Granger causality shares the same principle as the original problem, but the procedure differs. Let me explain their differences one by one:
Difference one: in the original regression problem, we took the time axis T as the horizontal coordinate and solved for the variable X, trying to find the relationship between the time axis T and the variable X, i.e., X = f(T). For instance, in the example above, X = f(T) = a_0 + a_1*T + a_2*T^2 + a_3*T^3. (Note: this is a third-order problem; the "order" means how many parameters a there are besides a_0. The regression in the example is in fact nonlinear in T; only a degree-one polynomial gives a linear fit.) At the end of the regression discussion, we said that the essence of the regression problem is to find the set of parameters a. The concrete third-order form used in our example is:
x_1 = f(t_1) = a_0 + a_1*t_1 + a_2*t_1^2 + a_3*t_1^3   (Equation 1) ······ the ordinary regression problem (third order)
In Granger causality, the autoregressive model is the degree-one analogue of this form. But it is no longer a function mapping T to X; it maps the past of X to the present of X, i.e.,
X_now = f(X_past). Considering the past three points, the problem becomes
x_t = a_1*x_{t-1} + a_2*x_{t-2} + a_3*x_{t-3}   (Equation 2) ······ the Granger autoregression (note: there is no constant term)
Since in Equation 1, knowing T and X, solving for the parameters a is no problem, here too, knowing the values of X along the time series, solving for the set of parameters a is equally unproblematic.
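As a concrete illustration, here is a small sketch that estimates the coefficients of Equation 2 by least squares, reusing the nine sample points from the polyfit example above (lag = 3 is again assumed arbitrarily):

X   = [4,3.5,3.6,2.1,4,6,5.7,5,4];   % the sample points from the earlier example
n   = numel(X);
lag = 3;                             % assumed arbitrarily, as in the text
M   = zeros(n-lag, lag);
for i = 1:lag
    M(:, i) = X(lag+1-i : n-i);      % column i holds the values x_{t-i}
end
target = X(lag+1 : n)';              % the values x_t to be predicted
a = M \ target;                      % least-squares estimates of a_1 ... a_3

The backslash operator performs here the same least-squares solve that polyfit performed for us earlier.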
Difference two:
Since this is a regression problem, the question of choosing the regression order arises. From difference one we know that, in the Granger causality test, the order effectively specifies how many past points are used to regress the current point. The order therefore receives a brand-new name: lag. When explaining the regression problem, we assumed lag = 3 and fitted the point set S by least squares. In fact, to simplify the least-squares fit, the author arbitrarily set lag = 3 without applying any proper method to choose the value the lag ought to take. In computations of Granger causality, methods for choosing the order fall roughly into three classes: I. the Akaike information criterion (AIC); II. the Bayesian information criterion (BIC); III. the rule of thumb (choosing the lag from experience). Although the last one may feel unscientific, it is in fact endorsed by experts (see the description of Granger causality in Anil Seth's paper). To keep the exposition orderly, the concrete computation of these three lag-selection methods is not detailed here but given directly in the code implementation of Chapter Five; for now, please treat lag selection as a black-box process. Suppose some selection method yields lag = m (note the implicit condition that m must be smaller than the data length n, and m is a positive integer). For this lag, the prediction procedure is:
Use the points [x_1 ~ x_lag] on the time range T: 1~lag to predict the point [x_{lag+1}] at time lag+1; here Xp_{lag+1} is the predicted value of x_{lag+1}, and x_{lag+1} - Xp_{lag+1} yields the error ε_1.
Use the points [x_2 ~ x_{lag+1}] on the time range T: 2~lag+1 to predict the point [x_{lag+2}] at time lag+2; here Xp_{lag+2} is the predicted value of x_{lag+2}, and x_{lag+2} - Xp_{lag+2} yields the error ε_2.
Use the points [x_3 ~ x_{lag+2}] on the time range T: 3~lag+2 to predict the point [x_{lag+3}] at time lag+3; here Xp_{lag+3} is the predicted value of x_{lag+3}, and x_{lag+3} - Xp_{lag+3} yields the error ε_3.
…………
Use the points [x_{n-lag} ~ x_{n-1}] on the time range T: n-lag~n-1 to predict the point [x_n] at time n; here Xp_n is the predicted value of x_n, and x_n - Xp_n yields the error ε_{n-lag}.
The process above can be viewed as a prediction window sliding continually along the series, as in Figure 3:
Figure 3
 
At this point we know that "prediction" means using least squares to predict the value of the point immediately following the sliding window, producing at the same time the prediction error ε of that predicted value.
2) How exactly is the error produced?
In the previous subsection we learned where the predictions come from. In this subsection we explain how the overall error is produced. From question 1 we know that for a data segment of length n we make n-lag predictions, each producing one error term. Yet at the start of this article we said that for a segment of length n we obtain a single error δ_1, not n-lag errors. What, then, is the relationship between the overall error δ_1 and these n-lag errors? The answer: δ_1 is an unbiased estimate built from the autoregression errors ε_1 ~ ε_{n-lag}. Their relationship is expressed as:

δ_1 = (ε_1^2 + ε_2^2 + ... + ε_{n-lag}^2) / (n - lag)
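Continuing the sketch above (which produced M, a, target, n and lag), the individual errors and their pooled estimate δ_1 are obtained as follows:

eps1   = target - M*a;               % one error per sliding-window prediction
delta1 = sum(eps1.^2) / (n - lag);   % pooled estimate of the prediction error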
3) How do X and Y jointly make the prediction?

Start of the joint-regression explanation:
With the autoregression problem as a foundation, understanding the joint regression should not be difficult. Without loss of generality, we again start from a higher-order, higher-degree joint regression and then return to the degree-one form. Unlike the autoregression problem, we now assume three time series X: {x_1, x_2, x_3, ..., x_n}, Y: {y_1, y_2, y_3, ..., y_n} and Z: {z_1, z_2, z_3, ..., z_n}, where the subscripts denote time T from 1 to n and n is a positive integer. The problem changes from the autoregression's X = f(T) into Z = f(X, Y): we try to find the relationship between the variables X, Y and another variable Z. Restricting the problem to degree two, the concrete form to be solved for the first point of the set is: z_1 = a_1*x_1^2 + a_2*y_1^2 + a_3*x_1 + a_4*y_1 + a_5*x_1*y_1 + a_6 + ε, and the purpose of the fit is to find the parameters a_1 ~ a_6. The detailed solution is omitted; it can be carried out with the MATLAB function regress. In the degree-one case, the equation of this problem becomes
z_1 = a_1*x_1 + a_2*y_1 + a_3 + ε   (Equation 3) ······ the joint regression problem
In Granger causality, we turn the model into X_new = f(X_past, Y_past). Assuming lag = 3, the concrete form is
x_t = a_1*x_{t-1} + a_2*x_{t-2} + a_3*x_{t-3} + a_4*y_{t-1} + a_5*y_{t-2} + ε   (Equation 4) ······ the Granger joint regression (note: there is no constant term)
Note that in Equation 4 the lag of y does not have to equal the lag of x. (Remark: in the GCCA toolbox (by Anil Seth, MATLAB version) recommended in the final chapter of this series, y's lag is set equal to x's lag.)
End of the joint-regression explanation.
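Here is a sketch of the joint regression of Equation 4 on synthetic data (the toy model, the data length and the equal-lags choice are all assumptions made for illustration; real use should follow the standardized procedure of Section Three):

rng(1);
n   = 200;
lag = 3;
Y = randn(1, n);                          % toy driver series
X = filter(1, [1 -0.5], 0.6*[0, Y(1:n-1)] + randn(1, n));  % Y leads X by one step
M = zeros(n-lag, 2*lag);
for i = 1:lag
    M(:, i)     = X(lag+1-i : n-i);       % the past of X: x_{t-i}
    M(:, lag+i) = Y(lag+1-i : n-i);       % the past of Y: y_{t-i}
end
target = X(lag+1 : n)';
b      = M \ target;                      % joint coefficients a_i and b_j
eps2   = target - M*b;
delta2 = sum(eps2.^2) / (n - lag);        % pooled joint prediction error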

Here we give the general regression formulas used in Granger causality:

x_t = a_1*x_{t-1} + a_2*x_{t-2} + ... + a_lag*x_{t-lag} + ε_t   (Equation 5)

x_t = a_1*x_{t-1} + ... + a_lag*x_{t-lag} + b_1*y_{t-1} + ... + b_lag*y_{t-lag} + ε_t   (Equation 6)

Equation 5 is the generalized form of Equation 2, and Equation 6 is the generalized form of Equation 4.
Finally, we repeat the procedure described in "Difference two", replacing the autoregression with the joint (union) regression:
Use the points [x_1 ~ x_lag, y_1 ~ y_lag] on the time range T: 1~lag to predict the point [x_{lag+1}] at time lag+1; here Xp_{lag+1} is the predicted value of x_{lag+1}, and x_{lag+1} - Xp_{lag+1} yields the error ε_1.
Use the points [x_2 ~ x_{lag+1}, y_2 ~ y_{lag+1}] on the time range T: 2~lag+1 to predict the point [x_{lag+2}] at time lag+2; here Xp_{lag+2} is the predicted value of x_{lag+2}, and x_{lag+2} - Xp_{lag+2} yields the error ε_2.
Use the points [x_3 ~ x_{lag+2}, y_3 ~ y_{lag+2}] on the time range T: 3~lag+2 to predict the point [x_{lag+3}] at time lag+3; here Xp_{lag+3} is the predicted value of x_{lag+3}, and x_{lag+3} - Xp_{lag+3} yields the error ε_3.
…………
Use the points [x_{n-lag} ~ x_{n-1}, y_{n-lag} ~ y_{n-1}] on the time range T: n-lag~n-1 to predict the point [x_n] at time n; here Xp_n is the predicted value of x_n, and x_n - Xp_n yields the error ε_{n-lag}.
After obtaining this series of errors, we once more apply the unbiased estimation mentioned earlier to obtain the unbiased error estimate δ_2 of the joint regression:

δ_2 = (ε_1^2 + ε_2^2 + ... + ε_{n-lag}^2) / (n - lag)
Finally, everything seems to come back to the beginning:
If δ_2 < δ_1, we conclude that the variable Y Granger-causes the variable X; otherwise, Y has no Granger-causal influence on X. Having reached this step, we have grasped the vast majority of the basic principles of Granger causality, and most readers can breathe a long sigh of relief. But is that really the end? The answer: not yet; one final step remains. After obtaining and comparing the values of δ_2 and δ_1, we must perform an F-test on the autoregression and joint-regression errors; otherwise we cannot judge whether the difference between δ_2 and δ_1 is meaningful. So never forget the final step, F_test(δ_2, δ_1). As for why the F-test is needed and how it is done, please see another post written earlier by the author; if you are a researcher, that post is worth reading.
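A sketch of this last step, continuing the joint-regression sketch above (fcdf comes from the Statistics Toolbox; the degrees of freedom follow the usual nested-model F-test):

Mr    = M(:, 1:lag);                 % restricted design: the past of X only
eps1  = target - Mr*(Mr \ target);   % restricted (autoregression) residuals
RSS_r = sum(eps1.^2);                % restricted residual sum of squares
RSS_u = sum(eps2.^2);                % unrestricted residual sum of squares
p     = lag;                         % extra coefficients contributed by Y
dof   = (n - lag) - 2*lag;           % residual degrees of freedom, joint model
F     = ((RSS_r - RSS_u)/p) / (RSS_u/dof);
pval  = 1 - fcdf(F, p, dof);         % small pval: Y Granger-causes X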
 
Three: What is the standardized procedure for computing Granger causality?
Having covered the core ideas of Granger causality, we now look at its standardized processing procedure [6]. Some may ask: "Didn't we already describe the basic workflow of Granger causality above? What does the standardized procedure here refer to?" It refers to the basic preprocessing applied to the time-series data before solving for causal relationships between time series, and its main purpose is to guarantee the validity of the Granger-causality test results. The framework of the standardized procedure is shown below; the reason for each step and the related background are explained afterwards.
Figure 4
 
First, let us introduce a concept: what is a stationary time series?
Roughly speaking, a stationary time series must satisfy the following two conditions:
1) The time series has no trend (including periodic trends); in other words, it must not be the kind of curve that visibly rises or falls, such as the following two examples, which are anything but stationary:
 
2) Take any several segments of the series and compute the variance of each segment; there should be no significant difference between the variances.
STEP1: Detrend the data
Because Granger causality requires the time series being processed to be stationary, the trend must be removed. A detrended series should look like the figure below; note that the series fluctuates up and down around the axis y = 3.
 
There are many detrending methods; they are not detailed here, and the MATLAB toolbox provided in Chapter Five contains the relevant tools.
STEP2: Demean the data
Granger causality requires the time-series data to fluctuate around the axis y = 0, not around y = 3 as in the figure above. A demeaned series should look like the figure below:
 
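STEP1 and STEP2 take only a couple of lines of base MATLAB; here is a sketch on a toy series with an artificial trend and offset:

t = 1:200;
X = 0.05*t + 3 + randn(1, 200);   % toy series: linear trend + offset + noise
X = detrend(X);                   % STEP1: remove the least-squares linear trend
X = X - mean(X);                  % STEP2: center the series on y = 0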
Before explaining STEP3, one more concept first:
A stationary time series contains no unit root; a series that contains a unit root is necessarily non-stationary.
STEP3: The ADF test and the KPSS test
ADF stands for the Augmented Dickey-Fuller test; KPSS stands for the Kwiatkowski-Phillips-Schmidt-Shin test. Both are known as unit-root tests. If a unit-root test concludes that a time series has a unit root, the series is certainly not stationary.
Four questions arise here:
1) Haven't we already gone through STEP1 and STEP2? Why might the series still be non-stationary?
Answer: STEP1 and STEP2 do not guarantee stationarity; they only make the series look stationary. Therefore, after completing STEP1 and STEP2, STEP3 must be carried out to obtain mathematical evidence that the series is stationary.
2) What if the series still tests as non-stationary after STEP1 and STEP2?
Answer: good question! Solving exactly that problem is the task of STEP4.
3) What is the difference between the ADF test and the KPSS test?
KPSS is a right-tailed test; its null hypothesis is that the series is stationary (no unit root exists).
ADF is a left-tailed test; its null hypothesis is that the series is non-stationary (a unit root exists).
4) What if the ADF and KPSS results disagree?
It must be said that this happens often. The author searched the web thoroughly about it (as of July 25, 2017). The conclusions offered by netizens fall roughly into two camps:
A. "Choose for yourself depending on the situation."
B. "If either test passes, count it as passing."
STEP4: First differencing
As mentioned in STEP3, the time series is differenced only when it still tests as non-stationary after STEP1 and STEP2. So how is differencing done?
Given a time series X {x_1, x_2, x_3, ..., x_n}, the differenced series X_diff is defined as {x_2-x_1, x_3-x_2, x_4-x_3, ..., x_n-x_{n-1}}. Plainly put, each later term minus the term before it: that is differencing. The only problem differencing brings is that the differenced series is one sample shorter. Another question: what if the series is still not stationary? The answer: difference once more, until it becomes stationary. In the author's experience, one differencing is usually enough; needing two is rare.
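Differencing is a one-liner with base MATLAB's diff; a sketch:

X1 = diff(X);     % {x_2-x_1, x_3-x_2, ...}; the length shrinks by one
% still non-stationary? difference once more: X2 = diff(X1);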
STEP5: Choose the lag via the AIC criterion
The formula of the AIC criterion is: AIC = 2*lag + n*ln(RSS/n), where RSS is the residual sum of squares of the fit (closely related to the unbiased error estimate discussed earlier) and n is the length of the data segment. Our goal is to search for the lag, over the range 1 ~ n, that makes the AIC value smallest; the lag achieving the minimum AIC is the order we want.
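A sketch of this AIC scan, applying the formula above literally (maxlag is an assumed search bound; X is the stationary series from the preceding steps):

n      = numel(X);
maxlag = 20;                                % assumed search bound, well below n
aic    = zeros(1, maxlag);
for lag = 1:maxlag
    M = zeros(n-lag, lag);
    for i = 1:lag
        M(:, i) = X(lag+1-i : n-i);
    end
    r        = X(lag+1:n)' - M*(M \ X(lag+1:n)');  % autoregression residuals
    RSS      = sum(r.^2);
    aic(lag) = 2*lag + n*log(RSS/n);               % the AIC formula above
end
[~, bestlag] = min(aic);                           % the lag minimizing AIC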
STEP6: The Durbin-Watson test
You may recall that, when we solved the regression by least squares, we assumed that the error ε between the fitted and actual values follows a normal distribution (white noise). In fact, well-behaved (white) residuals are a precondition for solving the regression problem by least squares. The purpose of the Durbin-Watson test is to check whether the residuals left after the regression are serially uncorrelated, i.e., genuinely white. If the residuals are autocorrelated, the data segment does not satisfy the preconditions for least squares, and therefore does not satisfy the basis for computing Granger causality.
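If the Statistics and Machine Learning Toolbox is available, dwtest performs this check; a sketch, refitting the regression at the lag chosen by the AIC scan above:

lag = bestlag;                          % the lag chosen by the AIC scan
n = numel(X);
M = zeros(n-lag, lag);
for i = 1:lag
    M(:, i) = X(lag+1-i : n-i);
end
r = X(lag+1:n)' - M*(M \ X(lag+1:n)');  % residuals of the autoregression
[p_dw, dw] = dwtest(r, M);              % small p_dw: residuals are autocorrelated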
STEP7: Consistency test
After the regression over the data points of the time series is finished, we still cannot be sure that the fitted values and the actual values come from the same distribution. At this point a consistency test should be applied. If the consistency test concludes that the fitted and actual values differ substantially, the regression result is a failure, and the regression must be determined anew.
 
By this point, I trust the basic principles of Granger causality are clear to everyone. If any questions remain, or you have other requests, please leave the author a message on this site; the author will reply as soon as possible, and will also find time to finish the subsequent parts soon. Apart from that, if there is anything else improper in this article, your corrections are welcome; this humble author is all ears.
 
[1] Testing for linear and nonlinear Granger causality in the stock price-volume relation.
[2] Spatial-temporal causal modeling for climate change attribution.
[3] Mapping directed influence over the brain using Granger causality and fMRI.
[4] Wikipedia, entry "Granger causality", quoting Granger himself: "Of course, many ridiculous papers appeared".
[5] Causal connectivity of evolved neural networks during behavior.
[6] Resting-state brain networks revealed by Granger causal connectivity in frogs.
