Summary of Stationarity Test Methods for Time Series

Given a new time series, how do we judge whether it is stationary?

Stationarity test methods for time series fall into three categories:

  1. Graphical Analysis Methods

  2. Simple Statistical Methods

  3. Hypothesis Testing Methods

1. Graphical Analysis Methods

Graphical analysis is the most basic, simplest, and most direct method: draw the graphs and judge with the naked eye.

We can visualize the time series data directly, and we can also visualize its statistical characteristics.

Visualize data

To visualize the data, draw a line plot of the series and check: whether the curve fluctuates around a fixed value (is the mean stable?), whether the amplitude of the fluctuations changes much over time (is the variance stable?), and whether the frequency (density) of fluctuations stays roughly the same across different periods (is the autocovariance stable?). Together these indicate whether the series is stationary.

Let's draw a few plots below and judge intuitively which series are stationary and which are not.

import numpy as np
import pandas as pd
import akshare as ak
from matplotlib import pyplot as plt

np.random.seed(123)

# -------------- Prepare the data --------------
# White noise
white_noise = np.random.standard_normal(size=1000)

# Random walk
x = np.random.standard_normal(size=1000)
random_walk = np.cumsum(x)

# GDP
df = ak.macro_china_gdp()
df = df.set_index('季度')
df.index = pd.to_datetime(df.index)
gdp = df['国内生产总值-绝对值'][::-1].astype('float')

# GDP DIFF
gdp_diff = gdp.diff(4).dropna()


# -------------- Plot --------------
fig, ax = plt.subplots(2, 2)

ax[0][0].plot(white_noise)
ax[0][0].set_title('white_noise')
ax[0][1].plot(random_walk)
ax[0][1].set_title('random_walk')

ax[1][0].plot(gdp)
ax[1][0].set_title('gdp')
ax[1][1].plot(gdp_diff)
ax[1][1].set_title('gdp_diff')

plt.show()

(Figure: the four series plotted by the code above)

a. White noise: the curve fluctuates up and down around 0 with a consistent fluctuation range throughout, so the series is stationary.
b. Random walk: the curve has no definite trend, and the mean and variance swing widely, so the series is non-stationary.
c. GDP: the data trend upward and the mean increases with time, so the series is non-stationary.
d. Seasonally differenced GDP: the curve fluctuates around a roughly horizontal line and the fluctuation range changes little, so the series can be considered stationary.

Visualize statistical features

Visualizing statistical features means drawing the autocorrelation (ACF) and partial autocorrelation (PACF) plots of the series and judging stationarity from how they behave.

Autocorrelation, also called serial correlation, is the correlation of a signal with a delayed copy of itself (a lag), viewed as a function of the delay. The plot of the autocorrelation coefficients at different lags is called the autocorrelation plot (correlogram).

(There is an implicit assumption here that the series is stationary: the autocorrelation of a stationary series depends only on the lag k and does not change with time t, which is why the autocorrelation can be written as a function of the lag k.)

Stationary series usually show only short-term correlation: the autocorrelation coefficients decay to zero quickly (the shorter the lag, the higher the correlation, and at lag 0 the correlation is 1). For non-stationary series the decay is much slower, or the coefficients fall and then rise again, or fluctuate periodically.

The calculation of the autocorrelation at lag k amounts to splitting the series into two overlapping sub-series of equal length offset by k periods and computing their correlation. With the adjusted (unbiased) normalization used below, the sample autocorrelation is

$$\hat{\rho}_k = \frac{n}{n-k} \cdot \frac{\sum_{t=1}^{n-k} (X_t - \bar{X})(X_{t+k} - \bar{X})}{\sum_{t=1}^{n} (X_t - \bar{X})^2}$$

Example: for $X = [2, 3, 4, 3, 8, 7]$ we have $\bar{X} = 4.5$; at lag $k = 1$ the numerator is $8.75$ and the denominator is $29.5$, so $\hat{\rho}_1 = \frac{6}{5} \cdot \frac{8.75}{29.5} \approx 0.3559$.
import statsmodels.api as sm
X = [2,3,4,3,8,7]
print(sm.tsa.stattools.acf(X, nlags=1, adjusted=True))

> [1, 0.3559322]
The first element is the autocorrelation at lag 0 (always 1); the second is the autocorrelation at lag 1.
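To make the formula concrete, here is a minimal recomputation of the lag-1 adjusted autocorrelation by hand with NumPy; it contains nothing beyond the formula above and reproduces the statsmodels value:

import numpy as np

X = np.array([2, 3, 4, 3, 8, 7], dtype=float)
n, k = len(X), 1
dev = X - X.mean()                    # deviations from the mean (mean = 4.5)
num = np.sum(dev[:n - k] * dev[k:])   # numerator sum = 8.75
den = np.sum(dev ** 2)                # denominator sum = 29.5
print(n / (n - k) * num / den)        # ~0.3559322, matching acf() above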

The lag-k autocorrelation coefficient obtained from the ACF is not actually the "pure" correlation between X(t) and X(t-k).

X(t) is also affected by the k-1 intermediate variables X(t-1), X(t-2), ..., X(t-k+1), and each of these is in turn correlated with X(t-k), so the autocorrelation coefficient mixes in the influence of these other variables on X(t) and X(t-k).

The correlation between X(t-k) and X(t) after removing the interference of the k-1 intermediate variables X(t-1), X(t-2), ..., X(t-k+1) is called the partial autocorrelation coefficient; the plot of these coefficients at different lags is the partial autocorrelation plot. (Computing the partial autocorrelation is more involved and will be covered in detail in a later article.)
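As a rough illustration of this definition (a sketch, not the exact algorithm statsmodels uses), the lag-2 partial autocorrelation can be approximated by the coefficient on X(t-2) in a regression of X(t) on both X(t-1) and X(t-2); for a random walk it should be near 0:

import numpy as np

np.random.seed(123)
x = np.cumsum(np.random.standard_normal(size=1000))  # a random walk

# regress X(t) on X(t-1) and X(t-2); the coefficient on X(t-2)
# approximates the lag-2 partial autocorrelation
y = x[2:]
Z = np.column_stack([np.ones(len(y)),  # intercept
                     x[1:-1],          # X(t-1)
                     x[:-2]])          # X(t-2)
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print('lag-2 partial autocorrelation ~ %.3f' % beta[2])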

Let's look at a few practical cases, drawing the ACF and PACF plots for the same data as in the figure above:

# Data generated in the first code block
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, ax = plt.subplots(4, 2)
fig.subplots_adjust(hspace=0.5)

plot_acf(white_noise, ax=ax[0][0])
ax[0][0].set_title('ACF(white_noise)')
plot_pacf(white_noise, ax=ax[0][1])
ax[0][1].set_title('PACF(white_noise)')

plot_acf(random_walk, ax=ax[1][0])
ax[1][0].set_title('ACF(random_walk)')
plot_pacf(random_walk, ax=ax[1][1])
ax[1][1].set_title('PACF(random_walk)')

plot_acf(gdp, ax=ax[2][0])
ax[2][0].set_title('ACF(gdp)')
plot_pacf(gdp, ax=ax[2][1])
ax[2][1].set_title('PACF(gdp)')

plot_acf(gdp_diff, ax=ax[3][0])
ax[3][0].set_title('ACF(gdp_diff)')
plot_pacf(gdp_diff, ax=ax[3][1])
ax[3][1].set_title('PACF(gdp_diff)')

plt.show()

(Figure: ACF and PACF plots for the four series)

(1) The autocorrelation of white noise decays to near 0 immediately, a clearly stationary series. At lag 0 the autocorrelation and partial autocorrelation are the correlation of the series with itself, hence 1; at lag 1 the autocorrelation is already about 0, showing that white noise has no autocorrelation.
(2) For the random walk the autocorrelation decays very slowly, so it is a non-stationary series; the partial autocorrelation shows that the random walk is related only to its previous value.
(3) The ACF plot of the GDP data also shows some periodicity: the coefficients at lags 4, 8, 12, and so on are relatively large and decrease slowly; after seasonal differencing the decay improves and the series can be considered stationary. As with visual inspection of the raw data, this judgment is subjective, but it gives us a more intuitive understanding of the data.

2. Simple Statistical Methods

Computing simple statistics is only a supplement to the other methods, but it is worth knowing. Weak (wide-sense) stationarity requires that the mean and variance do not change over time. We can see this in the plots, and we can also compute it directly.

The logic is appealingly simple: split the series into two halves, compute the mean and variance of each half separately, and compare them to see whether they differ noticeably. (Many time-series anomaly detection methods are based on the same idea: if the before/after distributions agree there is no anomaly, otherwise there is an anomaly or change point.)

Let's compute the mean and variance of the white noise and random walk series over the two halves:

import numpy as np

np.random.seed(123)

white_noise = np.random.standard_normal(size=1000)

x = np.random.standard_normal(size=1000)
random_walk = np.cumsum(x)

def describe(X):
    # split the series into two halves and compare their statistics
    split = int(len(X) / 2)
    X1, X2 = X[0:split], X[split:]
    mean1, mean2 = X1.mean(), X2.mean()
    var1, var2 = X1.var(), X2.var()
    print('mean1=%f, mean2=%f' % (mean1, mean2))
    print('variance1=%f, variance2=%f' % (var1, var2))

print('white noise sample')
describe(white_noise)

print('random walk sample')
describe(random_walk)

white noise sample:
mean1=-0.038644, mean2=-0.040484
variance1=1.006416, variance2=0.996734

random walk sample:
mean1=5.506570, mean2=8.490356
variance1=53.911003, variance2=126.866920

The mean and variance of the white noise series differ only slightly between the two halves and are roughly on the same level;
the mean and variance of the random walk series differ substantially, so it is a non-stationary series.
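A variant of the same idea (a sketch; the window length 100 is an arbitrary choice): compare statistics over rolling windows instead of two fixed halves, using pandas:

import numpy as np
import pandas as pd

np.random.seed(123)
random_walk = pd.Series(np.cumsum(np.random.standard_normal(size=1000)))

rolling = random_walk.rolling(window=100)
print(rolling.mean().dropna().describe())  # drifting rolling mean -> non-stationary
print(rolling.var().dropna().describe())   # unstable rolling variance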

3. Hypothesis Testing Methods

The current mainstream hypothesis test for stationarity is the unit root test, which tests whether the series contains a unit root.

Before introducing the test methods, some background knowledge is needed; it will make the tests themselves much clearer.

Preliminary knowledge

Deterministic trend

If a time series has a deterministic trend, for example

$$X_t = \beta_0 + \beta_1 t + \varepsilon_t,$$

then $\beta_0 + \beta_1 t$ is the deterministic trend.

The expectation $E(X_t) = \beta_0 + \beta_1 t$ contains the time variable $t$ and changes with time, so the series is not stationary.

Differencing a series with a deterministic trend produces overdifferencing:

$$\Delta X_t = \beta_1 + \varepsilon_t - \varepsilon_{t-1}.$$

Overdifferencing not only reduces the sample size but also increases the variance of the series (here $\mathrm{Var}(\Delta X_t) = 2\sigma^2$ versus $\sigma^2$ before); the differenced series shows autocorrelation from the $\varepsilon_t - \varepsilon_{t-1}$ term, but this autocorrelation is meaningless.

Instead, "detrending" should be used to obtain a stationary series: fit the time trend (for example by regressing $X_t$ on $t$) and remove it. A series that becomes stationary after the time trend is eliminated is called a "trend-stationary" (or "regression-stationary") series.
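A minimal sketch of detrending, assuming the linear-trend model above (the coefficients 1.0 and 0.1 are made up for illustration): fit the trend by ordinary least squares with np.polyfit and subtract it; the residual should then look stationary:

import numpy as np

np.random.seed(123)
t = np.arange(200)
x = 1.0 + 0.1 * t + np.random.standard_normal(size=200)  # trend + white noise

b1, b0 = np.polyfit(t, x, deg=1)   # OLS fit; returns slope first, then intercept
resid = x - (b0 + b1 * t)          # detrended series
print('fitted trend: %.2f + %.2f t' % (b0, b1))
print('residual mean %.3f, variance %.3f' % (resid.mean(), resid.var()))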

Stochastic trend

Another source of non-stationarity is a "stochastic trend". Consider the random walk model

$$X_t = X_{t-1} + \varepsilon_t,$$

where $\varepsilon_t$ is white noise. Back-substituting gives $X_t = X_0 + \sum_{i=1}^{t} \varepsilon_i$: every white-noise shock has a permanent impact on the series, and, most importantly, its influence does not decay with time. This is what is called the model's "stochastic trend".

The random walk with drift is

$$X_t = c + X_{t-1} + \varepsilon_t,$$

where $c$ is a constant: besides the accumulated stochastic trend, the series is also pushed by the constant term each period.

Taking the first difference of the random walk, with or without drift, eliminates the stochastic trend and yields a stationary series ($\Delta X_t = \varepsilon_t$ or $\Delta X_t = c + \varepsilon_t$), so such series are called "difference-stationary".
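A quick sketch of difference-stationarity (the drift value 0.1 is an arbitrary choice): simulate a random walk with drift, difference it once, and check that the difference behaves like a constant plus white noise:

import numpy as np

np.random.seed(123)
e = np.random.standard_normal(size=1000)
x = np.cumsum(0.1 + e)     # random walk with drift c = 0.1
dx = np.diff(x)            # first difference = 0.1 + e_t exactly
print('diff mean %.3f (~ drift 0.1), diff std %.3f (~ 1)' % (dx.mean(), dx.std()))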

Integration of order d

A stationary time series is called "integrated of order zero", denoted $I(0)$.
If the first difference of a time series is stationary, the series is "integrated of order one", denoted $I(1)$, also called a unit root process.

More generally, if the d-th difference of a time series is a stationary process, the series is "integrated of order d", denoted $I(d)$.

What is a unit root?

Consider the following basic model:

$$X_t = \rho X_{t-1} + \varepsilon_t,$$

where $\varepsilon_t$ is white noise. This looks like the random walk model, but with an extra coefficient $\rho$.

We know the random walk ($\rho = 1$) is non-stationary, but once $|\rho| < 1$ the series becomes stationary:

when $|\rho| < 1$, the influence of past shocks decays as $t$ increases, so the series converges and is stable in the long run;
when $\rho = 1$, it is the irregular, non-stationary random walk process;
when $|\rho| > 1$, it is a non-stationary process with explosive growth.

The case $\rho = 1$ is what we call a unit root.

A comparison plot of the three cases makes this clearer:

import numpy as np
from matplotlib import pyplot as plt

np.random.seed(123)

def simulate(beta):
    # AR(1)-style recursion: y[t] = beta * y[t-1] + e[t]
    y = np.random.standard_normal(size=1000)
    for i in range(1, len(y)):
        y[i] = beta * y[i - 1] + y[i]
    return y

plt.figure(figsize=(20, 4))
for i, beta in enumerate([0.9, 1.0, 1.1]):
    plt.subplot(1, 3, i+1)
    plt.plot(simulate(beta))
    plt.title('beta: {}'.format(beta))
plt.show()
(Figure: simulated series with beta = 0.9, 1.0 and 1.1)

First-order difference equations

$$X_t = c + \rho X_{t-1} + \varepsilon_t$$

is a first-order stochastic difference equation (it contains a random term);

$$X_t = c + \rho X_{t-1}$$

without the random term is a deterministic difference equation (non-homogeneous, because it contains a constant term); and

$$X_t = \rho X_{t-1}$$

is the corresponding homogeneous difference equation.

According to difference-equation theory, the stability of the stochastic and deterministic equations is the same and is determined by the homogeneous equation, so to judge whether a difference equation is stable it suffices to check whether the corresponding homogeneous equation has a stable general solution.

The characteristic equation of the homogeneous equation above is $\lambda - \rho = 0$, and its solution $\lambda = \rho$ is called the characteristic root of the difference equation.

The process is stationary only if the characteristic roots lie inside the unit circle of the complex plane. If a root falls exactly on the circle it is called a unit root and the process is non-stationary, as in the random walk case; if a root falls outside the circle, the process is explosively non-stationary.

Written with the lag operator $L$ (where $L X_t = X_{t-1}$), the difference equation becomes

$$(1 - \rho L) X_t = c + \varepsilon_t,$$

and the characteristic equation of the lag polynomial (the inverse characteristic equation) is $1 - \rho z = 0$, whose root $z = 1/\rho$ is called the characteristic root of the autoregressive lag-operator polynomial.

Clearly the characteristic root $\lambda$ of the difference equation and the root $z$ of the lag polynomial are reciprocals of each other: the process is stationary when all $|\lambda| < 1$ (all roots of the characteristic equation inside the unit circle), equivalently when all $|z| > 1$ (all roots of the inverse characteristic equation outside the circle).
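A small numeric check of the reciprocal relationship (the AR(2) coefficients 0.5 and 0.3 are made up for illustration), solving both characteristic equations with numpy:

import numpy as np

phi1, phi2 = 0.5, 0.3   # assumed AR(2): X_t = 0.5 X_{t-1} + 0.3 X_{t-2} + e_t

# characteristic equation: lambda^2 - phi1*lambda - phi2 = 0
lam = np.roots([1, -phi1, -phi2])
# inverse characteristic equation: 1 - phi1*z - phi2*z^2 = 0
z = np.roots([-phi2, -phi1, 1])

print(np.abs(lam))   # all < 1 -> stationary
print(np.abs(z))     # all > 1, and each z is the reciprocal of a lambda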



The more general n-th order difference equation

$$X_t = c + \phi_1 X_{t-1} + \phi_2 X_{t-2} + \cdots + \phi_n X_{t-n} + \varepsilon_t$$

has the homogeneous equation $X_t = \phi_1 X_{t-1} + \cdots + \phi_n X_{t-n}$, whose characteristic equation is

$$\lambda^n - \phi_1 \lambda^{n-1} - \phi_2 \lambda^{n-2} - \cdots - \phi_n = 0;$$

the process is stationary when all of its roots lie inside the unit circle.

The lag-operator form (inverse characteristic equation) is

$$1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_n z^n = 0;$$

the process is stationary when all of its roots lie outside the unit circle.

In general, directly computing the roots of a high-order difference equation is complicated, and some simple rules can be used to check its stability instead.

In the n-th order difference equation, a necessary condition for all characteristic roots to lie inside the unit circle is:

$$\phi_1 + \phi_2 + \cdots + \phi_n < 1$$

A sufficient condition for all characteristic roots to lie inside the unit circle is:

$$|\phi_1| + |\phi_2| + \cdots + |\phi_n| < 1$$

If $\phi_1 + \phi_2 + \cdots + \phi_n = 1$, at least one of the characteristic roots equals 1.
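For example (a sketch; the coefficients are made up so that they sum to 1), the last rule can be confirmed numerically:

import numpy as np

# assumed AR(2) with phi1 = phi2 = 0.5, so phi1 + phi2 = 1
print(np.roots([1, -0.5, -0.5]))   # characteristic roots: 1.0 and -0.5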

A time series with one or more characteristic roots equal to 1 is called a unit root process.

The unit root test checks whether every characteristic root of the difference equation is less than 1 in modulus or whether some root equals 1. The case of roots greater than 1 is not tested, because a root greater than 1 gives an explosive, divergent series, which essentially never occurs in everyday data.


Test methods

DF test

The ADF test (Augmented Dickey-Fuller test) is one of the most commonly used unit root tests: it checks whether the series has a unit root to decide whether the series is stationary. The ADF test is an extension of the DF test, so let's look at the DF test first.

Based on the basic features of non-stationary series, Dickey and Fuller (1979) grouped them into three classes and proposed the DF test:

(1) when the series fluctuates irregularly up and down with no overall direction, it is modeled as an autoregressive process without drift;
(2) when the series clearly rises or falls over time but not too steeply, it is modeled as an autoregressive process with a drift term;
(3) when the series rises rapidly with time, it is modeled as an autoregressive process with drift and trend terms.

The corresponding test regressions are:

(i) autoregressive process without drift:

$$\Delta X_t = \delta X_{t-1} + \varepsilon_t$$

(ii) autoregressive process with a drift term:

$$\Delta X_t = \alpha + \delta X_{t-1} + \varepsilon_t$$

(iii) autoregressive process with drift and trend terms:

$$\Delta X_t = \alpha + \beta t + \delta X_{t-1} + \varepsilon_t$$

where $\alpha$ is the constant (drift) term, $\beta t$ is the time trend term, $\varepsilon_t$ is white noise without autocorrelation, and $\delta = \rho - 1$.

  • Null hypothesis $H_0$: $\delta = 0$ (there is a unit root; the time series is non-stationary)

  • Alternative hypothesis $H_1$: $\delta < 0$ (there is no unit root; the time series is stationary: without intercept and trend / with intercept / with intercept and trend, depending on the regression used)

If the test statistic is greater than the critical value (the p-value is greater than the significance level), the null hypothesis cannot be rejected and the series is non-stationary;
if the test statistic is less than the critical value (the p-value is less than the significance level), the null hypothesis is rejected and the series is considered stationary.
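As a sketch of usage (an assumption, not part of the original article): a plain DF test can be run with statsmodels' adfuller by forcing zero lagged-difference terms, here on a random walk:

import numpy as np
from statsmodels.tsa.stattools import adfuller

np.random.seed(123)
random_walk = np.cumsum(np.random.standard_normal(size=1000))

# maxlag=0 with autolag=None leaves no lagged differences -> the plain DF test
result = adfuller(random_walk, maxlag=0, regression='c', autolag=None)
print('DF statistic %.3f, p-value %.3f' % (result[0], result[1]))  # expect: fail to reject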

The following flow chart for unit root testing (found online) is provided for reference; by following it you can determine which type of stationarity a series has, and, if it is non-stationary, which type of non-stationarity:

(Figure: unit root test flow chart)

ADF test

The DF test regression is a first-order autoregression. To make the test usable for higher-order autoregressive processes, Dickey et al. extended it in 1984 by introducing higher-order lagged difference terms. The test regression becomes:

$$\Delta X_t = \alpha + \beta t + \delta X_{t-1} + \sum_{i=1}^{p} \gamma_i \Delta X_{t-i} + \varepsilon_t$$

The hypotheses are the same:

  • Null hypothesis $H_0$: $\delta = 0$ (there is a unit root; the time series is non-stationary)

  • Alternative hypothesis $H_1$: $\delta < 0$ (there is no unit root; the time series is stationary: without intercept and trend / with intercept / with intercept and trend)

The testing procedure is the same as for the DF test. To judge strictly whether a series is weakly stationary, test directly for stationarity without intercept and trend terms; if the null hypothesis cannot be rejected (e.g. p > 0.05), the series is non-stationary. If the series is non-stationary and not trend-stationary, apply first differencing or another stationarizing transform before testing again. If it is trend-stationary, differencing is not appropriate, since it would fall into overdifferencing.

Generate a trend stationary series:

import numpy as np
from matplotlib import pyplot as plt

np.random.seed(123)

y = np.random.standard_normal(size=100)
for i in range(1, len(y)):
    y[i] = 1 + 0.1*i + y[i]  # deterministic linear trend plus noise

plt.figure(figsize=(12, 6))
plt.plot(y)
plt.show()
(Figure: simulated trend-stationary series)

Check for stationarity:

from arch.unitroot import ADF
adf = ADF(y)
# print(adf.pvalue)
print(adf.summary().as_text())

adf = ADF(y)
adf.trend = 'ct'
print(adf.summary().as_text())
(Figure: ADF test summary output)

Notes:
The ADF test in the arch package allows trend to be specified as:
'n' (no intercept and no time trend term)
'c' (intercept term only)
'ct' (intercept and time trend terms)
'ctt' (intercept, time trend, and quadratic time trend terms)
corresponding to tests for the different types of stationarity. (lags defaults to the lag length minimizing the AIC.)
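Equivalently (a sketch of the same arch API, reusing the y generated above), the trend can be passed directly to the constructor instead of being set afterwards:

from arch.unitroot import ADF

adf = ADF(y, trend='ct')      # same test as setting adf.trend = 'ct'
print(adf.stat, adf.pvalue)   # statistic and p-value as plain floats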

In the first output above, trend was not specified, so by default the test is for stationarity with an intercept: p = 0.836 > 0.05, the null hypothesis is not rejected, and the series is non-stationary.
In the second output, trend='ct' tests for stationarity with intercept and time trend: p = 0.000 < 0.05, the null hypothesis is rejected, so the series is trend-stationary.

Now let's check whether the GDP data are stationary before and after seasonal differencing:

# Data from the first code block
from arch.unitroot import ADF
adf = ADF(gdp)
print(adf.summary().as_text())

adf = ADF(gdp_diff)
print(adf.summary().as_text())
(Figure: ADF test summary output for gdp and gdp_diff)

The p-value before differencing is 0.998 > 0.05, so the null hypothesis cannot be rejected and the data are non-stationary; after differencing the p-value is 0.003 < 0.05, so the null hypothesis can be rejected at the 5% significance level and the differenced data are stationary.

# Data from the first code block
from arch.unitroot import ADF
adf = ADF(gdp)
adf.trend = 'ct'
print(adf.summary().as_text())
(Figure: ADF test summary output with trend='ct')

Specifying the test type as stationary-with-intercept-and-trend gives p = 0.693 > 0.05; again the null hypothesis cannot be rejected, so before differencing the series is not even trend-stationary.

PP test

Phillips and Perron (1988) proposed a nonparametric test mainly to handle potential serial correlation and heteroskedasticity in the residuals; the asymptotic distribution and critical values of its statistic are the same as for the ADF test. It is also an older test with the same hypotheses and similar usage, and serves well as a supplement to the ADF test.

  • Null hypothesis: there is a unit root (the time series is non-stationary)

  • Alternative hypothesis: there is no unit root (the time series is stationary: without intercept and trend / with intercept / with intercept and trend)

Again construct a trend-stationary series and look at the PP test results:

import numpy as np
from arch.unitroot import PhillipsPerron

np.random.seed(123)

y = np.random.standard_normal(size=100)
for i in range(1, len(y)):
    y[i] = 1 + 0.1*i + y[i]


pp = PhillipsPerron(y)
print(pp.summary().as_text())

pp = PhillipsPerron(y)
pp.trend = 'ct'
print(pp.summary().as_text())
(Figure: PP test summary output)

Without specifying trend, the default tests whether the process is stationary with an intercept: p = 0.055 > 0.05, and the test statistic -2.825 is greater than the 5% critical value -2.89, so the null cannot be rejected at the 5% level. However, the statistic is less than the 10% critical value -2.58, so at the 10% significance level the null is rejected and the series is judged stationary.

Specifying trend='ct' tests whether the process is stationary with an intercept and a time trend: p = 0.000 < 0.05, so the series is trend-stationary; in fact the test statistic -10.009 is below the 1% critical value -4.05, so it is trend-stationary even at the 1% significance level.

Based on the above test results, it can be determined that the series is trend-stationary.

DF-GLS test

The DF-GLS test is a unit root test proposed by Elliott, Rothenberg, and Stock in 1996. Its full name is Dickey-Fuller Test with GLS Detrending, i.e. a Dickey-Fuller test that removes the trend by generalized least squares; it is currently among the most powerful unit root tests.

The DF-GLS test first applies a "quasi-difference" to the data, uses the quasi-differenced data to estimate and remove the trend from the original series (GLS detrending), and then performs a unit root test on the detrended data using the ADF regression, which at this point no longer contains constant or time trend terms.

  • Null hypothesis: The series has a unit root (the time series is non-stationary)

  • Alternative Hypothesis: The series does not have a unit root (time series is stationary or trend stationary)

Also construct a trend stationary series to see the test effect:

import numpy as np
from arch.unitroot import DFGLS

np.random.seed(123)

y = np.random.standard_normal(size=100)
for i in range(1, len(y)):
    y[i] = 1 + 0.1*i + y[i]

dfgls = DFGLS(y)
print(dfgls.summary().as_text())

dfgls = DFGLS(y)
dfgls.trend = 'ct'
print(dfgls.summary().as_text())
(Figure: DF-GLS test summary output)

Without specifying trend, the null hypothesis cannot be rejected and the series looks non-stationary; with trend='ct', p < 0.05, the null is rejected, and the series is stationary around an intercept and time trend.

Let's construct a non-stationary series with a unit root to see the test results:

import numpy as np
from arch.unitroot import DFGLS

np.random.seed(123)

y = np.random.standard_normal(size=100)
for i in range(1, len(y)):
    y[i] = 0.1 + y[i-1] + y[i]  # random walk with drift 0.1

dfgls = DFGLS(y)
print(dfgls.summary().as_text())

dfgls = DFGLS(y)
dfgls.trend = 'ct'
print(dfgls.summary().as_text())
(Figure: DF-GLS test summary output for the unit root series)

One p-value is 0.645 and the other 0.347, both above 0.05 (and 0.10): whichever trend type is specified, the test fails to reject, so the series is non-stationary. (For the DF-GLS test, trend can only be specified as 'c' or 'ct'.)

KPSS test

Another well-known test related to unit roots is the KPSS test, proposed by Kwiatkowski, Phillips, Schmidt, and Shin in 1992. Its biggest difference from the three tests above is that the null hypothesis is that the series is stationary or trend-stationary, while the alternative hypothesis is that a unit root exists.

  • Null hypothesis: The series does not have a unit root (time series is stationary or trend stationary)

  • Alternative hypothesis: The series has a unit root (the time series is non-stationary)

import numpy as np
from arch.unitroot import KPSS

np.random.seed(123)

y = np.random.standard_normal(size=100)
for i in range(1, len(y)):
    y[i] = 0.1 + y[i-1] + y[i]

kpss = KPSS(y)
print(kpss.summary().as_text())

kpss = KPSS(y)
kpss.trend = 'ct'
print(kpss.summary().as_text())
(Figure: KPSS test summary output)

Note that the null hypothesis of the KPSS test is that there is no unit root. Under the default trend type the p-value is 0.000: the null is rejected, there is a unit root, and the series is non-stationary. With trend='ct', the p-value is 0.115 > 0.05: the null is not rejected and the series would be judged trend-stationary, which is wrong here, since the series was generated as a random walk with drift.

No test guarantees a correct conclusion 100% of the time. The PP test can be treated as a supplement to the ADF test, and the KPSS test can likewise be combined with the others, declaring a series stationary (or trend-stationary) only when the tests agree; one combined check is sketched below.
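A small helper along those lines (a sketch, not a standard API; white_noise is the series from the first code block): conclude "stationary" only when the ADF test rejects its unit-root null and the KPSS test fails to reject its stationarity null.

import numpy as np
from arch.unitroot import ADF, KPSS

np.random.seed(123)
white_noise = np.random.standard_normal(size=1000)

def cross_check(y, alpha=0.05, trend='c'):
    # ADF null: unit root; KPSS null: stationary (around the given trend)
    adf_p = ADF(y, trend=trend).pvalue
    kpss_p = KPSS(y, trend=trend).pvalue
    if adf_p < alpha and kpss_p > alpha:
        return 'stationary'
    if adf_p >= alpha and kpss_p <= alpha:
        return 'non-stationary'
    return 'inconclusive (adf_p=%.3f, kpss_p=%.3f)' % (adf_p, kpss_p)

print(cross_check(white_noise))  # expect: 'stationary'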

Besides the methods above, there are also the Zivot-Andrews test, the Variance Ratio test, and others.

The code above uses the arch package in Python; the widely used statsmodels package also implements unit root tests, with the same results.

Method/Model                   Package/Module (function/class)
Augmented Dickey-Fuller test   statsmodels.tsa.stattools (adfuller); arch.unitroot (ADF)
Phillips-Perron test           arch.unitroot (PhillipsPerron)
Dickey-Fuller GLS test         arch.unitroot (DFGLS)
KPSS test                      statsmodels.tsa.stattools (kpss); arch.unitroot (KPSS)
Zivot-Andrews test             statsmodels.tsa.stattools (zivot_andrews); arch.unitroot (ZivotAndrews)
Variance Ratio test            arch.unitroot (VarianceRatio)

References
[1] http://course.sdu.edu.cn/G2S/eWebEditor/uploadfile/20140525165255371.pdf
[2] https://max.book118.com/html/2016/0518/43276093.shtm
[3] https://doc.mbalib.com/view/ef1783f2fa1892f6ad016281ed743d78.html
[4] https://www.stata.com/manuals13/tsdfgls.pdf
[5] https://zhuanlan.zhihu.com/p/50553021
[6] https://arch.readthedocs.io/en/latest/index.html
