Happy Learning Starts with Translation, Part 10: Time Series Forecasting with the Long Short-Term Memory Network in Python

Tutorial Extensions

There are many extensions to this tutorial that we may consider.

Perhaps you could explore some of these yourself and post your discoveries in the comments below.

  • Multi-Step Forecast. The experimental setup could be changed to predict the next n time steps rather than the next single time step. This would also permit a larger batch size and faster training. Note that we are essentially performing 12 one-step forecasts in this tutorial, given that the model is not updated, although new observations are available and are used as input variables.
  • Tune LSTM Model. The model was not tuned; instead, the configuration was found with some quick trial and error. I believe much better results could be achieved by tuning at least the number of neurons and the number of training epochs. I also think early stopping via a callback might be useful during training. (Translator's note: what does early stopping via a callback mean?)
  • Seed State Experiments. It is not clear whether seeding the system prior to forecasting by predicting all of the training data is beneficial. It seems like a good idea in theory, but this needs to be demonstrated. Also, perhaps other methods of seeding the model prior to forecasting would be beneficial. (Translator's note: at first I read "seed" as the random-seed issue. If no seed is fixed, initialization varies from run to run, and the previous section showed that different initializations give noticeably different RMSEs; fixing the seed freezes the random numbers, which may lock in either a good or a bad result. That initialization matters this much also suggests the sample is simply too small for training to uncover stronger structure in the data. The author's "seeding" here, however, means warming up the LSTM's internal state by predicting over the training data before forecasting, which is a separate issue from the random seed.)
  • Update Model. The model could be updated in each time step of the walk-forward validation. Experiments are needed to determine whether it is better to refit the model from scratch or to update the weights with a few more training epochs that include the new sample. (Translator's note: this is the static-versus-dynamic question; I lean toward the dynamic approach, since more recent data should usually have a larger influence on the forecast.)
  • Input Time Steps. The LSTM input supports multiple time steps for a sample. Experiments are needed to see if including lag observations as time steps provides any benefit. (Translator's note: I found this confusing at first; it means the lag values belonging to one sample can be arranged along the time-step axis of the input, rather than as parallel features.)
  • Input Lag Features. Lag observations may be included as input features. Experiments are needed to see if including lag features provides any benefit, not unlike an AR(k) linear model. (Translator's note: AR stands for autoregressive model; k is the number of lag terms used as inputs.)
  • Input Error Series. An error series may be constructed (forecast errors from a persistence model) and used as an additional input feature, not unlike an MA(k) linear model. Experiments are needed to see if this provides any benefit. (Translator's note: MA stands for moving average model; it regresses on the last k forecast errors.)
  • Learn Non-Stationary. The LSTM network may be able to learn the trend in the data and make reasonable predictions. Experiments are needed to see whether temporal dependent structures left in the data, like trends and seasonality, can be learned and effectively predicted by LSTMs.
  • Contrast Stateless. Stateful LSTMs were used in this tutorial. The results should be compared with stateless LSTM configurations.
  • Statistical Significance. The multiple-repeats experimental protocol can be extended further to include statistical significance tests to demonstrate whether the differences between populations of RMSE results from different configurations are statistically significant.
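The multi-step framing in the first bullet can be sketched with pandas: each row pairs the current value and its lags with the next n_out values as targets. The function name and column labels below are mine, not from the tutorial.

```python
import pandas as pd

def to_supervised_multistep(series, n_in=1, n_out=2):
    """Frame a univariate series so each row pairs the current value and
    n_in lags with the next n_out values as targets; NaN rows are dropped."""
    df = pd.DataFrame({'t': series})
    cols = {}
    for i in range(n_in, 0, -1):
        cols['t-%d' % i] = df['t'].shift(i)   # lag inputs
    cols['t'] = df['t']                        # current observation
    for j in range(1, n_out + 1):
        cols['t+%d' % j] = df['t'].shift(-j)   # future targets
    return pd.DataFrame(cols).dropna()

framed = to_supervised_multistep([10, 20, 30, 40, 50, 60], n_in=1, n_out=2)
print(framed)
```

An LSTM trained on this frame would emit n_out values per sample, e.g. via a `Dense(n_out)` output layer.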
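On "early stopping via a callback": Keras provides this as `EarlyStopping(monitor='val_loss', patience=...)`. The logic it implements is roughly the following, shown here as a toy sketch on a hard-coded validation-loss history (the numbers are illustrative):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch training would stop at: the first epoch after which
    the best validation loss has failed to improve for `patience` epochs."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch  # stop here; the best loss was `wait` epochs ago
    return len(val_losses) - 1  # patience never exhausted

history = [0.90, 0.70, 0.60, 0.62, 0.61, 0.65]
print(early_stop_epoch(history, patience=2))  # stops at epoch 4
```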
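A side note on random seeds, since initialization-driven RMSE spread comes up around these experiments (and is distinct from the state-seeding bullet): fixing the seed makes runs reproducible, but it can just as easily freeze a bad initialization as a good one, which is why the multiple-repeats protocol summarizes a distribution instead. A minimal numpy illustration:

```python
import numpy as np

def init_weights(seed):
    """Draw a small 'weight' vector the way a random initializer might."""
    rng = np.random.RandomState(seed)
    return rng.randn(4)

# Same seed -> identical initialization on every run.
a = init_weights(42)
b = init_weights(42)
print(np.array_equal(a, b))  # True

# Different seeds -> different starting points, hence different RMSEs.
c = init_weights(7)
print(np.array_equal(a, c))  # False
```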
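The refit-versus-update choice in the Update Model bullet can be made concrete with a generic walk-forward loop. The toy mean "model" below stands in for the LSTM, and all names are illustrative:

```python
def walk_forward(train, test, fit, predict, update=True):
    """Walk-forward validation: forecast one step ahead, then append the
    true observation to the history and optionally refit the model."""
    history = list(train)
    model = fit(history)
    preds = []
    for obs in test:
        preds.append(predict(model, history))
        history.append(obs)           # new observation becomes available
        if update:
            model = fit(history)      # dynamic: refit including the new sample
    return preds

fit = lambda h: sum(h) / len(h)       # toy 'model': mean of the history
predict = lambda m, h: m
print(walk_forward([1, 2, 3], [4, 5], fit, predict, update=True))   # [2.0, 2.5]
print(walk_forward([1, 2, 3], [4, 5], fit, predict, update=False))  # [2.0, 2.0]
```

For a real LSTM, `fit` could either train from scratch or run a few extra epochs from the current weights; the bullet's point is that both variants need to be measured.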
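For Input Time Steps: Keras LSTMs expect input shaped [samples, timesteps, features], so the same k lag observations per sample can sit on either axis. A numpy sketch of the two layouts (values are illustrative):

```python
import numpy as np

# 4 samples, each carrying 3 lag observations as a flat row.
X_flat = np.arange(12, dtype=float).reshape(4, 3)

# Layout 1: one time step, three features (lags as parallel inputs).
X_features = X_flat.reshape(4, 1, 3)

# Layout 2: three time steps, one feature (lags as a short sequence).
X_timesteps = X_flat.reshape(4, 3, 1)

print(X_features.shape)   # (4, 1, 3)
print(X_timesteps.shape)  # (4, 3, 1)
```

The second layout is what the bullet proposes: the LSTM then steps through the lags in order instead of seeing them all at once.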
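To make AR(k) concrete: k is the number of lagged values used as inputs, so an AR(3)-style design predicts y(t) from y(t-1) through y(t-3). Feeding the same k lags to the LSTM as features mirrors that structure. A small pandas sketch (function and column names are mine):

```python
import pandas as pd

def make_lag_features(series, k=3):
    """Build an AR(k)-style design matrix: k lag columns as inputs,
    the original value `y` as the target; incomplete rows are dropped."""
    df = pd.DataFrame({'y': series})
    for i in range(1, k + 1):
        df['lag_%d' % i] = df['y'].shift(i)
    return df.dropna()

df = make_lag_features([1, 2, 3, 4, 5, 6], k=3)
print(df)
```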
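Similarly for MA(k): it models the next value from the last k forecast errors rather than the last k observations. The error series from a persistence forecast (predict the previous value) could be built like this:

```python
def persistence_errors(series):
    """Forecast each value as the previous one and return the errors,
    i.e. the residual series a persistence model leaves behind."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]

errors = persistence_errors([10, 12, 11, 15])
print(errors)  # [2, -1, 4]
```

Lagged values of `errors` would then join the model inputs alongside the lagged observations.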
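For the statistical-significance extension, one option is a t-test over the two RMSE populations collected from repeated runs. The RMSE lists below are made-up placeholders; scipy's `ttest_ind` is one reasonable choice, a non-parametric test such as Wilcoxon another:

```python
from scipy.stats import ttest_ind

# Hypothetical RMSE results from 10 repeats of two configurations.
rmse_config_a = [95.2, 97.1, 94.8, 96.3, 95.9, 97.4, 94.5, 96.0, 95.7, 96.8]
rmse_config_b = [92.1, 93.4, 91.8, 92.9, 93.0, 92.5, 91.6, 93.2, 92.7, 92.3]

t_stat, p_value = ttest_ind(rmse_config_a, rmse_config_b)
print('t=%.3f p=%.5f' % (t_stat, p_value))
# A small p (e.g. < 0.05) suggests the difference in mean RMSE is unlikely
# to be explained by run-to-run randomness alone.
```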

Reposted from blog.csdn.net/dreamscape9999/article/details/80673438