Multivariate-input, univariate multi-step-output time series forecast using an LSTM in Keras - what approach?
I have been trying to forecast the sales of a product (in $) from multiple features, using an LSTM model in Keras. I am still new to LSTM modelling, so I might be doing something wrong.
I have hourly data for about a year and a half, resulting in 11904 time-steps.
The features consist of several numerical weather variables and a couple of one-hot encoded features such as day of the week and hour of the day. In the end I want to forecast the next n hours of the product's sales using the last m hours of every feature (including the sales). Below is a small example of how I prepared the data. My time-series is structured as follows, with "T" representing the value I am interested in and "a" a feature time-series. I have 41 features at the moment.
dataset = array([['T0', 'a0'],
                 ['T1', 'a1'],
                 ['T2', 'a2'],
                 ['T3', 'a3'],
                 ['T4', 'a4'],
                 ['T5', 'a5'],
                 ['T6', 'a6'],
                 ['T7', 'a7'],
                 ['T8', 'a8'],
                 ['T9', 'a9']], dtype='<U2')
Since I only have one time-series, I am using a sliding-window approach. In this example I use the 5 previous time-steps to predict 3 steps ahead.
My X looks like this:
X = array([[['T0', 'a0'],
            ['T1', 'a1'],
            ['T2', 'a2'],
            ['T3', 'a3'],
            ['T4', 'a4']],

           [['T1', 'a1'],
            ['T2', 'a2'],
            ['T3', 'a3'],
            ['T4', 'a4'],
            ['T5', 'a5']],

           [['T2', 'a2'],
            ['T3', 'a3'],
            ['T4', 'a4'],
            ['T5', 'a5'],
            ['T6', 'a6']]], dtype='<U2')
and my target like this:
y = array([['T5', 'T6', 'T7'],
           ['T6', 'T7', 'T8'],
           ['T7', 'T8', 'T9']], dtype='<U2')
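This is roughly the windowing logic I use, shown as a simplified helper (make_windows and its defaults exist only for this example; it assumes a NumPy array with the target in column 0):

import numpy as np

def make_windows(data, n_in=5, n_out=3, target_col=0):
    # Slide a window over data (time-steps x features):
    # each X sample is the last n_in steps of all features,
    # each y sample is the next n_out steps of the target column.
    X, y = [], []
    for i in range(len(data) - n_in - n_out + 1):
        X.append(data[i:i + n_in, :])
        y.append(data[i + n_in:i + n_in + n_out, target_col])
    return np.array(X), np.array(y)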
Q1: Is this a viable approach? I have read that this kind of preparation is not strictly necessary for an LSTM, since it can form the windows itself.
Right now I want to forecast the next 24 hours using the last 72 hours.
For my network, the shapes of X and y look like this:
trainX.shape = (11699, 72, 41)  # my test set was very small
trainY.shape = (11699, 24)
And the LSTM model itself:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

n_features = 41
neurons = 50
look_back = 72
look_ahead = 24

model = Sequential()
model.add(LSTM(neurons, input_shape=(look_back, n_features)))
model.add(Dropout(0.2))
model.add(Dense(look_ahead))
model.compile(loss='mse', optimizer='rmsprop')
hist = model.fit(trainX, trainY, epochs=20, shuffle=False)
I think I do get reasonable results (a dip between 00:00 and 06:00, a maximum around 18:00-20:00, etc.), but I have trouble telling whether the model really does what I intend it to do.
Q2: My understanding is that I feed in the last 72 time-steps of 41 features, and the network puts out a single step with 24 values - since that is how my training data is structured - which I then interpret as 24 time-steps. Does the model learn the sequential aspect of the output this way?
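To make Q2 concrete: the alternative I am asking about would be an encoder-decoder layout, where the 24 outputs are produced as a real sequence. A minimal sketch (the layer sizes are placeholders, and the targets would need shape (samples, 24, 1) here):

from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, Dense, TimeDistributed

model = Sequential()
model.add(LSTM(50, input_shape=(72, 41)))     # encoder: reads the 72 input steps
model.add(RepeatVector(24))                   # repeats its summary for each of the 24 output steps
model.add(LSTM(50, return_sequences=True))    # decoder: emits the 24 steps in order
model.add(TimeDistributed(Dense(1)))          # one sales value per output step
model.compile(loss='mse', optimizer='rmsprop')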
Q3: By dividing my training data into 72-hour chunks, do I lose every seasonality/pattern longer than 72 hours?
Q4: Strictly speaking, am I even feeding in comparable samples? 72 hours in July have completely different weather features than, say, 72 hours in December.
Q5: So far I am only normalizing/scaling my data with MinMaxScaler. If some of the weather series or my sales are not stationary, would I need to make them stationary beforehand?
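To clarify what I mean in Q5, this is the kind of check/transform I have in mind (a sketch using the ADF test from statsmodels; difference_if_nonstationary is my own helper name):

import numpy as np
from statsmodels.tsa.stattools import adfuller

def difference_if_nonstationary(series, alpha=0.05):
    # Augmented Dickey-Fuller test: if the unit-root hypothesis cannot be
    # rejected (p-value above alpha), first-difference the series.
    p_value = adfuller(series)[1]
    if p_value > alpha:
        return np.diff(series), True   # remember to invert the differencing on any forecast
    return series, False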
Q6: What would I need to change to make the model learn longer patterns?
I have read that too many time-steps are not good either.
I found something about stateful LSTMs but have not fully figured them out yet - is that another viable approach? Would I change the input shape to (1, 11699, 41) there? See the sketch below for what I think this would look like.
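This is how I understand a stateful setup would look (a sketch only; the layer size and epoch count are placeholders):

from keras.models import Sequential
from keras.layers import LSTM, Dense

batch_size = 1
model = Sequential()
# stateful=True carries the cell state across consecutive batches, so
# patterns longer than one window can in principle be retained; the
# batch size must then be fixed via batch_input_shape.
model.add(LSTM(50, batch_input_shape=(batch_size, 72, 41), stateful=True))
model.add(Dense(24))
model.compile(loss='mse', optimizer='rmsprop')

for epoch in range(20):
    # shuffle=False keeps the windows in chronological order
    model.fit(trainX, trainY, batch_size=batch_size, epochs=1, shuffle=False)
    model.reset_states()   # reset once per full pass over the series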
Thanks to everyone who can shed some light.
keras time-series lstm forecasting multi-step
asked Nov 22 at 16:07 by Folanir