Just one hour! Accurately predicting stock prices with an RNN (the most practical tutorial on the web, worth bookmarking!) | Day 9


I. Introduction

Today is Day 9. We kick off the RNN series by predicting a stock's opening price; the final R2 reaches 0.72. I will also be updating the CNN series later.

My environment:

  • Language environment: Python 3.6.5
  • Editor: Jupyter Notebook
  • Deep learning framework: TensorFlow 2.4.1

Highlights from past issues:

From the column: [100 Cases of Deep Learning]

For reprints, please contact me via the contact info on the left (visible on desktop), with the note: CSDN reprint.

II. What is an RNN?

A traditional neural network has a relatively simple structure: input layer - hidden layer - output layer.

[Figure: structure of a traditional neural network]

The biggest difference between an RNN and a traditional neural network is that, at every step, the output of the previous step is carried into the next hidden layer and trained together with the current input. As shown below:

[Figure: RNN structure, with each hidden state fed into the next step]
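Concretely, at each step the new hidden state is computed from the current input and the previous hidden state. Below is a minimal NumPy sketch of that recurrence (the dimensions and random weights are illustration-only values, not the ones used later in this article):

import numpy as np

input_dim, hidden_dim = 3, 4                     # toy sizes for illustration
rng = np.random.default_rng(0)

W_x = rng.normal(size=(hidden_dim, input_dim))   # input-to-hidden weights
W_h = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b   = np.zeros(hidden_dim)

def rnn_step(x_t, h_prev):
    """One step: h_t = tanh(W_x @ x_t + W_h @ h_prev + b)"""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(hidden_dim)                         # initial hidden state
for x_t in rng.normal(size=(5, input_dim)):      # a sequence of 5 inputs
    h = rnn_step(x_t, h)                         # each step sees the previous state
print(h.shape)                                   # (4,)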

Let's walk through a concrete case to see how an RNN works:

Suppose the user says "What time is it?". The network first splits this sentence into five basic units (four words plus a question mark).

[Figure: the sentence split into five basic units]

Then the five units are fed into the RNN in order: first, "what" is used as the RNN's input, producing output O1.

[Figure: "what" fed into the RNN, producing output O1]

Then, input "time" into the RNN network in order to get the output 02.

In this process, we can see that when "time" is input, the output of the previous "what" will also 02affect the output (half of the hidden layer is black).

By analogy, the results produced by all earlier inputs influence every later output (the final circle contains all of the previous colors).

When the network judges the intent of the sentence, only the last output, O5, is needed.

III. Preparation

1. Set up the GPU

If you are using a CPU, you can comment out this part of the code.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpus[0]], "GPU")          # use only the first GPU
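To confirm that the setting took effect, you can print the devices TensorFlow sees (an optional quick check):

print("Physical GPUs:", tf.config.list_physical_devices("GPU"))
print("Visible GPUs :", tf.config.get_visible_devices("GPU"))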

2. Load data

import os,math
from tensorflow.keras.layers import Dropout, Dense, SimpleRNN
from sklearn.preprocessing   import MinMaxScaler
from sklearn                 import metrics
import numpy             as np
import pandas            as pd
import tensorflow        as tf
import matplotlib.pyplot as plt
# Support Chinese characters in matplotlib plots
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display minus signs correctly
data = pd.read_csv('./datasets/SH600519.csv')  # read the stock data file

data
|      | Unnamed: 0 | date       | open     | close    | high     | low      | volume    | code   |
|------|------------|------------|----------|----------|----------|----------|-----------|--------|
| 0    | 74         | 2010-04-26 | 88.702   | 87.381   | 89.072   | 87.362   | 107036.13 | 600519 |
| 1    | 75         | 2010-04-27 | 87.355   | 84.841   | 87.355   | 84.681   | 58234.48  | 600519 |
| 2    | 76         | 2010-04-28 | 84.235   | 84.318   | 85.128   | 83.597   | 26287.43  | 600519 |
| 3    | 77         | 2010-04-29 | 84.592   | 85.671   | 86.315   | 84.592   | 34501.20  | 600519 |
| 4    | 78         | 2010-04-30 | 83.871   | 82.340   | 83.871   | 81.523   | 85566.70  | 600519 |
| ...  | ...        | ...        | ...      | ...      | ...      | ...      | ...       | ...    |
| 2421 | 2495       | 2020-04-20 | 1221.000 | 1227.300 | 1231.500 | 1216.800 | 24239.00  | 600519 |
| 2422 | 2496       | 2020-04-21 | 1221.020 | 1200.000 | 1223.990 | 1193.000 | 29224.00  | 600519 |
| 2423 | 2497       | 2020-04-22 | 1206.000 | 1244.500 | 1249.500 | 1202.220 | 44035.00  | 600519 |
| 2424 | 2498       | 2020-04-23 | 1250.000 | 1252.260 | 1265.680 | 1247.770 | 26899.00  | 600519 |
| 2425 | 2499       | 2020-04-24 | 1248.000 | 1250.560 | 1259.890 | 1235.180 | 19122.00  | 600519 |

2426 rows × 8 columns

"""
前(2426-300=2126)天的开盘价作为训练集,表格从0开始计数,2:3 是提取[2:3)列,前闭后开,故提取出C列开盘价
后300天的开盘价作为测试集
"""
training_set = data.iloc[0:2426 - 300, 2:3].values  
test_set = data.iloc[2426 - 300:, 2:3].values  
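As a quick sanity check (assuming the CSV has the 2,426 rows shown above), the split should yield 2,126 training prices and 300 test prices:

print(training_set.shape)  # expected: (2126, 1)
print(test_set.shape)      # expected: (300, 1)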

IV. Data preprocessing

1. Normalization

sc           = MinMaxScaler(feature_range=(0, 1))  # scale prices to [0, 1]
training_set = sc.fit_transform(training_set)      # fit on the training set only
test_set     = sc.transform(test_set)              # reuse the training min/max
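MinMaxScaler rescales each value as x' = (x - min) / (max - min), where min and max are learned from the training set; calling transform (rather than fit_transform) on the test set reuses those statistics and avoids leaking test information. A tiny demo with toy prices, using the numpy and MinMaxScaler already imported above:

demo = np.array([[80.0], [90.0], [100.0]])    # toy prices, illustration only
demo_sc = MinMaxScaler(feature_range=(0, 1))  # same scaler type as above
print(demo_sc.fit_transform(demo).ravel())    # [0.  0.5 1. ]
print(demo_sc.transform([[95.0]]).ravel())    # [0.75] -- scaled with the fitted min/max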

2. Build the training and test sets

x_train = []
y_train = []

x_test = []
y_test = []

"""
使用前60天的开盘价作为输入特征x_train
    第61天的开盘价作为输入标签y_train
    
for循环共构建2426-300-60=2066组训练数据。
       共构建300-60=260组测试数据
"""
for i in range(60, len(training_set)):
    x_train.append(training_set[i - 60:i, 0])
    y_train.append(training_set[i, 0])
    
for i in range(60, len(test_set)):
    x_test.append(test_set[i - 60:i, 0])
    y_test.append(test_set[i, 0])
    
# Shuffle the training set. Re-seeding with the same value before each shuffle
# produces the same permutation, so x_train and y_train stay aligned.
np.random.seed(7)
np.random.shuffle(x_train)
np.random.seed(7)
np.random.shuffle(y_train)
tf.random.set_seed(7)
"""
将训练数据调整为数组(array)

调整后的形状:
x_train:(2066, 60, 1)
y_train:(2066,)
x_test :(240, 60, 1)
y_test :(240,)
"""
x_train, y_train = np.array(x_train), np.array(y_train) # x_train形状为:(2066, 60, 1)
x_test,  y_test  = np.array(x_test),  np.array(y_test)

"""
输入要求:[送入样本数, 循环核时间展开步数, 每个时间步输入特征个数]
"""
x_train = np.reshape(x_train, (x_train.shape[0], 60, 1))
x_test  = np.reshape(x_test,  (x_test.shape[0], 60, 1))
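To make the windowing concrete, here is the same sliding-window construction on a toy sequence, with a window of 3 instead of 60 (illustration only):

seq = np.array([1, 2, 3, 4, 5, 6], dtype=float)
window = 3
xs, ys = [], []
for i in range(window, len(seq)):
    xs.append(seq[i - window:i])  # the previous `window` values...
    ys.append(seq[i])             # ...are used to predict the next one
print(np.array(xs))  # [[1. 2. 3.] [2. 3. 4.] [3. 4. 5.]]
print(np.array(ys))  # [4. 5. 6.]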

V. Build the model

model = tf.keras.Sequential([
    SimpleRNN(80, return_sequences=True),  # return the full output sequence so the next RNN layer receives a sequence
    Dropout(0.2),                          # dropout to reduce overfitting
    SimpleRNN(80),                         # return only the last output
    Dropout(0.2),
    Dense(1)
])
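When stacking recurrent layers, the first SimpleRNN must return its full output sequence, shape (batch, 60, 80), so that the second one has a sequence to consume; the second returns only its last output, shape (batch, 80). A quick shape check (an optional sketch; the `probe` models are throwaway names for illustration):

probe1 = tf.keras.Sequential([SimpleRNN(80, return_sequences=True, input_shape=(60, 1))])
print(probe1.output_shape)  # (None, 60, 80) -- the full sequence
probe2 = tf.keras.Sequential([SimpleRNN(80, input_shape=(60, 1))])
print(probe2.output_shape)  # (None, 80)     -- the last step only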

VI. Compile the model

# This application only tracks the loss, not accuracy, so the metrics option
# is omitted; each epoch will report the loss value only.
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='mean_squared_error')  # mean squared error as the loss function

VII. Train the model

history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=20,
                    validation_data=(x_test, y_test),
                    validation_freq=1)  # how many epochs between validation runs

model.summary()
Epoch 1/20
33/33 [==============================] - 6s 123ms/step - loss: 0.1809 - val_loss: 0.0310
Epoch 2/20
33/33 [==============================] - 3s 105ms/step - loss: 0.0257 - val_loss: 0.0721
Epoch 3/20
33/33 [==============================] - 3s 85ms/step - loss: 0.0165 - val_loss: 0.0059
Epoch 4/20
33/33 [==============================] - 3s 85ms/step - loss: 0.0097 - val_loss: 0.0111
Epoch 5/20
33/33 [==============================] - 3s 90ms/step - loss: 0.0099 - val_loss: 0.0139
Epoch 6/20
33/33 [==============================] - 3s 105ms/step - loss: 0.0067 - val_loss: 0.0167
           ...................
Epoch 16/20
33/33 [==============================] - 3s 95ms/step - loss: 0.0035 - val_loss: 0.0149
Epoch 17/20
33/33 [==============================] - 4s 111ms/step - loss: 0.0028 - val_loss: 0.0111
Epoch 18/20
33/33 [==============================] - 4s 110ms/step - loss: 0.0029 - val_loss: 0.0061
Epoch 19/20
33/33 [==============================] - 3s 104ms/step - loss: 0.0027 - val_loss: 0.0110
Epoch 20/20
33/33 [==============================] - 3s 90ms/step - loss: 0.0028 - val_loss: 0.0037
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
simple_rnn (SimpleRNN)       (None, 60, 80)            6560      
_________________________________________________________________
dropout (Dropout)            (None, 60, 80)            0         
_________________________________________________________________
simple_rnn_1 (SimpleRNN)     (None, 80)                12880     
_________________________________________________________________
dropout_1 (Dropout)          (None, 80)                0         
_________________________________________________________________
dense (Dense)                (None, 1)                 81        
=================================================================
Total params: 19,521
Trainable params: 19,521
Non-trainable params: 0
_________________________________________________________________
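The parameter counts in the summary can be verified by hand: a SimpleRNN layer has units × (input_dim + units + 1) parameters, and the "33/33" per epoch comes from ceil(2066 / 64) batches. A quick arithmetic check, using the math module imported earlier:

print(80 * (1 + 80 + 1))     # first SimpleRNN : 6560
print(80 * (80 + 80 + 1))    # second SimpleRNN: 12880
print(80 + 1)                # Dense(1)        : 81
print(math.ceil(2066 / 64))  # batches/epoch   : 33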

VIII. Results visualization

1. Plot the loss curves

plt.plot(history.history['loss']    , label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss by K同学啊')
plt.legend()
plt.show()

2. Prediction

predicted_stock_price = model.predict(x_test)                        # feed the test set into the model
predicted_stock_price = sc.inverse_transform(predicted_stock_price)  # map predictions back from (0,1) to the original price range
real_stock_price = sc.inverse_transform(test_set[60:])               # map the real values back from (0,1) to the original price range

# Plot the real prices against the predicted prices
plt.plot(real_stock_price, color='red', label='Real Stock Price')
plt.plot(predicted_stock_price, color='blue', label='Predicted Stock Price')
plt.title('Stock Price Prediction by K同学啊')
plt.xlabel('Time')
plt.ylabel('Stock Price')
plt.legend()
plt.show()
[Figure: real vs. predicted opening prices on the test set]
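A note on the two inverse_transform calls above: inverse_transform undoes the min-max scaling via x = x' × (max − min) + min, and real_stock_price uses test_set[60:] because the first 60 test prices were consumed as the first input window, leaving 240 labels. A toy check with the classes already imported:

toy = MinMaxScaler().fit(np.array([[80.0], [100.0]]))  # min=80, max=100
print(toy.transform([[95.0]]))          # [[0.75]]
print(toy.inverse_transform([[0.75]]))  # [[95.]]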

3. Evaluation

"""
MSE  :均方误差    ----->  预测值减真实值求平方后求均值
RMSE :均方根误差  ----->  对均方误差开方
MAE  :平均绝对误差----->  预测值减真实值求绝对值后求均值
R2   :决定系数,可以简单理解为反映模型拟合优度的重要的统计量

详细介绍可以参考文章:https://blog.csdn.net/qq_38251616/article/details/107997435
"""
# sklearn's convention is metrics.xxx(y_true, y_pred). MSE/RMSE/MAE are
# symmetric in their arguments, but r2_score is not, so the real prices go first.
MSE   = metrics.mean_squared_error(real_stock_price, predicted_stock_price)
RMSE  = metrics.mean_squared_error(real_stock_price, predicted_stock_price)**0.5
MAE   = metrics.mean_absolute_error(real_stock_price, predicted_stock_price)
R2    = metrics.r2_score(real_stock_price, predicted_stock_price)

print('MSE : %.5f' % MSE)
print('RMSE: %.5f' % RMSE)
print('MAE : %.5f' % MAE)
print('R2  : %.5f' % R2)
MSE : 1833.92534
RMSE: 42.82435
MAE : 36.23424
R2  : 0.72347
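As a consistency check, RMSE is just the square root of MSE: √1833.92534 ≈ 42.82435, which matches the printout. The same metrics can also be recomputed by hand with NumPy (a sketch using the arrays from the prediction step above):

y_true = real_stock_price.ravel()
y_pred = predicted_stock_price.ravel()
mse  = np.mean((y_true - y_pred) ** 2)                # mean squared error
rmse = np.sqrt(mse)                                   # root of the MSE
mae  = np.mean(np.abs(y_true - y_pred))               # mean absolute error
r2   = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(mse, rmse, mae, r2)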



If you need the dataset, leave your email in the comments. If you don't hear back for a while, you can reach me through the contact info on the left side of the article (visible on desktop) or in the comment section.

Follow me, give it a like, add it to your favorites, and help it trend. Thank you all!